Mapper Node
The Mapper Node helps you transform an existing input field array into a well-defined output structure, either an object or an array of objects.
The Mapper Node is crucial for scenarios like structuring data for list views, preparing data for API-based integrations (such as employee master synchronisation), and ensuring data consistency across your workflows.
Key Features
- Flexible Node Creation:
  - Drag and drop the Mapper Node onto the canvas for asynchronous operations.
  - Add it within a sync block for synchronous data transformations.
- Structured Output: Define a clear output structure (object or array of objects) using reusable Data Models.
- Data Model Integration: Leverage globally stored Data Models to define the schema for your output array, ensuring consistency and reusability.
- One-to-One Mapping: Intuitively map keys from your input field array to the fields defined in your chosen Data Model.
- Transformation Functions: Apply pre-defined functions (e.g., date transformations) to your data during the mapping process.
- Built-in Testing: Test your entire mapping configuration with sample input data to ensure correctness before deployment.
- Clear Output: The transformed output field array will be readily available in the Data Center, just like any other field array.
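As a quick illustration of what the node produces, the sketch below shows a flat input field array and the same records reshaped to match a Data Model. All field names (emp_name, fullName, and so on) are hypothetical, not part of the product.

```json
{
  "_comment": "Illustrative only; field names are hypothetical.",
  "inputFieldArray": [
    { "emp_name": "Jane Doe", "emp_id": "102" },
    { "emp_name": "Ravi Kumar", "emp_id": "103" }
  ],
  "mappedOutput": [
    { "fullName": "Jane Doe", "employeeId": "102" },
    { "fullName": "Ravi Kumar", "employeeId": "103" }
  ]
}
```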
Configuration & Usage
- Adding and Configuring the Mapper Node
- Drag the Mapper Node onto the workflow canvas.
- Select Input Field Array:
  - Open the Data Center.
  - Choose the source field array you want to transform. The Data Center will filter to show only "Field Array" type keys.
- Select Data Model:
  - Choose a pre-existing Data Model from a single-select dropdown. Data Models define the structure (schema) of your output.
  - If no suitable Data Model exists, you'll be guided to the admin section to create a new one.
- Data Model Management (Admin Section): Data Models are global and reusable schemas for your structured data.
- Creation:
  - Add Manually: Define fields one by one, specifying their Name (unique identifier), Label, and Data Type (e.g., String, Object). You can nest fields, for example, adding a String within an Object.
  - Add via JSON: Upload a JSON schema to define your Data Model. You can also download an existing schema, modify it, and re-upload (a sample schema is sketched after this section).
- Naming:
  - Provide a unique Data Model Name (Label/ID character limit: 50).
  - Allowed special characters: underscore (_), hyphen (-), space.
- Validations & Management:
  - A Data Model cannot be edited if it is currently used in any active application.
  - You can view references to see where a Data Model is being used.
  - For Data Models in use, you can only add new keys or update labels of existing keys. Deleting keys that are in use is not permitted.
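The exact JSON schema format expected by the admin section may differ from this sketch; it is only meant to show how a Data Model's fields (Name, Label, Data Type) and nesting could be expressed, using hypothetical field names.

```json
{
  "_comment": "Rough sketch only; the actual upload format may differ.",
  "name": "employee_record",
  "label": "Employee Record",
  "fields": [
    { "name": "fullName", "label": "Full Name", "dataType": "String" },
    { "name": "employeeCode", "label": "Employee Code", "dataType": "String" },
    { "name": "joiningDate", "label": "Joining Date", "dataType": "String" },
    {
      "name": "address",
      "label": "Address",
      "dataType": "Object",
      "fields": [
        { "name": "city", "label": "City", "dataType": "String" },
        { "name": "postalCode", "label": "Postal Code", "dataType": "String" }
      ]
    }
  ]
}
```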
- Creating the Mapping: Once your input field array and Data Model are selected in the Mapper Node:
  - Define Output Key Mapping: For each field in your chosen Data Model (which represents your desired output structure), you'll map an input key to it.
    - The output key (from the Data Model) will be displayed.
  - Select Input Key to Map:
    - Open the Data Center to select a key from your chosen input field array.
    - This input can also be a hybrid, allowing you to combine Data Center keys with static text if needed.
  - Apply Transformation Functions (Optional):
    - For each mapped key, you can apply pre-defined transformation functions (e.g., date formatting, string operations) to the input data before it is placed into the output structure.
  - The system supports String-to-String and Object-to-Object mapping. For objects, you'll need to map their nested keys further; a worked example follows this list.
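As a worked sketch of the behaviour above, suppose the Data Model from the earlier schema example is selected. One input item could then be mapped as shown below, where employeeCode is a hybrid of static text and an input key, joiningDate goes through a date-format transformation, and address is an Object-to-Object mapping of nested keys. All names, values, and transformations are hypothetical.

```json
{
  "_comment": "Illustrative only; keys, values, and transformations are hypothetical.",
  "inputItem": {
    "id": "102",
    "name": "Jane Doe",
    "doj": "15/01/2024",
    "address": { "city": "Pune", "pincode": "411001" }
  },
  "outputItem": {
    "employeeCode": "EMP-102",
    "fullName": "Jane Doe",
    "joiningDate": "2024-01-15",
    "address": { "city": "Pune", "postalCode": "411001" }
  }
}
```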
- Testing Your Mapping
- Before saving, use the "Test Mapping" feature.
- Provide a sample JSON input field array that matches the structure of your selected input field array.
- You'll be prompted if your test input doesn't conform to the expected configuration.
- The test will return the transformed output field array based on your mapping logic and transformations.
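For example, a test payload for the mapping sketched above would be a sample field array shaped like the configured input, and the test returns the transformed array. Both payloads here are hypothetical.

```json
{
  "_comment": "Hypothetical test input and output for the mapping sketched earlier.",
  "sampleTestInput": [
    { "id": "102", "name": "Jane Doe", "doj": "15/01/2024", "address": { "city": "Pune", "pincode": "411001" } },
    { "id": "103", "name": "Ravi Kumar", "doj": "02/03/2023", "address": { "city": "Chennai", "pincode": "600001" } }
  ],
  "transformedOutput": [
    { "employeeCode": "EMP-102", "fullName": "Jane Doe", "joiningDate": "2024-01-15", "address": { "city": "Pune", "postalCode": "411001" } },
    { "employeeCode": "EMP-103", "fullName": "Ravi Kumar", "joiningDate": "2023-03-02", "address": { "city": "Chennai", "postalCode": "600001" } }
  ]
}
```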
- Output & Monitoring
- Data Center Visibility: The newly structured output field array generated by the Mapper Node will be available in the Data Center for use in subsequent workflow steps, similar to any other field array.
- Progress Path & Statuses: Track the Mapper Node's execution with the following statuses:
- Upcoming: Node is yet to be triggered.
- Skipped: Node was not triggered due to workflow logic.
- In progress: Node is currently executing.
- Successful: Node executed successfully, and the data is transformed.
- Failed: Node execution failed.
Known Failure Scenarios
The Mapper Node may enter a "Failed" state if:
- The structure of the input field array at runtime does not match the configuration defined during setup.
- A JavaScript transformation function applied to a key fails during execution.