Mapper Node

The Mapper Node transforms an existing input field array into a well-defined output structure: a single object or an array of objects.

The Mapper Node is crucial for scenarios like structuring data for list views, preparing data for API-based integrations (such as employee master synchronisation), and ensuring data consistency across your workflows.

Key Features

  • Flexible Node Creation:
    • Drag and drop the Mapper Node onto the canvas for asynchronous operations.
    • Add it within a sync block for synchronous data transformations.
  • Structured Output: Define a clear output structure (object or array of objects) using reusable Data Models.
  • Data Model Integration: Leverage globally stored Data Models to define the schema for your output array, ensuring consistency and reusability.
  • One-to-One Mapping: Intuitively map keys from your input field array to the fields defined in your chosen Data Model.
  • Transformation Functions: Apply pre-defined functions (e.g., date transformations) to your data during the mapping process.
  • Built-in Testing: Test your entire mapping configuration with sample input data to ensure correctness before deployment.
  • Clear Output: The transformed output field array will be readily available in the Data Center, just like any other field array.
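Conceptually, the Mapper Node applies a one-to-one key mapping to every item of the input field array. The sketch below illustrates that idea in plain JavaScript; the data model, field names, and mapping format are hypothetical, not the product's internal representation.

```javascript
// Illustrative sketch only: field names and the mapping format are
// assumptions, not the Mapper Node's actual internal structures.

// Mapping: output key (from the Data Model) -> input key (from the field array)
const mapping = { employeeId: 'emp_id', fullName: 'name' };

// Transform each item of the input field array into the output shape.
function applyMapping(inputArray, mapping) {
  return inputArray.map((item) => {
    const out = {};
    for (const [outKey, inKey] of Object.entries(mapping)) {
      out[outKey] = item[inKey];
    }
    return out;
  });
}

const input = [
  { emp_id: 'E001', name: 'Asha Rao' },
  { emp_id: 'E002', name: 'Liam Chen' },
];

console.log(applyMapping(input, mapping));
// Each input item becomes an object shaped by the Data Model's keys.
```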

Configuration & Usage

  1. Adding and Configuring the Mapper Node
    1. Drag the Mapper Node onto the workflow canvas.
    2. Select Input Field Array:
      1. Open the Data Center.

      2. Choose the source field array you want to transform. The Data Center will filter to show only "Field Array" type keys.

    3. Select Data Model:
      1. Choose a pre-existing Data Model from a single-select dropdown. Data Models define the structure (schema) of your output.

      2. If no suitable Data Model exists, you'll be guided to the admin section to create a new one.

  2. Data Model Management (Admin Section): Data Models are global and reusable schemas for your structured data.
    1. Creation:
      1. Add Manually: Define fields one by one, specifying their Name (unique identifier), Label, and Data Type (e.g., String, Object). You can nest fields, for example, adding a String within an Object.
      2. Add via JSON: Upload a JSON schema to define your Data Model. You can also download an existing schema, modify it, and re-upload.
    2. Naming:
      1. Provide a unique Data Model Name (Label/ID character limit: 50).
      2. Allowed special characters: underscore (_), hyphen (-), and space.
    3. Validations & Management:
      1. A Data Model cannot be edited if it's currently used in any active applications.

      2. You can view references to see where a Data Model is being used.

      3. For Data Models in use, you can only add new keys or update labels of existing keys. Deleting keys that are in use is not permitted.
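A Data Model uploaded via JSON might look like the fragment below. The exact schema format is an assumption for illustration; it shows a nested String field inside an Object, matching the manual-creation options described above.

```json
{
  "name": "Employee",
  "fields": [
    { "name": "employee_id", "label": "Employee ID", "type": "String" },
    {
      "name": "address",
      "label": "Address",
      "type": "Object",
      "fields": [
        { "name": "city", "label": "City", "type": "String" }
      ]
    }
  ]
}
```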

  3. Creating the Mapping: Once your input field array and Data Model are selected in the Mapper Node:
    1. Define Output Key Mapping: For each field in your chosen Data Model (which represents your desired output structure), you'll map an input key to it.
      1. The output key (from the Data Model) will be displayed.
    2. Select Input Key to Map:
      1. Open the Data Center to select a key from your chosen input field array.
      2. This input can also be a hybrid, allowing you to combine Data Center keys with static text if needed.
    3. Apply Transformation Functions (Optional):
      1. For each mapped key, you can apply pre-defined transformation functions (e.g., date formatting, string operations) to the input data before it's placed into the output structure.
      2. The system supports String-to-String and Object-to-Object mapping. For objects, you'll need to map their nested keys further.
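A transformation function is applied to the input value before it is written to the output key. The sketch below shows one hypothetical date-formatting transform applied through a mapping rule; the registry, rule format, and function names are illustrative assumptions, not the product's built-in function set.

```javascript
// Hypothetical registry of pre-defined transformation functions.
const transforms = {
  // e.g. "2024-01-31" -> "31/01/2024"
  toDMY: (iso) => {
    const [y, m, d] = iso.split('-');
    return `${d}/${m}/${y}`;
  },
};

// Apply each rule: copy the input value, optionally transforming it first.
function mapWithTransform(item, rules) {
  const out = {};
  for (const { outKey, inKey, fn } of rules) {
    const value = item[inKey];
    out[outKey] = fn ? transforms[fn](value) : value;
  }
  return out;
}

const rules = [
  { outKey: 'employeeId', inKey: 'emp_id' },
  { outKey: 'joinedOn', inKey: 'join_date', fn: 'toDMY' },
];

console.log(mapWithTransform({ emp_id: 'E001', join_date: '2024-01-31' }, rules));
// -> { employeeId: 'E001', joinedOn: '31/01/2024' }
```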
  4. Testing Your Mapping
    1. Before saving, use the "Test Mapping" feature.
    2. Provide a sample JSON input field array that matches the structure of your selected input field array.
    3. You'll be prompted if your test input doesn't conform to the expected configuration.
    4. The test will return the transformed output field array based on your mapping logic and transformations.
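For example, given a mapping from `emp_id` to `employeeId` and `name` to `fullName` (field names assumed for illustration), a test run might look like this:

Sample test input:

```json
[
  { "emp_id": "E001", "name": "Asha Rao" },
  { "emp_id": "E002", "name": "Liam Chen" }
]
```

Returned transformed output:

```json
[
  { "employeeId": "E001", "fullName": "Asha Rao" },
  { "employeeId": "E002", "fullName": "Liam Chen" }
]
```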
  5. Output & Monitoring
    1. Data Center Visibility: The newly structured output field array generated by the Mapper Node will be available in the Data Center for use in subsequent workflow steps, similar to any other field array.
    2. Progress Path & Statuses: Track the Mapper Node's execution with the following statuses:
      1. Upcoming: Node is yet to be triggered.
      2. Skipped (Skipped due to Logic): Node was not triggered due to workflow logic.
      3. In Progress: Node is currently executing.
      4. Successful: Node executed successfully, and the data is transformed.
      5. Failed: Node execution failed.

Known Failure Scenarios

The Mapper Node may enter a "Failed" state if:

  • The structure of the input field array at runtime does not match the configuration defined during setup.
  • A JavaScript transformation function applied to a key fails during execution.
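Both failure modes amount to the node hitting an error at runtime rather than at configuration time. The sketch below models that behavior in plain JavaScript; the function names, rule format, and status strings are assumptions for illustration, not the product's actual runtime.

```javascript
// Hypothetical sketch of the runtime behavior: the node fails if the
// input no longer matches the configured structure, or if a
// transformation function throws during execution.
function runMapper(inputArray, rules) {
  if (!Array.isArray(inputArray)) {
    return { status: 'Failed', reason: 'Input is not a field array' };
  }
  try {
    const output = inputArray.map((item) => {
      const out = {};
      for (const { outKey, inKey, fn } of rules) {
        if (!(inKey in item)) {
          // Structure mismatch between runtime input and configuration.
          throw new Error(`Missing input key: ${inKey}`);
        }
        // A throwing transformation function is caught below.
        out[outKey] = fn ? fn(item[inKey]) : item[inKey];
      }
      return out;
    });
    return { status: 'Successful', output };
  } catch (err) {
    return { status: 'Failed', reason: err.message };
  }
}

// A mapping rule referencing a key absent from the runtime input fails:
console.log(runMapper([{ emp_id: 'E1' }], [{ outKey: 'id', inKey: 'name' }]));
// -> { status: 'Failed', reason: 'Missing input key: name' }
```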