Mapper Node
The Mapper Node transforms an array of data from one structure into another by applying a set of mapping rules to each item. It takes a source array, iterates through every element, extracts values based on configured source paths, and produces a brand-new array of objects shaped according to your destination paths. The original array is not modified — the output is a cleanly structured copy.
This is a single-step, declarative transformation. There are no loops to manage, no scripting, and no side effects. You define what to extract and where to put it, and the node handles the rest.
When to Use
Use a Mapper Node when your workflow needs to:
- Reshape API response data before displaying it in a table or List View — for example, transforming `[{ "product_id": 123, "name": "Gadget" }]` into `[{ "sku": 123, "productName": "Gadget" }]`.
- Normalize data from different sources into a common schema — when two integrations return employee records with different field names and you need a unified format for downstream processing.
- Strip unnecessary fields from a large payload — an API returns 30 fields per record but your workflow only needs 5.
- Prepare data for a For Each Loop — transform the array first, then iterate over the clean version.
- Power dynamic dropdown options — transform a source list into `{ label, value }` pairs for use in form dropdowns.
- Feed data into a List View — the Mapper is the standard data-processing step in List View workflows, sitting between the Trigger and End nodes.
How It Appears on the Canvas
The Mapper Node is a standard single node (not a block). It has one incoming edge and one outgoing edge. It can be duplicated on the canvas (canDuplicate: true).
Configuration
The Mapper Node has three core configuration areas: the source array, the data model, and the mapping rules.
Source Array (Array Data)
Specifies which array in the Data Center the Mapper will iterate over.
| Field | Description |
|---|---|
| Path | The Data Center path to the source array — for example, nodes.<actionNodeId>.executionData.data.records. Selected via the Data Center browser. |
| Display | A human-readable label for the selected path. |
| Data Type | Must resolve to array. The system validates this at save time by checking the path's data type in the Data Center config. If the path does not point to an array, the save fails with an "Array field not selected" error. |
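As a rough illustration, a saved source-array configuration might look like the following. The `arrayData.path` field is referenced elsewhere in this page; the exact casing and nesting of the other fields here are assumptions, not the platform's verbatim schema:

```json
{
  "arrayData": {
    "path": "nodes.<actionNodeId>.executionData.data.records",
    "display": "Records",
    "dataType": "array"
  }
}
```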
Data Model
The Data Model defines the output schema — it tells the system what shape the transformed objects should have, so that downstream nodes can see and reference the Mapper's output fields in the Data Center.
| Setting | Description |
|---|---|
| Data Model | A reference to a Workflow Data Model (by ID or name). The Data Model contains a dataCentreConfig that describes the output fields (names, paths, data types). When the node is saved, the system fetches the Data Model's config and registers it as the node's output in the Data Center, wrapped under a data path with showIndex: true. |
If no Data Model is selected, the save fails with a "Data model not selected" error. If the referenced Data Model doesn't exist, the save fails with a "Data model not found" error.
Special built-in Data Models:
| Variant | Purpose |
|---|---|
| Dynamic Dropdown Utility | Used when the Mapper powers a dynamic dropdown. The output schema is fixed: { label: string, value: string }. The system provides a static dataCentreConfig for this case instead of looking up a Data Model from the database. |
| List View Utility | Used when the Mapper powers a List View table. The output schema is derived from the List View table's own dataCentreConfig instead of a standalone Data Model. |
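For example, a Mapper using the Dynamic Dropdown Utility model always produces output shaped like this (values are illustrative):

```json
[
  { "label": "Gadget", "value": "123" },
  { "label": "Widget", "value": "456" }
]
```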
Data Models
A Data Model defines the output schema for a Mapper Node. It tells the system what fields the transformed array will contain, so downstream nodes can reference those fields in the Data Center. Data Models are reusable — a single Data Model can be linked to multiple Mapper Nodes across different workflows.
What a Data Model Contains
| Field | Description |
|---|---|
| Name | A unique identifier within the bot. Used to reference the Data Model by name or by its database ID. Names must be unique — creating a second model with the same name fails with a "Data model already exists" error. |
| Description | Optional human-readable description. |
| Config | A JSON Schema-style definition of the output structure. This is the primary input — everything else is auto-generated from it. The root type must be object. |
| Sample Object | Auto-generated. A concrete example object produced from the config (strings become "a", numbers become 1.1, integers become 1, booleans become true). Used internally for Data Center config generation. |
| Data Centre Config | Auto-generated. A hierarchical array of IDataCentreConfig entries that the Data Center browser uses to display available fields for selection. |
| Keys-Path Mapping | Auto-generated. A flat array of { key, path } pairs representing every leaf field in the config. Used to auto-generate the mapping rule UI fields in the Mapper Node configuration panel. |
Config Structure
The config follows a recursive JSON Schema-like format using the IDataModelConfig interface:
| Property | Type | Description |
|---|---|---|
| type | string, number, integer, boolean, array, object | The data type of this field. |
| properties | { [key]: IDataModelConfig } | Required when type is object. Each key is a field name, each value defines that field's type recursively. |
| items | IDataModelConfig | Required when type is array. Defines the type of elements in the array. |
Example config for an output like { sku: 123, productName: "Gadget", price: 9.99 }:
```json
{
  "type": "object",
  "properties": {
    "sku": { "type": "integer" },
    "productName": { "type": "string" },
    "price": { "type": "number" }
  }
}
```

Constraint: Arrays of objects are not supported. If the config produces a sample object that contains an array with object elements, the validation fails with "Array of object unsupported in data model." Arrays of primitives (strings, numbers, etc.) are fine.
Constraint: Object keys cannot contain dots (.). A key like "price.final" is rejected because dots are used as path separators in the mapping system.
Constraint: Duplicate paths are rejected. If two fields resolve to the same path in the keys-path mapping, the save fails with a "Duplicate key" error.
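For example, a config like this one violates the arrays-of-objects constraint, because `lineItems` is an array whose elements are objects, and would be rejected with "Array of object unsupported in data model". Changing `items` to a primitive type such as `{ "type": "string" }` would make it valid:

```json
{
  "type": "object",
  "properties": {
    "lineItems": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "sku": { "type": "integer" }
        }
      }
    }
  }
}
```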
Creating a Data Model
Data Models are created from the Data Models management screen, not from the Mapper Node itself.
What you provide: name (required), description (optional), and config (the JSON Schema definition).
What the system generates: When you save, the system auto-generates three derived fields from your config:
- The `sampleObject` — a concrete example of what data matching this schema looks like.
- The `dataCentreConfig` — the hierarchical config used by the Data Center browser.
- The `keysPathMapping` — the flat list of leaf fields used to build the Mapper UI.
Auto-generating config from JSON: If you have a sample JSON object but don't want to write the config schema by hand, you can use the "Generate from JSON" feature. Paste a raw JSON object (like an API response) and the system generates the IDataModelConfig automatically by inferring types — strings become string, whole numbers become integer, decimals become number, booleans become boolean, arrays are detected with their element type, and nested objects are recursed. The generated config gets title: "root" at the top level.
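A minimal sketch of that inference logic, assuming the `IDataModelConfig` shape described above (the function and helper names here are illustrative, not the platform's actual implementation):

```typescript
// Sketch of "Generate from JSON"-style type inference. The real generator
// may differ in detail (e.g. it also adds title: "root" at the top level).
type ModelConfig = {
  type: string;
  properties?: Record<string, ModelConfig>;
  items?: ModelConfig;
};

function inferConfig(value: unknown): ModelConfig {
  if (typeof value === "string") return { type: "string" };
  if (typeof value === "boolean") return { type: "boolean" };
  if (typeof value === "number") {
    // Whole numbers become integer, decimals become number.
    return { type: Number.isInteger(value) ? "integer" : "number" };
  }
  if (Array.isArray(value)) {
    // Element type is inferred from the first element.
    return { type: "array", items: inferConfig(value[0]) };
  }
  if (typeof value === "object" && value !== null) {
    const properties: Record<string, ModelConfig> = {};
    for (const [k, v] of Object.entries(value)) properties[k] = inferConfig(v);
    return { type: "object", properties };
  }
  throw new Error("Unsupported value in sample JSON");
}

const config = inferConfig({ sku: 123, productName: "Gadget", price: 9.99 });
// config.properties.sku.type is "integer", price resolves to "number"
```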
Updating a Data Model
When you update a Data Model's config, the system re-validates and re-generates all derived fields (sampleObject, dataCentreConfig, keysPathMapping).
If the Data Model is not in use (no active Mapper Nodes reference it), you can freely change the config in any way — add fields, remove fields, rename fields, change types.
If the Data Model is in use (one or more active Mapper Nodes in active applications reference it), the system enforces a safe-update rule: you can add new fields, but you cannot remove existing fields. If the new config's keysPathMapping is missing any paths that exist in the current version, the update is rejected with a "Data model already in use" error, along with a list of references showing which applications and nodes use it.
This prevents breaking downstream workflows that depend on the current output fields.
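The safe-update check can be sketched as a superset test over the keys-path mappings (a sketch with illustrative names, not the platform's code):

```typescript
// Sketch of the safe-update rule: an in-use Data Model may gain fields but
// not lose them. Every path in the current keysPathMapping must still exist
// in the proposed one.
type KeyPath = { key: string; path: string };

function isSafeUpdate(current: KeyPath[], next: KeyPath[]): boolean {
  const nextPaths = new Set(next.map((m) => m.path));
  return current.every((m) => nextPaths.has(m.path));
}

const current = [{ key: "sku", path: "sku" }, { key: "price", path: "price" }];
const addsField = [...current, { key: "currency", path: "currency" }];
const dropsField = [{ key: "sku", path: "sku" }];

isSafeUpdate(current, addsField);  // true — adding a field is allowed
isSafeUpdate(current, dropsField); // false — removing "price" is rejected
```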
Checking References
Before editing a Data Model, you can check where it's being used. The system finds all non-deleted Mapper Nodes that reference this Data Model (by ID or by name), then filters to only those belonging to active applications. The result is a list of references showing the application name, node name, and node type for each usage.
How the Mapper Node Uses the Data Model
When a Mapper Node is saved with a selected Data Model:
- The system fetches the Data Model's `dataCentreConfig` from the database.
- It wraps the config under a `data` path with `dataType: "array"` and `showIndex: true`.
- It registers this as the Mapper Node's output in the Data Center via `saveNodeConfig`.
This is what makes the Mapper's output fields appear in the Data Center browser for downstream nodes. If you change the Data Model, the next save of the Mapper Node updates its Data Center registration accordingly.
When building mapping rules in the Mapper Node's configuration panel, the system uses the Data Model's keysPathMapping to generate the UI. Each key-path pair becomes a mapping row where the destination path (mappingKeyPath) is pre-filled and the label is derived from the path (dots replaced with arrows, e.g., product.name displays as product→name). You only need to fill in the source path (dataCentrePath) for each field.
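That UI-generation step can be sketched as follows (`buildMappingRows` is a hypothetical helper for illustration, not a platform API):

```typescript
// Sketch: each keysPathMapping entry becomes a mapping row with the
// destination path pre-filled and a label derived by replacing dots with
// arrows; only the source path is left for the user to fill in.
type KeyPath = { key: string; path: string };

function buildMappingRows(keysPathMapping: KeyPath[]) {
  return keysPathMapping.map(({ path }) => ({
    mappingKeyPath: path,
    mappingKeyLabel: path.split(".").join("→"),
    dataCentrePath: "", // filled in by the user in the config panel
  }));
}

const rows = buildMappingRows([{ key: "name", path: "product.name" }]);
// rows[0].mappingKeyLabel is "product→name"
```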
Mapping Rules
Each mapping rule defines how to extract a value from a source array item and where to place it in the output object.
| Property | Description |
|---|---|
| Type | Either text or function. A text mapping does a direct value extraction. A function mapping can apply transformation functions to the extracted value before placing it. |
| Source Path (dataCentrePath) | A template-style path that extracts a value from each item in the source array. The TemplateParserService resolves this against each array element at runtime. For example, if the source item is { "details": { "price": 99 } }, a source path of {{details.price}} extracts 99. |
| Destination Path (mappingKeyPath) | A dot-notation path where the extracted value is placed in the output object. For example, product.cost creates { "product": { "cost": 99 } }. Uses Lodash _.set for nested path creation. |
| Label (mappingKeyLabel) | A human-readable label used in the UI and in execution logs. |
| Applicable Functions | An optional list of function names to apply to the extracted value before setting it. Only relevant when the mapping type is function. |
You can define as many mapping rules as needed. Each rule runs independently for every item in the source array.
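The nested destination-path behavior can be sketched with a minimal `_.set`-style helper (`setPath` here is illustrative; the platform itself uses Lodash `_.set`):

```typescript
// Minimal sketch of nested destination-path creation, analogous to Lodash _.set:
// intermediate objects are created as needed along the dot-notation path.
function setPath(obj: Record<string, any>, path: string, value: unknown): void {
  const keys = path.split(".");
  let cursor = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof cursor[keys[i]] !== "object" || cursor[keys[i]] === null) {
      cursor[keys[i]] = {};
    }
    cursor = cursor[keys[i]];
  }
  cursor[keys[keys.length - 1]] = value;
}

const output: Record<string, any> = {};
setPath(output, "product.cost", 99);
// output is now { product: { cost: 99 } }
```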
How It Works at Runtime
1. Array extraction. The node reads the arrayData.path from its configuration. It fetches the current Data Center state via the DataCenterRepository, then uses the TemplateParserService to resolve the path and extract the source array.
2. Empty/invalid check. If the resolved value is not a valid array or is empty, the node completes immediately with response.data set to an empty array [] and an empty mapperExecutionMappingData log. No error is thrown — an empty input produces an empty output.
3. Transformation. If the array is valid, the node delegates to the MapperHelperService:
- For each item in the source array:
  - For each mapping rule:
    - The `TemplateParserService` extracts the value from the current item using the `dataCentrePath`.
    - The extracted value is placed into a new output object at the `mappingKeyPath` using `_.set`.
    - A log entry is created recording the mapping label, path, type, and resolved value.
  - The completed output object is added to the result array.
4. Output. The node returns:
- `response.data` — the new array of transformed objects.
- `mapperExecutionMappingData` — a detailed per-item, per-rule execution log showing what was extracted, where it was placed, and the resolved values.
The transformed array is stored in the Data Center under the node's output path (nodes.<mapperNodeId>.executionData.data) and is available to all downstream nodes.
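Putting the four steps together, the runtime behavior can be sketched like this (a simplified illustration; `extractByTemplate` stands in for the real `TemplateParserService`, whose path resolution is richer):

```typescript
// Simplified sketch of the Mapper runtime (steps 1–4).
type Rule = {
  type: "text" | "function";
  dataCentrePath: string;   // e.g. "{{details.price}}", resolved per item
  mappingKeyPath: string;   // e.g. "product.cost", nested set into output
  mappingKeyLabel: string;
};

function extractByTemplate(item: any, template: string): unknown {
  // Resolve "{{a.b.c}}" against a single array item (stand-in resolver).
  const path = template.replace(/^\{\{|\}\}$/g, "");
  return path.split(".").reduce((v: any, k) => (v == null ? v : v[k]), item);
}

function runMapper(source: unknown, rules: Rule[]) {
  // Step 2: empty/invalid input yields an empty output — no error is thrown.
  if (!Array.isArray(source) || source.length === 0) return { data: [], log: [] };
  const data: any[] = [];
  const log: any[] = [];
  source.forEach((item, index) => {
    const out: Record<string, any> = {};
    const responseMapping = rules.map((rule) => {
      const value = extractByTemplate(item, rule.dataCentrePath);
      // Nested destination paths are created as needed (Lodash _.set style).
      rule.mappingKeyPath.split(".").reduce((cursor: any, key, i, keys) => {
        if (i === keys.length - 1) return (cursor[key] = value);
        return (cursor[key] ??= {});
      }, out);
      return { index, mappingKeyPath: rule.mappingKeyPath, mappingKeyLabel: rule.mappingKeyLabel, type: rule.type, value };
    });
    data.push(out);
    log.push({ index, responseMapping });
  });
  return { data, log };
}

const result = runMapper(
  [{ details: { price: 99 } }],
  [{ type: "text", dataCentrePath: "{{details.price}}", mappingKeyPath: "product.cost", mappingKeyLabel: "Product Cost" }]
);
// result.data → [{ product: { cost: 99 } }]
```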
Data Center Output
The Mapper Node's output is registered in the Data Center based on the selected Data Model's dataCentreConfig, wrapped under a data array path.
Structure:
```
nodes.<mapperNodeId>.executionData.data           → Array of transformed objects
nodes.<mapperNodeId>.executionData.data[0].field  → First item's field value
nodes.<mapperNodeId>.executionData.data[1].field  → Second item's field value
...
```
The fields available inside each array element depend entirely on what the selected Data Model defines. For example, if the Data Model has fields sku, productName, and inventory, those exact paths will be available for selection in the Data Center browser for downstream nodes.
The showIndex flag is set to true on the output config, which means the Data Center browser displays array elements with their index for easier reference.
Execution Logs
Every Mapper execution produces a detailed mapperExecutionMappingData log. This is an array with one entry per source item, and each entry contains a responseMapping array with one log per mapping rule:
| Log Field | Description |
|---|---|
| index | The 0-based index of the source array item. |
| mappingKeyPath | The destination path in the output object. |
| mappingKeyLabel | The human-readable label for this mapping. |
| type | text or function. |
| value | The final resolved value that was set. |
| intermediateValues | For function type mappings, an array of intermediate values produced during transformation (currently returned as an empty array). |
These logs are stored as part of the node's execution state and are useful for debugging when the output doesn't match expectations.
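An illustrative log for a one-item, one-rule execution might look like this (the exact field placement is an approximation of the shape described above, not a verbatim payload):

```json
[
  {
    "responseMapping": [
      {
        "index": 0,
        "mappingKeyPath": "product.cost",
        "mappingKeyLabel": "Product Cost",
        "type": "text",
        "value": 99,
        "intermediateValues": []
      }
    ]
  }
]
```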
Testing Mapper Configurations
The platform provides a dedicated testing endpoint (testWithMapperConfig) that lets you validate mapping rules without running a full workflow.
How it works:
You provide sample input data and your mapping configuration. The system instantiates the same MapperHelperService used in live execution, runs the transformation, and returns the result immediately.
Payload:
| Field | Type | Description |
|---|---|---|
| data | Object or Array | Sample source data. If a single object is provided, it's wrapped in an array automatically. |
| mappings | Array | The mapping rules to test — same format as the node's mappings configuration (type, dataCentrePath, mappingKeyPath). |
Response: The finalResult (transformed array) and mappingResponse (detailed execution log), exactly as they would appear in a live execution.
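For example, a test payload for a single text rule might look like this (sample values are illustrative). Given the path semantics described above, the expected `finalResult` would be `[{ "product": { "cost": 99 } }]`:

```json
{
  "data": { "details": { "price": 99 } },
  "mappings": [
    {
      "type": "text",
      "dataCentrePath": "{{details.price}}",
      "mappingKeyPath": "product.cost",
      "mappingKeyLabel": "Product Cost"
    }
  ]
}
```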
This test uses the same core engine as the real node, so results are guaranteed to match production behavior.
Progress Path
The Mapper Node uses the default progress path configuration since it doesn't have a custom one.
| Status | Displayed Fields |
|---|---|
| Default | Triggered at |
Visibility settings:
| Setting | Description |
|---|---|
| Hide | Completely hides the node from the progress tracker. |
| Hide if Skipped | Hides the node only if it was skipped during execution. |
Validation Rules
At save time:
| Check | Behavior |
|---|---|
| Array path | Must be selected and must point to a field with data type array in the Data Center config. If not, the save fails. |
| Data Model | Must be selected and must exist in the database (or be one of the special built-in models). If not, the save fails. |
| Mappings | Each mapping rule is validated against the MapperNodeMappingConfigSchema: type must be text or function, dataCentrePath must be a string, mappingKeyPath must be a string, and applicableFunctions (if present) must be an array of strings. |
At runtime:
The node does not throw errors for empty or invalid source data — it simply returns an empty array. This is by design, so workflows that handle optional data don't fail unnecessarily.
Edge validation:
The Mapper Node uses the default (generic) node validation strategy. There are no special edge count requirements beyond the standard: it needs at least one incoming and one outgoing edge.
Best Practices
- Choose the right Data Model early. The Data Model defines what downstream nodes can see from your Mapper's output. If you change the Data Model later, any downstream references to the old fields will break. Plan your output schema before building the rest of the workflow.
- Use the test endpoint to validate complex mappings. For mappings with deeply nested source paths or many rules, test with sample data before deploying. The test uses the exact same engine as production.
- Keep mappings simple and flat when possible. Deeply nested `mappingKeyPath` values (like `product.details.pricing.final.amount`) create complex output objects that are harder to debug and reference downstream. Flatten where you can.
- Handle empty arrays gracefully. The Mapper silently returns `[]` for empty or invalid input. If your downstream logic depends on the Mapper producing data, add a Decision Node after the Mapper to check whether `response.data` is empty before proceeding.
- Prefer Mapper over Function Node for pure data reshaping. If all you need is to rename and rearrange fields, a Mapper is more maintainable and debuggable than writing JavaScript in a Function Node. The execution logs give you field-by-field visibility into what happened.
- Use meaningful labels. The `mappingKeyLabel` appears in execution logs. Good labels make debugging straightforward — use names like "Employee Full Name" instead of "field1".
- Understand the source path scope. The `dataCentrePath` in each mapping rule is resolved against individual array items, not against the full Data Center. If you need to reference a value from outside the array (like a global constant), you'll need to inject it into the array items first (via a Function Node) or use a different approach.