Message Service
Purpose
The Message Service is the producer side of layline.io's internal publish/subscribe messaging system. It publishes messages to topics defined in a Message Source. Messages can be published from JavaScript or Python processors, making it useful for:
- Triggering downstream Workflows based on events
- Broadcasting data to multiple consumers simultaneously
- Decoupling processing stages across Workflows
Architecture
The Message Service is the counterpart to the Message Source (consumer). Together they form a publish/subscribe system:
A single Message Service can reference multiple Message Sources. For each Message Source, you define functions that publish to specific topics.
This Asset can be used by:
| Asset type | Link |
|---|---|
| Processors | JavaScript Processor, Python Processor |
Related Asset
| Asset | Description |
|---|---|
| Message Source | Defines the topics that this service publishes to |
Configuration
Name & Description
- Service Name: Name of the Asset. Spaces are not allowed in the name.
- Service Description: Enter a description.
The Asset Usage box shows how many times this Asset is used and which parts are referencing it.
Click the box to expand it, then click an entry to follow the reference, if any.
Required Roles
If you are deploying to a Cluster running Reactive Engine Nodes that have specific Roles configured, you can
restrict use of this Asset to Nodes with matching Roles.
To apply this restriction, enter the names of the Required Roles here. Otherwise, leave the field empty to match all
Nodes (no restriction).
Sources
Under Sources, add references to the Message Sources whose topics this service publishes to. Each entry links a Message Source asset to this service.
| Column | Description |
|---|---|
| Source | Select a Message Source asset from the project |
Click Add source to add a new Message Source reference.
Functions

Functions define the publish operations available to this service. Each function maps to a topic in the referenced Message Source(s).
Click Add Function to create a new function.
Function Name & Description
- Function name: Name of the function. Must be unique within the service and must not contain spaces.
- Description: Optional description of the function.
Request Type
Request type: The data dictionary type of the message payload that will be published. This defines the structure of the data sent when calling this function.
Response Type
Response type: The data dictionary type of the response (if any). Optional. If not set, the function returns no response.
Auto-Generated Function Names
Message Service functions are auto-generated and invoked from processors using the pattern:
services.<ServiceName>.<FunctionName>
For example, a service named OrderMessageService with a function named PublishOrder would be called as:
services.OrderMessageService.PublishOrder({ ... })
Function Parameters
When calling a Message Service function from a processor, the following parameters are available:
| Parameter | Description |
|---|---|
| Topic | The name of the topic to publish to (as defined in the Message Source) |
| PartitionKey | Optional. Determines which partition the message goes to (for ordering guarantees within a key) |
| Request | The message payload, matching the Request type defined above |
| Source | Optional. The name of the Message Source to publish to. Required if the service references more than one Message Source |
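Taken together, a publish call using these parameters could look like the following sketch. The service name DemoService, the function PublishEvent, and the mocked services object are made up for illustration; inside a real JavaScript Processor the engine provides the services object.

```javascript
// Illustrative sketch only: "DemoService" and "PublishEvent" are invented names,
// and `services` is mocked here. In a real JavaScript Processor the engine
// injects `services` automatically.
const published = [];
const services = {
    DemoService: {
        PublishEvent: (params) => published.push(params)
    }
};

// A call supplying all four parameters:
services.DemoService.PublishEvent({
    Source: "EventSource",        // only needed when several Message Sources are referenced
    Topic: "events",              // a topic defined in the Message Source
    PartitionKey: "customer-42",  // preserves ordering per key
    Request: { eventId: 1 }       // payload matching the function's Request type
});

console.log(published[0].Topic);
```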
Data Dictionary
The Data Dictionary allows you to define complex data structures which can be mapped onto external data types and vice versa. This is necessary whenever an asset needs to exchange structured data with an external system — for example, when reading from or writing to a database, an HTTP API, a message queue, or any other format that carries typed fields.
Rather than hard-coding external field names and types into your Workflow, you define your own internal data types here. These internal types are then mapped to the external system's fields at the asset level. This means your Workflow scripts work with consistent, self-documenting data structures regardless of which external system the data came from.
When you need it
Whenever you configure an asset that exchanges structured data — a JDBC Service, a DynamoDB Service, an HTTP endpoint, an MQ message, a database Resource — you use the Data Dictionary to declare the types that represent:
- Request parameters — the data your Workflow sends to the external system
- Result data — the data the external system returns to your Workflow
- Intermediate structures — types that hold data during a transformation
Entity Types
The Data Dictionary is organized as a tree of typed entities. The available entity types are:
| Entity | Description |
|---|---|
| Namespace | Groups related types. Optional. If you reuse a namespace name that already exists in the Project, the two namespaces merge. |
| Sequence | An ordered list of typed members. Members are accessed by name, e.g. MyNamespace.Customer.Name. |
| Enumeration | A fixed set of named integer constants. |
| Choice | A type that holds exactly one of several possible member types. |
| Array | A sequence of elements of a single contained type. |
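The entity kinds can be pictured as plain JavaScript values. The sketch below is not layline.io API; all names (MyNamespace, Customer, Status, and so on) are invented only to show what each kind represents.

```javascript
// Illustrative only: plain JS values mirroring the Data Dictionary entity kinds.
// None of these names come from layline.io; they are made up for this sketch.
const MyNamespace = {                       // Namespace: groups related types
    Customer: {                             // Sequence: ordered list of named members
        Id: 1,
        Name: "Jane Doe",
        Address: "1 Main St"
    },
    Status: { ACTIVE: 0, SUSPENDED: 1 },    // Enumeration: named integer constants
    Contact: { Email: "jane@example.com" }, // Choice: exactly one member set at a time
    Orders: [{ Id: 100 }, { Id: 101 }]      // Array: elements of one contained type
};

// Members are referenced by full path, e.g. MyNamespace.Customer.Name:
console.log(MyNamespace.Customer.Name);
```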
Defining Types — Step by Step
The following walkthrough shows how to build a data structure using the Data Dictionary editor. The example assumes a SQL customer table with columns id, name, and address — but the same pattern applies whenever you need to declare types for any asset.
1. Declare a new type
Click Declare Root Type in the toolbar to add a top-level entity.
2. Declare a namespace (optional)
Namespaces organize related types. To add one, right-click an existing node and select Add Sibling, then set the element type to Namespace.

- Name: The name of the namespace. If a namespace with this name already exists elsewhere in the Project, their contents merge automatically. Otherwise the name must be unique and may not contain spaces.
- Type: Pick the entity type. For a namespace, select Namespace.
- Description: Optional free-text description.
3. Declare a Sequence under the namespace
Right-click the namespace and choose Add Child to add a child element.

Click the arrow next to the namespace name and select Add child. Then fill in the element details:

- Name: The name of the element, e.g. Customer.
- Type: Select Sequence as the element type. You will add individual fields (members) in the next step.
- Extendable Sequence: When checked, layline.io can dynamically extend the sequence's member list if incoming data contains fields that are not explicitly defined. Leave unchecked if all fields are known in advance.
4. Add members to the Sequence
With the Sequence selected, click Add Child to add individual fields:

Each member maps to a column in the external data source. You can reference any member by its full path — for example, MyNamespace.Customer.Name — from your Workflow scripts.
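In a script processor, such a type is typically instantiated via dataDictionary.createMessage and its members set by full path. The sketch below mocks the dataDictionary object purely for illustration; in a real JavaScript Processor the engine provides it, and the member-to-column mapping shown in the comments is the hypothetical SQL example from above.

```javascript
// Sketch: `dataDictionary` is mocked here; in a JavaScript Processor the engine
// provides it. The type path MyNamespace.Customer follows the walkthrough above.
const dataDictionary = {
    type: { MyNamespace: { Customer: "MyNamespace.Customer" } },
    createMessage: (type) => ({ type: type, data: {} })
};

const msg = dataDictionary.createMessage(dataDictionary.type.MyNamespace.Customer);
msg.data.Id = 42;               // maps to the example SQL column "id"
msg.data.Name = "Jane Doe";     // maps to column "name"
msg.data.Address = "1 Main St"; // maps to column "address"

console.log(msg.data.Name);
```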
Common Entity Fields
These fields are available on all entity types:
| Field | Description |
|---|---|
| Name | Unique identifier within the namespace. Reusing a namespace name from another part of the Project merges the two. |
| Type | The entity kind: Namespace, Sequence, Enumeration, Choice, or Array |
| Description | Optional free-text description |
| Extendable Sequence | (Sequence only) Allows the member list to be extended dynamically at runtime |
| Members | (Sequence) Ordered list of typed fields — click Add Child to add each one |
| Elements | (Enumeration) Named integer constants making up the enumeration |
Advanced Features
Inheritance and Override
Entities inherited from a parent format or resource appear in the tree in a distinct inherited style. These are read-only unless overridden. Click Reset to Parent on an overridden entity to restore the inherited definition.
Copy and Paste
Use the toolbar buttons to copy a complete entity subtree and paste it elsewhere in the tree. All members and nested entities travel with it.
Filter and Sort
Use the Filter field to search entities by name. The sort buttons order nodes ascending or descending alphabetically.
See Also
- Data Dictionary Format Asset — standalone Data Dictionary asset
- DynamoDB Service — Data Dictionary in context of a DynamoDB Service
- JDBC Service — worked example mapping Data Dictionary types to SQL columns
Using the Message Service from a Script Processor
Publishing a Message
- JavaScript
- Python
```javascript
/**
 * Publish an order confirmation message
 * @param orderData Order confirmation data
 */
function publishOrderConfirmation(orderData) {
    services.OrderMessageService.PublishOrderConfirmation({
        Topic: "order-confirmations",
        PartitionKey: orderData.orderId,
        Request: orderData
    });
}
```

```python
def publish_order_confirmation(order_data):
    """Publish an order confirmation message.

    @param order_data: Order confirmation data
    """
    services.OrderMessageService.PublishOrderConfirmation({
        'Topic': 'order-confirmations',
        'PartitionKey': order_data.order_id,
        'Request': order_data
    })
```
Publishing with Automatic Source Selection
If the service references only one Message Source, the Source parameter can be omitted:
- JavaScript
- Python
```javascript
/**
 * Publish a notification event
 * @param eventData The event to publish
 */
function publishNotification(eventData) {
    services.NotificationService.SendNotification({
        Topic: "notifications",
        Request: eventData
    });
}
```

```python
def publish_notification(event_data):
    """Publish a notification event.

    @param event_data: The event to publish
    """
    services.NotificationService.SendNotification({
        'Topic': 'notifications',
        'Request': event_data
    })
```
Publishing with Explicit Source
If the service references multiple Message Sources, specify which one to use:
- JavaScript
- Python
```javascript
/**
 * Publish to a specific Message Source
 * @param data The data to publish
 */
function publishToSpecificSource(data) {
    services.MultiSourceService.PublishData({
        Source: "CustomerDataSource",
        Topic: "customer-updates",
        PartitionKey: data.customerId,
        Request: data
    });
}
```

```python
def publish_to_specific_source(data):
    """Publish to a specific Message Source.

    @param data: The data to publish
    """
    services.MultiSourceService.PublishData({
        'Source': 'CustomerDataSource',
        'Topic': 'customer-updates',
        'PartitionKey': data.customer_id,
        'Request': data
    })
```
Full Example: Order Processing Pipeline
A complete example showing how Message Source and Message Service work together:
- Define a Message Source named OrderSource with a topic order-confirmations
- Define a Message Service named OrderMessageService that references OrderSource
- Create a function PublishOrderConfirmation with a Request type of OrderConfirmation
- In a script processor, call the service to publish when an order is confirmed:
- JavaScript
- Python
```javascript
/**
 * Called when an order has been confirmed
 * @param orderId The ID of the confirmed order
 * @param orderDetails The order details
 */
function onOrderConfirmed(orderId, orderDetails) {
    let confirmationData = dataDictionary.createMessage(
        dataDictionary.type.OrderConfirmation
    );
    confirmationData.data.orderId = orderId;
    confirmationData.data.customerId = orderDetails.customerId;
    confirmationData.data.timestamp = DateTime.now().toString();
    services.OrderMessageService.PublishOrderConfirmation({
        Topic: "order-confirmations",
        PartitionKey: orderId,
        Request: confirmationData
    });
}
```

```python
from datetime import datetime

def on_order_confirmed(order_id, order_details):
    """Called when an order has been confirmed.

    @param order_id: The ID of the confirmed order
    @param order_details: The order details
    """
    confirmation_data = data_dictionary.createMessage(
        data_dictionary.type.OrderConfirmation
    )
    confirmation_data.data.order_id = order_id
    confirmation_data.data.customer_id = order_details.customer_id
    confirmation_data.data.timestamp = str(datetime.now())
    services.OrderMessageService.PublishOrderConfirmation({
        'Topic': 'order-confirmations',
        'PartitionKey': order_id,
        'Request': confirmation_data
    })
```
A downstream Workflow that references OrderSource will receive this message via its Input Processor and can process the confirmation further.
For more on processors, see JavaScript Processor and Python Processor.
Service Testing
layline.io provides a facility for testing your Services before you deploy them. This saves time and effort, since you do not have to deploy and activate a whole Project with Workflows just to test a Service.
Once you have configured your Service(s), you can test them:
Within your Asset Configuration tab (1), switch to the Test tab (2) to test your Service.

Test Facility Toolbar
The toolbar provides the following options:

The Testing tab provides two major views:
- Testcase configuration: This is where you define the testcases to be executed.
- Testcase execution: This is where you can execute the testcases and see the results.
You switch between these two views by clicking on the leftmost icon in the toolbar (1).
Let's start with the Testcase configuration view.
Testcase Configuration
The testing concept is to define a set of Testcases that can be executed individually or as a batch. Each Testcase groups a number of individual tests, which in turn can be run one by one or all together.
Adding a Testcase
Click Add Testcase in the toolbar to add a new testcase:

A new Testcase is added.
It is automatically named New<Service Asset Name>Test (3) and added to the list of Testcases (2).
- Service test name (3): You can change the name of the Testcase here.
- Service test description (4): You can add a description to the Testcase here.
Test Case Setup
Basics
In this section you define the individual tests to be executed for this Testcase.
To start, click the add-test button in the toolbar:
A new test is added to the list of tests (1), and the test is opened for configuration (2).

Next we fill in the details:
- Test name (3): You can change the name of the Test here.
- Test description (4): You can add a description to the Test here.
- Service function to test (5): Select the Service function to test here. This list contains all Service functions defined in the Service Asset. Pick the one you want to test.

Once a Service function is selected, the system will automatically create a skeleton to fill in the respective parameters for the selected Service function.

Service Function Input Parameters
- Service Function Input Parameters (6): Fill in the parameters for the selected Service function.

In our example we have a function GetAlertsForSite which takes two parameters, baseurl and riskId. If you click Add member in the skeleton table, the system lets you select a parameter from the list of available parameters.
Once you have selected the parameter, the system automatically adds its name. You then enter the value for the parameter:

Service Function Evaluation Parameters
To automatically evaluate the result, you can add a script which analyzes the results.
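The exact contract of an evaluation script is not documented here. As a rough illustration only, such a script might receive the service result and decide success along these (assumed) lines:

```javascript
// Assumed shape, for illustration only: a function that receives the service
// result and returns true/false. The real test-facility contract may differ.
function evaluateResult(result) {
    // e.g. succeed when the call returned at least one alert
    return Array.isArray(result.alerts) && result.alerts.length > 0;
}

console.log(evaluateResult({ alerts: [{ id: 1 }] }));
```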
Testcase Execution
Once you have configured your Testcases, you can execute them.
There are two ways to trigger execution:

- Option 1: Select Run selected test in the toolbar (1) to execute the currently selected Testcase. Executing a test this way switches the tab to the Testcase execution view, executes the test, and shows the results.
- Option 2: Switch to the Testcase execution view by clicking the leftmost icon in the toolbar (1), select the test to execute, and then hit the play button next to the test.
Each option will take us to the Testcase execution view:

In this view you can find the Testcase (1) and the Tests (2) we have created.
If we had created additional tests for this Testcase, they would be listed here as well.
Question marks indicate that the test has not yet been executed.
We can now either execute all tests or run them individually:

- Run all Tests (1): Click this button to execute all tests.
- Run Testcase (2): Click this button to execute a Testcase with all its underlying individual tests.
- Run individual Test (3): Click the button next to a test to execute that individual test.
Once a test has been executed, the question mark will be replaced by a green check mark or a red cross depending on whether the test was successful or not.
The right-hand panel shows the results of the test execution:

In case of errors, the system will show the error message for further investigation.
Please note that this online documentation is a work in progress and is constantly being updated. Should you have questions or suggestions, please don't hesitate to contact us at support@layline.io.