Cassandra Service

Purpose

Define a service to interface with Cassandra or a Cassandra-compatible store (e.g. AWS Keyspaces).

Prerequisites

None

Configuration

Name & Description

  • Name : Name of the Asset. Spaces are not allowed in the name.

  • Description : Enter a description.

The Asset Usage box shows how many times this Asset is used and which parts reference it. Click the box to expand the list of references, then click an entry to jump to it.

Required roles

If you are deploying to a Cluster which is running Reactive Engine Nodes with specific Roles configured, you can restrict use of this Asset to Nodes with matching Roles. To apply this restriction, enter the names of the Required Roles here. Otherwise, leave the field empty to match all Nodes (no restriction).

Contact Points

Enter the list of cluster seed nodes. It should contain IPs or hostnames of Cassandra cluster nodes, optionally with a port if it differs from the default Cassandra port (9042), e.g. cass1.example.com,10.0.0.12:9043.

Data Center, Keyspace, Parallelism

  • Local data center : Name of the local Cassandra datacenter. You can find the datacenter name in the cassandra-rackdc.properties file on your Cassandra nodes. If you are using AWS Keyspaces, enter the AWS Region you are connecting to instead. For example, if you are connecting to cassandra.us-east-2.amazonaws.com, set the local data center to us-east-2. For all available AWS Regions, see Service endpoints for Amazon Keyspaces.

  • Keyspace : Name of the Keyspace.

  • Parallelism : This is a performance tuning parameter. Enter a number which defines how many requests can be performed in parallel. Leave empty for no parallelism.

Service Functions

Services are accessed from other Assets via invocations of Functions. This is where you define such functions. In the context of Cassandra, a Service Function encapsulates any valid DML (data manipulation) or even DDL (data definition) statement. Typically, you will be using INSERT, SELECT and UPDATE CQL-statements here.

Let's assume we only want to read the customer data Customer in our example. This requires its own Service Function.
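For illustration, such a read-only function could be backed by a single CQL statement along these lines (table and column names are illustrative, and :Id is a bind-variable as explained below):

```sql
SELECT id, name, address
FROM customer
WHERE id = :Id;
```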

Create Service Function

First create a new Function (1):

Next fill out the details:

  • Function name (1): The name of the function. Must not contain whitespace.

  • Function description : Something that describes the function (optional).

  • CQL Statement (2): The actual CQL statement to execute against the Cassandra/Keyspaces data source. Please note the use of the :Id bind-variable in the example above. The variables you can use here must have been defined in the Data Dictionary and assigned via the Parameter type. See the next section to learn how to do this.

  • Parameter type (3): Reference to a data dictionary type which you must have defined below. All members of this type can be used as bind-variables in the CQL statement.

  • Result type (4): Reference to a data dictionary type which you must have defined below. All members of this type can be used as result variables in the CQL statement. Note that this can be the same type as used for the Parameter type. In our example they share the same variables.

  • Mappings (5): Define how the results from the CQL statement map to your Result Type data structure. On the left, enter (assisted) the bind-variable names to which members of the Result Type should be mapped. Member names are always prefixed with result. followed by the member name. On the right-hand side, enter the original field names used in your CQL statement.
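As a sketch, a mapping for the Customer example could look like this (member and column names are illustrative; the Result Type member is on the left, the CQL field name on the right):

```
result.Name    →  name
result.Address →  address
```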

Typesafe Configuration

Access to Cassandra and Keyspaces data sources relies on the DataStax driver. This driver supports a wide range of configuration options, which you can set here. For the available options, please refer to the DataStax driver documentation.

Example:

HOCON format

Please note that the notation follows HOCON format.
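A minimal sketch of such a configuration, using option names from the DataStax Java driver's reference configuration (the values shown are illustrative):

```hocon
datastax-java-driver {
  basic.request {
    timeout     = 5 seconds
    consistency = LOCAL_QUORUM
  }
  advanced.reconnect-on-init = true
}
```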

Use of macros

As you can see from the example above, you can use Macros within the Typesafe Configuration.

Data Dictionary

The Data Dictionary allows you to define complex data structures which can be mapped onto external data types and vice versa. This is necessary whenever an asset needs to exchange structured data with an external system — for example, when reading from or writing to a database, an HTTP API, a message queue, or any other format that carries typed fields.

Rather than hard-coding external field names and types into your Workflow, you define your own internal data types here. These internal types are then mapped to the external system's fields at the asset level. This means your Workflow scripts work with consistent, self-documenting data structures regardless of which external system the data came from.

When you need it

Whenever you configure an asset that exchanges structured data — a JDBC Service, a DynamoDB Service, an HTTP endpoint, an MQ message, a database Resource — you use the Data Dictionary to declare the types that represent:

  • Request parameters — the data your Workflow sends to the external system
  • Result data — the data the external system returns to your Workflow
  • Intermediate structures — types that hold data during a transformation

Entity Types

The Data Dictionary is organized as a tree of typed entities. The available entity types are:

  • Namespace : Groups related types. Optional. If you reuse a namespace name that already exists in the Project, the two namespaces merge.

  • Sequence : An ordered list of typed members. Members are accessed by name, e.g. MyNamespace.Customer.Name.

  • Enumeration : A fixed set of named integer constants.

  • Choice : A type that holds exactly one of several possible member types.

  • Array : A sequence of elements of a single contained type.
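Put together, a small dictionary for the Customer example on this page could be structured like this (a sketch; names are illustrative):

```
MyNamespace          (Namespace)
└── Customer         (Sequence)
    ├── Id           (member)
    ├── Name         (member)
    └── Address      (member)
```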

Defining Types — Step by Step

The following walkthrough shows how to build a data structure using the Data Dictionary editor. The example assumes a SQL customer table with columns id, name, and address — but the same pattern applies whenever you need to declare types for any asset.

1. Declare a new type

Click Declare Root Type in the toolbar to add a top-level entity.

Declare root type

2. Declare a namespace (optional)

Namespaces organize related types. To add one, right-click an existing node and select Add Sibling, then set the element type to Namespace.

Declare namespace

  • Name — The name of the namespace. If a namespace with this name already exists elsewhere in the Project, their contents merge automatically. Otherwise the name must be unique and may not contain spaces.

  • Type — Pick the entity type. For a namespace, select Namespace.

  • Description — Optional free-text description.

3. Declare a Sequence under the namespace

Right-click the namespace and choose Add Child to add a child element.

Add child to namespace

Click the arrow next to the namespace name and select Add child. Then fill in the element details:

Declare sequence

  • Name — The name of the element, e.g. Customer.

  • Type — Select Sequence as the element type. You will add individual fields (members) in the next step.

  • Extendable Sequence — When checked, layline.io can dynamically extend the sequence's member list if incoming data contains fields that are not explicitly defined. Leave unchecked if all fields are known in advance.

4. Add members to the Sequence

With the Sequence selected, click Add Child to add individual fields:

Add sequence members

Each member maps to a column in the external data source. You can reference any member by its full path — for example, MyNamespace.Customer.Name — from your Workflow scripts.

Common Entity Fields

These fields are available on all entity types:

  • Name : Unique identifier within the namespace. Reusing a namespace name from another part of the Project merges the two.

  • Type : The entity kind: Namespace, Sequence, Enumeration, Choice, or Array.

  • Description : Optional free-text description.

  • Extendable Sequence : (Sequence only) Allows the member list to be extended dynamically at runtime.

  • Members : (Sequence) Ordered list of typed fields. Click Add Child to add each one.

  • Elements : (Enumeration) Named integer constants making up the enumeration.

Advanced Features

Inheritance and Override
Entities inherited from a parent format or resource appear in the tree in a distinct inherited style. These are read-only unless overridden. Click Reset to Parent on an overridden entity to restore the inherited definition.

Copy and Paste
Use the toolbar buttons to copy a complete entity subtree and paste it elsewhere in the tree. All members and nested entities travel with it.

Filter and Sort
Use the Filter field to search entities by name. The sort buttons order nodes ascending or descending alphabetically.

See Also

Example: Using the Cassandra Service

The Cassandra Service can be used from within a JavaScript Asset. In our example we have a simple Workflow which reads a file with customer-related data (1), then in a next step (2) reads the corresponding customer data from a Cassandra source, and simply outputs this data to the log. The sole purpose of this Workflow is to demonstrate how to use the Service.

In the middle of the Workflow we find a JavaScript Processor by the name of “EnrichCustomer”. This Processor reads additional customer information from a Cassandra source using the Cassandra Service.

How is it configured?

To use the Cassandra Service in the JavaScript Processor, we first have to assign the Service within the JavaScript Processor like so:

  • Physical Service (1): The Cassandra Service, which we have configured above.

  • Logical Service Name (2): The name by which we want to use the Service within JavaScript. This can be the exact same name as the Service or any name you choose. Must not contain whitespace.

Access the Service from within a Script Processor

Now let's use the service within a script processor:

Reading from Cassandra Source
let cassandraData = null; // will receive a message type
let customer_id = 1234;

try {
    // Invoke the service function.
    // Service access is defined as synchronous, therefore no promise syntax here.
    //   services:           fixed internal term to access linked services
    //   MyCassandraService: the logical name we have given to the Service
    //   SelectCustomerById: service function to read the customer data for the given customer_id
    cassandraData = services.MyCassandraService.SelectCustomerById(
        { Id: customer_id }
    );
} catch (error) {
    // handle error
}

// Output the customer data to the processor log
if (cassandraData && cassandraData.data.length > 0) {
    processor.logInfo('Name: ' + cassandraData.data[0].Name);
    processor.logInfo('Address: ' + cassandraData.data[0].Address);
} else {
    processor.logInfo('No customer data found for customer ID ' + customer_id);
}
Note: Service functions return a Message

Note how the Service function returns a Message as a result type.

Since CQL queries always return their rows as an array, you can find the results in message.data as an array. If we expect only one row as a result, we can test for it with cassandraData.data.length > 0 and access the first row with cassandraData.data[0].
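As a standalone sketch (with a hypothetical, hard-coded Message object in place of a real service call), this result-handling pattern looks like the following:

```javascript
// Hypothetical stand-in for the Message a service function returns:
// query results always arrive as an array of rows under .data.
const cassandraData = { data: [{ Name: 'John Doe', Address: 'Main Street' }] };

// Return the first row, or null if the query matched nothing.
function firstRowOrNull(message) {
    return message && Array.isArray(message.data) && message.data.length > 0
        ? message.data[0]
        : null;
}

const customer = firstRowOrNull(cassandraData);
if (customer) {
    // In a real Script Processor this would be processor.logInfo(...).
    console.log('Name: ' + customer.Name);
} else {
    console.log('No customer data found');
}
```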

Insert/Update to Cassandra

Let's assume we had also defined a function WriteCustomerData which inserts a new customer:

insert into customer (id, name, address)
values (:Id, :Name, :Address);

We could then invoke this function and pass values to it like so:

try {
    services.MyCassandraService.WriteCustomerData(
        {
            Id: 1235,
            Name: 'John Doe',
            Address: 'Main Street',
        }
    );
} catch (error) {
    // handle error
}

Service Testing

layline.io provides a test facility for testing your Services before you deploy them. In this way, you save time and effort by testing your Services without having to deploy and activate a whole Project with Workflows.

Once you have configured your Service(s), you can test them: Within your Asset Configuration tab (1), switch to the Test tab (2) to test your Service.

Test Facility Toolbar

The toolbar provides the following options:

The Testing tab provides two major views:

  1. Testcase configuration: This is where you define the testcases to be executed.
  2. Testcase execution: This is where you can execute the testcases and see the results.

You switch between these two views by clicking on the leftmost icon in the toolbar (1).

Let's start with the Testcase configuration view.

Testcase Configuration

The concept of Testing is to define a set of Testcases which can be executed in a batch or individually. For this purpose, you can define multiple Testcases and configure them independently. Each Testcase groups a number of individual tests which can be run one by one or as a batch.

Adding a Testcase

Click Add Testcase in the toolbar to add a new testcase:

A new Testcase is added. It is automatically named New<Service Asset Name>Test (3) and added to the list of Testcases (2).

  • Service test name (3): You can change the name of the Testcase here.
  • Service test description (4): You can add a description to the Testcase here.

Test Case Setup

Basics

In this section you define the individual tests to be executed for this Testcase.

To start, click Add Test in the toolbar:

A new test is added to the list of tests (1), and the test is opened for configuration (2).

Next we fill in the details:

  • Test name (3): You can change the name of the Test here.

  • Test description (4): You can add a description to the Test here.

  • Service function to test (5): Select the Service function to test here.

    This list contains all Service functions which are defined in the Service Asset. Pick the one you want to test.

    Once a Service function is selected, the system will automatically create a skeleton to fill in the respective parameters for the selected Service function.

Service Function Input Parameters
  • Service Function Input Parameters (6): Fill in the respective parameters for the selected Service function.

    In our example we have a function GetAlertsForSite which takes two parameters baseurl and riskId. If you click Add member in the skeleton table, the system lets you select the respective parameter from the list of available parameters:

    Once you have selected the parameter, the system will automatically add the respective parameter name. You then add the respective value for the parameter:

Service Function Evaluation Parameters

To automatically evaluate the result, you can add a script which analyzes the results.

Testcase Execution

Once you have configured your Testcases, you can execute them.

There are two ways to trigger execution:

  • Option 1: Select Run selected test in the toolbar (1) to execute the currently selected Testcase.

    Executing a test this way will switch the tab to the Testcase execution view, execute the test and show the results.

  • Option 2: Switch to the Testcase execution view by clicking on the leftmost icon in the toolbar (1), select the test to execute, and then hit the play button next to the test.

Each option will take us to the Testcase execution view:

In this view you can find the Testcase (1) and the Tests (2) we have created. If we had created additional tests for this Testcase, they would be listed here as well.

Question marks indicate that the test has not yet been executed.

We can now either execute all tests, or run them individually:

  • Run all Tests (1): Click this button to execute all tests.

  • Run Testcase (2): Click this button to run a Testcase with all its underlying individual tests.

  • Run individual Test (3): Click this button next to a test to execute this individual test.

Once a test has been executed, the question mark will be replaced by a green check mark or a red cross depending on whether the test was successful or not.

The right-hand panel will show the results of the respective test execution:

In case of errors, the system will show the error message for further investigation.


Can't find what you are looking for?

Please note that the creation of the online documentation is work in progress and it is constantly being updated. Should you have questions or suggestions, please don't hesitate to contact us at support@layline.io.