Source S3
Purpose
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Besides Amazon S3, various other object storage providers now grant S3-compatible access to their storage solutions as well (e.g. Google Cloud Storage, IONOS, et al.).
This UI helps to define the specific bucket and folder source of an S3 connected endpoint.
This Asset can be used by:
| Asset type | Link |
|---|---|
| Input Processors | Stream Input Processor |
Prerequisite
You need an AWS Connection Asset, which this Source references for endpoint and authentication details.
Configuration
Name & Description
Name: Name of the Asset. Spaces are not allowed in the name.
Description: Enter a description.
The Asset Usage box shows how many times this Asset is used and which parts are referencing it. Click the box to expand it, then click an entry to navigate to the referencing part, if any.
Required roles
If you are deploying to a Cluster which is running (a) Reactive Engine Nodes that have (b) specific Roles configured, you can restrict the use of this Asset to those Nodes with matching Roles.
If you want this restriction, then enter the names of the Required Roles here. Otherwise, leave empty to match all Nodes (no restriction).
Throttling & Failure Handling

Throttling
These parameters control the maximum number of new stream creations per given time period.
Max. new streams — Maximum number of streams this source is allowed to open or process within the given time period.
Per — Time interval unit for the Max. new streams value.
Configuration values for these parameters depend on your use case. If your data arrives in low-frequency cycles, the defaults are usually sufficient. In scenarios where many objects arrive within short time frames, it is recommended to review and adapt the default values accordingly.
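The throttling behavior described above can be thought of as a sliding-window limiter: stream creations within the configured period are counted, and further creations are deferred once the maximum is reached. The following sketch is illustrative only; the class and method names are hypothetical and not part of the layline.io API.

```javascript
// Hypothetical sliding-window limiter mimicking "Max. new streams / Per".
class StreamThrottle {
  constructor(maxNewStreams, periodMs) {
    this.maxNewStreams = maxNewStreams;
    this.periodMs = periodMs;
    this.startTimes = []; // timestamps of recent stream creations
  }

  // Returns true if a new stream may be opened at time `nowMs`.
  tryAcquire(nowMs) {
    // Discard creations that fall outside the current window.
    this.startTimes = this.startTimes.filter(t => nowMs - t < this.periodMs);
    if (this.startTimes.length >= this.maxNewStreams) return false;
    this.startTimes.push(nowMs);
    return true;
  }
}
```

With `maxNewStreams = 2` and a one-second period, the third attempt within the same second is rejected, while an attempt after the window has passed is allowed again.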
Backoff Failure Handling
These parameters define backoff timing intervals in case of failures. The system will progressively throttle down the processing cycle based on the configured minimum and maximum failure backoff boundaries.
Min. failure backoff — The minimum backoff time before retrying after a failure.
Unit — Time unit for the minimum backoff value.
Max. failure backoff — The maximum backoff time before retrying after a failure.
Unit — Time unit for the maximum backoff value.
Based on these values, the next processing attempt is delayed: starting at the minimum failure backoff interval, the wait time increases step by step up to the maximum failure backoff.
Reset after number of successful streams — Resets the failure backoff throttling after this many successful stream processing attempts.
Reset after time without failure streams — Resets the failure backoff throttling after this amount of time passes without any failures.
Unit — Time unit for the time-based backoff reset.
Whichever comes first — the stream count or the time threshold — resets the failure throttling after the system returns to successful stream processing.
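The progression from minimum to maximum backoff can be sketched as follows. A doubling step is assumed here for illustration; the actual step function layline.io uses is an internal detail, and the function name is hypothetical.

```javascript
// Illustrative backoff progression: starts at the minimum failure backoff,
// grows with each consecutive failure (doubling assumed), and is capped
// at the maximum failure backoff.
function nextBackoffMs(failureCount, minMs, maxMs) {
  if (failureCount <= 0) return 0; // no failures: no delay
  const delay = minMs * Math.pow(2, failureCount - 1);
  return Math.min(delay, maxMs); // never exceed the configured maximum
}
```

For example, with a minimum of 1 second and a maximum of 60 seconds, the delays would run 1 s, 2 s, 4 s, … until clamped at 60 s.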
Polling & Processing

This source does not reflect a stream, but an object-based storage source, which does not signal the existence of new objects to observers. You therefore need to define how often the source should be looked up (polled) for new objects to process.
You can choose between Fixed rate polling and Cron tab style polling.
Fixed rate
Use Fixed rate if you want to poll at constant and frequent intervals.
Polling interval [sec] — The interval in seconds at which the configured source is queried for new objects.
Cron tab
Use Cron tab if you want to poll at specific scheduled times. The Cron tab expression follows the cron tab style convention. Learn more about crontab syntax at the Quartz Scheduler documentation.
You can also use the built-in Cron expression editor — click the calendar symbol on the right-hand side:

Configure your expression using the editor. The Next trigger times display at the top shows when the next triggers will fire. Press OK to store the values.
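For reference, a Quartz-style cron expression has six mandatory fields (seconds, minutes, hours, day-of-month, month, day-of-week) plus an optional year. A few common examples (illustrative, not specific to layline.io):

```text
# sec min hour day-of-month month day-of-week
0 0/15 * * * ?        # every 15 minutes
0 0 2 * * ?           # every day at 02:00
0 30 8 ? * MON-FRI    # weekdays at 08:30
```

Note that Quartz requires a `?` in either the day-of-month or the day-of-week field.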
Polling timeout
Polling timeout [sec] — The time in seconds to wait before a polling request is considered failed. Set this high enough to account for endpoint responsiveness under normal operation.
Stable time
Stable time [sec] — The number of seconds that file statistics must remain unchanged before the file is considered stable for processing. Configuring this value enables stability checks before processing.
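The stability check can be pictured as follows: each time the source is polled, the object's statistics are compared with the previous observation, and the object is only handed over for processing once they have stayed unchanged for the stable time. This sketch is hypothetical and only illustrates the concept; the actual check is internal to layline.io.

```javascript
// Hypothetical stability tracker: an object is "stable" once its size and
// last-modified timestamp have not changed for at least `stableMs`.
class StabilityTracker {
  constructor(stableMs) {
    this.stableMs = stableMs;
    this.seen = new Map(); // key -> { size, lastModified, unchangedSince }
  }

  // Record the stats for `key` observed at `nowMs`;
  // returns true once the object is considered stable.
  observe(key, size, lastModified, nowMs) {
    const prev = this.seen.get(key);
    if (!prev || prev.size !== size || prev.lastModified !== lastModified) {
      // Stats changed (or first sighting): restart the stability clock.
      this.seen.set(key, { size, lastModified, unchangedSince: nowMs });
      return false;
    }
    return nowMs - prev.unchangedSince >= this.stableMs;
  }
}
```

This prevents picking up objects that are still being written to the bucket.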
Ordering
When listing objects from the source for processing, you can define the order in which they are processed:
- Alphabetically, ascending
- Alphabetically, descending
- Last modified, ascending
- Last modified, descending
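The four ordering options above correspond to simple comparators over the object name and the last-modified timestamp. The following sketch is illustrative; the mode names are hypothetical, not layline.io configuration values.

```javascript
// Hypothetical comparators mirroring the four ordering options.
const comparators = {
  'name-asc':      (a, b) => a.key.localeCompare(b.key),
  'name-desc':     (a, b) => b.key.localeCompare(a.key),
  'modified-asc':  (a, b) => a.lastModified - b.lastModified,
  'modified-desc': (a, b) => b.lastModified - a.lastModified,
};

// Returns a new array of objects sorted according to `mode`.
function orderObjects(objects, mode) {
  return [...objects].sort(comparators[mode]);
}
```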
Reprocessing mode
The Reprocessing mode setting controls how layline.io's Access Coordinator handles previously processed sources that are re-ingested.

- Manual access coordinator reset — Any source element processed and stored in layline.io's history requires a manual reset in the Sources Coordinator before reprocessing occurs (default mode).
- Automatic access coordinator reset — Allows automatic reprocessing of already processed and re-ingested sources as soon as the respective input source has been moved into the configured done or error directory.
- When input changed — Behaves like Manual access coordinator reset, but also checks whether the source has potentially changed — i.e., the name is identical but the content differs. If the content has changed, reprocessing starts without manual intervention.
Wait for processing clearance
When Wait for processing clearance is activated, new input sources remain unprocessed in the input directory until either:
- A manual clearance is given through Operations, or
- A JavaScript processor executes `AccessCoordinator.giveClearance(source, stream, timeout?)`
S3 Connection
Select the previously configured AWS Connection to use for this Source.
S3 Bucket
S3 bucket name (1): Once you have picked an S3 Connection above, the system tries to test the connection and list the bucket names it can find. These are then available for selection here. The number of buckets found is shown at (5).
Prefix (2): If you pick a valid bucket (1), the available prefixes (folders) become selectable here. The number of prefixes found for the given bucket is shown at (6).
Use path style bucket access (3): The S3 API allows accessing objects via the legacy "path style" or the "virtual hosted style". Check this box if you want to access objects via legacy path-style access. You can read more about this in the AWS S3 documentation on virtual hosting of buckets.
Include sub folders (4): Check this box if you want sub-folders to be included when querying the source. This means that all objects from sub-folders within a bucket/prefix combination will be considered for processing.
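To make the path-style option concrete, the two addressing styles differ only in where the bucket name appears in the URL. The sketch below is illustrative; the endpoint shown is the AWS default, and S3-compatible providers use their own hostnames.

```javascript
// Illustrative: the same bucket/key addressed in both S3 styles.
function objectUrl(bucket, key, { pathStyle = false, endpoint = 's3.amazonaws.com' } = {}) {
  return pathStyle
    ? `https://${endpoint}/${bucket}/${key}`   // legacy path-style
    : `https://${bucket}.${endpoint}/${key}`;  // virtual-hosted style
}
```

Virtual-hosted style is the current default for AWS; path-style access remains useful for some S3-compatible endpoints that do not support per-bucket hostnames.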
While you are entering and changing S3 bucket parameters, layline.io frequently tries to connect to the endpoint and retrieve bucket and prefix information. The status of these attempts is displayed at the bottom of the group box.
In case of error, you can hover the mouse over the red output and view what the problem is:
This usually helps to resolve the issue.
Related Topics
Internal
External
Potential problems
Please note that the creation of the online documentation is work in progress and is constantly being updated. Should you have questions or suggestions, please don't hesitate to contact us at support@layline.io.