Deploy Elastic Serverless Forwarder directly

For more customization options during deployment, from version 1.6.0 and above you can deploy the Elastic Serverless Forwarder directly to your AWS account without using SAR. This enables you to customize the event source settings for the inputs (i.e. triggers) one by one.

To deploy the forwarder directly, you have to:

1. Create publish-config.yaml for the publishing script
2. Run the publishing script

Create publish-config.yaml for the publishing script

To deploy the forwarder directly, you need to define a publish-config.yaml file and pass this as an argument in the publishing script.

Save the following YAML content as publish-config.yaml and edit as required before running the publishing script. You should remove any inputs or arguments you are not using.

kinesis-data-stream:
    - arn: "arn:aws:kinesis:%REGION%:%ACCOUNT%:stream/%STREAMNAME%"
      batch_size: 10
      batching_window_in_second: 0
      starting_position: TRIM_HORIZON
      starting_position_timestamp: 0
      parallelization_factor: 1
sqs:
    - arn: "arn:aws:sqs:%REGION%:%ACCOUNT%:%QUEUENAME%"
      batch_size: 10
      batching_window_in_second: 0
s3-sqs:
    - arn: "arn:aws:sqs:%REGION%:%ACCOUNT%:%QUEUENAME%"
      batch_size: 10
      batching_window_in_second: 0
cloudwatch-logs:
    - arn: "arn:aws:logs:%AWS_REGION%:%AWS_ACCOUNT_ID%:log-group:%LOG_GROUP_NAME%:*"
    - arn: "arn:aws:logs:%AWS_REGION%:%AWS_ACCOUNT_ID%:log-group:%LOG_GROUP_NAME%:log-stream:%LOG_STREAM_NAME%"
ssm-secrets:
    - "arn:aws:secretsmanager:%AWS_REGION%:%AWS_ACCOUNT_ID%:secret:%SECRET_NAME%"
kms-keys:
    - "arn:aws:kms:%AWS_REGION%:%AWS_ACCOUNT_ID%:key/%KMS_KEY_UUID%"
s3-buckets:
    - "arn:aws:s3:::%BUCKET_NAME%"
subnets:
    - "%SUBNET_ID%"
security-groups:
    - "%SECURITY_ID%"
s3-config-file: "s3://%S3_CONFIG_BUCKET_NAME%/%S3_CONFIG_OBJECT_KEY%"
continuing-queue:
    batch_size: 10
    batching_window_in_second: 0
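
Since you should remove any inputs or arguments you are not using, a trimmed-down publish-config.yaml is often much shorter than the full template. As a minimal sketch, a forwarder triggered only by Amazon S3 (via SQS event notifications), with no VPC, secrets, or KMS keys, could look like this (replace the %PLACEHOLDER% values with your own ARNs and names):

s3-sqs:
    - arn: "arn:aws:sqs:%REGION%:%ACCOUNT%:%QUEUENAME%"
      batch_size: 10
      batching_window_in_second: 0
s3-buckets:
    - "arn:aws:s3:::%BUCKET_NAME%"
s3-config-file: "s3://%S3_CONFIG_BUCKET_NAME%/%S3_CONFIG_OBJECT_KEY%"
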
Fields

kinesis-data-stream.[]

List of Amazon Kinesis Data Streams (i.e. triggers) for the forwarder, matching those defined in your config.yaml (see Create and upload config.yaml to S3 bucket).

kinesis-data-stream.[].arn

ARN of the AWS Kinesis Data Stream.

kinesis-data-stream.[].batch_size

Set this value above the default (10) if you experience ingestion delays in your output and GetRecords.IteratorAgeMilliseconds and IncomingRecords Kinesis CloudWatch metrics for the Amazon Kinesis Data Streams keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of records the forwarder will process in a single execution for the Amazon Kinesis Data Streams.

kinesis-data-stream.[].batching_window_in_second

Set this value above the default (0) if you experience ingestion delays in your output and GetRecords.IteratorAgeMilliseconds and IncomingRecords Kinesis CloudWatch metrics for the Amazon Kinesis Data Streams keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of records the forwarder will process in a single execution for the Amazon Kinesis Data Streams.

kinesis-data-stream.[].starting_position

Change this value from the default (TRIM_HORIZON) if you want to change the starting position of the records processed by the forwarder for the Amazon Kinesis Data Streams.

kinesis-data-stream.[].starting_position_timestamp

Set this value to the time from which to start reading (in Unix time seconds) if you set starting_position to "AT_TIMESTAMP".

kinesis-data-stream.[].parallelization_factor

Defines the number of forwarder functions that can run concurrently per shard (default is 1). Increase this value if you experience ingestion delays in your output and GetRecords.IteratorAgeMilliseconds and IncomingRecords Kinesis CloudWatch metrics for the Amazon Kinesis Data Streams keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of records processed concurrently for Amazon Kinesis Data Streams. For more info, refer to AWS Kinesis docs.
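
As an illustration of the starting position and tuning fields above, a Kinesis trigger entry that starts reading at a specific timestamp and processes more records per execution could look like the following sketch (the values are examples only, not recommendations; tune them against your own CloudWatch metrics):

kinesis-data-stream:
    - arn: "arn:aws:kinesis:%REGION%:%ACCOUNT%:stream/%STREAMNAME%"
      batch_size: 100
      batching_window_in_second: 30
      starting_position: AT_TIMESTAMP
      starting_position_timestamp: 1672531200   # Unix time in seconds (2023-01-01T00:00:00Z)
      parallelization_factor: 5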

sqs.[]

List of Amazon SQS message payload (i.e. triggers) for the forwarder, matching those defined in your config.yaml (see Create and upload config.yaml to S3 bucket).

sqs.[].arn

ARN of the AWS SQS queue trigger input.

sqs.[].batch_size

Set this value above the default (10) if you experience ingestion delays in your output and ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage SQS CloudWatch metrics for the Amazon SQS message payload keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of messages the forwarder will process in a single execution for the Amazon SQS message payload.

sqs.[].batching_window_in_second

Set this value above the default (0) if you experience ingestion delays in your output and ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage SQS CloudWatch metrics for the Amazon SQS message payload keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of messages the forwarder will process in a single execution for the Amazon SQS message payload.

s3-sqs.[]

List of Amazon S3 (via SQS event notifications) inputs (i.e. triggers) for the forwarder, matching those defined in your config.yaml (see Create and upload config.yaml to S3 bucket).

s3-sqs.[].arn

ARN of the AWS SQS queue receiving S3 Notifications as trigger input.

s3-sqs.[].batch_size

Set this value above the default (10) if you experience ingestion delays in your output and ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage SQS CloudWatch metrics for the Amazon S3 (via SQS event notifications) keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of messages the forwarder will process in a single execution for the Amazon S3 (via SQS event notifications).

s3-sqs.[].batching_window_in_second

Set this value above the default (0) if you experience ingestion delays in your output and ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage SQS CloudWatch metrics for the Amazon S3 (via SQS event notifications) keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of messages the forwarder will process in a single execution for the Amazon S3 (via SQS event notifications).

cloudwatch-logs.[]

List of Amazon CloudWatch Logs subscription filters (i.e. triggers) for the forwarder, matching those defined in your config.yaml (see Create and upload config.yaml to S3 bucket).

cloudwatch-logs.[].arn

ARN of the AWS CloudWatch Logs trigger input (accepts both CloudWatch Logs Log Group and CloudWatch Logs Log Stream ARNs).

ssm-secrets.[]

List of AWS SSM Secrets ARNs used in your config.yaml (if any).

kms-keys.[]

List of AWS KMS key ARNs to be used for decrypting AWS SSM Secrets, Kinesis Data Streams, or SQS queues (if any).

s3-buckets.[]

List of S3 bucket ARNs that are sources for the S3 SQS Event Notifications (if any).

subnets.[]

A list of subnet IDs for the forwarder. Along with security-groups.[], these settings define the AWS VPC the forwarder will belong to. Leave blank if you don't want the forwarder to belong to any specific AWS VPC.

security-groups.[]

List of security group IDs to attach to the forwarder. Along with subnets.[], these settings define the AWS VPC the forwarder will belong to. Leave blank if you don't want the forwarder to belong to any specific AWS VPC.
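
For example, to attach the forwarder to a specific VPC you would list the subnet and security group IDs together (the IDs below are illustrative placeholders):

subnets:
    - "subnet-0123456789abcdef0"
security-groups:
    - "sg-0123456789abcdef0"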

s3-config-file

Set this value to the location of your forwarder configuration file in S3 URL format: s3://bucket-name/config-file-name. This will populate the S3_CONFIG_FILE environment variable for the forwarder.

continuing-queue.batch_size

Set this value above the default (10) if you experience ingestion delays in your output and ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage SQS CloudWatch metrics for the Continuing queue keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of messages the forwarder will process in a single execution for the Continuing queue.

continuing-queue.batching_window_in_second

Set this value above the default (0) if you experience ingestion delays in your output and ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage SQS CloudWatch metrics for the Continuing queue keep increasing and the average execution time of the forwarder is below 14 minutes. This will increase the number of messages the forwarder will process in a single execution for the Continuing queue.
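
For example, to let the forwarder drain the continuing queue in larger batches you could raise both values (example values only; note that unlike the trigger inputs, continuing-queue is a single mapping, not a list):

continuing-queue:
    batch_size: 100
    batching_window_in_second: 10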

Run the publishing script

A bash script for publishing the Elastic Serverless Forwarder directly to your AWS account is available from the Elastic Serverless Forwarder repository.

Download the publish_lambda.sh script and follow the instructions below.
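
For example, assuming the script is available at the root of the repository's main branch (check the repository for the exact path and branch), you could download it with curl:

$ curl -L -O https://raw.githubusercontent.com/elastic/elastic-serverless-forwarder/main/publish_lambda.sh
$ chmod +x publish_lambda.sh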

Script arguments
 $ ./publish_lambda.sh
    AWS CLI (https://aws.amazon.com/cli/), SAM (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html) and Python3.9 with pip3 required
    Please, before launching the tool execute "$ pip3 install ruamel.yaml"
Usage: ./publish_lambda.sh config-path lambda-name forwarder-tag bucket-name region
    Arguments:
    config-path: full path to the publish configuration
    lambda-name: name of the lambda to be published in the account
    forwarder-tag: tag of the elastic serverless forwarder to publish
    bucket-name: bucket name where to store the zip artifact for the lambda
                 (it will be created if it doesn't exists, otherwise
                  you need already to have proper access to it)
    region: region where to publish in
Prerequisites
$ pip3 install awscli aws-sam-cli ruamel.yaml
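
Before running the script, you can optionally confirm that the prerequisites are installed and on your PATH (a quick sanity check, not part of the official instructions):

$ aws --version
$ sam --version
$ python3 --version
$ pip3 show ruamel.yaml
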
Running the script

Assuming publish-config.yaml is saved in the same directory you intend to run publish_lambda.sh from, here's an example:

$ ./publish_lambda.sh publish-config.yaml forwarder-lambda lambda-v1.6.0 s3-lambda-artifact-bucket-name eu-central-1
Updating to a new version via script

You can update the version of a published Elastic Serverless Forwarder without changing its configuration by running the publishing script again and passing a new forwarder-tag:

$ ./publish_lambda.sh publish-config.yaml forwarder-lambda lambda-v1.7.0 s3-lambda-artifact-bucket-name eu-central-1

The above examples show the forwarder being updated from lambda-v1.6.0 to lambda-v1.7.0.

Changing configuration via script

If you want to change the configuration of a published Elastic Serverless Forwarder without changing its version, you can update the publish-config.yaml and run the script again using the same forwarder-tag:

$ ./publish_lambda.sh publish-config.yaml forwarder-lambda lambda-v1.6.0 s3-lambda-artifact-bucket-name eu-central-1

The above example shows an existing lambda-v1.6.0 deployment having its configuration updated without changing the version.

Using the script for multiple deployments

If you want to use the publishing script to deploy the forwarder with different configurations, create two publish-config.yaml files with unique names and run the publishing script twice, passing the correct config-path and lambda-name for each:

$ ./publish_lambda.sh publish-config-for-first-lambda.yaml first-lambda lambda-v1.6.0 s3-lambda-artifact-bucket-name eu-central-1

$ ./publish_lambda.sh publish-config-for-second-lambda.yaml second-lambda lambda-v1.6.0 s3-lambda-artifact-bucket-name eu-central-1

The above example deploys two separate forwarders, each with its own configuration, i.e. publish-config-for-first-lambda.yaml with first-lambda and publish-config-for-second-lambda.yaml with second-lambda.