Functionbeat runs as a function in your serverless environment.
Before deploying Functionbeat, you need to configure one or more functions and specify details about the services that will trigger the functions.
You configure the functions in the
functionbeat.yml configuration file.
When you’re done, you can deploy the functions
to your serverless environment.
The following example configures two functions:
The cloudwatch function collects events from CloudWatch Logs. The sqs function
collects messages from Amazon Simple Queue Service (SQS). Both functions forward
the events to Elasticsearch.
functionbeat.provider.aws.endpoint: "s3.amazonaws.com"
functionbeat.provider.aws.deploy_bucket: "functionbeat-deploy"
functionbeat.provider.aws.functions:
  - name: cloudwatch
    enabled: true
    type: cloudwatch_logs
    description: "lambda function for cloudwatch logs"
    triggers:
      - log_group_name: /aws/lambda/my-lambda-function
        #filter_pattern: mylog_
  - name: sqs
    enabled: true
    type: sqs
    description: "lambda function for SQS events"
    triggers:
      - event_source_arn: arn:aws:sqs:us-east-1:123456789012:myevents
cloud.id: "MyESDeployment:SomeLongString=="
cloud.auth: "elastic:SomeLongString"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
You can specify the following options to configure the functions that you want to deploy.
If you change the configuration after deploying the function, use the
update command to update your deployment.
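For example, after editing functionbeat.yml, redeploying a single function might look like this (the function name cloudwatch comes from the configuration example above; substitute your own function names):

```shell
# Deploy the function for the first time
./functionbeat -v -e -c functionbeat.yml deploy cloudwatch

# After changing functionbeat.yml, push the new configuration
./functionbeat -v -e -c functionbeat.yml update cloudwatch
```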
AWS endpoint to use in the URL template to load functions.
A unique name for the S3 bucket that the Lambda artifact will be uploaded to.
A unique name for the Lambda function. This is the name of the function as it will appear in the Lambda console on AWS.
The type of service to monitor. For this release, the supported types are:
cloudwatch_logs: Collects events from CloudWatch Logs.
sqs: Collects data from Amazon Simple Queue Service (SQS).
kinesis: Collects data from a Kinesis stream.
A description of the function. This description is useful when you are running multiple functions and need more context about how each function is used.
A list of triggers that will cause the function to execute. The list of valid
triggers depends on the function type.
For cloudwatch_logs, specify a list of log groups. Because the AWS limit is one subscription filter per CloudWatch log group, the log groups specified here must have no other subscription filters, or deployment will fail. For more information, see Deployment to AWS fails with "resource limit exceeded".
For sqs and kinesis, specify a list of Amazon Resource Names (ARNs).
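For instance, a cloudwatch_logs function can subscribe to several log groups; a sketch with placeholder group names:

```yaml
functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    triggers:
      - log_group_name: /aws/lambda/my-lambda-function   # placeholder log group
      - log_group_name: /aws/lambda/another-function     # each group may have only one subscription filter
```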
A regular expression that matches the events you want to collect. Setting this option may reduce execution costs because the function only executes if there is data that matches the pattern.
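Building on the commented filter_pattern line in the example configuration, a trigger that only forwards matching events might look like this (the pattern itself is illustrative):

```yaml
triggers:
  - log_group_name: /aws/lambda/my-lambda-function
    filter_pattern: mylog_   # only events matching this pattern invoke the function
```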
The reserved number of instances for the function. Setting this option may reduce execution costs by limiting the number of functions that can execute in your serverless environment. The default is unreserved.
The maximum amount of memory to allocate for this function. Specify a value that is a factor of 64. There is a hard limit of 3008 MiB for each function. The default is 128 MiB.
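Assuming the option keys are concurrency and memory_size (the key names are not shown in the excerpt above), the two limits described here might be set per function like this:

```yaml
functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    concurrency: 5        # reserve at most 5 concurrent instances (assumed key name)
    memory_size: 256MiB   # a factor of 64, up to the 3008 MiB limit (assumed key name)
```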
The custom execution role to use for the deployed function.
Make sure the custom role has the permissions required to run the function. For more information, see IAM permissions required for deployment.
If role is not specified, the function uses the default role and policy
created during deployment.
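A custom execution role is referenced by its ARN; the account ID and role name below are placeholders:

```yaml
functionbeat.provider.aws.functions:
  - name: cloudwatch
    type: cloudwatch_logs
    role: arn:aws:iam::123456789012:role/MyCustomRole   # placeholder ARN
```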
Specifies additional settings required to connect to private resources in an Amazon Virtual Private Cloud (VPC). For example:
virtual_private_cloud:
  security_group_ids:
    - mySecurityGroup
    - anotherSecurityGroup
  subnet_ids:
    - myUniqueID
The dead letter queue to use for messages that can’t be processed successfully. Set this option to an ARN that points to an SQS queue.
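Assuming the setting is dead_letter_config.target_arn (the key name is not shown in the excerpt above), pointing a function at an SQS dead letter queue might look like this; the queue ARN is a placeholder:

```yaml
functionbeat.provider.aws.functions:
  - name: sqs
    type: sqs
    dead_letter_config:
      target_arn: arn:aws:sqs:us-east-1:123456789012:my-dlq   # placeholder ARN; assumed key name
```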
The number of events to read from a Kinesis stream. The minimum value is 100 and the maximum is 10000. The default is 100.
The starting position to read from a Kinesis stream. Valid values are
trim_horizon and latest. The default is trim_horizon.
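Putting the two Kinesis options together, a sketch of a kinesis function might look like this (the stream ARN is a placeholder, and batch_size and starting_position are assumed key names):

```yaml
functionbeat.provider.aws.functions:
  - name: kinesis
    type: kinesis
    triggers:
      - event_source_arn: arn:aws:kinesis:us-east-1:123456789012:stream/mystream  # placeholder ARN
        batch_size: 100                  # between 100 and 10000
        starting_position: trim_horizon  # or latest
```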