Elastic S3 connector reference

The Elastic S3 connector is a connector for Amazon S3 data sources.

Availability and prerequisites

This connector is available as a connector client from the Python connectors framework. This connector client is compatible with Elastic versions 8.6.0+. To use this connector, satisfy all connector client requirements.

This connector is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.


To use this connector as a connector client, see Connector clients and frameworks.

For additional operations, see Usage.

S3 users will also need to:

  • Create an IAM identity
  • Set up AWS CLI

Create an IAM identity

Users need to create an IAM identity to use this connector as a connector client. Refer to the AWS documentation.

The policy associated with the IAM identity must have the following AWS permissions:

  • ListAllMyBuckets
  • ListBucket
  • GetBucketLocation
  • GetObject
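
To grant these, attach a small identity-based policy to the IAM identity. Below is a minimal sketch that creates such a policy with boto3; this is an illustration, not an official snippet from the connector docs. The policy name is hypothetical, the s3: action prefixes are standard IAM syntax, and any other method of creating the policy (console, CLI) works equally well.

import json
import boto3

# Policy document granting only the four permissions listed above.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:GetObject",
            ],
            "Resource": "*",  # tighten to specific bucket ARNs in production
        }
    ],
}

iam = boto3.client("iam")  # requires credentials with IAM write access
iam.create_policy(
    PolicyName="elastic-s3-connector-read",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc),
)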

Set up AWS CLI

Users need to install the official AWS CLI tool to use this connector as a connector client.

Add the following values to your AWS CLI configuration:

  • aws_access_key (specifies the AWS identity to be used)
  • aws_secret_key
  • region
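
The simplest way to add these is the CLI's interactive setup. For example (a sketch: the region and placeholder values below are illustrative; the CLI writes these values to ~/.aws/credentials and ~/.aws/config):

aws configure
# AWS Access Key ID [None]: <YOUR_ACCESS_KEY>
# AWS Secret Access Key [None]: <YOUR_SECRET_KEY>
# Default region name [None]: us-east-1
# Default output format [None]: json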


Usage

Currently the connector does not support S3-compatible vendors.


Configuration

When using the connector client workflow, these fields will use the default configuration set in the connector source code. These configurable fields will be rendered with their respective labels in the Kibana UI. Once connected, you’ll be able to update these values in Kibana.

The following configuration fields are required to set up the connector:


  • List of S3 bucket names. * will fetch data from all buckets. Examples:
      • testbucket, prodbucket
      • testbucket
      • *
  • The read_timeout for Amazon S3. Default value is 90.
  • Connection timeout for crawling S3. Default value is 90.
  • Maximum retry attempts. Default value is 5.
  • Page size for iterating bucket objects in Amazon S3. Default value is 100.
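
To see how these settings behave, here is a hypothetical plain-boto3 sketch of the equivalent client options (the framework reaches S3 via boto3/aioboto3; this is an illustration, not the connector's actual code, and the bucket name is an example):

import boto3
from botocore.config import Config

# Client options mirroring the defaults listed above.
s3 = boto3.client(
    "s3",
    config=Config(
        read_timeout=90,              # read timeout (seconds)
        connect_timeout=90,           # connection timeout (seconds)
        retries={"max_attempts": 5},  # maximum retry attempts
    ),
)

# Iterate bucket objects in pages of 100, as the connector's page size does.
paginator = s3.get_paginator("list_objects_v2")
pages = paginator.paginate(
    Bucket="testbucket",  # example bucket name
    PaginationConfig={"PageSize": 100},
)
for page in pages:
    for obj in page.get("Contents", []):
        print(obj["Key"])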

Deployment using Docker

Follow these instructions to deploy the Amazon S3 connector using Docker.

Step 1: Download sample configuration file

Download the sample configuration file. You can either download it manually or run the following command:

curl https://raw.githubusercontent.com/elastic/connectors-python/main/config.yml --output ~/connectors-python-config/config.yml

Remember to update the --output argument value if your directory name is different, or if you want to use a different config file name.
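
Note that curl will not create the target directory for you. If ~/connectors-python-config does not exist yet, create it before running the command above:

mkdir -p ~/connectors-python-config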

Step 2: Update the configuration file for your self-managed connector

Update the configuration file with the following settings to match your environment:

  • elasticsearch.host
  • elasticsearch.password
  • connector_id
  • service_type

Use s3 as the service_type value. Don’t forget to uncomment "s3" in the sources section of the YAML file.

If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:

elasticsearch:
  host: http://host.docker.internal:9200
  username: elastic
  password: <YOUR_PASSWORD>

connector_id: <CONNECTOR_ID_FROM_KIBANA>

service_type: s3

sources:
  # UNCOMMENT "s3" below to enable the Amazon S3 connector

  #mongodb: connectors.sources.mongo:MongoDataSource
  #s3: connectors.sources.s3:S3DataSource
  #dir: connectors.sources.directory:DirectoryDataSource
  #mysql: connectors.sources.mysql:MySqlDataSource
  #network_drive: connectors.sources.network_drive:NASDataSource
  #google_cloud_storage: connectors.sources.google_cloud_storage:GoogleCloudStorageDataSource
  #azure_blob_storage: connectors.sources.azure_blob_storage:AzureBlobStorageDataSource
  #postgresql: connectors.sources.postgresql:PostgreSQLDataSource
  #oracle: connectors.sources.oracle:OracleDataSource
  #mssql: connectors.sources.mssql:MSSQLDataSource

Note that the config file you downloaded might contain more entries, so you will need to manually copy/change the settings that apply to you. Normally you’ll only need to update elasticsearch.host, elasticsearch.password, connector_id and service_type to run the connector service.
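
As an optional sanity check, you can parse the edited file to catch YAML indentation mistakes before starting the service. A short sketch, assuming PyYAML is installed and the file lives at ~/connectors-python-config/config.yml:

import os
import yaml

# Load the config and confirm the service type was set correctly.
path = os.path.expanduser("~/connectors-python-config/config.yml")
with open(path) as f:
    cfg = yaml.safe_load(f)
print(cfg["service_type"])  # expect: s3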

Step 3: Run the Docker image

Run the Docker image with the Connector Service using the following command (replace <VERSION> with the connectors image tag matching your Elastic version):

docker run \
-v ~/connectors-python-config:/config \
--network "elastic" \
--tty \
--rm \
docker.elastic.co/enterprise-search/elastic-connectors:<VERSION> \
/app/bin/elastic-ingest \
-c /config/config.yml

Refer to this guide in the Python framework repository for more details.

Sync rules

  • Files bigger than 10 MB won’t be extracted.
  • Permissions are not synced.
  • Filtering rules are not available in the current version, because filtering is controlled by ingest pipelines.

Content extraction

See Content extraction.

End-to-end testing

The connector framework enables operators to run functional tests against a real data source. Refer to Connector testing for more details.

To execute a functional test for the Amazon S3 connector client, run the following command:

make ftest NAME=s3

By default, this will use a medium-sized dataset. To make the test faster, add the DATA_SIZE=small argument:

make ftest NAME=s3 DATA_SIZE=small

Known issues

There are no known issues for this connector.

See Known issues for any issues affecting all connectors.


Troubleshooting

See Troubleshooting.


Security

See Security.

Framework and source

This connector is included in the Python connectors framework.

View the source code for this connector (branch 8.8, compatible with Elastic 8.8).