Connecting

This page contains the information you need to connect the Client with Elasticsearch.

Connecting to Elastic Cloud

Elastic Cloud is the easiest way to get started with Elasticsearch. When connecting to Elastic Cloud with the Python Elasticsearch client, you should always use the cloud_id parameter. You can find this value on the "Manage Deployment" page after you’ve created a cluster (look in the top-left if you’re in Kibana).

We recommend using a Cloud ID whenever possible because your client will be automatically configured for optimal use with Elastic Cloud, including HTTPS and HTTP compression.

from elasticsearch import Elasticsearch

# Password for the 'elastic' user generated by Elasticsearch
ELASTIC_PASSWORD = "<password>"

# Found in the 'Manage Deployment' page
CLOUD_ID = "deployment-name:dXMtZWFzdDQuZ2Nw..."

# Create the client instance
client = Elasticsearch(
    cloud_id=CLOUD_ID,
    basic_auth=("elastic", ELASTIC_PASSWORD)
)

# Successful response!
client.info()
# {'name': 'instance-0000000000', 'cluster_name': ...}

Connecting to a self-managed cluster

By default, Elasticsearch starts with security features like authentication and TLS enabled. To connect to the Elasticsearch cluster, you’ll need to configure the Python Elasticsearch client to use HTTPS with the generated CA certificate in order to make requests successfully.

If you’re just getting started with Elasticsearch, we recommend reading the documentation on configuring and starting Elasticsearch to ensure your cluster is running as expected.

When you start Elasticsearch for the first time, you’ll see a distinct block like the one below in the output from Elasticsearch (you may have to scroll up if it’s been a while):

----------------------------------------------------------------
-> Elasticsearch security features have been automatically configured!
-> Authentication is enabled and cluster connections are encrypted.

->  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  lhQpLELkjkrawaBoaz0Q

->  HTTP CA certificate SHA-256 fingerprint:
  a52dd93511e8c6045e21f16654b77c9ee0f34aea26d9f40320b531c474676228
...
----------------------------------------------------------------

Note down the elastic user password and HTTP CA fingerprint for the next sections. In the examples below they will be stored in the variables ELASTIC_PASSWORD and CERT_FINGERPRINT respectively.

Depending on the circumstances, there are two options for verifying the HTTPS connection: either verifying with the CA certificate itself or via the HTTP CA certificate fingerprint.

Verifying HTTPS with CA certificates

Using the ca_certs option is the default way the Python Elasticsearch client verifies an HTTPS connection.

The generated root CA certificate can be found in the certs directory in your Elasticsearch config location ($ES_CONF_PATH/certs/http_ca.crt). If you’re running Elasticsearch in Docker there is additional documentation for retrieving the CA certificate.

Once you have the http_ca.crt file somewhere accessible, pass its path to the client via ca_certs:

from elasticsearch import Elasticsearch

# Password for the 'elastic' user generated by Elasticsearch
ELASTIC_PASSWORD = "<password>"

# Create the client instance
client = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("elastic", ELASTIC_PASSWORD)
)

# Successful response!
client.info()
# {'name': 'instance-0000000000', 'cluster_name': ...}

If you don’t specify ca_certs or ssl_assert_fingerprint, the certifi package will be used for ca_certs by default, if available.
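
For example, if your cluster presents a certificate signed by a publicly trusted CA rather than the self-generated one, a sketch like the following can work without ca_certs because the certifi bundle is used for verification (the URL here is hypothetical):

from elasticsearch import Elasticsearch

# Hypothetical endpoint whose certificate chains to a publicly
# trusted CA; no ca_certs needed because certifi's bundle is used
client = Elasticsearch(
    "https://my-cluster.example.com:9243",
    basic_auth=("elastic", "<password>")
)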

Verifying HTTPS with certificate fingerprints (Python 3.10 or later)

This method requires Python 3.10 or later and isn’t available when using the aiohttp HTTP client library, so it can’t be used with AsyncElasticsearch.

This method of verifying the HTTPS connection takes advantage of the certificate fingerprint value noted down earlier. Take this SHA-256 fingerprint value and pass it to the Python Elasticsearch client via ssl_assert_fingerprint:

from elasticsearch import Elasticsearch

# Fingerprint from the Elasticsearch startup output or from the
# openssl script shown below. Colons and uppercase/lowercase don't
# matter when using the 'ssl_assert_fingerprint' parameter
CERT_FINGERPRINT = "A5:2D:D9:35:11:E8:C6:04:5E:21:F1:66:54:B7:7C:9E:E0:F3:4A:EA:26:D9:F4:03:20:B5:31:C4:74:67:62:28"

# Password for the 'elastic' user generated by Elasticsearch
ELASTIC_PASSWORD = "<password>"

client = Elasticsearch(
    "https://localhost:9200",
    ssl_assert_fingerprint=CERT_FINGERPRINT,
    basic_auth=("elastic", ELASTIC_PASSWORD)
)

# Successful response!
client.info()
# {'name': 'instance-0000000000', 'cluster_name': ...}

The certificate fingerprint can be calculated using openssl x509 with the certificate file:

openssl x509 -fingerprint -sha256 -noout -in /path/to/http_ca.crt

If you don’t have access to the generated CA file from Elasticsearch you can use the following script to output the root CA fingerprint of the Elasticsearch instance with openssl s_client:

# Replace 'localhost' and '9200' with the corresponding
# host and port values for your cluster.
openssl s_client -connect localhost:9200 -servername localhost -showcerts </dev/null 2>/dev/null \
  | openssl x509 -fingerprint -sha256 -noout -in /dev/stdin

The output of openssl x509 will look something like this:

SHA256 Fingerprint=A5:2D:D9:35:11:E8:C6:04:5E:21:F1:66:54:B7:7C:9E:E0:F3:4A:EA:26:D9:F4:03:20:B5:31:C4:74:67:62:28

Connecting without security enabled

Running Elasticsearch without security enabled is not recommended.

If your cluster is configured with security explicitly disabled, you can connect via HTTP:

from elasticsearch import Elasticsearch

# Create the client instance
client = Elasticsearch("http://localhost:9200")

# Successful response!
client.info()
# {'name': 'instance-0000000000', 'cluster_name': ...}

Connecting to multiple nodes

The Python Elasticsearch client supports sending API requests to multiple nodes in the cluster. This means that work will be spread more evenly across the cluster instead of hammering the same node over and over with requests. To configure the client with multiple nodes, pass a list of URLs; each URL will be used as a separate node in the pool.

from elasticsearch import Elasticsearch

# List of nodes to connect to, with different hosts and ports.
NODES = [
    "https://localhost:9200",
    "https://localhost:9201",
    "https://localhost:9202",
]

# Password for the 'elastic' user generated by Elasticsearch
ELASTIC_PASSWORD = "<password>"

client = Elasticsearch(
    NODES,
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("elastic", ELASTIC_PASSWORD)
)

By default, nodes are selected using round-robin, but alternate node selection strategies can be configured with the node_selector_class parameter.
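
For example, here is a minimal sketch of switching to random node selection; it assumes your client version accepts the string "random" for node_selector_class (a custom subclass of elastic_transport.NodeSelector can also be passed):

from elasticsearch import Elasticsearch

# Password for the 'elastic' user generated by Elasticsearch
ELASTIC_PASSWORD = "<password>"

# Pick a node at random for each request instead of round-robin
client = Elasticsearch(
    [
        "https://localhost:9200",
        "https://localhost:9201",
        "https://localhost:9202",
    ],
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("elastic", ELASTIC_PASSWORD),
    node_selector_class="random",
)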

If your Elasticsearch cluster is behind a load balancer, as is the case with Elastic Cloud, you won’t need to configure multiple nodes. Instead, use the load balancer’s host and port.

Authentication

This section contains code snippets to show you how to connect to various Elasticsearch providers. All authentication methods are supported on the client constructor or via the per-request .options() method:

from elasticsearch import Elasticsearch

# Authenticate from the constructor
es = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("username", "password")
)

# Authenticate via the .options() method:
es.options(
    basic_auth=("username", "password")
).indices.get(index="*")

# You can persist the authenticated client to use
# later or use for multiple API calls:
auth_client = es.options(
    api_key=("api-key-id", "api-key-secret")
)
for i in range(10):
    auth_client.index(
        index="example-index",
        document={"field": i}
    )

HTTP Basic authentication (Username and Password)

HTTP Basic authentication uses the basic_auth parameter by passing in a username and password within a tuple:

from elasticsearch import Elasticsearch

# Adds the HTTP header 'Authorization: Basic <base64 username:password>'
es = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("username", "password")
)

HTTP Bearer authentication

HTTP Bearer authentication uses the bearer_auth parameter by passing the token as a string. This authentication method is used by Service Account Tokens and Bearer Tokens.

from elasticsearch import Elasticsearch

# Adds the HTTP header 'Authorization: Bearer token-value'
es = Elasticsearch(
    "https://localhost:9200",
    bearer_auth="token-value"
)

API Key authentication

You can configure the client to use an Elasticsearch API key for connecting to your cluster. Note that you need both the id and api_key values to authenticate via an API key.

from elasticsearch import Elasticsearch

# Adds the HTTP header 'Authorization: ApiKey <base64 api_key.id:api_key.api_key>'
es = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    api_key=("api_key.id", "api_key.api_key")
)
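
If you don’t have an API key yet, one way to obtain the id and api_key values is the create API key security API. Below is a minimal sketch, assuming the authenticated user is allowed to manage API keys; the key name "example-key" is only an illustration:

from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("elastic", "<password>")
)

# Create an API key; the response contains 'id' and 'api_key'
resp = es.security.create_api_key(name="example-key")

# Build a new client that authenticates with the generated key
api_key_client = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    api_key=(resp["id"], resp["api_key"])
)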

Enabling the Compatibility Mode

Elasticsearch server version 8.0 introduces a new compatibility mode that gives you a smoother upgrade experience from 7 to 8. In a nutshell, you can use the latest 7.x Python Elasticsearch client with an 8.x Elasticsearch server, giving you more room to coordinate the upgrade of your codebase to the next major version.

If you want to leverage this functionality, make sure that you are using the latest 7.x Python Elasticsearch client and set the environment variable ELASTIC_CLIENT_APIVERSIONING to true. The client handles the rest internally. For Python Elasticsearch clients version 8.0 and later, you’re all set: the compatibility mode is enabled by default.
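
For example, with the 7.x client the variable can be set in the process environment before the client is created; this is just a sketch, and exporting ELASTIC_CLIENT_APIVERSIONING=true in your shell works equally well:

import os
from elasticsearch import Elasticsearch

# Enable compatibility mode for the 7.x client; set this before
# the client starts sending requests
os.environ["ELASTIC_CLIENT_APIVERSIONING"] = "true"

es = Elasticsearch(
    "https://localhost:9200",
    api_key=("api-key-id", "api-key-secret")
)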

Using the Client in a Function-as-a-Service Environment

This section illustrates the best practices for leveraging the Elasticsearch client in a Function-as-a-Service (FaaS) environment.

The most influential optimization is to initialize the client outside of the function, in the global scope.

This practice not only improves performance but also enables background functionality such as sniffing. The following examples provide a skeleton for these best practices.

The async client shouldn’t be used within Function-as-a-Service environments, as a new event loop must be started for each invocation. Instead, the synchronous Elasticsearch client is recommended.

GCP Cloud Functions

from elasticsearch import Elasticsearch

# Client initialization
client = Elasticsearch(
    cloud_id="deployment-name:ABCD...",
    api_key=...
)

def main(request):
    # Use the client
    client.search(index=..., query={"match_all": {}})

AWS Lambda

from elasticsearch import Elasticsearch

# Client initialization
client = Elasticsearch(
    cloud_id="deployment-name:ABCD...",
    api_key=...
)

def main(event, context):
    # Use the client
    client.search(index=..., query={"match_all": {}})

Azure Functions

import azure.functions as func
from elasticsearch import Elasticsearch

# Client initialization
client = Elasticsearch(
    cloud_id="deployment-name:ABCD...",
    api_key=...
)

def main(request: func.HttpRequest) -> func.HttpResponse:
    # Use the client
    client.search(index=..., query={"match_all": {}})
