Connecting

This page contains the information you need to connect and use the Client with Elasticsearch.

Authentication

This document contains code snippets to show you how to connect to various Elasticsearch providers.

Elastic Cloud

If you are using Elastic Cloud, the client offers an easy way to connect to it via the cloud option: pass the Cloud ID that you can find in the Cloud console, and your username and password inside the auth option.

When connecting to Elastic Cloud, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the ssl option secureProtocol to TLSv1_2_method unless specified otherwise. You can still override these defaults by configuring them explicitly.

Do not enable sniffing when using Elastic Cloud: the nodes are behind a load balancer, and Elastic Cloud takes care of everything for you.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  cloud: {
    id: 'name:bG9jYWxob3N0JGFiY2QkZWZnaA==',
  },
  auth: {
    username: 'elastic',
    password: 'changeme'
  }
})
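
If you need different settings, you can override these defaults in the client configuration. The snippet below is a minimal sketch: compression and ssl are standard client options, and the values shown are only examples.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  cloud: {
    id: 'name:bG9jYWxob3N0JGFiY2QkZWZnaA=='
  },
  auth: {
    username: 'elastic',
    password: 'changeme'
  },
  // example override: disable the request/response compression
  // that the cloud option enables by default
  compression: false,
  ssl: {
    // example override: set the secure protocol method explicitly
    // (TLSv1_2_method is also the default applied by the client)
    secureProtocol: 'TLSv1_2_method'
  }
})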

ApiKey authentication

You can use ApiKey authentication by passing the apiKey parameter via the auth option. The apiKey parameter can be either a base64-encoded string or an object with the values that you can obtain from the create API key endpoint.

If you provide both basic authentication credentials and the ApiKey configuration, the ApiKey takes precedence.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    apiKey: 'base64EncodedKey'
  }
})

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    apiKey: {
      id: 'foo',
      api_key: 'bar'
    }
  }
})
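
If you need to generate these values programmatically, you can call the create API key endpoint through the client itself. The snippet below is a minimal sketch; the key name is a placeholder, and the client must authenticate as a user that is allowed to create API keys.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: 'changeme'
  }
})

// The response body contains the id and api_key values that you can
// pass to the auth.apiKey option of another client instance.
const { body } = await client.security.createApiKey({
  body: {
    name: 'my-api-key'
  }
})
console.log(body.id, body.api_key)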

Bearer authentication

You can provide your credentials by passing the bearer token parameter via the auth option. Useful for service account tokens. Be aware that it does not handle automatic token refresh.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    bearer: 'token'
  }
})

Basic authentication

You can provide your credentials by passing the username and password parameters via the auth option.

If you provide both basic authentication credentials and the ApiKey configuration, the ApiKey takes precedence.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: 'changeme'
  }
})

Alternatively, you can provide your credentials in the node(s) URL.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://username:password@localhost:9200'
})

SSL configuration

Without any additional configuration you can specify https:// node urls, and the certificates used to sign these requests will be verified. To turn off certificate verification, you must specify an ssl object in the top level config and set rejectUnauthorized: false. The default ssl values are the same ones used by Node.js's tls.connect().

const fs = require('fs')
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: 'elastic',
    password: 'changeme'
  },
  ssl: {
    ca: fs.readFileSync('./cacert.pem'),
    rejectUnauthorized: false
  }
})

CA fingerprint

You can configure the client to only trust certificates that are signed by a specific CA certificate (CA certificate pinning) by providing a caFingerprint option. This will verify that the fingerprint of the CA certificate that has signed the certificate of the server matches the supplied value. You must configure a SHA256 digest.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://example.com',
  auth: { ... },
  // the fingerprint (SHA256) of the CA certificate that is used to sign the certificate that the Elasticsearch node presents for TLS.
  caFingerprint: '20:0D:CA:FA:76:...',
  ssl: {
    // might be required if it's a self-signed certificate
    rejectUnauthorized: false
  }
})

Usage

Using the client is straightforward: it supports all the public APIs of Elasticsearch, and every method exposes the same signature.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// promise API
const result = await client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
})

// callback API
client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, (err, result) => {
  if (err) console.log(err)
})

The returned value of every API call is designed as follows:

{
  body: object | boolean
  statusCode: number
  headers: object
  warnings: [string]
  meta: object
}

The body is a boolean value when you use HEAD APIs.

The above value is returned even if there is an error during the execution of the request, which means that you can safely use destructuring assignment.

The meta key contains all the information about the request, such as attempt, options, and the connection that has been used.

// promise API
const { body } = await client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
})

// callback API
client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, (err, { body }) => {
  if (err) console.log(err)
})
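
For example, a HEAD API such as indices.exists returns a boolean body, and the meta key is available on every response. A minimal sketch, assuming an index named my-index:

// HEAD API: the body is a boolean
const { body: exists } = await client.indices.exists({ index: 'my-index' })
console.log(exists) // true or false

// The meta key describes how the request was executed
const result = await client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
})
console.log(result.meta.attempts)   // number of attempts performed
console.log(result.meta.connection) // the connection that has been used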

Aborting a request

If needed, you can abort a running request by calling the abort() method on the request object returned by the API.

If you abort a request, the request will fail with a RequestAbortedError.

const request = client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, {
  ignore: [404],
  maxRetries: 3
}, (err, result) => {
  if (err) {
    console.log(err) // RequestAbortedError
  } else {
    console.log(result)
  }
})

request.abort()

The same behavior is valid for the promise style API as well.

const request = client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, {
  ignore: [404],
  maxRetries: 3
})

request
  .then(result => console.log(result))
  .catch(err => console.log(err)) // RequestAbortedError

request.abort()

Request specific options

If needed, you can pass request specific options in a second object:

// promise API
const result = await client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, {
  ignore: [404],
  maxRetries: 3
})

// callback API
client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, {
  ignore: [404],
  maxRetries: 3
}, (err, { body }) => {
  if (err) console.log(err)
})

The supported request specific options are:

ignore

[number] - HTTP status codes which should not be considered errors for this request.
Default: null

requestTimeout

number - Max request timeout for the request in milliseconds; it overrides the client default.
Default: 30000

maxRetries

number - Max number of retries for the request; it overrides the client default.
Default: 3

compression

string, boolean - Enables body compression for the request.
Options: false, 'gzip'
Default: false

asStream

boolean - Instead of getting the parsed body back, you get the raw Node.js stream of data (see the example after this list).
Default: false

headers

object - Custom headers for the request.
Default: null

querystring

object - Custom querystring for the request.
Default: null

id

any - Custom request id. (overrides the top level request id generator)
Default: null

context

any - Custom object per request. (you can use it to pass data to the clients events)
Default: null

maxResponseSize

number - When configured, it verifies that the uncompressed response size is lower than the configured number; if it's higher, the request is aborted. It cannot be higher than buffer.constants.MAX_STRING_LENGTH.
Default: null

maxCompressedResponseSize

number - When configured, it verifies that the compressed response size is lower than the configured number; if it's higher, the request is aborted. It cannot be higher than buffer.constants.MAX_LENGTH.
Default: null
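
As an example of the asStream option mentioned above, you can consume the raw response stream instead of the parsed body. A minimal sketch, assuming an index named my-index:

const { body } = await client.search({
  index: 'my-index',
  body: {
    query: {
      match: { hello: 'world' }
    }
  }
}, {
  asStream: true
})

// body is a raw Node.js stream instead of a parsed object
body.pipe(process.stdout)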

Using the Client in a Function-as-a-Service Environment

This section illustrates the best practices for using the Elasticsearch client in a Function-as-a-Service (FaaS) environment. The most important optimization is to initialize the client outside of the function, in the global scope. This practice not only improves performance but also enables background functionality such as sniffing. The following examples provide a skeleton for these best practices.

GCP Cloud Functions

'use strict'

const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  // client initialisation
})

exports.testFunction = async function (req, res) {
  // use the client
}

AWS Lambda

'use strict'

const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  // client initialisation
})

exports.handler = async function (event, context) {
  // use the client
}

Azure Functions

'use strict'

const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  // client initialisation
})

module.exports = async function (context, req) {
  // use the client
}

Connecting through a proxy

Added in v7.10.0

If you need to pass through an http(s) proxy to connect to Elasticsearch, the client offers an out-of-the-box configuration to help you with it. Under the hood, it uses the hpagent module.

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'http://localhost:9200',
  proxy: 'http://localhost:8080'
})

Basic authentication is supported as well:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'http://localhost:9200',
  proxy: 'http://user:pwd@localhost:8080'
})

If you are connecting through a non-http(s) proxy, such as SOCKS5 or PAC, you can use the agent option to configure it.

const SocksProxyAgent = require('socks-proxy-agent')
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'http://localhost:9200',
  agent () {
    return new SocksProxyAgent('socks://127.0.0.1:1080')
  }
})

Error handling

The client exposes a variety of error objects that you can use to enhance your error handling. You can find all the error objects inside the errors key in the client.

const { errors } = require('@elastic/elasticsearch')
console.log(errors)

You can find the errors exported by the client listed below, each with its description and properties.

ElasticsearchClientError

Every error inherits from this class; it is the basic error generated by the client.

  • name - string
  • message - string

TimeoutError

Generated when a request exceeds the requestTimeout option.

  • name - string
  • message - string
  • meta - object, contains all the information about the request

ConnectionError

Generated when an error occurs during the request; it can be a connection error or a malformed stream of data.

  • name - string
  • message - string
  • meta - object, contains all the information about the request

RequestAbortedError

Generated if the user calls the request.abort() method.

  • name - string
  • message - string
  • meta - object, contains all the information about the request

NoLivingConnectionsError

Given the configuration, the ConnectionPool was not able to find a usable Connection for this request.

  • name - string
  • message - string
  • meta - object, contains all the information about the request

SerializationError

Generated if the serialization fails.

  • name - string
  • message - string
  • data - object, the object to serialize

DeserializationError

Generated if the deserialization fails.

  • name - string
  • message - string
  • data - string, the string to deserialize

ConfigurationError

Generated if there is a malformed configuration or parameter.

  • name - string
  • message - string

ResponseError

Generated in case of a 4xx or 5xx response.

  • name - string
  • message - string
  • meta - object, contains all the information about the request
  • body - object, the response body
  • statusCode - number, the response status code
  • headers - object, the response headers
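
For example, you can use these classes to distinguish error types at runtime. A minimal sketch:

const { Client, errors } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

try {
  await client.search({
    index: 'my-index',
    body: {
      query: {
        match: { hello: 'world' }
      }
    }
  })
} catch (err) {
  if (err instanceof errors.ResponseError) {
    // Elasticsearch replied with a 4xx or 5xx response
    console.log(err.statusCode, err.body)
  } else if (err instanceof errors.ConnectionError) {
    // something went wrong at the transport level
    console.log(err.message)
  } else {
    console.log(err)
  }
}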

Automatic product check

Since v7.14.0, the client performs a required product check before the first call. This pre-flight product check allows the client to establish the version of Elasticsearch that it is communicating with. The product check requires one additional HTTP request to be sent to the server as part of the request pipeline before the main API call is sent. In most cases, this will succeed during the very first API call that the client sends. Once the product check completes, no further product check HTTP requests are sent for subsequent API calls.