
X-Pack Security for Elasticsearch with Let's Encrypt™ Certificates

Editor's Note (August 3, 2021): This post uses deprecated features. Please refer to the current Elastic Stack security documentation for up-to-date instructions on configuring TLS.

Security via public key encryption is critical for your data. It seems not a day goes by when we don't hear of yet another hack, and unencrypted network communications make data theft across untrusted networks almost trivial. This blog will focus on simplifying in-transit encryption to help protect against this threat.


Encryption In-Transit


The Let’s Encrypt™ service is a free, automated, and open non-profit Certificate Authority provided by the Internet Security Research Group™ ("ISRG") with the noble mission of encrypting all HTTP transport-level communications with SSL/TLS:

https://letsencrypt.org/about/


Since the public key infrastructure ("PKI") ultimately relies on chains of trust anchored at trusted Certificate Authorities, enabling widespread encryption depends on a Certificate Authority that can provide this trust at a reasonable cost. The Let's Encrypt certificate authority is the first to do so at no cost, and so is a very economical way to get started with trusted encryption. A tool called "Certbot" is distributed to simplify the process:


https://certbot.eff.org


The Certbot functionality is based on a framework called the Automatic Certificate Management Environment (ACME). To verify that the client is authorized for the identified domain, the ACME server issues a set of challenges that prove control of the domain - for example, by serving a token over HTTP or publishing a DNS record. In short, if you have control of the DNS records for a domain as well as the ability to bind a webserver process to the desired hostname(s), you should be able to get a certificate via ACME and certbot.


While I won't go into great detail on using certbot, the basic steps are very straightforward. As long as you have the ability to start a process to listen on ports 80 or 443 and DNS is correct, the following steps should be sufficient:


Please note: in all of the following steps, I'm running the commands as root. In your environment and for security and auditing reasons, you may be using "sudo". Pasting "sudo" another 50 times below though seemed a little excessive.


# wget https://dl.eff.org/certbot-auto
# chmod 755 certbot-auto
# ./certbot-auto certonly
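
For reference, a non-interactive request for a specific hostname looks something like the following (using the example hostname from later in this post); the "--standalone" option tells certbot to spin up its own temporary webserver to answer the challenge:

# ./certbot-auto certonly --standalone -d data.example.com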

Note: the Let’s Encrypt CA issues short-lived certificates (90 days). You will need to make sure you renew the certificates every 3 months.
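
Renewal can be automated. As a minimal sketch, assuming certbot-auto was downloaded to /root, a crontab entry like the following will attempt renewal regularly (certbot only renews certificates that are close to expiry). Just remember that after a renewal you will need to re-copy the PEM files into the Elasticsearch and Kibana directories described below and restart the services:

// example crontab entry; run "crontab -e" as root to add it
0 3,15 * * * /root/certbot-auto renew --quiet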


Tangential tip:

X-Pack does include a new tool for generating certificates, called "certgen". Certgen is an easy tool that simplifies the creation of Certificate Signing Requests ("CSRs") and self-signed certs. It does appear that there is a way to submit CSRs to Let's Encrypt for signing, through a third-party tool at https://gethttpsforfree.com - please note that the specifics and trustworthiness of this site are undetermined, but it looks interesting. More info on certgen in the X-Pack documentation: https://www.elastic.co/guide/en/x-pack/current/ssl...
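
If you just want to see what certgen produces, it can be run with no arguments and will prompt interactively for instance names and DNS/IP entries (the path below assumes the default X-Pack 5.x layout, and X-Pack must already be installed - see the installation steps further down):

# /usr/share/elasticsearch/bin/x-pack/certgen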


SSL/TLS PEM Files


Once you have your CA-signed certificates, you'll be ready to set up the Elastic Stack with X-Pack and transport-level encryption. We'll start with a single-node system. For our example, we'll use a server name of "data.example.com" - this is obviously not a valid Internet hostname, but you'll have your own. We'll also use a certificate Common Name of "data.example.com" in this example. The certbot process will provide you with the following files, in the directory location "/etc/letsencrypt" by default:


/etc/letsencrypt/archive/data.example.com:
    cert1.pem
    chain1.pem
    fullchain1.pem
    privkey1.pem

For Elasticsearch to access the SSL files, you'll then need to copy them into the Elasticsearch configuration directory path. Since Elasticsearch 2.0, the Java security manager limits the directories from which the Elasticsearch process can read, so the certificate files must live under the Elasticsearch configuration directory:

# mkdir /etc/elasticsearch/ssl
# cp -pr /etc/letsencrypt/archive/data.example.com /etc/elasticsearch/ssl/
# chmod 750 /etc/elasticsearch/ssl/data.example.com
# chmod 640 /etc/elasticsearch/ssl/data.example.com/*
# chown -R root:elasticsearch /etc/elasticsearch/ssl/data.example.com

Kibana will also need access to the certificate PEM files. Since many sites will run Kibana on separate nodes from Elasticsearch, and since the group access permissions for Kibana will differ, we'll go ahead and maintain a separate copy of the PEM directory just for Kibana.

# mkdir /etc/kibana/ssl
# cp -pr /etc/letsencrypt/archive/data.example.com /etc/kibana/ssl/
# chmod 750 /etc/kibana/ssl/data.example.com
# chmod 640 /etc/kibana/ssl/data.example.com/*
# chown -R root:kibana /etc/kibana/ssl/data.example.com

Elasticsearch

While I won't get into great detail here about Elasticsearch installation, I began by installing the latest GA versions of Elasticsearch and Kibana (5.1.2 at the time of writing) - these notes should generally apply to other 5.x versions as well. Since this installation was on CentOS, I used the RPM packages, first verifying the SHA-1 checksums provided on the Elastic download site:


# sha1sum elasticsearch-5.1.2.rpm
a27c15150888f75cedb4f639d1b29a0779886736  elasticsearch-5.1.2.rpm
# cat elasticsearch-5.1.2.rpm.sha1
a27c15150888f75cedb4f639d1b29a0779886736
# sha1sum kibana-5.1.2-x86_64.rpm
ba355d63fef6702109ebdbf72d7ebca0451ed7ae  kibana-5.1.2-x86_64.rpm
# cat kibana-5.1.2-x86_64.rpm.sha1
ba355d63fef6702109ebdbf72d7ebca0451ed7ae

# rpm -ivh elasticsearch-5.1.2.rpm
# rpm -ivh kibana-5.1.2-x86_64.rpm

It is a good idea to run through the Elasticsearch bootstrap check requirements before starting up Elasticsearch. For this CentOS setup, this meant adding at least the following to /etc/security/limits.conf:

/etc/security/limits.conf:
    elasticsearch soft nofile 65536
    elasticsearch hard nofile 65536
    elasticsearch soft nproc 2048
    elasticsearch hard nproc 2048
    elasticsearch - memlock unlimited
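
Depending on your kernel defaults, the max map count bootstrap check may also require raising vm.max_map_count (Elasticsearch expects at least 262144), for example:

# sysctl -w vm.max_map_count=262144
// persist the setting across reboots
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf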

X-Pack

In order to enable SSL/TLS security on your Elastic Stack platform, X-Pack must be installed in both Elasticsearch and Kibana. A 30-day trial license is activated when the plugin is first installed:


# cd /usr/share/elasticsearch
# bin/elasticsearch-plugin install x-pack
# cd /usr/share/kibana
# bin/kibana-plugin install x-pack
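
Before moving on, it's worth confirming that both plugins registered; each of the following should list "x-pack":

# /usr/share/elasticsearch/bin/elasticsearch-plugin list
# /usr/share/kibana/bin/kibana-plugin list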

Elasticsearch Configuration

Okay, we're finally ready for the Elasticsearch and Kibana configuration files. General reminder here to always set your cluster.name and probably node.name to avoid potential name conflicts.


There are two valid options for configuring the certificates with X-Pack: you can use the Java keystore, or as of Elastic Stack 5.0, you can configure the PEM files directly. While both methods are fine, I've chosen the PEM method here for the simplicity of not having to use the Java keystore. The "fullchain1.pem" certificate can be used to include both the signed certificate and intermediate chain certificate in a single PEM:


/etc/elasticsearch/elasticsearch.yml:

## general setup
cluster.name: data
node.name: data01
path.data: /data
network.host: 11.12.13.14

xpack.ssl.key: /etc/elasticsearch/ssl/data.example.com/privkey1.pem
xpack.ssl.certificate: /etc/elasticsearch/ssl/data.example.com/fullchain1.pem

xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.audit.enabled: true

If your Java cacerts keystore does not contain the DST Root CA X3 certificate or the newer ISRG Root X1 CA certificate for any reason, you can also provide the Certificate Authority certificates directly to Elasticsearch via the following configuration. This was not required on an updated CentOS 6 installation, but you may find that either the DST Root CA X3 certificate or the newer ISRG root CA used by Let's Encrypt is not recognized by some older OS or web browser versions:


xpack.ssl.certificate_authorities: [
  "/etc/elasticsearch/ssl/data.example.com/chain1.pem",
  "/etc/elasticsearch/ssl/data.example.com/cacert.pem" ]

Upon configuring Elasticsearch for SSL/TLS and restarting, you should immediately change the default passwords for the users "elastic" and "kibana". Note: The default password for the elastic user is "changeme". The following password examples are random hex strings of 40 characters, but please feel free to use your own strong password selection method:

# curl -XPUT -u elastic 'https://data.example.com:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "c73cb507276b17609c380adcdd99621980ae1716" }'

# curl -XPUT -u elastic 'https://data.example.com:9200/_xpack/security/user/kibana/_password' -d '{
 "password" : "878502df3af7b8e0910cb4fb8f75b5c59e44f09b"
}'

Note: curl on CentOS 6 or CentOS 7 should be built to use the CA bundle at /etc/pki/tls/certs/ca-bundle.crt, which should contain the DST Root CA X3 certificate, although it may not contain the ISRG root CA at the time of writing. The Let's Encrypt certificates are cross-signed and should generally be recognized, but if not, use the curl option "-k" to skip certificate validation.
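
Once the passwords are changed, a quick sanity check confirms that both TLS and authentication are working; this should return the cluster banner JSON after prompting for the new elastic password:

# curl -u elastic 'https://data.example.com:9200/'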

Kibana Configuration


Kibana must also be configured with the appropriate SSL/TLS and user settings. Note that the xpack.security.encryptionKey and xpack.reporting.encryptionKey values can be set to any string 32 characters or longer - again, I'll use a random 40-character hex string:


/etc/kibana/kibana.yml

server.name: data.example.com
elasticsearch.url: "https://data.example.com:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "878502df3af7b8e0910cb4fb8f75b5c59e44f09b"
server.ssl.cert: /etc/kibana/ssl/data.example.com/fullchain1.pem
server.ssl.key: /etc/kibana/ssl/data.example.com/privkey1.pem
elasticsearch.ssl.verify: true

## encryptionKey
xpack.security.encryptionKey: "e386d5f380dd962614538ad70d7e9745760f7e8e"
xpack.reporting.encryptionKey: "e386d5f380dd962614538ad70d7e9745760f7e8e"
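
One simple way to generate a suitable random value of your own (this produces 40 hex characters):

# openssl rand -hex 20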

You may find that you need to direct the Console plugin to recognize the CA certificate chain - this is possible through the Console proxyConfig options:

### Console
console.proxyConfig:
 - match:
     host: "*"
     port: "{9200..9202}"
   ssl:
     ca: [
       "/etc/ssl/data.example.com/chain1.pem",
       "/etc/ssl/data.example.com/cacert.pem" ]

That's it! Restart Elasticsearch and Kibana, and you've now encrypted all transport-level connections. The steps for both CentOS 6.8 and CentOS 7.2 are included below, as they differ based on the use of systemd:


// don't forget to start automatically at boot

// CentOS 6.8
# chkconfig --add elasticsearch
# chkconfig --add kibana
# service elasticsearch restart
# service kibana restart

// CentOS 7.2
# systemctl daemon-reload
# systemctl enable elasticsearch.service
# systemctl enable kibana.service
# systemctl start elasticsearch.service
# systemctl start kibana.service
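
Once the services come back up, a couple of quick checks confirm that everything is listening over HTTPS - cluster health via the REST API, and Kibana at https://data.example.com:5601 in your browser:

# curl -u elastic 'https://data.example.com:9200/_cluster/health?pretty'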

Additional Notes

  • If you are using a multiple-node cluster, you will need to repeat the certificate setup on the additional nodes - this can be simplified via the use of Subject Alternative Name ("SAN") certificates from Let's Encrypt, or via wildcard certs (see the sketch after this list).
  • If you are isolating Marvel node(s) in your cluster, you'll need to perform similar steps for them; the same goes for separate master, data, and client nodes.
  • While Encryption At-Rest is beyond the scope of this blog, Elastic supports customers using dm-crypt for doing so, and I do use dm-crypt on my server volumes to ensure that the data is encrypted on the underlying disks. This method of volume-level encryption does require entering a password at system boot, and thus requires manual intervention each reboot.
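
As a rough sketch of the SAN approach mentioned in the first bullet above, certbot accepts multiple "-d" flags, so a single certificate can cover several node hostnames (the additional hostnames here are purely illustrative):

# ./certbot-auto certonly --standalone -d data.example.com -d data01.example.com -d data02.example.com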

Hope you find this useful. If you'd like additional training on SSL with X-Pack, check out the new X-Pack Security course from the Elastic Education team. Let's encrypt!


Let's Encrypt [™] is a trademark of the Internet Security Research Group. All rights reserved.