Configuring Security in Logstash

The Logstash Elasticsearch plugins (output, input, filter, and monitoring) support authentication and encryption over HTTP.

To use Logstash with a secured cluster, you need to configure authentication credentials for Logstash. Logstash throws an exception and the processing pipeline is halted if authentication fails.

If encryption is enabled on the cluster, you also need to enable TLS/SSL in the Logstash configuration.

If you want to monitor your Logstash instance with X-Pack monitoring, and store the monitoring data in a secured Elasticsearch cluster, you must configure Logstash with a username and password for a user with the appropriate permissions.

In addition to configuring authentication credentials for Logstash, you need to grant authorized users permission to access the Logstash indices.

Configuring Logstash to use Basic Authentication

Logstash needs to be able to manage index templates, create indices, and write and delete documents in the indices it creates.

To set up authentication credentials for Logstash:

  1. Use the Management > Roles UI in Kibana or the role API to create a logstash_writer role. For cluster privileges, add manage_index_templates and monitor. For indices privileges, add write, create, delete, and create_index.

    If you plan to use index lifecycle management, also add manage_ilm for cluster and manage and manage_ilm for indices.

    POST _xpack/security/role/logstash_writer
    {
      "cluster": ["manage_index_templates", "monitor", "manage_ilm"], 
      "indices": [
        {
          "names": [ "logstash-*" ], 
          "privileges": ["write","create","delete","create_index","manage","manage_ilm"]  
        }
      ]
    }

    If index lifecycle management is enabled, the cluster needs the manage_ilm privilege, and the role needs the manage and manage_ilm index privileges to load index lifecycle policies, create rollover aliases, and create and manage rollover indices.

    If you use a custom Logstash index pattern, specify your custom pattern instead of the default logstash-* pattern.

  2. Create a logstash_internal user and assign it the logstash_writer role. You can create users from the Management > Users UI in Kibana or through the user API:

    POST _xpack/security/user/logstash_internal
    {
      "password" : "x-pack-test-password",
      "roles" : [ "logstash_writer"],
      "full_name" : "Internal Logstash User"
    }
  3. Configure Logstash to authenticate as the logstash_internal user you just created. You configure credentials separately for each of the Elasticsearch plugins in your Logstash .conf file. For example:

    input {
      elasticsearch {
        ...
        user => "logstash_internal"
        password => "x-pack-test-password"
      }
    }
    filter {
      elasticsearch {
        ...
        user => "logstash_internal"
        password => "x-pack-test-password"
      }
    }
    output {
      elasticsearch {
        ...
        user => "logstash_internal"
        password => "x-pack-test-password"
      }
    }
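
To avoid hard-coding credentials in the pipeline file, Logstash can substitute environment variables in the configuration. A sketch, where the variable name ES_PWD is an arbitrary example (set it in the environment that launches Logstash, for example `export ES_PWD=x-pack-test-password`):

```
output {
  elasticsearch {
    ...
    user => "logstash_internal"
    password => "${ES_PWD}"
  }
}
```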

Granting Users Access to the Logstash Indices

To access the indices Logstash creates, users need the read and view_index_metadata privileges:

  1. Create a logstash_reader role that has the read and view_index_metadata privileges for the Logstash indices. You can create roles from the Management > Roles UI in Kibana or through the role API:

    POST _xpack/security/role/logstash_reader
    {
      "indices": [
        {
          "names": [ "logstash-*" ], 
          "privileges": ["read","view_index_metadata"]
        }
      ]
    }

    If you use a custom Logstash index pattern, specify that pattern instead of the default logstash-* pattern.

  2. Assign your Logstash users the logstash_reader role. If the Logstash user will be using centralized pipeline management, also assign the logstash_admin role. You can create and manage users from the Management > Users UI in Kibana or through the user API:

    POST _xpack/security/user/logstash_user
    {
      "password" : "x-pack-test-password",
      "roles" : [ "logstash_reader", "logstash_admin"], 
      "full_name" : "Kibana User for Logstash"
    }

    logstash_admin is a built-in role that provides access to .logstash-* indices for managing configurations.

Configuring the Elasticsearch Output to use PKI Authentication

The elasticsearch output supports PKI authentication. To authenticate with an X.509 client certificate, configure the keystore and keystore_password options in your Logstash .conf file:

output {
  elasticsearch {
    ...
    keystore => "/path/to/keystore.jks"
    keystore_password => "realpassword"
    truststore => "/path/to/truststore.jks"
    truststore_password => "realpassword"
  }
}

If you use a separate truststore, the truststore path and password are also required.
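
If your CA certificate is a .pem file, the JDK keytool utility can build the JKS truststore the plugin expects. A sketch, in which the paths, alias, and password are placeholders to substitute with your own values:

```
# Import the CA certificate into a new JKS truststore.
# The file paths, the alias, and the password below are placeholders.
keytool -importcert -noprompt \
  -alias logstash-ca \
  -file /path/to/ca.pem \
  -keystore /path/to/truststore.jks \
  -storepass realpassword
```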

Configuring Logstash to use TLS Encryption

If TLS encryption is enabled on the Elasticsearch cluster, you need to configure the ssl and cacert options in your Logstash .conf file:

output {
  elasticsearch {
    ...
    ssl => true
    cacert => '/path/to/cert.pem' 
  }
}

The cacert option specifies the path to the local .pem file that contains the Certificate Authority’s certificate.
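
To sanity-check the TLS setup outside Logstash, you can query the cluster with curl using the same CA file. The host, port, and credentials here are placeholders:

```
curl --cacert /path/to/cert.pem \
     -u logstash_internal:x-pack-test-password \
     https://localhost:9200
```

If the certificate chain and credentials are correct, Elasticsearch responds with its cluster information JSON instead of a TLS or authentication error.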

Configuring Credentials for Logstash Monitoring

If you plan to ship Logstash monitoring data to a secure cluster, you need to configure the username and password that Logstash uses to authenticate for shipping monitoring data.

The security features come preconfigured with a logstash_system built-in user for this purpose. This user has the minimum permissions necessary for monitoring and should not be used for any other purpose; in particular, it is not intended for use within a Logstash pipeline.

By default, the logstash_system user does not have a password. The user will not be enabled until you set a password. See Setting built-in user passwords.

Then configure the user and password in the logstash.yml configuration file:

monitoring.elasticsearch.username: logstash_system
monitoring.elasticsearch.password: t0p.s3cr3t
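
Rather than storing the password in plain text in logstash.yml, you can keep it in the Logstash secrets keystore and reference it as a variable. A sketch, where the key name ES_PWD is an arbitrary example (you are prompted for the secret value when adding it):

```
bin/logstash-keystore create
bin/logstash-keystore add ES_PWD
```

Then reference the stored secret in logstash.yml as `monitoring.elasticsearch.password: "${ES_PWD}"`.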

If you initially installed an older version of X-Pack and then upgraded, the logstash_system user may have defaulted to disabled for security reasons. You can enable the user through the user API:

PUT _xpack/security/user/logstash_system/_enable

Configuring Credentials for Centralized Pipeline Management

If you plan to use Logstash centralized pipeline management, you need to configure the username and password that Logstash uses for managing configurations.

You configure the user and password in the logstash.yml configuration file:

xpack.management.elasticsearch.username: logstash_admin_user 
xpack.management.elasticsearch.password: t0p.s3cr3t

The user you specify here must have the built-in logstash_admin role as well as the logstash_writer role that you created earlier.

Grant access using API keys

Instead of using usernames and passwords, you can use API keys to grant access to Elasticsearch resources. You can set API keys to expire at a certain time, and you can explicitly invalidate them. Any user with the manage_api_key or manage_own_api_key cluster privilege can create API keys.

Note that API keys are tied to the cluster they are created in. If you are sending output to different clusters, create a separate API key on each destination cluster.

For security reasons, we recommend using a unique API key per Logstash instance. You can create as many API keys per user as necessary.
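
When an instance is decommissioned or a key may have leaked, you can explicitly invalidate it with the invalidate API key API. This sketch invalidates by the name given when the key was created, matching the logstash_host001 example in the next section:

```
DELETE /_security/api_key
{
  "name": "logstash_host001"
}
```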

Create an API key

You can create API keys using either the Create API key API or the Kibana UI. This section walks you through creating an API key using the Create API key API. The privileges needed are the same for either approach.

Here is an example that shows how to create an API key for publishing to Elasticsearch using the Elasticsearch output plugin.

POST /_security/api_key
{
  "name": "logstash_host001", 
  "role_descriptors": {
    "logstash_writer": { 
      "cluster": ["monitor", "manage_ilm", "read_ilm"],
      "index": [
        {
          "names": ["logstash-*"],
          "privileges": ["view_index_metadata", "create_doc"]
        }
      ]
    }
  }
}

The name field (logstash_host001) identifies the API key; the role_descriptors section defines the privileges granted to it.

The return value should look similar to this:

{
  "id":"TiNAGG4BaaMdaH1tRfuU", 
  "name":"logstash_host001",
  "api_key":"KnR6yE41RrSowb0kQ0HWoA" 
}

The id field is the unique identifier for this API key; the api_key field is the generated secret.
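
These two fields combine into the credential Logstash expects. A small shell sketch of how the pieces fit together: the plugin api_key option takes the id and api_key joined by a colon, and if you call Elasticsearch over raw HTTP instead, the same pair is base64-encoded into an "Authorization: ApiKey ..." header:

```shell
# Join the two fields from the Create API key response with a colon; this is
# the value the Logstash elasticsearch plugins expect in their api_key option.
id='TiNAGG4BaaMdaH1tRfuU'
key='KnR6yE41RrSowb0kQ0HWoA'
api_key="${id}:${key}"
echo "$api_key"

# For raw HTTP requests to Elasticsearch (outside Logstash), the same pair is
# base64-encoded and sent as an "Authorization: ApiKey ..." header.
echo "Authorization: ApiKey $(printf '%s' "$api_key" | base64)"
```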

Create an API key for publishing

You’re in luck! The example we used in the Create an API key section creates an API key for publishing to Elasticsearch using the Elasticsearch output plugin.

Here’s an example using the API key in your Elasticsearch output plugin configuration.

output {
  elasticsearch {
    api_key => "TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA" 
  }
}

Format is id:api_key (as returned by Create API key)

Create an API key for reading

Creating an API key to use for reading data from Elasticsearch is similar to creating an API key for publishing described earlier. You can use the example in the Create an API key section, granting the appropriate privileges.

Here’s an example using the API key in your Elasticsearch inputs plugin configuration.

input {
  elasticsearch {
    api_key => "TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA" 
  }
}

Format is id:api_key (as returned by Create API key)

Create an API key for filtering

Creating an API key to use for processing data from Elasticsearch is similar to creating an API key for publishing described earlier. You can use the example in the Create an API key section, granting the appropriate privileges.

Here’s an example using the API key in your Elasticsearch filter plugin configuration.

filter {
  elasticsearch {
    api_key => "TiNAGG4BaaMdaH1tRfuU:KnR6yE41RrSowb0kQ0HWoA" 
  }
}

Format is id:api_key (as returned by Create API key)

Learn more about API keys

For more information, see the Elasticsearch API key documentation. For information about managing API keys through Kibana, see API Keys.