To configure centralized pipeline management:

1. Verify that you are using a license that includes the pipeline management feature.
   For more information, see https://www.elastic.co/subscriptions and License management.
2. Specify configuration management settings in the `logstash.yml` file. At a minimum, set:
   - `xpack.management.enabled: true` to enable centralized configuration management.
   - `xpack.management.elasticsearch.hosts` to specify the Elasticsearch instance that will store the Logstash pipeline configurations and metadata.
   - `xpack.management.pipeline.id` to register the pipelines that you want to centrally manage.
3. Restart Logstash.
If your Elasticsearch cluster is protected with basic authentication, assign the `logstash_admin` role as well as the `logstash_writer` role to any users who will use centralized pipeline management. See Secure your connection for more information.

Centralized management is disabled until you configure and enable security features.
After you’ve configured Logstash to use centralized pipeline management, you can no longer specify local pipeline configurations. This means that the `pipelines.yml` file and settings like `config.string` are inactive when this feature is enabled.
You can set the following `xpack.management` settings in `logstash.yml` to enable centralized pipeline management. For more information about configuring Logstash, see logstash.yml.
The following example shows basic settings that assume Elasticsearch and Kibana are installed on the localhost with basic authentication enabled, but no SSL. If you’re using SSL, you need to specify additional SSL settings.

```
xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://localhost:9200/"
xpack.management.elasticsearch.username: logstash_admin_user
xpack.management.elasticsearch.password: t0p.s3cr3t
xpack.management.logstash.poll_interval: 5s
xpack.management.pipeline.id: ["apache", "cloudwatch_logs"]
```
- `xpack.management.enabled`: Set to `true` to enable X-Pack centralized configuration management for Logstash.
- `xpack.management.logstash.poll_interval`: How often the Logstash instance polls for pipeline changes from Elasticsearch. The default is 5s.
- `xpack.management.pipeline.id`: Specify a comma-separated list of pipeline IDs to register for centralized pipeline management. After changing this setting, you need to restart Logstash to pick up changes. Pipeline IDs support `*` as a wildcard for matching multiple IDs.
- `xpack.management.elasticsearch.hosts`: The Elasticsearch instance that will store the Logstash pipeline configurations and metadata. This might be the same Elasticsearch instance specified in the `outputs` section in your Logstash configuration, or a different one. Defaults to `http://localhost:9200`.
- `xpack.management.elasticsearch.username` and `xpack.management.elasticsearch.password`: If your Elasticsearch cluster is protected with basic authentication, these settings provide the username and password that the Logstash instance uses to authenticate for accessing the configuration data. The username you specify here should have the built-in `logstash_admin` role and the customized `logstash_writer` role, which provides access to system indices for managing configurations. Starting with Elasticsearch version 7.10.0, the `logstash_admin` role inherits the `manage_logstash_pipelines` cluster privilege for centralized pipeline management. If a user has created their own roles and granted them access to the .logstash index, those roles will continue to work in 7.x but will need to be updated for 8.0.
- `xpack.management.elasticsearch.proxy`: Optional setting that allows you to specify a proxy URL if Logstash needs to use a proxy to reach your Elasticsearch cluster.
- `xpack.management.elasticsearch.ssl.ca_trusted_fingerprint`: Optional setting that enables you to specify the hex-encoded SHA-256 fingerprint of the certificate authority for your Elasticsearch instance. A self-secured Elasticsearch cluster will provide the fingerprint of its CA to the console during setup. You can also get the SHA-256 fingerprint of an Elasticsearch CA using the `openssl` command-line utility on the Elasticsearch host:

```
openssl x509 -fingerprint -sha256 -in $ES_HOME/config/certs/http_ca.crt
```
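Note that `openssl` prints the fingerprint colon-separated, while the setting expects plain hex. A minimal sketch of stripping the output down to the expected form (the `CA_CERT` path is an assumption; point it at your cluster’s CA file):

```shell
# Strip the "SHA256 Fingerprint=" prefix and the colons from the openssl
# output, leaving the hex-encoded fingerprint for ssl.ca_trusted_fingerprint.
CA_CERT="${CA_CERT:-$ES_HOME/config/certs/http_ca.crt}"   # assumed location
if [ -f "$CA_CERT" ]; then
  fp=$(openssl x509 -fingerprint -sha256 -noout -in "$CA_CERT" | cut -d= -f2 | tr -d ':')
  echo "$fp"   # 64 hex characters
fi
```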
- `xpack.management.elasticsearch.ssl.certificate_authority`: Optional setting that enables you to specify a path to the `.pem` file for the certificate authority for your Elasticsearch instance.
- `xpack.management.elasticsearch.ssl.truststore.path`: Optional setting that provides the path to the Java keystore (JKS) to validate the server’s certificate. You cannot use this setting and `xpack.management.elasticsearch.ssl.certificate_authority` at the same time.
- `xpack.management.elasticsearch.ssl.truststore.password`: Optional setting that provides the password to the truststore.
- `xpack.management.elasticsearch.ssl.keystore.path`: Optional setting that provides the path to the Java keystore (JKS) to validate the client’s certificate. You cannot use this setting and `xpack.management.elasticsearch.ssl.certificate` at the same time.
- `xpack.management.elasticsearch.ssl.keystore.password`: Optional setting that provides the password to the keystore.
- `xpack.management.elasticsearch.ssl.certificate`: Optional setting that provides the path to an SSL certificate to use to authenticate the client. This certificate should be an OpenSSL-style X.509 certificate file. This setting can be used only if `xpack.management.elasticsearch.ssl.key` is set.
- `xpack.management.elasticsearch.ssl.key`: Optional setting that provides the path to an OpenSSL-style RSA private key that corresponds to the SSL certificate. This setting can be used only if `xpack.management.elasticsearch.ssl.certificate` is set.
- `xpack.management.elasticsearch.ssl.verification_mode`: Option to validate the server’s certificate. Defaults to `full`. To disable, set to `none`. Disabling this severely compromises security.
- `xpack.management.elasticsearch.ssl.cipher_suites`: Optional setting that provides the list of cipher suites to use, listed by priority. Supported cipher suites vary depending on the Java and protocol versions.
- `xpack.management.elasticsearch.cloud_id`: If you’re using Elasticsearch in Elastic Cloud, you should specify the identifier here. This setting is an alternative to `xpack.management.elasticsearch.hosts`; if `cloud_id` is configured, `xpack.management.elasticsearch.hosts` should not be used. This Elasticsearch instance will store the Logstash pipeline configurations and metadata.
- `xpack.management.elasticsearch.cloud_auth`: If you’re using Elasticsearch in Elastic Cloud, you can set your auth credentials here. This setting is an alternative to both `xpack.management.elasticsearch.username` and `xpack.management.elasticsearch.password`; if `cloud_auth` is configured, those settings should not be used. The credentials you specify here should be for a user with the `logstash_admin` role, which provides access to system indices for managing configurations.
- `xpack.management.elasticsearch.api_key`: Authenticate using an Elasticsearch API key. Note that this option also requires using SSL. The API key format is `id:api_key`, where `id` and `api_key` are as returned by the Elasticsearch Create API key API.
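Joining the two fields can be sketched as follows (the `id` and `api_key` values below are made up for illustration; real ones come from the Create API key API response):

```shell
# Illustrative only: join the "id" and "api_key" fields from the
# Create API key API response with a colon to form the setting value.
id="TiNAGG4BaaMdaH1tRfuU"            # made-up example id
api_key="KnR6yE41RrSowb0kQ0HWoA"     # made-up example key
value="${id}:${api_key}"
echo "xpack.management.elasticsearch.api_key: \"${value}\""
```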
Pipeline IDs must begin with a letter or underscore and contain only letters, underscores, dashes, and numbers. You can use `*` in `xpack.management.pipeline.id` to match any number of letters, underscores, dashes, and numbers.

```
xpack.management.pipeline.id: ["*logs", "*apache*", "tomcat_log"]
```

In this example, `"*logs"` matches all IDs ending in `logs`, and `"*apache*"` matches any IDs with `apache` in the name.

Wildcards in pipeline IDs are available starting with Elasticsearch 7.10. Logstash can pick up new pipelines without a restart if the new pipeline ID matches the wildcard pattern.
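The wildcard semantics above can be illustrated with shell glob matching, which treats `*` the same way (a sketch for illustration only, not Logstash code):

```shell
# matches ID PATTERN — prints "match" if ID matches the glob PATTERN.
matches() { case "$1" in $2) echo "match";; *) echo "no match";; esac; }

matches "cloudwatch_logs" "*logs"     # match: ends in "logs"
matches "apache_access"   "*apache*"  # match: contains "apache"
matches "tomcat_log"      "*logs"     # no match
```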