Terminology

@metadata
A special field for storing content that you don’t want to include in output events. For example, the @metadata field is useful for creating transient fields for use in conditional statements.
administration console
A component of Elastic Cloud Enterprise that provides the API server for the Cloud UI. Also syncs cluster and allocator data from ZooKeeper to Elasticsearch.
allocator
Manages hosts that contain Elasticsearch and Kibana nodes. Controls the lifecycle of these nodes by creating new containers and managing the nodes within these containers when requested. Used to scale the capacity of your Elastic Cloud Enterprise installation.
analysis

Analysis is the process of converting full text to terms. Depending on which analyzer is used, the phrases FOO BAR, Foo-Bar, and foo,bar will probably all result in the terms foo and bar. These terms are what is actually stored in the index.

A full text query (not a term query) for FoO:bAR will also be analyzed to the terms foo and bar, and will thus match the terms stored in the index.

It is this process of analysis (both at index time and at search time) that allows Elasticsearch to perform full text queries.

Also see text and term.
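
A minimal sketch of this process, assuming a hypothetical local cluster at localhost:9200 with security disabled and using Python's requests library, calls the analyze API to see which terms a piece of text produces:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster, security disabled

    # Ask the standard analyzer to convert full text into terms.
    resp = requests.post(f"{ES}/_analyze",
                         json={"analyzer": "standard", "text": "FOO BAR, Foo-Bar"})
    print([t["token"] for t in resp.json()["tokens"]])
    # Expected: ['foo', 'bar', 'foo', 'bar']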

anomaly detection job
Anomaly detection jobs contain the configuration information and metadata necessary to perform an analytics task. See Machine learning jobs and the create anomaly detection job API.
API key

A unique identifier that you can use for authentication when submitting Elasticsearch requests. When TLS is enabled, all requests must be authenticated using either basic authentication (user name and password) or an API key.
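
As a hedged sketch of how such a request might look from Python (the key ID and secret below are placeholders, and the cluster URL is assumed), an API key is sent as base64(id:api_key) in the Authorization header:

    import base64
    import requests

    ES = "https://localhost:9200"  # hypothetical TLS-enabled cluster

    # Placeholder credentials; real values come from the create API key API.
    token = base64.b64encode(b"my_key_id:my_key_secret").decode()
    resp = requests.get(ES, headers={"Authorization": f"ApiKey {token}"},
                        verify=False)  # verify=False only for a self-signed demo cert
    print(resp.status_code)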

auto-follow pattern

An index pattern that automatically configures new indices as follower indices for cross-cluster replication. For more information, see Managing auto follow patterns.
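
A sketch of creating such a pattern, assuming a remote cluster is already registered under the alias remote_cluster and a local cluster runs at localhost:9200:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # New indices on the remote cluster that match logs-* are automatically
    # replicated here as follower indices.
    resp = requests.put(f"{ES}/_ccr/auto_follow/logs-pattern", json={
        "remote_cluster": "remote_cluster",        # assumed remote cluster alias
        "leader_index_patterns": ["logs-*"],
        "follow_index_pattern": "{{leader_index}}-copy",
    })
    print(resp.json())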

availability zone
Contains resources available to an Elastic Cloud Enterprise installation that are isolated from other availability zones to safeguard against failure. Could be a rack, a server zone, or some other logical constraint that creates a failure boundary. In a highly available cluster, the nodes of a cluster are spread across two or three availability zones to ensure that the cluster can survive the failure of an entire availability zone. Also see Fault Tolerance (High Availability).
beats runner
Used to send Filebeat and Metricbeat information to the logging cluster.
bucket
The machine learning features use the concept of a bucket to divide the time series into batches for processing. The bucket span is part of the configuration information for anomaly detection jobs. It defines the time interval that is used to summarize and model the data. This is typically between 5 minutes and 1 hour, depending on your data characteristics. When you set the bucket span, take into account the granularity at which you want to analyze, the frequency of the input data, the typical duration of the anomalies, and the frequency at which alerting is required.
client forwarder
Used for secure internal communications between various components of Elastic Cloud Enterprise and ZooKeeper.
Cloud UI
Provides web-based access to manage your Elastic Cloud Enterprise installation, supported by the administration console.
cluster

One or more nodes that share the same cluster name. Each cluster has a single master node, which is chosen automatically by the cluster and can be replaced if it fails.

codec plugin
A Logstash plugin that changes the data representation of an event. Codecs are essentially stream filters that can operate as part of an input or output. Codecs enable you to separate the transport of messages from the serialization process. Popular codecs include json, msgpack, and plain (text).
cold phase

The third possible phase in the index lifecycle. In the cold phase, an index is no longer updated and seldom queried. The information still needs to be searchable, but it’s okay if those queries are slower.

conditional
A control flow that executes certain actions based on whether a statement (also called a condition) is true or false. Logstash supports if, else if, and else statements. You can use conditional statements to apply filters and send events to a specific output based on conditions that you specify.
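
Logstash conditionals are written in the Logstash configuration language; purely as an illustration of the control flow (this is Python, not Logstash syntax), an if / else if / else block routes an event like this:

    # Python sketch of the control flow only; NOT Logstash configuration syntax.
    def route(event):
        if event.get("status", 0) >= 500:
            return "errors_output"
        elif event.get("status", 0) >= 400:
            return "warnings_output"
        else:
            return "default_output"

    print(route({"status": 404}))  # warnings_output
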
constructor
Directs allocators to manage containers of Elasticsearch and Kibana nodes and maximizes the utilization of allocators. Monitors plan change requests from the Cloud UI and determines how to transform the existing cluster. In a highly available installation, places cluster nodes within different availability zones to ensure that the cluster can survive the failure of an entire availability zone.
container
Includes an instance of Elastic Cloud Enterprise software and its dependencies. Used to provision similar environments, to assign a guaranteed share of host resources to nodes, and to simplify operational effort in Elastic Cloud Enterprise.
coordinator
Consists of a logical grouping of some Elastic Cloud Enterprise services and acts as a distributed coordination system and resource scheduler.
cross-cluster replication (CCR)

A feature that enables you to replicate indices in remote clusters to your local cluster. For more information, see Cross-cluster replication.

cross-cluster search (CCS)

A feature that enables any node to act as a federated client across multiple clusters. See Search across clusters.

datafeed
Anomaly detection jobs can analyze a one-off batch of data or run continuously on data in real time. Datafeeds retrieve data from Elasticsearch for analysis. Alternatively, you can post data from any source directly to a machine learning API.
data frame analytics job
Data frame analytics jobs contain the configuration information and metadata necessary to perform machine learning analytics tasks on a source index and store the outcome in a destination index. See Data frame analytics overview and the create data frame analytics job API.
delete phase

The last possible phase in the index lifecycle. In the delete phase, an index is no longer needed and can safely be deleted.

detector
As part of the configuration information that is associated with anomaly detection jobs, detectors define the type of analysis that needs to be done. They also specify which fields to analyze. You can have more than one detector in a job, which is more efficient than running multiple jobs against the same data.
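
A sketch of a job with one detector, assuming a hypothetical local cluster and an input field named response_time, uses the create anomaly detection job API:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # One detector that models the mean of a hypothetical response_time
    # field in 15-minute buckets.
    resp = requests.put(f"{ES}/_ml/anomaly_detectors/demo-job", json={
        "analysis_config": {
            "bucket_span": "15m",
            "detectors": [{"function": "mean", "field_name": "response_time"}],
        },
        "data_description": {"time_field": "timestamp"},
    })
    print(resp.json())
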
director
Manages the ZooKeeper datastore. This role is often shared with the coordinator, though in production deployments it can be separated.
document

A document is a JSON document stored in Elasticsearch. It is like a row in a table in a relational database. Each document is stored in an index and has a type and an ID.

A document is a JSON object (also known in other languages as a hash / hashmap / associative array) which contains zero or more fields, or key-value pairs.

The original JSON document that is indexed will be stored in the _source field, which is returned by default when getting or searching for a document.

Elastic Common Schema (ECS)
A document schema for Elasticsearch, for use cases such as logging and metrics. ECS defines a common set of fields and their datatypes, and gives guidance on their correct usage. ECS is used to improve uniformity of event data coming from different sources.
event
A single unit of information, containing a timestamp plus additional data. An event arrives via an input, and is subsequently parsed, timestamped, and passed through the Logstash pipeline.
feature influence
In outlier detection, feature influence scores indicate which features of a data point contribute to its outlier behavior. See Feature influence.
feature importance
In supervised machine learning methods such as regression and classification, feature importance indicates the degree to which a specific feature affects a prediction. See Regression feature importance and Classification feature importance.
field

A document contains a list of fields, or key-value pairs. The value can be a simple (scalar) value (e.g. a string, integer, or date), or a nested structure like an array or an object. A field is similar to a column in a table in a relational database.

The mapping for each field has a field type (not to be confused with document type) which indicates the type of data that can be stored in that field, e.g. integer, string, or object. The mapping also allows you to define (amongst other things) how the value for a field should be analyzed.

In Logstash, this term refers to an event property. For example, each event in an apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), client IP address, and so on. Logstash uses the term "fields" to refer to these properties.

field reference
A reference to an event field. This reference may appear in an output block or filter block in the Logstash config file. Field references are typically wrapped in square brackets ([]), for example [fieldname]. If you are referring to a top-level field, you can omit the [] and simply use the field name. To refer to a nested field, you specify the full path to that field: [top-level field][nested field].
filter

A filter is a non-scoring query, meaning that it does not score documents. It is only concerned with answering the question: "Does this document match?". The answer is always a simple, binary yes or no. This kind of query is said to be made in a filter context, hence it is called a filter. Filters are simple checks for set inclusion or exclusion. In most cases, the goal of filtering is to reduce the number of documents that have to be examined.
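
A minimal sketch of a query in filter context, assuming a hypothetical local cluster and example index and field names:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # A term query in filter context: documents either match or they don't,
    # and no relevance score is computed.
    resp = requests.get(f"{ES}/my-index/_search", json={
        "query": {"bool": {"filter": [{"term": {"status": "published"}}]}}
    })
    print(resp.json()["hits"]["total"])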

filter plugin
A Logstash plugin that performs intermediary processing on an event. Typically, filters act upon event data after it has been ingested via inputs, by mutating, enriching, and/or modifying the data according to configuration rules. Filters are often applied conditionally depending on the characteristics of the event. Popular filter plugins include grok, mutate, drop, clone, and geoip. Filter stages are optional.
flush

Perform a Lucene commit to write index updates in the transaction log (translog) to disk. Because a Lucene commit is a relatively expensive operation, Elasticsearch records index and delete operations in the translog and automatically flushes changes to disk in batches. To recover from a crash, operations that have been acknowledged but not yet committed can be replayed from the translog. Before upgrading, you can explicitly call the flush API to ensure that all changes are committed to disk.
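
For example, a minimal sketch of an explicit flush against a hypothetical local cluster and example index:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Commit pending translog operations for one index to disk in Lucene.
    resp = requests.post(f"{ES}/my-index/_flush")
    print(resp.json())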

follower index

The target index for cross-cluster replication. A follower index exists in a local cluster and replicates a leader index.

frozen index

An index reduced to a low overhead state that still enables occasional searches. Frozen indices use a memory-efficient shard implementation and throttle searches to conserve resources. Searching a frozen index is lower overhead than re-opening a closed index to enable searching.

gem
A self-contained package of code that’s hosted on RubyGems.org. Logstash plugins are packaged as Ruby Gems. You can use the Logstash plugin manager to manage Logstash gems.
hot phase

The first possible phase in the index lifecycle. In the hot phase, an index is actively updated and queried.

hot thread
A Java thread that has high CPU usage and executes for longer than normal.
ID

The ID of a document uniquely identifies it. The index/id of a document must be unique. If no ID is provided, it is auto-generated. (Also see routing.)
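
A sketch of both cases, assuming a hypothetical local cluster and example index:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Index with an explicit ID (PUT)...
    requests.put(f"{ES}/my-index/_doc/1", json={"user": "kimchy"})

    # ...or let Elasticsearch auto-generate the ID (POST).
    resp = requests.post(f"{ES}/my-index/_doc", json={"user": "kimchy"})
    print(resp.json()["_id"])  # the auto-generated ID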

index

An optimized collection of JSON documents. Each document is a collection of fields, the key-value pairs that contain your data.

An index is a logical namespace that maps to one or more primary shards and can have zero or more replica shards.

index alias

An index alias is a logical name used to reference one or more indices.

Most Elasticsearch APIs accept an index alias in place of an index name.

See Add index alias.
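
A minimal sketch of adding an alias, assuming a hypothetical local cluster and example index names:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Point the alias "logs" at a concrete index; requests that name "logs"
    # are then resolved to logs-000001.
    resp = requests.post(f"{ES}/_aliases", json={
        "actions": [{"add": {"index": "logs-000001", "alias": "logs"}}]
    })
    print(resp.json())  # {'acknowledged': True}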

index lifecycle

The four phases an index can transition through: hot, warm, cold, and delete. For more information, see Index lifecycle.

index lifecycle policy

Specifies how an index moves between phases in the index lifecycle and what actions to perform during each phase.

index pattern

A string that can contain the * wildcard to match multiple index names. In most cases, the index parameter in an Elasticsearch request can be the name of a specific index, a list of index names, or an index pattern. For example, if you have the indices datastream-000001, datastream-000002, and datastream-000003, to search across all three you could use the datastream-* index pattern.

indexer
A Logstash instance that is tasked with interfacing with an Elasticsearch cluster in order to index event data.
influencer
Influencers are entities that might have contributed to an anomaly in a specific bucket in an anomaly detection job. For more information, see Influencers.
input plugin
A Logstash plugin that reads event data from a specific source. Input plugins are the first stage in the Logstash event processing pipeline. Popular input plugins include file, syslog, redis, and beats.
job
Machine learning jobs contain the configuration information and metadata necessary to perform an analytics task. There are two types: anomaly detection jobs and data frame analytics jobs. See also rollup job.
leader index

The source index for cross-cluster replication. A leader index exists on a remote cluster and is replicated to follower indices.

local cluster

The cluster that pulls data from a remote cluster in cross-cluster search or cross-cluster replication.

machine learning node
A machine learning node is a node that has xpack.ml.enabled and node.ml set to true, which is the default behavior. If you set node.ml to false, the node can service API requests but it cannot run machine learning jobs. If you want to use machine learning features, there must be at least one machine learning node in your cluster.
mapping

A mapping is like a schema definition in a relational database. Each index has a mapping, which defines a type, plus a number of index-wide settings.

A mapping can either be defined explicitly, or it will be generated automatically when a document is indexed.
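
A sketch of an explicit mapping defined at index creation time, against a hypothetical local cluster with example field names:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Fields not listed here would still be added automatically via
    # dynamic mapping when documents are indexed.
    resp = requests.put(f"{ES}/my-index", json={
        "mappings": {
            "properties": {
                "title":   {"type": "text"},     # analyzed full text
                "status":  {"type": "keyword"},  # exact-value term
                "created": {"type": "date"},
            }
        }
    })
    print(resp.json())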

master node
Handles cluster-level changes, such as creating or deleting indices, and publishes those changes to other nodes in an ordered fashion. Each cluster has a single master node, which is chosen automatically by the cluster and is replaced if the current master node fails. Also see node.
message broker
Also referred to as a message buffer or message queue, a message broker is external software (such as Redis, Kafka, or RabbitMQ) that stores messages from the Logstash shipper instance as an intermediate store, waiting to be processed by the Logstash indexer instance.
node

A running instance of Elasticsearch that belongs to a cluster. Multiple nodes can be started on a single server for testing purposes, but usually you should have one node per server.

output plugin
A Logstash plugin that writes event data to a specific destination. Outputs are the final stage in the event pipeline. Popular output plugins include elasticsearch, file, graphite, and statsd.
pipeline
A term used to describe the flow of events through the Logstash workflow. A pipeline typically consists of a series of input, filter, and output stages. Input stages get data from a source and generate events; filter stages, which are optional, modify the event data; and output stages write the data to a destination. Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.
plan
Specifies the configuration and topology of an Elasticsearch or Kibana cluster, such as capacity, availability, and Elasticsearch version. When changing a plan, the constructor determines how to transform the existing cluster into the pending plan.
plugin
A self-contained software package that implements one of the stages in the Logstash event processing pipeline. The list of available plugins includes input plugins, output plugins, codec plugins, and filter plugins. The plugins are implemented as Ruby gems and hosted on RubyGems.org. You define the stages of an event processing pipeline by configuring plugins.
plugin manager
Accessed via the bin/logstash-plugin script, the plugin manager enables you to manage the lifecycle of plugins in your Logstash deployment. You can install, remove, and upgrade plugins by using the plugin manager Command Line Interface (CLI).
primary shard

Each document is stored in a single primary shard. When you index a document, it is indexed first on the primary shard, then on all replicas of the primary shard.

By default, an index has one primary shard. You can specify more primary shards to scale the number of documents that your index can handle.

Once an index is created, you cannot change the number of primary shards. However, an index can be split into a new index using the split index API.

See also routing.
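
A sketch of fixing the primary shard count at creation time, against a hypothetical local cluster:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # number_of_shards is fixed once the index exists;
    # number_of_replicas can still be changed later.
    resp = requests.put(f"{ES}/my-index", json={
        "settings": {"number_of_shards": 3, "number_of_replicas": 1}
    })
    print(resp.json())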

proxy
A highly available, TLS-enabled proxy layer that routes user requests, mapping the cluster IDs passed in request URLs to the cluster nodes that handle those requests.
query

A request for information from Elasticsearch. You can think of a query as a question, written in a way Elasticsearch understands. A search consists of one or more queries combined.

There are two types of queries: scoring queries and filters. For more information about query types, see Query and filter context.
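
A sketch combining both types in one search, assuming a hypothetical local cluster and example field names:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # A scoring full text query (must) combined with a non-scoring filter.
    resp = requests.get(f"{ES}/my-index/_search", json={
        "query": {
            "bool": {
                "must":   [{"match": {"title": "quick brown fox"}}],
                "filter": [{"range": {"created": {"gte": "2020-01-01"}}}],
            }
        }
    })
    print(resp.json()["hits"]["total"])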

recovery

Shard recovery is the process of syncing a replica shard from a primary shard. Upon completion, the replica shard is available for search.

Recovery automatically occurs during the following processes: node startup (known as local store recovery), replication of a primary shard, relocation of a shard to a different node, and snapshot restore.

reindex

To cycle through some or all documents in one or more indices, rewriting them into the same index or a new index in a local or remote cluster. This is most commonly done to update mappings or to upgrade Elasticsearch between two incompatible index versions.
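
A minimal sketch using the reindex API, assuming a hypothetical local cluster and example index names:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Copy every document from old-index into new-index, re-applying
    # new-index's (possibly updated) mappings along the way.
    resp = requests.post(f"{ES}/_reindex", json={
        "source": {"index": "old-index"},
        "dest":   {"index": "new-index"},
    })
    print(resp.json())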

remote cluster

A separate cluster, often in a different data center or locale, that contains indices that can be replicated or searched by the local cluster. The connection to a remote cluster is unidirectional.

replica shard

Each primary shard can have zero or more replicas. A replica is a copy of the primary shard, and has two purposes:

  1. Increase failover: a replica shard can be promoted to a primary shard if the primary fails.
  2. Increase performance: get and search requests can be handled by primary or replica shards.

By default, each primary shard has one replica, but the number of replicas can be changed dynamically on an existing index. A replica shard will never be started on the same node as its primary shard.
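
A sketch of changing the replica count dynamically, against a hypothetical local cluster and example index:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Unlike the primary shard count, the replica count can be changed
    # on an existing index at any time.
    resp = requests.put(f"{ES}/my-index/_settings", json={
        "index": {"number_of_replicas": 2}
    })
    print(resp.json())  # {'acknowledged': True}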

roles token
Enables a host to join an existing Elastic Cloud Enterprise installation and grants permission to hosts to hold certain roles, such as the allocator role. Used when installing Elastic Cloud Enterprise on additional hosts, a roles token helps secure Elastic Cloud Enterprise by making sure that only authorized hosts become part of the installation.
rollup

Summarize high-granularity data into a more compressed format to maintain access to historical data in a cost-effective way.

rollup index

A special type of index for storing historical data at reduced granularity. Documents are summarized and indexed into a rollup index by a rollup job.

rollup job

A background task that runs continuously to summarize documents in an index and index the summaries into a separate rollup index. The job configuration controls what information is rolled up and how often.

routing

When you index a document, it is stored on a single primary shard. That shard is chosen by hashing the routing value. By default, the routing value is derived from the ID of the document or, if the document has a specified parent document, from the ID of the parent document (to ensure that child and parent documents are stored on the same shard).

This value can be overridden by specifying a routing value at index time, or a routing field in the mapping.
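
A sketch of overriding routing at index time, assuming a hypothetical local cluster and an example routing value:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Route by user so all of one user's documents land on the same shard.
    requests.put(f"{ES}/my-index/_doc/1", params={"routing": "user1"},
                 json={"user": "user1", "message": "hello"})

    # The same routing value must be supplied to get the document back.
    resp = requests.get(f"{ES}/my-index/_doc/1", params={"routing": "user1"})
    print(resp.json()["_source"])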

runner
A local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to it exist and are able to run, and creates or recreates the containers if necessary.
services forwarder
Routes data internally in an Elastic Cloud Enterprise installation.
shard

A shard is a single Lucene instance. It is a low-level “worker” unit which is managed automatically by Elasticsearch. An index is a logical namespace which points to primary and replica shards.

Other than defining the number of primary and replica shards that an index should have, you never need to refer to shards directly. Instead, your code should deal only with an index.

Elasticsearch distributes shards amongst all nodes in the cluster, and can move shards automatically from one node to another in the case of node failure, or the addition of new nodes.

shipper
An instance of Logstash that sends events to another instance of Logstash, or some other application.
shrink

Reduce the number of primary shards in an index.

You can shrink an index to reduce its overhead when the request volume drops. For example, you might opt to shrink an index once it is no longer the write index. See the shrink index API.
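
A sketch of the shrink workflow, assuming a hypothetical local cluster and an example node name; the source index must first be made read-only with all shards on one node:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Preconditions: block writes and co-locate all shards on one node.
    requests.put(f"{ES}/my-index/_settings", json={
        "index.blocks.write": True,
        "index.routing.allocation.require._name": "node-1",  # example node name
    })

    # Shrink into a new index with fewer primary shards (the new count must
    # be a factor of the original).
    resp = requests.post(f"{ES}/my-index/_shrink/my-shrunk-index", json={
        "settings": {"index.number_of_shards": 1}
    })
    print(resp.json())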

snapshot

A backup taken from a running Elasticsearch cluster. You can take snapshots of individual indices or of the entire cluster.

snapshot lifecycle policy

Specifies how frequently to perform automatic backups of a cluster and how long to retain the resulting snapshots.
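
A sketch of such a policy, assuming a hypothetical local cluster and an already-registered snapshot repository named my_repo:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Snapshot all indices nightly at 01:30 and keep the results for 30 days.
    resp = requests.put(f"{ES}/_slm/policy/nightly-snapshots", json={
        "schedule": "0 30 1 * * ?",        # cron: 01:30 every day
        "name": "<nightly-snap-{now/d}>",  # date-math snapshot name
        "repository": "my_repo",           # assumed existing repository
        "config": {"indices": ["*"]},
        "retention": {"expire_after": "30d"},
    })
    print(resp.json())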

snapshot repository

Specifies where snapshots are to be stored. Snapshots can be written to a shared filesystem or to a remote repository.

source field

By default, the JSON document that you index will be stored in the _source field and will be returned by all get and search requests. This allows you access to the original object directly from search results, rather than requiring a second step to retrieve the object from an ID.

split

To increase the number of primary shards in an index. See the split index API.
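
A sketch of the split workflow, against a hypothetical local cluster and example index names; the source index must first be made read-only:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # Precondition: block writes on the source index.
    requests.put(f"{ES}/my-index/_settings", json={"index.blocks.write": True})

    # Split into a new index whose primary shard count is a multiple of
    # the original's.
    resp = requests.post(f"{ES}/my-index/_split/my-split-index", json={
        "settings": {"index.number_of_shards": 4}
    })
    print(resp.json())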

stunnel
Securely tunnels all traffic in an Elastic Cloud Enterprise installation.
term

A term is an exact value that is indexed in Elasticsearch. The terms foo, Foo, FOO are NOT equivalent. Terms (i.e. exact values) can be searched for using term queries.

See also text and analysis.

text

Text (or full text) is ordinary unstructured text, such as this paragraph. By default, text will be analyzed into terms, which is what is actually stored in the index.

Text fields need to be analyzed at index time in order to be searchable as full text, and keywords in full text queries must be analyzed at search time to produce (and search for) the same terms that were generated at index time.

See also term and analysis.
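
A sketch contrasting analyzed and non-analyzed queries, assuming a hypothetical local cluster and an example text field named body:

    import requests

    ES = "http://localhost:9200"  # hypothetical local cluster

    # A match query is analyzed, so "FoO" finds documents whose body field
    # produced the term "foo" at index time...
    requests.get(f"{ES}/my-index/_search",
                 json={"query": {"match": {"body": "FoO"}}})

    # ...while a term query is not analyzed: "FoO" only matches the exact
    # term "FoO", which an analyzed text field never stores.
    resp = requests.get(f"{ES}/my-index/_search",
                        json={"query": {"term": {"body": "FoO"}}})
    print(resp.json()["hits"]["total"])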

type

Represents the kind of document, e.g. an email, a user, or a tweet. Types are deprecated and are in the process of being removed. See Removal of mapping types.

warm phase

The second possible phase in the index lifecycle. In the warm phase, an index is generally optimized for search and no longer updated.

worker
The filter thread model used by Logstash, where each worker receives an event and applies all filters, in order, before emitting the event to the output queue. This allows scalability across CPUs because many filters are CPU intensive.
ZooKeeper
A coordination service for distributed systems used by Elastic Cloud Enterprise to store the state of the installation. Responsible for discovery of hosts, resource allocation, leader election after failure, and high-priority notifications.