Terminology

@metadata

A special field for storing content that you don’t want to include in output events. For example, the @metadata field is useful for creating transient fields for use in conditional statements.
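
A minimal sketch (the field name is illustrative): a filter stashes a scratch value under @metadata, and a later conditional reads it; because the field lives under @metadata, it never appears in the emitted event.

    filter {
      # Stash a transient value; fields under @metadata are not
      # included in the events that outputs emit.
      mutate {
        add_field => { "[@metadata][source_type]" => "syslog" }
      }
    }

    output {
      # Conditionals can still read @metadata fields.
      if [@metadata][source_type] == "syslog" {
        stdout { codec => rubydebug }
      }
    }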

administration console

A component of Elastic Cloud Enterprise that provides the API server for the Cloud UI. Also syncs cluster and allocator data from ZooKeeper to Elasticsearch.

allocator

Manages hosts that contain Elasticsearch and Kibana nodes. Controls the lifecycle of these nodes by creating new containers and managing the nodes within these containers when requested. Used to scale the capacity of your Elastic Cloud Enterprise installation.

analysis

Analysis is the process of converting full text to terms. Depending on which analyzer is used, the phrases FOO BAR, Foo-Bar, and foo,bar will probably all result in the terms foo and bar. These terms are what is actually stored in the index.

A full text query (not a term query) for FoO:bAR will also be analyzed to the terms foo,bar and will thus match the terms stored in the index.

It is this process of analysis (both at index time and at search time) that allows Elasticsearch to perform full text queries.
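
For example, you can see the tokenization with the _analyze API; the standard analyzer (the default) turns Foo-Bar into the terms foo and bar:

    POST _analyze
    {
      "analyzer": "standard",
      "text": "Foo-Bar"
    }

The response lists the tokens foo and bar, the terms that would be stored in the index.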

Also see text and term.

availability zone

Contains resources available to an Elastic Cloud Enterprise installation that are isolated from other availability zones to safeguard against failure. Could be a rack, a server zone, or some other logical constraint that creates a failure boundary. In a highly available cluster, the nodes of a cluster are spread across two or three availability zones to ensure that the cluster can survive the failure of an entire availability zone. Also see high availability.

beats runner

Used to send Filebeat and Metricbeat information to the logging cluster.

bucket

The X-Pack machine learning features use the concept of a bucket to divide the time series into batches for processing. The bucket span is part of the configuration information for a job. It defines the time interval that is used to summarize and model the data. This is typically between 5 minutes and 1 hour, depending on your data characteristics. When you set the bucket span, take into account the granularity at which you want to analyze, the frequency of the input data, the typical duration of the anomalies, and the frequency at which alerting is required.

client forwarder

Used for secure internal communications between various components of Elastic Cloud Enterprise and ZooKeeper.

Cloud UI

Provides web-based access to manage your Elastic Cloud Enterprise installation, supported by the administration console.

cluster

A cluster consists of one or more nodes which share the same cluster name. Each cluster has a single master node which is chosen automatically by the cluster and which can be replaced if the current master node fails.

codec plugin

A Logstash plugin that changes the data representation of an event. Codecs are essentially stream filters that can operate as part of an input or output. Codecs enable you to separate the transport of messages from the serialization process. Popular codecs include json, msgpack, and plain (text).
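
For example, a codec can decode data at the input stage, so no separate parsing filter is needed (a sketch):

    input {
      # Each incoming line is parsed as JSON before it enters the pipeline.
      stdin { codec => json }
    }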

conditional

A control flow that executes certain actions based on whether a statement (also called a condition) is true or false. Logstash supports if, else if, and else statements. You can use conditional statements to apply filters and send events to a specific output based on conditions that you specify.
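
A sketch of the three forms (the field names, tags, and index names are illustrative):

    filter {
      if [status] == 404 {
        mutate { add_tag => ["not_found"] }
      } else if [status] >= 500 {
        mutate { add_tag => ["server_error"] }
      } else {
        mutate { add_tag => ["ok"] }
      }
    }

    output {
      # Route events to a different index based on a tag.
      if "server_error" in [tags] {
        elasticsearch { index => "errors-%{+YYYY.MM.dd}" }
      } else {
        elasticsearch { index => "logs-%{+YYYY.MM.dd}" }
      }
    }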

constructor

Directs allocators to manage containers of Elasticsearch and Kibana nodes and maximizes the utilization of allocators. Monitors plan change requests from the Cloud UI and determines how to transform the existing cluster. In a highly available installation, places cluster nodes within different availability zones to ensure that the cluster can survive the failure of an entire availability zone.

container

Includes an instance of Elastic Cloud Enterprise software and its dependencies. Used to provision similar environments, to assign a guaranteed share of host resources to nodes, and to simplify operational effort in Elastic Cloud Enterprise.

coordinator

Consists of a logical grouping of some Elastic Cloud Enterprise services and acts as a distributed coordination system and resource scheduler.

datafeed

Machine learning jobs can analyze data either as a one-off batch or continuously in real time. Datafeeds retrieve data from Elasticsearch for analysis. Alternatively, you can post data from any source directly to a machine learning API.

detector

As part of the configuration information that is associated with an X-Pack machine learning job, detectors define the type of analysis that needs to be done. They also specify which fields to analyze. You can have more than one detector in a job, which is more efficient than running multiple jobs against the same data.
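
A minimal sketch of a job configuration, tying bucket span and detector together (the job name, bucket span, and time field are illustrative; on 6.x the endpoint is prefixed with _xpack):

    PUT _ml/anomaly_detectors/request-rate
    {
      "analysis_config": {
        "bucket_span": "15m",
        "detectors": [
          { "function": "count" }
        ]
      },
      "data_description": {
        "time_field": "@timestamp"
      }
    }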

director

Manages the ZooKeeper datastore. This role is often shared with the coordinator, though in production deployments it can be separated.

document

A document is a JSON document which is stored in Elasticsearch. It is like a row in a table in a relational database. Each document is stored in an index and has a type and an id.

A document is a JSON object (also known in other languages as a hash / hashmap / associative array) which contains zero or more fields, or key-value pairs.

The original JSON document that is indexed will be stored in the _source field, which is returned by default when getting or searching for a document.
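
For example (the index name and fields are illustrative; _doc stands in for the type on recent versions):

    PUT my-index/_doc/1
    {
      "user": "kimchy",
      "message": "trying out full text search"
    }

    GET my-index/_doc/1

The GET response returns the indexed JSON in the _source field.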

event

A single unit of information, containing a timestamp plus additional data. An event arrives via an input, and is subsequently parsed, timestamped, and passed through the Logstash pipeline.

field

A document contains a list of fields, or key-value pairs. The value can be a simple (scalar) value (for example, a string, integer, date), or a nested structure like an array or an object. A field is similar to a column in a table in a relational database.

The mapping for each field has a field type (not to be confused with document type) which indicates the type of data that can be stored in that field, e.g. integer, string, object. The mapping also allows you to define (amongst other things) how the value for a field should be analyzed.

In Logstash, this term refers to an event property. For example, each event in an apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), client IP address, and so on. Logstash uses the term "fields" to refer to these properties.

field reference

A reference to an event field. This reference may appear in an output block or filter block in the Logstash config file. Field references are typically wrapped in square brackets ([]), for example [fieldname]. If you are referring to a top-level field, you can omit the [] and simply use the field name. To refer to a nested field, specify the full path to that field: [top-level field][nested field].
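
A short sketch (the field names are illustrative):

    filter {
      # Top-level field: [response] and response are equivalent here.
      if [response] == 200 {
        mutate {
          # Nested field: give the full path in bracket notation.
          replace => { "summary" => "OK from %{[host][name]}" }
        }
      }
    }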

filter plugin

A Logstash plugin that performs intermediary processing on an event. Typically, filters act upon event data after it has been ingested via inputs, by mutating, enriching, and/or modifying the data according to configuration rules. Filters are often applied conditionally depending on the characteristics of the event. Popular filter plugins include grok, mutate, drop, clone, and geoip. Filter stages are optional.

gem

A self-contained package of code that’s hosted on RubyGems.org. Logstash plugins are packaged as Ruby Gems. You can use the Logstash plugin manager to manage Logstash gems.

hot thread

A Java thread that has high CPU usage and executes for a longer-than-normal period of time.

id

The ID of a document identifies it. The index/id of a document must be unique. If no ID is provided, it is auto-generated. (Also see routing).

index

An index is like a table in a relational database. It has a mapping, which defines a type, and the type contains the fields in the index.

An index is a logical namespace which maps to one or more primary shards and can have zero or more replica shards.

indexer

A Logstash instance that is tasked with interfacing with an Elasticsearch cluster in order to index event data.

input plugin

A Logstash plugin that reads event data from a specific source. Input plugins are the first stage in the Logstash event processing pipeline. Popular input plugins include file, syslog, redis, and beats.

job

Machine learning jobs contain the configuration information and metadata necessary to perform an analytics task.

machine learning node

A machine learning node is a node that has xpack.ml.enabled and node.ml set to true, which is the default behavior. If you set node.ml to false, the node can service API requests but it cannot run jobs. If you want to use X-Pack machine learning features, there must be at least one machine learning node in your cluster.
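
For example, in elasticsearch.yml (a sketch showing the settings named above):

    # This node can service machine learning API requests
    # but will not run machine learning jobs.
    xpack.ml.enabled: true
    node.ml: false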

mapping

A mapping is like a schema definition in a relational database. Each index has a mapping, which defines a type, plus a number of index-wide settings.

A mapping can either be defined explicitly, or it will be generated automatically when a document is indexed.
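
For example, a mapping can be defined explicitly when the index is created (the index and field names are illustrative; shown in the typeless form used where mapping types have been removed):

    PUT my-index
    {
      "mappings": {
        "properties": {
          "title":      { "type": "text" },
          "view_count": { "type": "integer" },
          "created":    { "type": "date" }
        }
      }
    }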

master node

Handles cluster-level changes, such as creating or deleting an index, and publishes those changes to other nodes in an ordered fashion. Each cluster has a single master node which is chosen automatically by the cluster and is replaced if the current master node fails. Also see node.

message broker

Also referred to as a message buffer or message queue, a message broker is external software (such as Redis, Kafka, or RabbitMQ) that stores messages from the Logstash shipper instance as an intermediate store, waiting to be processed by the Logstash indexer instance.

node

A node is a running instance of Elasticsearch or Kibana which belongs to a cluster. Multiple nodes can be started on a single server for testing purposes, but usually you should have one node per server.

At startup, a node will use unicast to discover an existing cluster with the same cluster name and will try to join that cluster.

output plugin

A Logstash plugin that writes event data to a specific destination. Outputs are the final stage in the event pipeline. Popular output plugins include elasticsearch, file, graphite, and statsd.

pipeline

A term used to describe the flow of events through the Logstash workflow. A pipeline typically consists of a series of input, filter, and output stages. Input stages get data from a source and generate events; filter stages, which are optional, modify the event data; and output stages write the data to a destination. Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.
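
A minimal sketch of all three stages (the path and pattern are illustrative):

    input {
      file { path => "/var/log/syslog" }
    }

    filter {
      # Optional stage: parse the raw line into structured fields.
      grok { match => { "message" => "%{SYSLOGLINE}" } }
    }

    output {
      elasticsearch { hosts => ["localhost:9200"] }
    }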

plan

Specifies the configuration and topology of an Elasticsearch or Kibana cluster, such as capacity, availability, and Elasticsearch version, for example. When changing a plan, the constructor determines how to transform the existing cluster into the pending plan.

plugin

A self-contained software package that implements one of the stages in the Logstash event processing pipeline. The list of available plugins includes input plugins, output plugins, codec plugins, and filter plugins. The plugins are implemented as Ruby gems and hosted on RubyGems.org. You define the stages of an event processing pipeline by configuring plugins.

plugin manager

Accessed via the bin/logstash-plugin script, the plugin manager enables you to manage the lifecycle of plugins in your Logstash deployment. You can install, remove, and upgrade plugins by using the plugin manager Command Line Interface (CLI).
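
For example (the plugin name is illustrative; note that the upgrade subcommand is called update):

    bin/logstash-plugin list
    bin/logstash-plugin install logstash-output-kafka
    bin/logstash-plugin update logstash-output-kafka
    bin/logstash-plugin remove logstash-output-kafka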

primary shard

Each document is stored in a single primary shard. When you index a document, it is indexed first on the primary shard, then on all replicas of the primary shard.

By default, an index has 5 primary shards. You can specify fewer or more primary shards to scale the number of documents that your index can handle.

You cannot change the number of primary shards in an index, once the index is created.
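
For example, the number of primary shards is fixed when the index is created (the index name and count are illustrative):

    PUT my-index
    {
      "settings": {
        "number_of_shards": 3
      }
    }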

See also routing.

proxy

A highly available, TLS-enabled proxy layer that routes user requests, mapping the cluster IDs that are passed in request URLs to the cluster nodes that handle the user requests.

replica shard

Each primary shard can have zero or more replicas. A replica is a copy of the primary shard, and has two purposes:

  1. increase failover: a replica shard can be promoted to a primary shard if the primary fails
  2. increase performance: get and search requests can be handled by primary or replica shards.

By default, each primary shard has one replica, but the number of replicas can be changed dynamically on an existing index. A replica shard will never be started on the same node as its primary shard.
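
Since the replica count is dynamic, it can be changed on a live index with the update settings API (the index name and count are illustrative):

    PUT my-index/_settings
    {
      "index": {
        "number_of_replicas": 2
      }
    }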

roles token

Enables a host to join an existing Elastic Cloud Enterprise installation and grants permission to hosts to hold certain roles, such as the allocator role. Used when installing Elastic Cloud Enterprise on additional hosts, a roles token helps secure Elastic Cloud Enterprise by making sure that only authorized hosts become part of the installation.

routing

When you index a document, it is stored on a single primary shard. That shard is chosen by hashing the routing value. By default, the routing value is derived from the ID of the document or, if the document has a specified parent document, from the ID of the parent document (to ensure that child and parent documents are stored on the same shard).

This value can be overridden by specifying a routing value at index time, or a routing field in the mapping.
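
For example, a custom routing value can be supplied at index time; a subsequent GET must supply the same value (names are illustrative):

    PUT my-index/_doc/1?routing=user1
    {
      "title": "a document routed by user"
    }

    GET my-index/_doc/1?routing=user1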

runner

A local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to it exist and are able to run, and creates or recreates the containers if necessary.

services forwarder

Routes data internally in an Elastic Cloud Enterprise installation.

shard

A shard is a single Lucene instance. It is a low-level “worker” unit which is managed automatically by Elasticsearch. An index is a logical namespace which points to primary and replica shards.

Other than defining the number of primary and replica shards that an index should have, you never need to refer to shards directly. Instead, your code should deal only with an index.

Elasticsearch distributes shards amongst all nodes in the cluster, and can move shards automatically from one node to another in the case of node failure, or the addition of new nodes.

shipper

An instance of Logstash that sends events to another instance of Logstash, or some other application.

source field

By default, the JSON document that you index will be stored in the _source field and will be returned by all get and search requests. This allows you access to the original object directly from search results, rather than requiring a second step to retrieve the object from an ID.

stunnel

Securely tunnels all traffic in an Elastic Cloud Enterprise installation.

term

A term is an exact value that is indexed in Elasticsearch. The terms foo, Foo, FOO are NOT equivalent. Terms (i.e. exact values) can be searched for using term queries. See also text and analysis.
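
For example, because a term query is not analyzed, it matches only the exact term stored in the index (the index and field names are illustrative):

    GET my-index/_search
    {
      "query": {
        "term": { "status": "FOO" }
      }
    }

If the status field was analyzed at index time, FOO was stored as the term foo, so this query matches nothing; a term query for foo would match.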

text

Text (or full text) is ordinary unstructured text, such as this paragraph. By default, text will be analyzed into terms, which is what is actually stored in the index.

Text fields need to be analyzed at index time in order to be searchable as full text, and keywords in full text queries must be analyzed at search time to produce (and search for) the same terms that were generated at index time.

See also term and analysis.

type

A type represents the class of document, e.g. an email, a user, or a tweet. Types are deprecated and are in the process of being removed. See Removal of mapping types.

worker

The filter thread model used by Logstash, where each worker receives an event and applies all filters, in order, before emitting the event to the output queue. This allows scalability across CPUs because many filters are CPU intensive.

ZooKeeper

A coordination service for distributed systems used by Elastic Cloud Enterprise to store the state of the installation. Responsible for discovery of hosts, resource allocation, leader election after failure, and high-priority notifications.