


action

The alert-specific response that occurs when an alert fires. An alert can have multiple actions. See Action and connector types.

administration console
A component of Elastic Cloud Enterprise that provides the API server for the Cloud UI. Also syncs cluster and allocator data from ZooKeeper to Elasticsearch.
Advanced Settings

Enables you to control the appearance and behavior of Kibana by setting the date format, default index, and other attributes. Part of Kibana Stack Management. See Advanced Settings.


alert

A set of conditions, schedules, and actions that enable notifications. See Alerts and Actions.

Alerts and Actions

A comprehensive view of all your alerts. Enables you to access and manage alerts for all Kibana apps from one place. See Alerts and Actions.

allocator

Manages hosts that contain Elasticsearch and Kibana nodes. Controls the lifecycle of these nodes by creating new containers and managing the nodes within these containers when requested. Used to scale the capacity of your Elastic Cloud Enterprise installation.

analysis

Analysis is the process of converting full text to terms. Depending on which analyzer is used, the phrases FOO BAR, Foo-Bar, and foo,bar will likely all result in the terms foo and bar. These terms are what is actually stored in the index.

A full text query (not a term query) for FoO:bAR will also be analyzed to the terms foo,bar and will thus match the terms stored in the index.

It is this process of analysis (both at index time and at search time) that allows Elasticsearch to perform full text queries.

Also see text and term.
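As an illustrative sketch only (not Elasticsearch's actual analyzer implementation, which chains character filters, a tokenizer, and token filters), a minimal "standard-like" analyzer can be modeled as lowercasing plus splitting on non-alphanumeric characters:

```python
import re

def analyze(text):
    """Toy analyzer: lowercase, then split on runs of non-alphanumeric
    characters. Sketches the index-time and search-time analysis idea."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

# All three phrasings from the definition analyze to the same terms:
assert analyze("FOO BAR") == ["foo", "bar"]
assert analyze("Foo-Bar") == ["foo", "bar"]
assert analyze("foo,bar") == ["foo", "bar"]
```

Because the same function runs at index time and at search time, a query for FoO:bAR produces the terms foo and bar and matches what was stored.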


annotation

A way to augment a data display with descriptive domain knowledge.

anomaly detection job
Anomaly detection jobs contain the configuration information and metadata necessary to perform an analytics task. See Machine learning jobs and the create anomaly detection job API.
API key

A unique identifier that you can use for authentication when submitting Elasticsearch requests. When TLS is enabled, all requests must be authenticated using either basic authentication (user name and password) or an API key.

APM agent
An open-source library, written in the same language as your service, which instruments your code and collects performance data and errors at runtime.
APM Server
An open-source application that receives data from APM agents and sends it to Elasticsearch.

app

A top-level Kibana component that is accessed through the side navigation. Apps include core Kibana components such as Discover and Dashboard, solutions like Observability and Security, and special-purpose tools like Maps and Stack Management.

auto-follow pattern

An index pattern that automatically configures new indices as follower indices for cross-cluster replication. For more information, see Managing auto follow patterns.

availability zone
Contains resources available to an Elastic Cloud Enterprise installation that are isolated from other availability zones to safeguard against failure. Could be a rack, a server zone, or some other logical constraint that creates a failure boundary. In a highly available cluster, the nodes of a cluster are spread across two or three availability zones to ensure that the cluster can survive the failure of an entire availability zone. Also see Fault Tolerance (High Availability).



basemap

The background detail necessary to orient the location of a map.

beats runner
Used to send Filebeat and Metricbeat information to the logging cluster.

bucket

A set of documents in Kibana that have certain characteristics in common. For example, matching documents might be bucketed by color, distance, or date range.

The machine learning features also use the concept of a bucket to divide the time series into batches for processing. The bucket span is part of the configuration information for anomaly detection jobs. It defines the time interval that is used to summarize and model the data. This is typically between 5 minutes and 1 hour, and it depends on your data characteristics. When you set the bucket span, take into account the granularity at which you want to analyze, the frequency of the input data, the typical duration of the anomalies, and the frequency at which alerting is required.
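A minimal sketch of the idea (not the machine learning implementation itself): a bucket span divides timestamped events into fixed-width batches.

```python
from collections import defaultdict

def bucketize(events, bucket_span_secs):
    """Group (timestamp, value) pairs into fixed-width time buckets,
    keyed by the start of each bucket. Illustration only."""
    buckets = defaultdict(list)
    for ts, value in events:
        bucket_start = ts - (ts % bucket_span_secs)
        buckets[bucket_start].append(value)
    return dict(buckets)

events = [(0, 1.0), (299, 2.0), (300, 3.0), (650, 4.0)]
# With a 5-minute (300-second) bucket span:
assert bucketize(events, 300) == {0: [1.0, 2.0], 300: [3.0], 600: [4.0]}
```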

bucket aggregation

An aggregation that creates buckets of documents. Each bucket is associated with a criterion (depending on the aggregation type), which determines whether or not a document in the current context falls into the bucket.



Canvas

Enables you to create presentations and infographics that pull live data directly from Elasticsearch. See Canvas.

Canvas expression language

A pipeline-based expression language for manipulating and visualizing data. Includes dozens of functions and other capabilities, such as table transforms, type casting, and sub-expressions. Supports TinyMath functions for complex math calculations. See Canvas function reference.


certainty

Specifies how many documents must contain a pair of terms before it is considered a useful connection in a graph.

client forwarder
Used for secure internal communications between various components of Elastic Cloud Enterprise and ZooKeeper.
Cloud UI
Provides web-based access to manage your Elastic Cloud Enterprise installation, supported by the administration console.

cluster

One or more nodes that share the same cluster name. Each cluster has a single master node, which is chosen automatically by the cluster and can be replaced if it fails.

codec plugin
A Logstash plugin that changes the data representation of an event. Codecs are essentially stream filters that can operate as part of an input or output. Codecs enable you to separate the transport of messages from the serialization process. Popular codecs include json, msgpack, and plain (text).
cold phase

The third possible phase in the index lifecycle. In the cold phase, an index is no longer updated and seldom queried. The information still needs to be searchable, but it’s okay if those queries are slower.

cold tier

A data tier that contains nodes that hold time series data that is accessed occasionally and not normally updated.


condition

Specifies the circumstances that must be met to trigger an alert.

conditional

A control flow that executes certain actions based on whether a statement (also called a condition) is true or false. Logstash supports if, else if, and else statements. You can use conditional statements to apply filters and send events to a specific output based on conditions that you specify.

connector

A configuration that enables integration with an external system (the destination for an action). See Action and connector types.


Console

A tool for interacting with the Elasticsearch REST API. You can send requests to Elasticsearch, view responses, view API documentation, and get your request history. See Console.

constructor
Directs allocators to manage containers of Elasticsearch and Kibana nodes and maximizes the utilization of allocators. Monitors plan change requests from the Cloud UI and determines how to transform the existing cluster. In a highly available installation, places cluster nodes within different availability zones to ensure that the cluster can survive the failure of an entire availability zone.
container
Includes an instance of Elastic Cloud Enterprise software and its dependencies. Used to provision similar environments, to assign a guaranteed share of host resources to nodes, and to simplify operational effort in Elastic Cloud Enterprise.
content tier

A data tier that contains nodes that handle the indexing and query load for content such as a product catalog.

coordinator
Consists of a logical grouping of some Elastic Cloud Enterprise services and acts as a distributed coordination system and resource scheduler.
cross-cluster replication (CCR)

A feature that enables you to replicate indices in remote clusters to your local cluster. For more information, see Cross-cluster replication.

cross-cluster search (CCS)

A feature that enables any node to act as a federated client across multiple clusters. See Search across clusters.



dashboard

A collection of visualizations, saved searches, and maps that provide insights into your data from multiple perspectives.

datafeed

Anomaly detection jobs can analyze either a one-off batch of data or data in real time on a continuous basis. Datafeeds retrieve the data from Elasticsearch for analysis.
data frame analytics job
Data frame analytics jobs contain the configuration information and metadata necessary to perform machine learning analytics tasks on a source index and store the outcome in a destination index. See Data frame analytics overview and the create data frame analytics job API.
data source

A file, database, or service that provides the underlying data for a map, Canvas element, or visualization.

data stream

A named resource used to ingest, search, and manage time series data in Elasticsearch. A data stream’s data is stored across multiple hidden, auto-generated indices. You can automate management of these indices to more efficiently store large data volumes.

See Data streams.

data tier

A collection of nodes with the same data role that typically share the same hardware profile. See content tier, hot tier, warm tier, cold tier.

delete phase

The last possible phase in the index lifecycle. In the delete phase, an index is no longer needed and can safely be deleted.

detector
As part of the configuration information that is associated with anomaly detection jobs, detectors define the type of analysis that needs to be done. They also specify which fields to analyze. You can have more than one detector in a job, which is more efficient than running multiple jobs against the same data.
director
Manages the ZooKeeper datastore. This role is often shared with the coordinator, though in production deployments it can be separated.

Discover

Enables you to search and filter your data to zoom in on the information that you are interested in.

distributed tracing
The end-to-end collection of performance data throughout your microservices architecture.

drilldown

A navigation path that retains context (time range and filters) from the source to the destination, so you can view the data from a new perspective. A dashboard that shows the overall status of multiple data centers might have a drilldown to a dashboard for a single data center. See Drilldowns.


document

A document is a JSON document stored in Elasticsearch, analogous to a row in a table in a relational database. Each document is stored in an index and has a type and an ID.

A document is a JSON object (also known in other languages as a hash, hashmap, or associative array) that contains zero or more fields, or key-value pairs.

The original JSON document that is indexed is stored in the _source field, which is returned by default when getting or searching for a document.



edge

A connection between nodes in a graph that shows that they are related. The line weight indicates the strength of the relationship. See Graph.

Elastic Common Schema (ECS)
A document schema for Elasticsearch, for use cases such as logging and metrics. ECS defines a common set of fields, their datatype, and gives guidance on their correct usage. ECS is used to improve uniformity of event data coming from different sources.
Elastic Maps Service (EMS)

A service that provides basemap tiles, shape files, and other key features that are essential for visualizing geospatial data.


element

A Canvas workpad object that displays an image, text, or visualization.

event

A single unit of information, containing a timestamp plus additional data. An event arrives via an input, and is subsequently parsed, timestamped, and passed through the Logstash pipeline.


Feature Controls

Enables administrators to customize which features are available in each space. See Feature Controls.

feature influence
In outlier detection, feature influence scores indicate which features of a data point contribute to its outlier behavior. See Feature influence.
feature importance
In supervised machine learning methods such as regression and classification, feature importance indicates the degree to which a specific feature affects a prediction. See Regression feature importance and Classification feature importance.

field

A document contains a list of fields, or key-value pairs. The value can be a simple (scalar) value (e.g. a string, integer, or date), or a nested structure like an array or an object. A field is similar to a column in a table in a relational database.

The mapping for each field has a field type (not to be confused with document type) that indicates the type of data that can be stored in that field, e.g. integer, string, or object. The mapping also allows you to define (among other things) how the value for a field should be analyzed.

In Logstash, this term refers to an event property. For example, each event in an Apache access log has properties, such as a status code (200, 404), request path ("/", "index.html"), HTTP verb (GET, POST), and client IP address. Logstash uses the term "fields" to refer to these properties.

field reference
A reference to an event field. This reference may appear in an output block or filter block in the Logstash config file. Field references are typically wrapped in square ([]) brackets, for example [fieldname]. If you are referring to a top-level field, you can omit the [] and simply use the field name. To refer to a nested field, you specify the full path to that field: [top-level field][nested field].
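The bracketed path syntax can be sketched as a small lookup function. This `resolve` helper is hypothetical, for illustration only, and is not Logstash's own resolver:

```python
import re

def resolve(event, reference):
    """Resolve a Logstash-style field reference such as [top][nested].
    A bare name like "status" is treated as a top-level field."""
    parts = re.findall(r"\[([^\]]+)\]", reference) or [reference]
    value = event
    for part in parts:
        value = value[part]
    return value

event = {"status": 200, "request": {"path": "/index.html"}}
assert resolve(event, "status") == 200                      # top-level, no brackets
assert resolve(event, "[request][path]") == "/index.html"   # nested field
```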

filter

A filter is a non-scoring query, meaning that it does not score documents. It is only concerned with answering the question "Does this document match?". The answer is always a simple, binary yes or no. This kind of query is said to be made in a filter context, hence it is called a filter. Filters are simple checks for set inclusion or exclusion. In most cases, the goal of filtering is to reduce the number of documents that have to be examined.

filter plugin
A Logstash plugin that performs intermediary processing on an event. Typically, filters act upon event data after it has been ingested via inputs, by mutating, enriching, and/or modifying the data according to configuration rules. Filters are often applied conditionally depending on the characteristics of the event. Popular filter plugins include grok, mutate, drop, clone, and geoip. Filter stages are optional.

flush

Perform a Lucene commit to write index updates in the transaction log (translog) to disk. Because a Lucene commit is a relatively expensive operation, Elasticsearch records index and delete operations in the translog and automatically flushes changes to disk in batches. To recover from a crash, operations that have been acknowledged but not yet committed can be replayed from the translog. Before upgrading, you can explicitly call the Flush API to ensure that all changes are committed to disk.

follower index

The target index for cross-cluster replication. A follower index exists in a local cluster and replicates a leader index.

frozen index

An index reduced to a low overhead state that still enables occasional searches. Frozen indices use a memory-efficient shard implementation and throttle searches to conserve resources. Searching a frozen index is lower overhead than re-opening a closed index to enable searching.


gem

A self-contained package of code that’s hosted on RubyGems.org. Logstash plugins are packaged as Ruby Gems. You can use the Logstash plugin manager to manage Logstash gems.

graph

A data structure and visualization that shows interconnections between a set of entities. Each entity is represented by a node. Connections between nodes are represented by edges. See Graph.

Grok Debugger

A tool for building and debugging grok patterns. Grok is good for parsing syslog, Apache, and other webserver logs. See Debugging grok expressions.
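Under the hood, a grok pattern such as %{IP:client} %{WORD:method} %{URIPATHPARAM:request} compiles down to a named-group regular expression. The regex below is a simplified stand-in for those patterns, not the real grok pattern library:

```python
import re

# Simplified equivalents of the IP, WORD, and URIPATHPARAM grok patterns.
line_re = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<method>[A-Z]+) "
    r"(?P<request>\S+)"
)

match = line_re.match("127.0.0.1 GET /index.html")
assert match is not None
assert match.groupdict() == {
    "client": "127.0.0.1",
    "method": "GET",
    "request": "/index.html",
}
```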


hidden index

An index that is excluded by default when you access indices using a wildcard expression. You can specify the expand_wildcards parameter to include hidden indices. Note that hidden indices are included if the wildcard expression starts with a dot, for example .watcher-history*.
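The exclusion rule can be sketched as follows; this `expand` helper and its hidden-flag dictionary are illustrative assumptions, not Elasticsearch's actual expansion code:

```python
import fnmatch

def expand(pattern, indices, expand_wildcards_hidden=False):
    """Sketch of wildcard expansion over {index_name: is_hidden}.
    Hidden indices are included when the pattern starts with a dot
    or when hidden expansion is explicitly requested."""
    include_hidden = expand_wildcards_hidden or pattern.startswith(".")
    return [
        name
        for name, hidden in indices.items()
        if fnmatch.fnmatch(name, pattern) and (include_hidden or not hidden)
    ]

indices = {"logs-1": False, ".watcher-history-1": True}
assert expand("*", indices) == ["logs-1"]                         # hidden excluded
assert expand(".watcher-history*", indices) == [".watcher-history-1"]  # dot pattern
```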

hot phase

The first possible phase in the index lifecycle. In the hot phase, an index is actively updated and queried.

hot thread
A Java thread that has high CPU usage and executes for a longer than normal period of time.
hot tier

A data tier that contains nodes that handle the indexing load for time series data such as logs or metrics and hold your most recent, most-frequently-accessed data.



ID

The ID of a document uniquely identifies it; the combination of index and ID must be unique. If no ID is provided, it is auto-generated. (Also see routing.)


index

An optimized collection of JSON documents. Each document is a collection of fields, the key-value pairs that contain your data.

An index is a logical namespace that maps to one or more primary shards and can have zero or more replica shards.

index alias

An index alias is a logical name used to reference one or more indices.

Most Elasticsearch APIs accept an index alias in place of an index name.

See Add index alias.

index lifecycle

The four phases an index can transition through: hot, warm, cold, and delete. For more information, see Index lifecycle.

index lifecycle policy

Specifies how an index moves between phases in the index lifecycle and what actions to perform during each phase.

index pattern

A string that can contain the * wildcard to match multiple index names. In most cases, the index parameter in an Elasticsearch request can be the name of a specific index, a list of index names, or an index pattern. For example, if you have the indices datastream-000001, datastream-000002, and datastream-000003, to search across all three you could use the datastream-* index pattern.
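Index-pattern matching behaves like shell-style wildcard matching, which Python's standard library can demonstrate:

```python
import fnmatch

indices = ["datastream-000001", "datastream-000002", "datastream-000003", "other"]

# The datastream-* pattern from the example above matches all three indices:
matched = fnmatch.filter(indices, "datastream-*")
assert matched == ["datastream-000001", "datastream-000002", "datastream-000003"]
```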

indexer
A Logstash instance that is tasked with interfacing with an Elasticsearch cluster in order to index event data.
influencer
Influencers are entities that might have contributed to an anomaly in a specific bucket in an anomaly detection job. For more information, see Influencers.
ingestion
The process of collecting and sending data from various data sources to Elasticsearch.
input plugin
A Logstash plugin that reads event data from a specific source. Input plugins are the first stage in the Logstash event processing pipeline. Popular input plugins include file, syslog, redis, and beats.
instrumentation
Extending application code to track where your application is spending time. Code is considered instrumented when it collects and reports this performance data to APM.
integration
Out-of-the-box configurations for common data sources to simplify the collection, parsing, and visualization of logs and metrics. Also known as a module.


job

Machine learning jobs contain the configuration information and metadata necessary to perform an analytics task. There are two types: anomaly detection jobs and data frame analytics jobs.


Kibana privileges

Enable administrators to grant users read-only, read-write, or no access to individual features within spaces in Kibana. See Kibana privileges.

Kibana Query Language (KQL)

The default language for querying in Kibana. KQL provides support for scripted fields. See Kibana Query Language.


leader index

The source index for cross-cluster replication. A leader index exists on a remote cluster and is replicated to follower indices.


Lens

Enables you to build visualizations by dragging and dropping data fields. Lens makes smart visualization suggestions for your data, allowing you to switch between visualization types. See Lens.

local cluster

The cluster that pulls data from a remote cluster in cross-cluster search or cross-cluster replication.

Lucene query syntax

The query syntax for Kibana’s legacy query language. The Lucene query syntax is available under the options menu in the query bar and from Advanced Settings.


machine learning node
A machine learning node is a node that has xpack.ml.enabled set to true and ml in node.roles. If you want to use machine learning features, there must be at least one machine learning node in your cluster. See Machine learning nodes.

map

A representation of geographic data using symbols and labels. See Maps.


mapping

A mapping is like a schema definition in a relational database. Each index has a mapping, which defines a type, plus a number of index-wide settings.

A mapping can either be defined explicitly, or it will be generated automatically when a document is indexed.

master node
Handles write requests for the cluster and publishes changes to other nodes in an ordered fashion. Each cluster has a single master node which is chosen automatically by the cluster and is replaced if the current master node fails. Also see node.
message broker
Also referred to as a message buffer or message queue, a message broker is external software (such as Redis, Kafka, or RabbitMQ) that stores messages from the Logstash shipper instance as an intermediate store, waiting to be processed by the Logstash indexer instance.
metric aggregation

An aggregation that calculates and tracks metrics for a set of documents.

@metadata
A special field for storing content that you don’t want to include in output events. For example, the @metadata field is useful for creating transient fields for use in conditional statements.
module
Out-of-the-box configurations for common data sources to simplify the collection, parsing, and visualization of logs and metrics. Also known as an integration.
monitor
A network endpoint which is monitored to track the performance and availability of applications and services.



node

A running instance of Elasticsearch that belongs to a cluster. Multiple nodes can be started on a single server for testing purposes, but usually you should have one node per server.


observability

Unifying your logs, metrics, uptime data, and application traces to provide granular insights and context into the behavior of services running in your environments.
output plugin
A Logstash plugin that writes event data to a specific destination. Outputs are the final stage in the event pipeline. Popular output plugins include elasticsearch, file, graphite, and statsd.


Painless Lab

An interactive code editor that lets you test and debug Painless scripts in real-time. See Painless Lab.


panel

A dashboard component that contains a query element or visualization, such as a chart, table, or list.

pipeline
A term used to describe the flow of events through the Logstash workflow. A pipeline typically consists of a series of input, filter, and output stages. Input stages get data from a source and generate events; filter stages, which are optional, modify the event data; and output stages write the data to a destination. Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.
plan
Specifies the configuration and topology of an Elasticsearch or Kibana cluster, such as capacity, availability, and Elasticsearch version. When changing a plan, the constructor determines how to transform the existing cluster into the pending plan.
plugin
A self-contained software package that implements one of the stages in the Logstash event processing pipeline. The list of available plugins includes input plugins, output plugins, codec plugins, and filter plugins. The plugins are implemented as Ruby gems and hosted on RubyGems.org. You define the stages of an event processing pipeline by configuring plugins.
plugin manager
Accessed via the bin/logstash-plugin script, the plugin manager enables you to manage the lifecycle of plugins in your Logstash deployment. You can install, remove, and upgrade plugins by using the plugin manager Command Line Interface (CLI).
primary shard

Each document is stored in a single primary shard. When you index a document, it is indexed first on the primary shard, then on all replicas of the primary shard.

By default, an index has one primary shard. You can specify more primary shards to scale the number of documents that your index can handle.

You cannot change the number of primary shards in an index, once the index is created. However, an index can be split into a new index using the split index API.

See also routing.

proxy

A highly available, TLS-enabled proxy layer that routes user requests, mapping cluster IDs that are passed in request URLs for the container to the cluster nodes handling the user requests.



query

A request for information from Elasticsearch. You can think of a query as a question, written in a way Elasticsearch understands. A search consists of one or more queries combined.

There are two types of queries: scoring queries and filters. For more information about query types, see Query and filter context.

Query Profiler

A tool that enables you to inspect and analyze search queries to diagnose and debug poorly performing queries. See Query Profiler.


Real user monitoring (RUM)
Performance monitoring, metrics, and error tracking of web applications.

recovery

Shard recovery is the process of syncing a replica shard from a primary shard. Upon completion, the replica shard is available for search.

Recovery automatically occurs during the following processes:

  1. Node startup (local store recovery)
  2. Replication of a primary shard
  3. Relocation of a shard to a different node
  4. Snapshot restore operations


reindex

Copies documents from a source to a destination. The source and destination can be any pre-existing index, index alias, or data stream.

You can reindex all documents from a source or select a subset of documents to copy. You can also reindex to a destination in a remote cluster.

A reindex is often performed to update mappings, change static index settings, or upgrade Elasticsearch between incompatible versions.

remote cluster

A separate cluster, often in a different data center or locale, that contains indices that can be replicated or searched by the local cluster. The connection to a remote cluster is unidirectional.

replica shard

Each primary shard can have zero or more replicas. A replica is a copy of the primary shard, and has two purposes:

  1. Increase failover: a replica shard can be promoted to a primary shard if the primary fails
  2. Increase performance: get and search requests can be handled by primary or replica shards.

By default, each primary shard has one replica, but the number of replicas can be changed dynamically on an existing index. A replica shard will never be started on the same node as its primary shard.

roles token
Enables a host to join an existing Elastic Cloud Enterprise installation and grants permission to hosts to hold certain roles, such as the allocator role. Used when installing Elastic Cloud Enterprise on additional hosts, a roles token helps secure Elastic Cloud Enterprise by making sure that only authorized hosts become part of the installation.

rollup

Aggregates an index’s time series data and stores the results in a new read-only index. For example, you can roll up hourly data into daily or weekly summaries. Summarize high-granularity data into a more compressed format to maintain access to historical data in a cost-effective way.


routing

When you index a document, it is stored on a single primary shard. That shard is chosen by hashing the routing value. By default, the routing value is derived from the ID of the document or, if the document has a specified parent document, from the ID of the parent document (to ensure that child and parent documents are stored on the same shard).

This value can be overridden by specifying a routing value at index time, or a routing field in the mapping.
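The shard choice can be sketched as follows. Elasticsearch's real formula uses a murmur3 hash of the routing value (roughly shard = hash(routing) % number_of_primary_shards); Python's built-in hash stands in here for illustration only:

```python
def shard_for(routing, num_primary_shards):
    """Pick the primary shard for a document from its routing value.
    Illustrative stand-in for Elasticsearch's murmur3-based routing."""
    return hash(routing) % num_primary_shards

# The routing value defaults to the document ID, so the same ID always
# lands on the same shard as long as the primary shard count is fixed --
# which is why the number of primary shards cannot change after creation.
assert shard_for("my-doc-id", 5) == shard_for("my-doc-id", 5)
assert 0 <= shard_for("my-doc-id", 5) < 5
```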

runner

A local control agent that runs on all hosts, used to deploy local containers based on role definitions. Ensures that containers assigned to it exist and are able to run, and creates or recreates the containers if necessary.


saved object

A representation of a dashboard, visualization, map, index pattern, or Canvas workpad that can be stored and reloaded.

saved search

The query text, filters, and time filter that make up a search, saved for later retrieval and reuse.

scripted field

A field that computes data on the fly from the data in Elasticsearch indices. Scripted field data is shown in Discover and used in visualizations.

services forwarder
Routes data internally in an Elastic Cloud Enterprise installation.

shard

A shard is a single Lucene instance. It is a low-level “worker” unit which is managed automatically by Elasticsearch. An index is a logical namespace which points to primary and replica shards.

Other than defining the number of primary and replica shards that an index should have, you never need to refer to shards directly. Instead, your code should deal only with an index.

Elasticsearch distributes shards amongst all nodes in the cluster, and can move shards automatically from one node to another in the case of node failure, or the addition of new nodes.


shareable

A Canvas workpad that can be embedded on any webpage. Shareables enable you to display Canvas visualizations on internal wiki pages or public websites.

shipper

An instance of Logstash that sends events to another instance of Logstash, or to some other application.

shrink

Reduce the number of primary shards in an index.

You can shrink an index to reduce its overhead when the request volume drops. For example, you might opt to shrink an index once it is no longer the write index. See the shrink index API.


snapshot

Captures the state of the whole cluster or of particular indices or data streams at a particular point in time. Snapshots provide a backup of a running cluster, ensuring that you can restore your data in the event of a failure. You can also mount indices or data streams from snapshots as read-only searchable snapshots.

snapshot lifecycle policy

Specifies how frequently to perform automatic backups of a cluster and how long to retain the resulting snapshots.

snapshot repository

Specifies where snapshots are to be stored. Snapshots can be written to a shared filesystem or to a remote repository.

source field

By default, the JSON document that you index will be stored in the _source field and will be returned by all get and search requests. This allows you access to the original object directly from search results, rather than requiring a second step to retrieve the object from an ID.


space

A place for organizing dashboards, visualizations, and other saved objects by category. For example, you might have different spaces for each team, use case, or individual. See Spaces.

span

Information about the execution of a specific code path. Spans measure from the start to the end of an activity and can have a parent/child relationship with other spans.

split

To increase the number of primary shards in an index. See the split index API.

stunnel

Securely tunnels all traffic in an Elastic Cloud Enterprise installation.
system index

An index that contains configuration information or other data used internally by the system, such as the .security index. The name of a system index is always prefixed with a dot. You should not directly access or modify system indices.



term

A term is an exact value that is indexed in Elasticsearch. The terms foo, Foo, and FOO are NOT equivalent. Terms (i.e. exact values) can be searched for using term queries.

See also text and analysis.

term join

A shared key that combines vector features with the results of an Elasticsearch terms aggregation. Term joins augment vector features with properties for data-driven styling and rich tooltip content in maps.


text

Text (or full text) is ordinary unstructured text, such as this paragraph. By default, text will be analyzed into terms, which is what is actually stored in the index.

Text fields need to be analyzed at index time in order to be searchable as full text, and keywords in full text queries must be analyzed at search time to produce (and search for) the same terms that were generated at index time.

See also term and analysis.

time filter

A Kibana control that constrains the search results to a particular time period.


Timelion

A tool for building a time series visualization that analyzes data in time order. See Timelion.

time series data

Timestamped data such as logs, metrics, and events that is indexed on an ongoing basis.

trace
Defines the amount of time an application spends on a request. Traces are made up of a collection of transactions and spans that have a common root.
transaction
A special kind of span that has additional attributes associated with it. Transactions describe an event captured by an Elastic APM agent instrumenting a service.

TSVB

A time series data visualizer that allows you to combine an infinite number of aggregations to display complex data. See TSVB.


type

A type used to represent the type of document, e.g. an email, a user, or a tweet. Types are deprecated and are in the process of being removed. See Removal of mapping types.


Upgrade Assistant

A tool that helps you prepare for an upgrade to the next major version of Elasticsearch. The assistant identifies the deprecated settings in your cluster and indices and guides you through resolving issues, including reindexing. See Upgrade Assistant.

uptime

A metric of system reliability used to monitor the status of network endpoints via HTTP/S, TCP, and ICMP.


vector data

Points, lines, and polygons used to represent a map.


Vega

A declarative language used to create interactive visualizations. See Vega.


visualization

A graphical representation of query results in Kibana (e.g., a histogram, line graph, pie chart, or heat map).


warm phase

The second possible phase in the index lifecycle. In the warm phase, an index is generally optimized for search and no longer updated.

warm tier

A data tier that contains nodes that hold time series data that is accessed less frequently and rarely needs to be updated.


Watcher

The original suite of alerting features. See Watcher.

worker

The filter thread model used by Logstash, where each worker receives an event and applies all filters, in order, before emitting the event to the output queue. This allows scalability across CPUs because many filters are CPU intensive.

workpad

A workspace where you build presentations of your live data in Canvas. See Create a workpad.


ZooKeeper

A coordination service for distributed systems used by Elastic Cloud Enterprise to store the state of the installation. Responsible for discovery of hosts, resource allocation, leader election after failure, and high priority notifications.