A data stream lets you store append-only time series data across multiple indices while giving you a single named resource for requests. Data streams are well-suited for logs, events, metrics, and other continuously generated data.
You can submit indexing and search requests directly to a data stream. The stream automatically routes the request to backing indices that store the stream’s data. You can use index lifecycle management (ILM) to automate the management of these backing indices. For example, you can use ILM to automatically move older backing indices to less expensive hardware and delete unneeded indices. ILM can help you reduce costs and overhead as your data grows.
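As a sketch of how that automation looks, an ILM policy along these lines could roll the stream over while it is hot and delete aged-out indices; the policy name and thresholds here are illustrative placeholders, not recommendations:

```console
PUT _ilm/policy/my-lifecycle-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```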
A data stream consists of one or more hidden, auto-generated backing indices.
A data stream requires a matching index template. The template contains the mappings and settings used to configure the stream’s backing indices.
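For instance, a minimal sketch of such a template, created with the index template API: the empty data_stream object marks the template as a data stream template, and the template and pattern names below are placeholders:

```console
PUT _index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": { },
  "priority": 200,
  "template": {
    "settings": {
      "number_of_shards": 1
    }
  }
}
```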
Every document indexed to a data stream must contain a @timestamp field, mapped as a date or date_nanos field type. If the index template doesn’t specify a mapping for the @timestamp field, Elasticsearch maps @timestamp as a date field with default options.
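To override that default, the template’s mappings can declare @timestamp explicitly. A sketch, reusing a placeholder template name and choosing date_nanos as the field type:

```console
PUT _index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": { },
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date_nanos" }
      }
    }
  }
}
```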
The same index template can be used for multiple data streams. You cannot delete an index template in use by a data stream.
The name pattern for the backing indices is an implementation detail, and nothing should be inferred from it. The only invariant that holds is that each generation’s backing index has a unique name.
When you submit a read request to a data stream, the stream routes the request to all its backing indices.
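For example, a search submitted to the stream by name is fanned out across all backing indices; the stream and field names here are placeholders:

```console
GET my-data-stream/_search
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-1d/d" }
    }
  }
}
```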
The most recently created backing index is the data stream’s write index. The stream adds new documents to this index only.
You cannot add new documents to other backing indices, even by sending requests directly to the index.
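Appending a document therefore always targets the stream itself; a sketch, with placeholder field values:

```console
POST my-data-stream/_doc
{
  "@timestamp": "2099-03-07T11:04:05.000Z",
  "message": "login attempt failed"
}
```

If you index with an explicit document ID using PUT, the request must also set op_type=create, since data streams are append-only.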
You also cannot perform operations on a write index that may hinder indexing, such as shrinking, splitting, cloning, or deleting the index.
A rollover creates a new backing index that becomes the stream’s new write index.
We recommend using ILM to automatically roll over data streams when the write index reaches a specified age or size. If needed, you can also manually roll over a data stream.
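A manual rollover is a single request to the rollover API against the stream name (a placeholder here):

```console
POST my-data-stream/_rollover
```

After this request, a new backing index is created and becomes the stream’s write index.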
Each data stream tracks its generation: a six-digit, zero-padded integer starting at 000001.

When a backing index is created, the index is named using the following convention:

.ds-<data-stream>-<yyyy.MM.dd>-<generation>

<yyyy.MM.dd> is the backing index’s creation date. Backing indices with a higher generation contain more recent data. For example, the web-server-logs data stream has a generation of 34. The stream’s most recent backing index, created on 7 March 2099, is named .ds-web-server-logs-2099.03.07-000034.
Some operations, such as a shrink or restore, can change a backing index’s name. These name changes do not remove a backing index from its data stream.
The generation of a data stream can change without a new index being added to the stream (for example, when an existing backing index is shrunk). This means that backing indices for some generations will never exist. You should not infer anything from the backing index names.
Data streams are designed for use cases where existing data is rarely, if ever, updated. You cannot send update or deletion requests for existing documents directly to a data stream. Instead, use the update by query and delete by query APIs.
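A sketch of both APIs targeting the stream directly; the stream name, field, and values are placeholders:

```console
POST my-data-stream/_update_by_query
{
  "query": {
    "match": { "user.id": "l7gk7f82" }
  },
  "script": {
    "source": "ctx._source.user.id = params.new_id",
    "params": { "new_id": "XgdX0NoX" }
  }
}

POST my-data-stream/_delete_by_query
{
  "query": {
    "match": { "user.id": "vlb44hny" }
  }
}
```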
If needed, you can update or delete documents by submitting requests directly to the document’s backing index.
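For example, a delete sent straight to a backing index might look like the following; the backing index name and document ID are entirely hypothetical:

```console
DELETE .ds-my-data-stream-2099.03.07-000034/_doc/bfspvnIBr7VVZlfp2lqX
```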
If you frequently update or delete existing time series data, use an index alias with a write index instead of a data stream. See Manage time series data without data streams.