Set the lifecycle of a data stream

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Configures the data stream lifecycle for the targeted data streams.

Prerequisites

If the Elasticsearch security features are enabled, you must have the manage_data_stream_lifecycle index privilege or higher to use this API. For more information, see Security privileges.

Request

PUT _data_stream/<data-stream>/_lifecycle

Description

Configures the data stream lifecycle for the targeted data streams. If multiple data streams are provided and at least one of them does not exist, the lifecycle update fails for all of them and the API responds with a 404 status code.

Path parameters

<data-stream>
(Required, string) Comma-separated list of data streams used to limit the request. Supports wildcards (*). To target all data streams use * or _all.

Query parameters

expand_wildcards

(Optional, string) Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden. Valid values are:

all, hidden
Matches any data stream, including hidden ones.
open, closed
Matches any non-hidden data stream. Data streams cannot be closed.
none
Wildcard patterns are not accepted.

Defaults to open.

Request body

lifecycle

(Required, object)

Properties of lifecycle
data_retention
(Optional, string) If defined, every document added to this data stream is stored for at least this time frame. After this duration has elapsed, the document can be deleted at any time. When empty, every document in this data stream is stored indefinitely.
enabled
(Optional, boolean) If defined, turns the data stream lifecycle on or off (true/false) for this data stream. A data stream lifecycle that is disabled (enabled: false) has no effect on the data stream. Defaults to true.
downsampling
(Optional, array) An array of downsampling configuration objects. Each object defines an after interval, the time since the backing index was rolled over (its generation time) at which the index becomes eligible for downsampling, and a fixed_interval, the downsampling interval (the minimum fixed_interval value is 5m). A maximum of 10 downsampling rounds can be configured. See the configuration example below.
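For instance, the enabled flag lets you pause a lifecycle without discarding its configuration. The sketch below builds such a request body in Ruby and prints the JSON payload; the stream name my-data-stream and the 7-day retention are illustrative values, not part of this API's defaults:

```ruby
require 'json'

# Illustrative request body: the retention setting stays in the
# configuration, but the lifecycle has no effect while enabled is false.
body = {
  enabled: false,
  data_retention: '7d'
}

# JSON payload that would be sent with
# PUT _data_stream/my-data-stream/_lifecycle
puts JSON.pretty_generate(body)
```

Setting enabled back to true later re-activates the lifecycle with the retained configuration, which avoids having to resubmit the retention and downsampling settings.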

Examples

The following example sets the lifecycle of my-data-stream:

response = client.indices.put_data_lifecycle(
  name: 'my-data-stream',
  body: {
    data_retention: '7d'
  }
)
puts response

PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d"
}

When the lifecycle is successfully updated in all data streams, you receive the following result:

{
  "acknowledged": true
}

The following example configures two downsampling rounds, the first one starting one day after the backing index is rolled over (or later, if the index is still within its write-accepting time bounds) with an interval of 10m, and a second round starting 7 days after rollover at an interval of 1d:

response = client.indices.put_data_lifecycle(
  name: 'my-weather-sensor-data-stream',
  body: {
    downsampling: [
      {
        after: '1d',
        fixed_interval: '10m'
      },
      {
        after: '7d',
        fixed_interval: '1d'
      }
    ]
  }
)
puts response

PUT _data_stream/my-weather-sensor-data-stream/_lifecycle
{
  "downsampling": [
    {
      "after": "1d",
      "fixed_interval": "10m"
    },
    {
      "after": "7d",
      "fixed_interval": "1d"
    }
  ]
}