Create transforms API

Instantiates a transform.

This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.


PUT _data_frame/transforms/<transform_id>


  • If the Elasticsearch security features are enabled, you must have manage_data_frame_transforms cluster privileges to use this API. The built-in data_frame_transforms_admin role has these privileges. You must also have read and view_index_metadata privileges on the source index and read, create_index, and index privileges on the destination index. For more information, see Security privileges and Built-in roles.


This API defines a transform, which copies data from source indices, transforms it, and persists it into an entity-centric destination index. The entities are defined by the set of group_by fields in the pivot object. You can also think of the destination index as a two-dimensional tabular data structure (known as a data frame). The ID for each document in the data frame is generated from a hash of the entity, so there is a unique row per entity. For more information, see Transforming data.
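The exact hashing scheme is internal to Elasticsearch, but the idea of deriving a deterministic document ID from the group_by values can be sketched as follows (the `entity_doc_id` helper and the use of SHA-256 are illustrative assumptions, not the actual implementation):

```python
import hashlib

def entity_doc_id(group_by_values: dict) -> str:
    # Illustrative only: derive a deterministic ID from the sorted
    # group_by key/value pairs, so the same entity always maps to
    # the same destination document (one row per entity).
    canonical = "|".join(f"{k}={group_by_values[k]}" for k in sorted(group_by_values))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same entity always produces the same ID, so re-processing that
# entity updates its existing row instead of appending a new one.
id_a = entity_doc_id({"customer_id": "38"})
id_b = entity_doc_id({"customer_id": "38"})
assert id_a == id_b
```

This determinism is what makes the destination index behave like a two-dimensional data frame rather than an append-only log.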

When the transform is created, a series of validations occur to ensure its success. For example, there is a check for the existence of the source indices and a check that the destination index is not part of the source index pattern. You can use the defer_validation parameter to skip these checks.

Deferred validations are always run when the transform is started, with the exception of privilege checks. When Elasticsearch security features are enabled, the transform remembers which roles the user that created it had at the time of creation and uses those same roles. If those roles do not have the required privileges on the source and destination indices, the transform fails when it attempts unauthorized operations.

You must use Kibana or this API to create a transform. Do not put a transform directly into any .data-frame-internal* indices using the Elasticsearch index API. If Elasticsearch security features are enabled, do not give users any privileges on .data-frame-internal* indices.

Path parameters

<transform_id>
(Required, string) Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
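These naming rules can be checked client-side before sending the request; a minimal sketch (the helper name and regex are ours, mirroring the documented rules, not part of the API):

```python
import re

# Lowercase alphanumerics, hyphens, and underscores; must start and
# end with an alphanumeric character. A single character is allowed.
TRANSFORM_ID_RE = re.compile(r"^[a-z0-9](?:[a-z0-9_-]*[a-z0-9])?$")

def is_valid_transform_id(transform_id: str) -> bool:
    return TRANSFORM_ID_RE.fullmatch(transform_id) is not None
```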

Query parameters

defer_validation
(Optional, boolean) When true, deferrable validations are not run. This behavior may be desired if the source index does not exist until after the transform is created.
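For example, to create a transform before its source index exists, the validation checks can be deferred (the transform ID here is hypothetical):

```
PUT _data_frame/transforms/my_new_transform?defer_validation=true
```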

Request body

description
(Optional, string) Free text description of the transform.

dest
(Required, object) The destination configuration, which has the following properties:

index
(Required, string) The destination index for the transform.

pipeline
(Optional, string) The unique identifier for a pipeline.

frequency
(Optional, time units) The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.

pivot
(Required, object) Defines the pivot function group by fields and the aggregation to reduce the data. See Pivot objects.

source
(Required, object) The source configuration, which has the following properties:

index
(Required, string or array) The source indices for the transform. It can be a single index, an index pattern (for example, "myindex*"), or an array of indices (for example, ["index1", "index2"]).

query
(Optional, object) A query clause that retrieves a subset of data from the source index. See Query DSL.

sync
(Optional, object) Defines the properties required to run continuously.

time
(Required, object) Specifies that the transform uses a time field to synchronize the source and destination indices.

field
(Required, string) The date field that is used to identify new documents in the source.

In general, it’s a good idea to use a field that contains the ingest timestamp. If you use a different field, you might need to set the delay such that it accounts for data transmission delays.

delay
(Optional, time units) The time delay between the current time and the latest input data time. The default value is 60s.
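Conceptually, each checkpoint only considers documents whose sync field is older than the current time minus the delay, which gives late-arriving data time to be indexed before the transform reads it. A rough sketch of that cut-off (the helper name is ours, not part of Elasticsearch):

```python
from datetime import datetime, timedelta, timezone

def checkpoint_upper_bound(now: datetime, delay: timedelta) -> datetime:
    # Documents with a sync-field timestamp later than this bound are
    # left for a future checkpoint rather than processed immediately.
    return now - delay

now = datetime(2020, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
bound = checkpoint_upper_bound(now, timedelta(seconds=60))
# bound == 2020-01-01 11:59:00 UTC
```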


PUT _data_frame/transforms/ecommerce_transform
{
  "source": {
    "index": "kibana_sample_data_ecommerce",
    "query": {
      "term": {
        "geoip.continent_name": {
          "value": "Asia"
        }
      }
    }
  },
  "pivot": {
    "group_by": {
      "customer_id": {
        "terms": {
          "field": "customer_id"
        }
      }
    },
    "aggregations": {
      "max_price": {
        "max": {
          "field": "taxful_total_price"
        }
      }
    }
  },
  "description": "Maximum priced ecommerce data by customer_id in Asia",
  "dest": {
    "index": "kibana_sample_data_ecommerce_transform",
    "pipeline": "add_timestamp_pipeline"
  },
  "frequency": "5m",
  "sync": {
    "time": {
      "field": "order_date",
      "delay": "60s"
    }
  }
}

When the transform is created, you receive the following results:

{
  "acknowledged" : true
}