The following limitations and known problems apply to the 7.10.0 release of the Elastic transform feature:
Transforms UI will not work during a rolling upgrade from 7.2
If your cluster contains mixed version nodes, for example during a rolling upgrade from 7.2 to a newer version, and transforms have been created in 7.2, the transforms UI (formerly the data frame UI) will not work. Please wait until all nodes have been upgraded to the newer version before using the transforms UI.
Transforms reassignment suspended during a rolling upgrade from 7.2 and 7.3
If your cluster contains mixed version nodes, for example during a rolling upgrade from 7.2 or 7.3 to a newer version, transforms whose nodes are stopped will not be reassigned until the upgrade is complete. After the upgrade is done, transforms resume automatically; no action is required.
Data frame data type limitation
Data frames do not (yet) support fields containing arrays – in the UI or the API. If you try to create a transform on an index that contains array fields, the UI will fail to show the source index table.
Up to 1,000 transforms are supported
A single cluster will support up to 1,000 transforms. When using the GET transforms API a total count of transforms is returned. Use the size and from parameters to enumerate through the full list.
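For example, the following requests page through the configured transforms 100 at a time:
GET _transform?from=0&size=100
GET _transform?from=100&size=100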
Aggregation responses may be incompatible with destination index mappings
When a transform is first started, it will deduce the mappings required for the destination index. This process is based on the field types of the source index and the aggregations used. If the fields are derived from scripts (for example, when using scripted_metric or bucket_script aggregations), dynamic mappings will be used. In some instances the deduced mappings may be incompatible with the actual data. For example, numeric overflows might occur or dynamically mapped fields might contain both numbers and strings. Please check Elasticsearch logs if you think this may have occurred. As a workaround, you may define custom mappings prior to starting the transform. For example, create a custom destination index or define an index template.
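A minimal sketch of this workaround, assuming a hypothetical destination index my-dest and an aggregated value total_spend that should be mapped as double rather than left to dynamic mapping:
PUT my-dest
{
  "mappings": {
    "properties": {
      "customer_id": { "type": "keyword" },
      "total_spend": { "type": "double" }
    }
  }
}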
Batch transforms may not account for changed documents
A batch transform uses a composite aggregation which allows efficient pagination through all buckets. Composite aggregations do not yet support a search context, therefore if the source data is changed (deleted, updated, added) while the batch transform is in progress, then the results may not include these changes.
Continuous transform consistency does not account for deleted or updated documents
While the process for transforms allows the continual recalculation of the transform as new data is being ingested, it also has some limitations.
Changed entities will only be identified if their time field has also been updated and falls within the range of the action to check for changes. This has been designed in principle for, and is suited to, the use case where new data is given a timestamp for the time of ingest.
If the indices that fall within the scope of the source index pattern are removed, for example when deleting historical time-based indices, then the composite aggregation performed in consecutive checkpoint processing will search over different source data, and entities that only existed in the deleted index will not be removed from the transform destination index.
Depending on your use case, you may wish to recreate the transform entirely after deletions. Alternatively, if your use case is tolerant to historical archiving, you may wish to include a max ingest timestamp in your aggregation. This will allow you to exclude results that have not been recently updated when viewing the destination index.
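A sketch of such an aggregation, assuming hypothetical index names and a hypothetical event.ingested field that records ingest time; the latest_ingest value in the destination index can then be used to filter out entities that have not been updated recently:
PUT _transform/my-transform
{
  "source": { "index": "events-*" },
  "dest": { "index": "events-summary" },
  "pivot": {
    "group_by": {
      "user": { "terms": { "field": "user.name" } }
    },
    "aggregations": {
      "event_count": { "value_count": { "field": "event.id" } },
      "latest_ingest": { "max": { "field": "event.ingested" } }
    }
  }
}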
Deleting a transform does not delete the destination index or Kibana index pattern
When deleting a transform using the delete transform API (DELETE _transform/<transform_id>), neither the destination index nor the Kibana index pattern, should one have been created, is deleted. These objects must be deleted separately.
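For example, assuming a hypothetical transform my-transform with destination index my-dest, the clean-up steps are separate requests; the Kibana index pattern must additionally be removed in Kibana (Stack Management > Index Patterns):
DELETE _transform/my-transform
DELETE my-dest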
Handling dynamic adjustment of aggregation page size
During the development of transforms, control was favoured over performance: by design, it is preferable for a transform to take longer to complete quietly in the background than to finish quickly and take precedence in resource consumption.
Composite aggregations are well suited for high cardinality data, enabling pagination through results. If a circuit breaker memory exception occurs when performing the composite aggregation search, then the transform tries again, reducing the number of buckets requested. This circuit breaker is calculated based upon all activity within the cluster, not just activity from transforms, so it may only be a temporary resource availability issue.
For a batch transform, the number of buckets requested is only ever adjusted downwards. Lowering this value may result in a longer duration for the transform checkpoint to complete. For continuous transforms, the number of buckets requested is reset back to its default at the start of every checkpoint, and it is possible for circuit breaker exceptions to occur repeatedly in the Elasticsearch logs.
The transform retrieves data in batches which means it calculates several buckets at once. By default this is 500 buckets per search/index operation. The default can be changed using max_page_search_size and the minimum value is 10.
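A sketch of lowering the value on an existing transform via the update transform API, assuming a hypothetical transform my-transform:
POST _transform/my-transform/_update
{
  "settings": {
    "max_page_search_size": 200
  }
}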
If failures still occur once the number of buckets requested has been reduced to
its minimum, then the transform will be set to a failed state.
Handling dynamic adjustments for many terms
For each checkpoint, entities are identified that have changed since the last time the check was performed. This list of changed entities is supplied as a terms query to the transform composite aggregation, one page at a time. Then updates are applied to the destination index for each page of entities.
The page size is defined by max_page_search_size which is also used to define the number of buckets returned by the composite aggregation search. The default value is 500, the minimum is 10.
The index setting index.max_terms_count defines the maximum number of terms that can be used in a terms query. The default value is 65536. If max_page_search_size exceeds index.max_terms_count the transform will fail.
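If you hit this limit, you can either lower max_page_search_size or raise the index setting on the source index. A sketch, assuming a hypothetical index my-source:
PUT my-source/_settings
{
  "index.max_terms_count": 100000
}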
Using smaller values for
max_page_search_size may result in a longer duration
for the transform checkpoint to complete.
Continuous transform scheduling limitations
A continuous transform periodically checks for changes to source data. The functionality of the scheduler is currently limited to a basic periodic timer which can be within the frequency range from 1s to 1h. The default is 1m. This is designed to run little and often. When choosing a frequency for this timer, consider your ingest rate along with the impact that the transform search/index operations has on other users in your cluster. Also note that retries occur at frequency interval.
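For example, to lengthen the timer on a hypothetical transform my-transform via the update transform API:
POST _transform/my-transform/_update
{
  "frequency": "5m"
}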
Handling of failed transforms
Failed transforms remain as a persistent task and should be handled appropriately, either by deleting them or by resolving the root cause of the failure and restarting them. When using the API to delete a failed transform, first stop it using _stop?force=true, then delete it.
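For example, for a hypothetical failed transform my-transform:
POST _transform/my-transform/_stop?force=true
DELETE _transform/my-transform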
Continuous transforms may give incorrect results if documents are not yet available to search
After a document is indexed, there is a very small delay until it is available to search. A continuous transform periodically checks for changed entities between the time since it last checked and now minus sync.time.delay. This time window moves without overlapping. If the timestamp of a recently indexed document falls within this time window but this document is not yet available to search, then this entity will not be updated.
If using a sync.time.field that represents the data ingest time and using a zero second or very small sync.time.delay, then it is more likely that this issue will occur.
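A sketch of a continuous transform definition that uses an ingest timestamp with a non-trivial delay, assuming hypothetical index names and a hypothetical event.ingested field populated by an ingest pipeline:
PUT _transform/my-continuous-transform
{
  "source": { "index": "events-*" },
  "dest": { "index": "events-summary" },
  "frequency": "1m",
  "sync": {
    "time": {
      "field": "event.ingested",
      "delay": "60s"
    }
  },
  "pivot": {
    "group_by": {
      "user": { "terms": { "field": "user.name" } }
    },
    "aggregations": {
      "event_count": { "value_count": { "field": "event.id" } }
    }
  }
}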
Support for date nanoseconds data type
If your data uses the date nanosecond data type, aggregations are nonetheless performed with millisecond resolution. This limitation also applies to the aggregations in your transforms.