Semantic text field type reference

Applies to: Serverless, Stack 9.0.0+

This page provides reference content for the semantic_text field type, including parameter descriptions, inference endpoint configuration options, chunking behavior, update operations, querying options, and limitations.

The semantic_text field type uses default indexing settings based on the inference endpoint specified, enabling you to get started without providing additional configuration details. You can override these defaults by customizing the parameters described below.

inference_id
(Optional, string) Inference endpoint that will be used to generate embeddings for the field. If search_inference_id is specified, the inference endpoint will only be used at index time. Learn more about configuring this parameter.

Updating the inference_id parameter

You can update this parameter by using the Update mapping API, but only if no values have been indexed yet or if the new inference endpoint is compatible with the current one; otherwise, it cannot be updated.

Important

When updating an inference_id it is important to ensure the new inference endpoint produces embeddings compatible with those already indexed. This typically means using the same underlying model.
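For illustration, an update through the Update mapping API might look like the following sketch; the index name reuses the example below, and my-updated-endpoint is a placeholder for an endpoint that produces embeddings compatible with the existing ones:

PUT my-index-000004/_mapping
{
  "properties": {
    "inference_field": {
      "type": "semantic_text",
      "inference_id": "my-updated-endpoint"
    }
  }
}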

search_inference_id

(Optional, string) The inference endpoint that will be used to generate embeddings at query time. Use the Create inference API to create the endpoint. If not specified, the inference endpoint defined by inference_id will be used at both index and query time.

You can update this parameter by using the Update mapping API.

Learn how to use dedicated endpoints for ingestion and search.
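As a sketch, a mapping that uses separate endpoints for ingestion and search might look like this; my-index-000005, my-ingest-endpoint, and my-search-endpoint are placeholder names:

PUT my-index-000005
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-ingest-endpoint",
        "search_inference_id": "my-search-endpoint"
      }
    }
  }
}

With this mapping, embeddings are generated with my-ingest-endpoint at index time and with my-search-endpoint at query time.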

index_options Stack 9.1.0

(Optional, object) Specifies the index options to override default values for the field. Currently, dense_vector and sparse_vector index options are supported. For text embeddings, index_options may match any allowed dense_vector index options.
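For sparse embeddings, a hypothetical override of the sparse_vector index options might look like the following sketch; the prune and pruning_config parameters are assumptions based on the sparse_vector field type's pruning configuration, and the index name, endpoint name, and threshold values are illustrative:

PUT my-sparse-index
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint",
        "index_options": {
          "sparse_vector": {
            "prune": true,
            "pruning_config": {
              "tokens_freq_ratio_threshold": 5,
              "tokens_weight_threshold": 0.4
            }
          }
        }
      }
    }
  }
}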
chunking_settings Stack 9.1.0

(Optional, object) Settings for chunking text into smaller passages. If specified, these will override the chunking settings set in the Inference endpoint associated with inference_id.

If chunking settings are updated, they will not be applied to existing documents until they are reindexed. Defaults to the optimal chunking settings for Elastic Rerank.

To completely disable chunking, use the none chunking strategy.

Important

When using the none chunking strategy, if the input exceeds the maximum token limit of the underlying model, some services (such as OpenAI) may return an error. In contrast, the elastic and elasticsearch services will automatically truncate the input to fit within the model's limit.
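As an illustration, overriding the endpoint's chunking with a word-based strategy might look like the following sketch; the key names mirror the none example below and the Inference API chunking settings, the index name is a placeholder, and the max_chunk_size and overlap values are illustrative:

PUT my-chunked-index
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-text-embedding-endpoint",
        "chunking_settings": {
          "type": "word",
          "max_chunk_size": 250,
          "overlap": 100
        }
      }
    }
  }
}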

The following example shows how to configure inference_id, index_options and chunking_settings for a semantic_text field type:

PUT my-index-000004
{
  "mappings": {
    "properties": {
      "inference_field": {
        "type": "semantic_text",
        "inference_id": "my-text-embedding-endpoint",
        "index_options": {
          "dense_vector": {
            "type": "int4_flat"
          }
        },
        "chunking_settings": {
          "type": "none"
        }
      }
    }
  }
}

  1. The inference_id of the inference endpoint to use for generating embeddings.
  2. Overrides default index options by specifying int4_flat quantization for dense vector embeddings.
  3. Disables automatic chunking by setting the chunking strategy to none.
Note

Stack 9.1.0 Newly created indices with semantic_text fields using dense embeddings are automatically quantized to bbq_hnsw, as long as the embeddings have at least 64 dimensions.

The semantic_text field type specifies an inference endpoint identifier (inference_id) that is used to generate embeddings.

The following inference endpoint configurations are available:

  • If you use a custom inference endpoint through your ML node rather than the Elastic Inference Service (EIS), the recommended approach is to use dedicated endpoints for ingestion and search.
  • Stack 9.1.0 If you use EIS, you don't have to set up dedicated endpoints.
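For illustration, creating a dedicated endpoint on an ML node with the Create inference API might look like this; my-text-embedding-endpoint is a placeholder name, and .multilingual-e5-small is one of the built-in text embedding models:

PUT _inference/text_embedding/my-text-embedding-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1,
    "model_id": ".multilingual-e5-small"
  }
}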

Warning

Removing an inference endpoint will cause ingestion of documents and semantic queries to fail on indices that define semantic_text fields with that endpoint as their inference_id. For this reason, attempting to delete an inference endpoint that is used by a semantic_text field results in an error.

Inference endpoints have a limit on the amount of text they can process. To allow for large amounts of text to be used in semantic search, semantic_text automatically generates smaller passages if needed, called chunks.

Each chunk refers to a passage of the text and the corresponding embedding generated from it. When querying, the individual passages will be automatically searched for each document, and the most relevant passage will be used to compute a score.
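For example, a semantic query against the field defined in the mapping example above searches the chunk embeddings automatically; the query text is illustrative:

GET my-index-000004/_search
{
  "query": {
    "semantic": {
      "field": "inference_field",
      "query": "Which passage is most relevant?"
    }
  }
}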

Chunks are stored as start and end character offsets rather than as separate text strings. These offsets point to the exact location of each chunk within the original input text.

You can pre-chunk content by providing text as arrays before indexing.
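For instance, assuming chunking is disabled with the none strategy (as in the mapping example above), each array element is treated as its own chunk; the document ID and passages are illustrative:

PUT my-index-000004/_doc/1
{
  "inference_field": [
    "Elasticsearch is a distributed search and analytics engine.",
    "It stores semantic_text embeddings in nested documents, one per chunk."
  ]
}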

Refer to the Inference API documentation for the available chunking_settings values, and to Configuring chunking to learn about different chunking strategies.

semantic_text field types have the following limitation:

When an index contains a semantic_text field, the docs.count value returned by the _cat/indices API may be higher than the number of documents you indexed. This occurs because semantic_text stores embeddings in nested documents, one per chunk. The _cat/indices API counts all documents in the Lucene index, including these hidden nested documents.

To count only top-level documents, excluding the nested documents that store embeddings, use one of the following APIs:

  • GET /<index>/_count
  • GET _cat/count/<index>
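For example, the following request counts only the top-level documents in the index from the mapping example above:

GET my-index-000004/_count

The count value in the response excludes the hidden nested documents that store the chunk embeddings.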