k-nearest neighbor (kNN) search

A k-nearest neighbor (kNN) search finds the k nearest vectors to a query vector, as measured by a similarity metric.

Common use cases for kNN include:

  • Relevance ranking based on natural language processing (NLP) algorithms
  • Product recommendations and recommendation engines
  • Similarity search for images or videos

Prerequisites

  • To run a kNN search, you must be able to convert your data into meaningful vector values. You create these vectors outside of Elasticsearch and add them to documents as dense_vector field values. Queries are represented as vectors with the same dimension.

    Design your vectors so that the closer a document’s vector is to a query vector, based on a similarity metric, the better its match.

  • To complete the steps in this guide, you must have the following index privileges:

    • create_index or manage to create an index with a dense_vector field
    • create, index, or write to add data to the index you created
    • read to search the index
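
    For example, a role that bundles these privileges for the indices used in this guide might look like the following sketch (the role name is illustrative, and create could equally be index or write):

    PUT _security/role/knn-guide-role
    {
      "indices": [
        {
          "names": [ "product-index", "image-index" ],
          "privileges": [ "create_index", "create", "read" ]
        }
      ]
    }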

kNN methods

Elasticsearch supports two methods for kNN search:

  • Exact, brute-force kNN using a script_score query with a vector function
  • [preview] Approximate kNN using the knn search option

In most cases, you’ll want to use approximate kNN. Approximate kNN offers lower latency at the cost of slower indexing and imperfect accuracy.

Exact, brute-force kNN guarantees accurate results but doesn’t scale well with large datasets. With this approach, a script_score query must scan each matching document to compute the vector function, which can result in slow search speeds. However, you can improve latency by using a query to limit the number of matching documents passed to the function. If you filter your data to a small subset of documents, you can get good search performance using this approach.

Exact kNN

To run an exact kNN search, use a script_score query with a vector function.

  1. Explicitly map one or more dense_vector fields. If you don’t intend to use the field for approximate kNN, omit the index mapping option or set it to false. This can significantly improve indexing speed.

    PUT product-index
    {
      "mappings": {
        "properties": {
          "product-vector": {
            "type": "dense_vector",
            "dims": 5,
            "index": false
          },
          "price": {
            "type": "long"
          }
        }
      }
    }
  2. Index your data.

    POST product-index/_bulk?refresh=true
    { "index": { "_id": "1" } }
    { "product-vector": [230.0, 300.33, -34.8988, 15.555, -200.0], "price": 1599 }
    { "index": { "_id": "2" } }
    { "product-vector": [-0.5, 100.0, -13.0, 14.8, -156.0], "price": 799 }
    { "index": { "_id": "3" } }
    { "product-vector": [0.5, 111.3, -13.0, 14.8, -156.0], "price": 1099 }
    ...
  3. Use the search API to run a script_score query containing a vector function.

    To limit the number of matched documents passed to the vector function, we recommend you specify a filter query in the script_score.query parameter. If needed, you can use a match_all query in this parameter to match all documents. However, matching all documents can significantly increase search latency.

    POST product-index/_search
    {
      "query": {
        "script_score": {
          "query" : {
            "bool" : {
              "filter" : {
                "range" : {
                  "price" : {
                    "gte": 1000
                  }
                }
              }
            }
          },
          "script": {
            "source": "cosineSimilarity(params.queryVector, 'product-vector') + 1.0",
            "params": {
              "queryVector": [-0.5, 90.0, -10, 14.8, -156.0]
            }
          }
        }
      }
    }
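
    cosineSimilarity is only one of the vector functions available to script_score; the same filtered query could, for example, rank by Euclidean distance instead. A minimal sketch, inverting l2norm so that closer vectors score higher:

    POST product-index/_search
    {
      "query": {
        "script_score": {
          "query" : {
            "bool" : {
              "filter" : {
                "range" : {
                  "price" : {
                    "gte": 1000
                  }
                }
              }
            }
          },
          "script": {
            "source": "1 / (1 + l2norm(params.queryVector, 'product-vector'))",
            "params": {
              "queryVector": [-0.5, 90.0, -10, 14.8, -156.0]
            }
          }
        }
      }
    }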

Approximate kNN

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

To run an approximate kNN search, use the knn option to search a dense_vector field with indexing enabled.

  1. Explicitly map one or more dense_vector fields. Approximate kNN search requires the following mapping options:

    • An index value of true.
    • A similarity value. This value determines the similarity metric used to score documents based on how close the query and document vectors are. For a list of available metrics, see the similarity parameter documentation.

    PUT image-index
    {
      "mappings": {
        "properties": {
          "image-vector": {
            "type": "dense_vector",
            "dims": 3,
            "index": true,
            "similarity": "l2_norm"
          },
          "title": {
            "type": "text"
          },
          "file-type": {
            "type": "keyword"
          }
        }
      }
    }
  2. Index your data.

    POST image-index/_bulk?refresh=true
    { "index": { "_id": "1" } }
    { "image-vector": [1, 5, -20], "title": "moose family", "file-type": "jpg" }
    { "index": { "_id": "2" } }
    { "image-vector": [42, 8, -15], "title": "alpine lake", "file-type": "png" }
    { "index": { "_id": "3" } }
    { "image-vector": [15, 11, 23], "title": "full moon", "file-type": "jpg" }
    ...
  3. Run the search using the knn option.

    POST image-index/_search
    {
      "knn": {
        "field": "image-vector",
        "query_vector": [-5, 9, -12],
        "k": 10,
        "num_candidates": 100
      },
      "fields": [ "title", "file-type" ]
    }

The document _score is determined by the similarity between the query and document vector. See similarity for more information on how kNN search scores are computed.

Support for approximate kNN search was added in version 8.0. Before this, dense_vector fields did not support enabling index in the mapping. If you created an index prior to 8.0 containing dense_vector fields, then to support approximate kNN search the data must be reindexed using a new field mapping that sets index: true.
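
A minimal sketch of that migration, assuming a pre-8.0 index named my-index with a 3-dimensional dense_vector field named my-vector (both names are illustrative):

PUT my-index-knn
{
  "mappings": {
    "properties": {
      "my-vector": {
        "type": "dense_vector",
        "dims": 3,
        "index": true,
        "similarity": "l2_norm"
      }
    }
  }
}

POST _reindex
{
  "source": {
    "index": "my-index"
  },
  "dest": {
    "index": "my-index-knn"
  }
}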

Tune approximate kNN for speed or accuracy

To gather results, the kNN search API finds a num_candidates number of approximate nearest neighbor candidates on each shard. The search computes the similarity of these candidate vectors to the query vector, selecting the k most similar results from each shard. The search then merges the results from each shard to return the global top k nearest neighbors.

You can increase num_candidates for more accurate results at the cost of slower search speeds. A search with a high value for num_candidates considers more candidates from each shard. This takes more time, but gives the search a higher probability of finding the true k nearest neighbors.

Similarly, you can decrease num_candidates for faster searches with potentially less accurate results.
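
For example, rerunning the earlier image-index search with a larger num_candidates (the value 500 is purely illustrative) trades extra latency for a better chance of surfacing the true nearest neighbors:

POST image-index/_search
{
  "knn": {
    "field": "image-vector",
    "query_vector": [-5, 9, -12],
    "k": 10,
    "num_candidates": 500
  },
  "fields": [ "title", "file-type" ]
}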

Filtered kNN search

The kNN search API supports restricting the search using a filter. The search will return the top k documents that also match the filter query.

The following request performs an approximate kNN search filtered by the file-type field:

POST image-index/_search
{
  "knn": {
    "field": "image-vector",
    "query_vector": [54, 10, -2],
    "k": 5,
    "num_candidates": 50,
    "filter": {
      "term": {
        "file-type": "png"
      }
    }
  },
  "fields": ["title"],
  "_source": false
}

The filter is applied during the approximate kNN search to ensure that k matching documents are returned. This contrasts with a post-filtering approach, where the filter is applied after the approximate kNN search completes. Post-filtering has the downside that it sometimes returns fewer than k results, even when there are enough matching documents.

Combine approximate kNN with other features

You can perform hybrid retrieval by providing both the knn option and a query:

POST image-index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "mountain lake",
        "boost": 0.9
      }
    }
  },
  "knn": {
    "field": "image-vector",
    "query_vector": [54, 10, -2],
    "k": 5,
    "num_candidates": 50,
    "boost": 0.1
  },
  "size": 10
}

This search finds the global top k = 5 vector matches, combines them with the matches from the match query, and finally returns the 10 top-scoring results. The knn and query matches are combined through a disjunction, as if you took a boolean or between them. The top k vector results represent the global nearest neighbors across all index shards.

The score of each hit is the sum of the knn and query scores. You can specify a boost value to give a weight to each score in the sum. In the example above, the scores will be calculated as

score = 0.9 * match_score + 0.1 * knn_score

The knn option can also be used with aggregations. In general, Elasticsearch computes aggregations over all documents that match the search. So for approximate kNN search, aggregations are calculated on the top k nearest documents. If the search also includes a query, then aggregations are calculated on the combined set of knn and query matches.
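
As a sketch, the following request returns a terms aggregation on file-type computed over the k nearest neighbors (the aggregation name is illustrative, and size is set to 0 so only the aggregation is returned):

POST image-index/_search
{
  "knn": {
    "field": "image-vector",
    "query_vector": [54, 10, -2],
    "k": 5,
    "num_candidates": 50
  },
  "aggs": {
    "file-types": {
      "terms": {
        "field": "file-type"
      }
    }
  },
  "size": 0
}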

Indexing considerations

Elasticsearch shards are composed of segments, which are internal storage elements in the index. For approximate kNN search, Elasticsearch stores the dense vector values of each segment as an HNSW graph. Indexing vectors for approximate kNN search can take substantial time because of how expensive it is to build these graphs. You may need to increase the client request timeout for index and bulk requests.

Force merging the index to a single segment can improve kNN search latency. With only one segment, the search needs to check a single, all-inclusive HNSW graph. When there are multiple segments, kNN search must check several smaller HNSW graphs, searching each segment in turn. You should only force merge an index if it is no longer being written to.
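
If the index has stopped receiving writes, the merge can be triggered with the force merge API, for example:

POST image-index/_forcemerge?max_num_segments=1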

The HNSW algorithm has index-time parameters that trade off between the cost of building the graph, search speed, and accuracy. When setting up the dense_vector mapping, you can use the index_options argument to adjust these parameters:

PUT image-index
{
  "mappings": {
    "properties": {
      "image-vector": {
        "type": "dense_vector",
        "dims": 3,
        "index": true,
        "similarity": "l2_norm",
        "index_options": {
          "type": "hnsw",
          "m": 32,
          "ef_construction": 100
        }
      }
    }
  }
}

Limitations for approximate kNN search

  • You can’t run an approximate kNN search on a filtered alias. Running a kNN search against a filtered alias may incorrectly result in fewer than k hits.
  • You can’t run an approximate kNN search on a dense_vector field within a nested mapping.
  • When using kNN search in cross-cluster search, the ccs_minimize_roundtrips option is not supported.
  • Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors.

Approximate kNN search always uses the dfs_query_then_fetch search type in order to gather the global top k matches across shards. You cannot set the search_type explicitly when running kNN search.