One API. All data types. Exceptional relevance.

Elasticsearch gives you the tools to combine keyword precision with semantic recall, so your results are always relevant across all data types.

Hybrid search with Elasticsearch: From keywords to context

Search every data type in one datastore, and power retrieval augmented generation (RAG) and agents with results that balance BM25F accuracy and semantic understanding. Start fast with great defaults and an easy-to-use API, and then customize on your terms.

  • One distributed datastore for all of your data

    The best vector database starts with search. Elasticsearch scales hybrid search effortlessly across billions of documents, delivering best-in-class relevance, flexible model support, and cost-efficient performance, all in one platform. Query it all with ES|QL: Joins, analytics, and more.

  • Simple to start and powerful to customize

    With the elegance and speed of a single API, build hybrid search that balances exact term matches with contextual meaning using filters, boosts, ranking, and reranking. Start fast and configure with full control.

  • Text, geo, or multimodal — hybrid for every data type

    With Elasticsearch, hybrid search adapts to whatever combination you need. Fuse lexical with vectors, geo with semantic, or text with images to fit your use case, and deliver results that are as precise as they are relevant.
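As a minimal sketch of the single-query idea above, a hybrid request can fuse a lexical retriever and a kNN retriever with reciprocal rank fusion via the retriever API. The index field names ("title", "title_vector") and the query vector are illustrative, not part of any real schema:

```python
# Sketch: build a hybrid search request body for Elasticsearch's retriever API.
# Field names and the vector below are illustrative assumptions.
def hybrid_query(text, vector, k=10):
    """Fuse a BM25 match with kNN retrieval under reciprocal rank fusion."""
    return {
        "retriever": {
            "rrf": {  # reciprocal rank fusion of the sub-retrievers below
                "retrievers": [
                    {"standard": {"query": {"match": {"title": text}}}},
                    {"knn": {
                        "field": "title_vector",
                        "query_vector": vector,
                        "k": k,
                        "num_candidates": 5 * k,  # oversample for better recall
                    }},
                ]
            }
        }
    }

body = hybrid_query("vector database", [0.1, 0.2, 0.3])
```

The body would then be sent as the JSON payload of a `_search` request; either leg can be filtered or weighted independently before fusion.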

Why developers choose Elasticsearch

Get the best tools for precision, explainability, and control. Lexical search excels at structured queries, rare terms, and out-of-domain data. Semantic search adds fuzziness and recall when exact matches fall short. Control how they work together with tunable scoring, filters, and boosts.
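One way to sketch that lexical-side control: per-field weights via combined_fields (BM25F-style scoring) plus a non-scoring filter. The index fields, weights, and the "status" filter are illustrative assumptions:

```python
# Sketch: a weighted lexical query. Field names and boosts are illustrative.
def weighted_lexical_query(text):
    return {
        "query": {
            "bool": {
                "must": [{
                    "combined_fields": {  # BM25F-style scoring over weighted fields
                        "query": text,
                        "fields": ["title^3", "abstract^2", "body"],
                    }
                }],
                # Filters constrain matches without affecting the score.
                "filter": [{"term": {"status": "published"}}],
            }
        }
    }

body = weighted_lexical_query("hybrid search")
```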

| | For exact, structured, and explainable queries | For flexible, semantic, high-recall search | For production-grade relevance from both worlds |
| --- | --- | --- | --- |
| Scoring that makes sense | Use BM25F scoring with full control over field weights and term boosts — no model required. | Retrieve semantically related results via dense_vector or semantic_text fields. | Combine results via reciprocal_rank_fusion or <options> in the rank API. |
| Full control in your query DSL | Tune relevance using combined_fields, boost, fuzziness, synonyms, and analyzers. | Bring your own embeddings or use built-in inference with ELSER, OpenAI, etc. | Use a single hybrid query with shared filters, weights, and rerank logic. |
| Filters that just work | Get native support for geo, term, range, and ACL filters — fast and stable at scale. | ACORN-1 enables fast filtered kNN even on large datasets with filter clause support. | The shared filtering layer works across both retrievers — no pipeline stitching required. |
| Debug and inspect capabilities | Use explain, profile, and the _rank_features field to understand how docs score. | Vector scores are fully exposed — inspect similarity math or weight contributions. | Gain end-to-end debug visibility across both search paths — down to each reranker’s impact. |
| Good for when ... | You need precision, filtering, and control — for logs, catalogs, identifiers, and compliance. | You're handling vague queries, new terms, semantic drift, or unknown phrasing. | You want robust, tunable, explainable results — even when queries get weird. |
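The shared-filter point above can be sketched as a kNN clause that carries its own filter, for example an ACL-style tenant restriction. The "embedding" and "tenant_id" names are illustrative assumptions:

```python
# Sketch: filtered kNN. The filter is applied while collecting candidates
# (not as a post-filter), so k results can still come back when enough
# documents match. Field names are illustrative.
def filtered_knn(vector, tenant, k=10):
    return {
        "knn": {
            "field": "embedding",
            "query_vector": vector,
            "k": k,
            "num_candidates": 10 * k,
            "filter": {"term": {"tenant_id": tenant}},
        }
    }

body = filtered_knn([0.12, 0.34, 0.56], "acme")
```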

Rightsize your relevance journey

Elasticsearch gives you relevance control at every level — from zero-config to full customization. Explore the full tuning journey on Elasticsearch Labs.

  • Use BM25F: the original no-LLM-needed technology.

  • Use ELSER or E5 with lexical search for better recall on complex queries.

  • Expert mode: Use rerankers, retrievers, and BBQ to ship domain-specific retrieval pipelines.
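As a sketch of the middle step, a semantic_text field can sit next to a plain text field so that chunking and inference happen inside the index at ingest and query time. The field names and the inference endpoint id here are hypothetical; point inference_id at whatever ELSER, E5, or custom endpoint you have configured:

```python
# Sketch: an index mapping pairing a lexical field with a semantic_text field.
# "my-elser-endpoint" is a hypothetical inference endpoint id.
mapping = {
    "mappings": {
        "properties": {
            "body": {"type": "text"},  # BM25 side of a hybrid query
            "body_semantic": {
                "type": "semantic_text",
                "inference_id": "my-elser-endpoint",
            },
        }
    }
}
```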

Best in class? Built right in

With native integrations to all the leading AI products, your apps go further, faster.

[Diagram: a four-column ecosystem view of partner logos across Model Providers, Platform Providers, MLOps and orchestration tools, and Open Standard API clients, showing Elastic connecting natively to the full AI stack.]

Frequently asked questions

What is hybrid search?

Hybrid search combines keyword (lexical) precision with vector (semantic) similarity, so users get relevant results even when queries don’t match exact text.
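The fusion behind that answer is often reciprocal rank fusion: each document scores the sum of 1/(k + rank) over the ranked lists it appears in, with k = 60 as the conventional constant. A minimal, self-contained sketch:

```python
# Sketch: reciprocal rank fusion over any number of ranked result lists.
def rrf(rankings, k=60):
    """rankings: list of ranked doc-id lists, best first. Returns fused order."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks highly in both lists, so it wins the fused ranking.
fused = rrf([["a", "b", "c"], ["b", "c", "a"]])  # → ["b", "a", "c"]
```

Because only ranks are used, the lexical and vector scores never need to be on the same scale, which is why RRF is a robust default for hybrid search.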