Relevance workbench
In this workbench, you can compare our Elastic Learned Sparse Encoder model (with or without Reciprocal Rank Fusion, RRF) against traditional textual search using BM25.
Start comparing different hybrid search techniques using TMDB's movies dataset as sample data. Or fork the code and ingest your own data to try it on your own!
Try these queries to get started:
- "The matrix"
- "Movies in Space"
- "Superhero animated movies"
Notice how some queries work well with both search techniques. For example, "The Matrix" performs well with both models. However, for queries like "Superhero animated movies", the Elastic Learned Sparse Encoder model outperforms BM25, thanks to the model's semantic search capabilities: it matches on meaning rather than on exact keyword overlap.
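When the two techniques are combined, RRF merges the BM25 and ELSER result lists by rank alone, with no score normalization needed. The sketch below shows the standard Reciprocal Rank Fusion formula; the movie IDs and result lists are made up for illustration and are not taken from the workbench.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked result lists.

    Each ranking is a list of document IDs ordered best-first.
    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in (rank is 1-based); k=60 is the common default.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top hits for "Superhero animated movies":
bm25_hits = ["superman_1978", "the_incredibles", "megamind"]
elser_hits = ["the_incredibles", "spiderverse", "megamind"]

fused = rrf_fuse([bm25_hits, elser_hits])
# Documents ranked highly by both lists rise to the top.
```

Because RRF only looks at ranks, a document that both retrievers place near the top (here, "the_incredibles") outranks one that a single retriever ranked first.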
Explore similar demos

Platform
Index Lifecycle Management
Since time-series data such as logs grows continually, you can use Index Lifecycle Management to control how long log data is retained and how large it is allowed to grow. This helps you manage the costs of storing your logs.
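As a concrete illustration, an ILM policy body typically defines a hot phase that rolls over to a new backing index at a size or age limit, and a delete phase that removes old data. The structure below follows the Elasticsearch ILM API; the specific thresholds (50 GB, 30 days, 90 days) are example values, not recommendations from this page.

```python
# A minimal sketch of an ILM policy request body. The field names
# follow the Elasticsearch ILM API; the thresholds are assumptions.
ilm_policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    # Roll over to a fresh backing index once either
                    # limit is hit, capping how large any one index grows.
                    "rollover": {
                        "max_primary_shard_size": "50gb",
                        "max_age": "30d",
                    }
                }
            },
            "delete": {
                # Delete indices 90 days after rollover, bounding
                # how long log data is retained.
                "min_age": "90d",
                "actions": {"delete": {}},
            },
        }
    }
}
```

You would PUT this body to the ILM policy endpoint and attach the policy to your logs index template so every new backing index inherits it.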

Search
Run the Chatbot RAG App with Vertex AI - using Docker
Follow the step-by-step process of setting up and running the Elastic Chatbot RAG example app with Google Cloud Vertex AI. This tour demonstrates how to use Docker Compose to run the app.

Search
Run the Chatbot RAG App with Vertex AI - using Python
Follow the step-by-step process of setting up and running the Elastic Chatbot RAG example app with Google Cloud Vertex AI. This tour demonstrates how to use Python and Flask to run the app.