
ChatGPT and Elasticsearch: A plugin to use ChatGPT with your Elastic data
Learn how to implement a plugin and enable ChatGPT users to extend ChatGPT with any content indexed in Elasticsearch, using the Elastic documentation.

Chunking Large Documents via Ingest pipelines plus nested vectors equals easy passage search
In this post we'll show how to ingest large documents and break them into sentences via an ingest pipeline, so that each sentence can be text embedded and stored as a nested vector, enabling semantic passage search over large documents.
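The chunk-then-embed flow the teaser describes can be sketched as an ingest pipeline body plus a nested mapping. This is a minimal sketch, not the article's actual implementation: the field names (`body_content`, `chunks`), model id, and the naive sentence split on `.` are all illustrative, and it assumes a recent Elasticsearch where the inference processor supports `input_output`.

```python
def chunking_pipeline(model_id: str) -> dict:
    """Sketch of an ingest pipeline body: a script processor splits the raw
    `body_content` field into sentence-sized chunks, then a foreach +
    inference processor embeds each chunk. Field names are hypothetical."""
    return {
        "processors": [
            {
                "script": {
                    "description": "Naively split body_content into sentence chunks",
                    "source": (
                        "ctx['chunks'] = new ArrayList();"
                        "for (String s : ctx['body_content'].splitOnToken('.')) {"
                        "  if (s.trim().length() > 0) {"
                        "    ctx['chunks'].add(['text': s.trim()]);"
                        "  }"
                        "}"
                    ),
                }
            },
            {
                "foreach": {
                    "field": "chunks",
                    "processor": {
                        "inference": {
                            "model_id": model_id,
                            "input_output": [
                                {
                                    "input_field": "_ingest._value.text",
                                    "output_field": "_ingest._value.embedding",
                                }
                            ],
                        }
                    },
                }
            },
        ]
    }


# Matching index mapping: `chunks` is `nested` so every sentence keeps its
# own vector (dims shown for a hypothetical 384-dim embedding model).
nested_mapping = {
    "properties": {
        "chunks": {
            "type": "nested",
            "properties": {
                "text": {"type": "text"},
                "embedding": {"type": "dense_vector", "dims": 384},
            },
        }
    }
}
```

Storing the chunks as `nested` objects is what lets a kNN query match an individual sentence while still returning the parent document.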

Improving information retrieval in the Elastic Stack: Improved inference performance with ELSER v2
Learn about the improvements we've made to the inference performance of ELSER v2.

Improving information retrieval in the Elastic Stack: Optimizing retrieval with ELSER v2
Learn about how we're reducing retrieval costs for ELSER v2.

Evaluating RAG: A journey through metrics
Learn how Elastic is evaluating RAG.

Finding your puppy with Image Search
Have you ever found a lost puppy on the street and not known whether it had an owner? Learn how to find out with vector search, also known as image search.

Generative AI using Elastic and Amazon SageMaker JumpStart
Learn how to build a generative AI solution with Amazon SageMaker JumpStart, Elastic, and open source LLMs from Hugging Face, using the sample implementation provided in this post and a data set relevant to your business.

How to deploy NLP: Text Embeddings and Vector Search
Taking text embeddings and vector similarity search as the example task, this blog describes how to get up and running with deep learning models for natural language processing, and demonstrates vector search in Elasticsearch.

Less merging and faster ingestion in Elasticsearch 8.11
Elasticsearch 8.11 improves how it manages its indexing buffer, resulting in less segment merging.

How to get the best of lexical and AI-powered search with Elastic’s vector database
Elastic has all you should expect from a vector database — and much more! You get the best of both worlds: traditional lexical and AI-powered search, including semantic search out of the box with Elastic’s novel Learned Sparse Encoder model.
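The "best of both worlds" claim corresponds to running a lexical query and an approximate kNN search in one request and blending their scores. A minimal sketch, assuming a hypothetical index with a `title` text field and a 384-dim `title_vector` dense vector field; the query vector would come from an embedding model, and the boost values are illustrative:

```python
def hybrid_query(text: str, vector: list) -> dict:
    """Sketch of an Elasticsearch search body combining BM25 (match) and
    approximate kNN in a single request; scores are blended via boosts."""
    return {
        # Lexical side: classic BM25 scoring on the title field.
        "query": {"match": {"title": {"query": text, "boost": 0.2}}},
        # Vector side: top-level kNN clause, weighted more heavily here.
        "knn": {
            "field": "title_vector",
            "query_vector": vector,
            "k": 10,
            "num_candidates": 100,
            "boost": 0.8,
        },
        "size": 10,
    }
```

When both clauses are present, a document's final score is the boosted sum of its lexical and vector scores, so the boost ratio controls the lexical/semantic balance.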

Lexical and Semantic Search with Elasticsearch
In this blog post, you will explore two approaches to retrieving text with Elasticsearch: lexical and semantic search.

Bringing Maximum-Inner-Product into Lucene
How we brought maximum-inner-product into Lucene
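On the Elasticsearch side, maximum inner product shows up as a `dense_vector` similarity option. A small sketch of such a mapping, assuming a version where this similarity is available; the field name and dimension count are illustrative:

```python
# dense_vector mapping that scores with maximum inner product rather than
# cosine, useful for embedding models whose vectors are not normalized.
mip_mapping = {
    "properties": {
        "embedding": {
            "type": "dense_vector",
            "dims": 768,
            "index": True,
            "similarity": "max_inner_product",
        }
    }
}
```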

Improving information retrieval in the Elastic Stack: Introducing Elastic Learned Sparse Encoder, our new retrieval model
Deep learning has transformed how people retrieve information. We've created a retrieval model that works with a variety of text with streamlined processes to deploy it. Learn about the model's performance, its architecture, and how it was trained.

Accessing machine learning models in Elastic
Bring your own transformer models into Elastic to use optimized embedding models and NLP, or integrate with third-party transformer models such as OpenAI GPT-4 via APIs to leverage more accurate, business-specific content based on private data stores.

Introducing Elastic Learned Sparse Encoder: Elastic’s AI model for semantic search
Elastic Learned Sparse Encoder is an AI model for high relevance semantic search across domains. As a sparse vector model, it expands the query with terms that don't exist in the query itself, delivering superior relevance without domain adaptation.
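Because the model produces sparse term/weight vectors, querying it looks different from dense kNN. A minimal sketch of a `text_expansion` query, assuming documents were ingested through ELSER into a tokens field; the field name `ml.tokens` and the model id are illustrative:

```python
def elser_query(text: str) -> dict:
    """Sketch of a text_expansion query: the query text is expanded by the
    ELSER model into weighted terms at search time and matched against the
    sparse vectors stored at ingest."""
    return {
        "query": {
            "text_expansion": {
                "ml.tokens": {  # field populated by the ELSER inference processor
                    "model_id": ".elser_model_2",  # illustrative model id
                    "model_text": text,
                }
            }
        }
    }
```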

Multilingual vector search with the E5 embedding model
In this post we'll introduce multilingual vector search. We'll use the Microsoft E5 multilingual embedding model, which has state-of-the-art performance in zero-shot and multilingual settings. We'll walk through how multilingual embeddings work in general and then how to use E5 in Elasticsearch.
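One practical detail with E5 is that inputs need a task prefix: `"query: "` for search text and `"passage: "` for indexed documents. A sketch of a kNN search that lets Elasticsearch embed the query server-side, assuming the model is deployed under an illustrative id and the index has a `passage_embedding` dense vector field:

```python
def e5_knn_query(question: str) -> dict:
    """Sketch of a kNN search using query_vector_builder so the deployed E5
    model embeds the query text at search time; note the 'query: ' prefix
    that E5 models expect on search inputs."""
    return {
        "knn": {
            "field": "passage_embedding",
            "k": 5,
            "num_candidates": 50,
            "query_vector_builder": {
                "text_embedding": {
                    "model_id": ".multilingual-e5-small",  # illustrative id
                    "model_text": f"query: {question}",
                }
            },
        }
    }
```

Because E5 is multilingual, the same index of `"passage: "`-prefixed documents can be queried in a different language than it was written in.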

Stateless — your new state of find with Elasticsearch
Discover the future of stateless Elasticsearch. Learn how we're investing in a new, fully cloud-native architecture to push the boundaries of scale and speed.

Vector search in Elasticsearch: The rationale behind the design
There are different ways to implement a vector database, each with different trade-offs. In this blog, you'll learn how vector search has been integrated into Elasticsearch and the trade-offs that we made.

Generative AI architectures with transformers explained from the ground up
This long-form article explains how generative AI works, from the ground all the way up to generative transformer architectures with a focus on intuitions.

Using hybrid search for gopher hunting with Elasticsearch and Go
Just like animals and programming languages, search has undergone an evolution of different practices that can be difficult to pick between. In the final blog of this series, Carly Richmond and Laurent Saint-Félix combine keyword and vector search to hunt for gophers in Elasticsearch using the Go client.

Go-ing gopher hunting with Elasticsearch and Go
Just like animals and programming languages, search has undergone an evolution of different practices that can be difficult to pick between. Join us as we use Go to hunt for gophers in Elasticsearch using traditional keyword search.

Finding gophers with vector search in Elasticsearch and Go
Just like animals and programming languages, search has undergone an evolution of different practices that can be difficult to pick between. Join us on part two of our journey hunting gophers in Go with vector search in Elasticsearch.