Red Hat extends collaboration with Elasticsearch vector database for Red Hat OpenShift AI

Red Hat and Elastic today announced a collaboration to integrate the Elasticsearch vector database with Red Hat OpenShift AI. Red Hat OpenShift users can now deploy Elasticsearch for vector search and Retrieval-Augmented Generation (RAG) applications via the Red Hat Ecosystem Catalog.

This announcement is a natural evolution of the multi-year collaboration between Red Hat and Elastic. Elastic Cloud on Kubernetes (ECK) is a certified offering on Red Hat OpenShift. Elastic is an IBM partner, and IBM Watsonx Assistant and Watsonx Discovery use Elastic vector search for question-answering and retrieval augmentation use cases.

With today’s announcement, Elasticsearch users will benefit from Red Hat OpenShift AI, a flexible, scalable MLOps platform for building, training, testing, and serving models for AI-enabled applications.

Elasticsearch vector database for generative AI and RAG apps

Elasticsearch Relevance Engine (ESRE) is a comprehensive suite of developer tools for building generative AI and RAG applications. ESRE incorporates a vector database that stores embeddings for text, image, and video data. ESRE’s native hybrid search combines full-text, vector, and geospatial queries in a single request, with support for filtering, aggregations, and document-level security.
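To make that concrete, here is a minimal sketch of a hybrid query using the Elasticsearch Python client: a BM25 match query combined with approximate kNN over a dense_vector field, plus a metadata filter. The URL, API key, index, and field names are illustrative assumptions, not part of today’s announcement.

```python
# Hybrid search sketch: lexical (BM25) relevance on `body` combined with
# approximate kNN over `body_vector`, pre-filtered by a keyword field.
# Index name, fields, and credentials are illustrative placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")

# In practice this embedding comes from the same model used at index time.
query_vector = [0.0] * 384

resp = es.search(
    index="kb-articles",
    query={"match": {"body": "rotate api keys"}},   # full-text side
    knn={
        "field": "body_vector",
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
        "filter": {"term": {"lang": "en"}},          # restrict the vector search
    },
    size=10,
)

for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```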

With ESRE, developers can implement vector search and semantic search, including k-nearest neighbors (kNN) and approximate nearest neighbor (ANN) search, along with support for both built-in and third-party natural language processing (NLP) models. ESRE also seamlessly integrates with key third-party ecosystem products from providers such as Cohere, LangChain, and LlamaIndex. Elasticsearch can be self-managed or deployed with Elastic Cloud.
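As one example of those integrations, the sketch below wires Elasticsearch into LangChain as a vector store behind semantic retrieval. Exact package paths vary by LangChain version, and the URL, index name, API key, and embedding model here are assumptions for illustration.

```python
# LangChain integration sketch: use Elasticsearch as the vector store for
# semantic retrieval. URL, index name, and model choice are illustrative.
from langchain_elasticsearch import ElasticsearchStore
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

store = ElasticsearchStore(
    index_name="rag-docs",
    embedding=embeddings,
    es_url="https://localhost:9200",
    es_api_key="<api-key>",
)

# Index a few documents; embeddings are computed client-side by the model above.
store.add_texts([
    "Red Hat OpenShift AI is an MLOps platform for building and serving models.",
    "The Elasticsearch Relevance Engine adds vector and hybrid search to Elasticsearch.",
])

# Semantic retrieval, e.g. to feed a RAG chain.
for doc in store.similarity_search("Which platform serves models?", k=2):
    print(doc.page_content)
```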

Elasticsearch as the preferred vector database solution on Red Hat OpenShift AI

As part of today’s announcement, users will be able to leverage ESRE capabilities by downloading Elasticsearch directly from the Red Hat Ecosystem Catalog.

What is Red Hat OpenShift AI for generative AI apps?

Red Hat OpenShift AI is a hybrid MLOps platform that brings IT, data science, and application development teams together. Designed to simplify generative AI application development and deployment, it provides a comprehensive infrastructure stack tailored for distributed workloads, including training, optimizing, fine-tuning, and deploying foundation and predictive AI models. Collaboration with model builders provides access to a variety of pre-built models, and developers and data scientists can work side by side on the same platform. The platform supports end-to-end AI lifecycle management, from model development and training to deployment, serving, and continuous monitoring.

  • Model development: Conduct exploratory data science in JupyterLab with access to core AI/ML libraries and frameworks, including TensorFlow and PyTorch, using our notebook images or your own (a minimal indexing sketch follows this list).
  • Model serving & monitoring: Deploy models on premises or on any cloud, in a fully managed or self-managed Red Hat OpenShift footprint, and centrally monitor their performance.
  • Lifecycle management: Create repeatable data science pipelines for model training and validation, and integrate them with DevOps pipelines to deliver models across your enterprise.
  • Increased capabilities and collaboration: Create projects and share them across teams. Combine Red Hat components, open-source software, and ISV-certified software.
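As a concrete illustration of the model development step above, the following notebook-style sketch embeds a couple of documents with a PyTorch-based sentence-transformers model and indexes them into Elasticsearch as dense vectors. The service URL, API key, index, and field names are assumptions for illustration, not prescribed by either product.

```python
# Notebook sketch: embed documents with a sentence-transformers (PyTorch) model
# and index them into Elasticsearch as dense vectors.
# The URL, API key, index, and field names are illustrative placeholders.
from elasticsearch import Elasticsearch, helpers
from sentence_transformers import SentenceTransformer

es = Elasticsearch("https://elasticsearch-es-http:9200", api_key="<api-key>")
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # 384-dim output

docs = [
    {"title": "OpenShift AI overview", "body": "Red Hat OpenShift AI is an MLOps platform ..."},
    {"title": "ESRE overview", "body": "The Elasticsearch Relevance Engine adds vector search ..."},
]

# Create an index whose dense_vector field matches the model's output dimension.
es.options(ignore_status=400).indices.create(
    index="kb-articles",
    mappings={
        "properties": {
            "title": {"type": "text"},
            "body": {"type": "text"},
            "body_vector": {
                "type": "dense_vector",
                "dims": 384,
                "index": True,
                "similarity": "cosine",
            },
        }
    },
)

# Bulk-index the documents together with their embeddings.
helpers.bulk(es, [
    {
        "_index": "kb-articles",
        "_source": {**doc, "body_vector": model.encode(doc["body"]).tolist()},
    }
    for doc in docs
])
```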

To get started, just follow the installation instructions provided in the Red Hat Ecosystem Catalog, and start building your next generative AI application with RAG!
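To make the RAG loop concrete, here is a minimal retrieval-and-prompt sketch against the hypothetical index from the earlier snippet: retrieve the most relevant passages with kNN, then assemble a grounded prompt. The final LLM call is left as a comment because it depends on which model and serving runtime you deploy on OpenShift AI.

```python
# RAG sketch: retrieve relevant passages with kNN, then build a grounded
# prompt for whichever LLM you serve from OpenShift AI.
# Index, field names, and credentials are illustrative placeholders.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("https://elasticsearch-es-http:9200", api_key="<api-key>")
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

question = "How does Red Hat OpenShift AI help with model serving?"

hits = es.search(
    index="kb-articles",
    knn={
        "field": "body_vector",
        "query_vector": model.encode(question).tolist(),
        "k": 3,
        "num_candidates": 50,
    },
    source=["title", "body"],
)["hits"]["hits"]

context = "\n\n".join(hit["_source"]["body"] for hit in hits)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)

# Send `prompt` to the model endpoint you deployed on OpenShift AI;
# the exact call depends on your serving runtime.
print(prompt)
```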

Visit Elasticsearch Labs for articles and sample notebooks on vector search, RAG, and more.

Ready to build RAG into your apps? Want to try different LLMs with a vector database?
Check out our sample notebooks for LangChain, Cohere, and more on GitHub, and join the Elasticsearch Engineer training starting soon!