Elasticsearch open inference API adds support for Azure OpenAI chat completions

Azure OpenAI chat completions is available via the Elasticsearch inference API. Learn how to use this feature to answer questions.

Elasticsearch has native integrations with industry-leading Gen AI tools and providers. Check out our webinars on going Beyond RAG Basics, or on building production-ready apps with the Elastic vector database.

To build the best search solutions for your use case, start a free cloud trial or try Elastic on your local machine now.

We’ve integrated Azure OpenAI chat completions into the inference API, which allows our customers to build powerful GenAI applications based on chat completion using large language models like GPT-4. Azure and Elasticsearch developers can use the unique capabilities of the Elasticsearch vector database and the Azure AI ecosystem to power unique GenAI applications with the model of their choice.

This blog quickly goes over the catalog of supported providers in the open inference API and then walks through an example of using Azure OpenAI chat completions to answer questions.

The inference API is growing…fast!

We’re rapidly extending the catalog of supported providers in the open inference API. Check out some of our latest blog posts on Elastic Search Labs to learn more about recent integrations around embeddings, completions, and reranking:

Azure OpenAI chat completions support is available through the open inference API in our stateless offering on Elastic Cloud, and it will soon be available to everyone in an upcoming versioned Elasticsearch release. This also complements the capability to use the Elasticsearch vector database in the Azure OpenAI service.

Using Azure OpenAI chat completions to answer questions

In my last blog post about OpenAI chat completions, we learned how to summarize text using OpenAI’s chat completions. In this guide, we’ll use Azure OpenAI chat completions to answer questions during ingestion so that answers are ready ahead of searching. Make sure you have your Azure OpenAI API key, deployment ID, and resource name ready: create a free Azure account first and set up a model suited for chat completions. You can follow Azure's OpenAI Service GPT quickstart guide to get a model up and running. In the following example we’ve used `gpt-4` with the API version `2024-02-01`. You can read more about supported models and versions here.

In Kibana, you have access to a console where you can run the next steps against Elasticsearch without even needing to set up an IDE.

First, we configure a model that will perform completions:
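Below is a minimal sketch of that request, using the `azureopenai` service and the illustrative endpoint ID `azure_openai_completion`; replace the placeholder credentials with the values from your own Azure deployment:

```
PUT _inference/completion/azure_openai_completion
{
  "service": "azureopenai",
  "service_settings": {
    "api_key": "<api_key>",
    "resource_name": "<resource_name>",
    "deployment_id": "<deployment_id>",
    "api_version": "2024-02-01"
  }
}
```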

You’ll get back a response similar to the following with status code `200 OK` on successful inference creation:
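It should look roughly like this (the exact fields can vary by Elasticsearch version, and the API key is never returned):

```
{
  "model_id": "azure_openai_completion",
  "task_type": "completion",
  "service": "azureopenai",
  "service_settings": {
    "resource_name": "<resource_name>",
    "deployment_id": "<deployment_id>",
    "api_version": "2024-02-01"
  },
  "task_settings": {}
}
```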

You can now call the configured model to perform completion on any text input. Let’s ask the model what’s inference in the context of GenAI:
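For example:

```
POST _inference/completion/azure_openai_completion
{
  "input": "What is inference in the context of GenAI?"
}
```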

You should get back a response with status code `200 OK` explaining what inference is:
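The generated text is returned in the `completion` field; the wording below is illustrative and will differ from run to run:

```
{
  "completion": [
    {
      "result": "In the context of GenAI, inference refers to using a trained model to generate output, such as text, for new input it has not seen before."
    }
  ]
}
```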

Now we can set up a small catalog of questions that we want answered during ingestion. We’ll use the Bulk API to index three questions about Elastic products:
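A sketch of that bulk request follows; the index name `questions` and the three questions themselves are illustrative, so adapt them to your own catalog:

```
POST _bulk
{ "index": { "_index": "questions" } }
{ "question": "What is Elasticsearch?" }
{ "index": { "_index": "questions" } }
{ "question": "What is Kibana?" }
{ "index": { "_index": "questions" } }
{ "question": "What is Logstash?" }
```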

You’ll get back a response with status `200 OK` similar to the following upon successful indexing:
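The response contains one entry per indexed document; IDs and timings will differ, and only the first item is shown here:

```
{
  "errors": false,
  "took": 5,
  "items": [
    {
      "index": {
        "_index": "questions",
        "_id": "<generated_id>",
        "result": "created",
        "status": 201
      }
    }
  ]
}
```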

Next, we’ll create our question-answering ingest pipeline using the script, inference, and remove processors:
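A sketch of the pipeline definition is shown below. The pipeline name `question_answering_pipeline` and the field names `prompt` and `answer` are illustrative; the inference processor’s `input_output` mapping sends the temporary `prompt` field to the model and writes the completion into `answer`:

```
PUT _ingest/pipeline/question_answering_pipeline
{
  "processors": [
    {
      "script": {
        "source": "ctx.prompt = 'Please answer the following question: ' + ctx.question"
      }
    },
    {
      "inference": {
        "model_id": "azure_openai_completion",
        "input_output": {
          "input_field": "prompt",
          "output_field": "answer"
        }
      }
    },
    {
      "remove": {
        "field": "prompt"
      }
    }
  ]
}
```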

This pipeline prefixes the content with the instruction “Please answer the following question: ” in a temporary field named `prompt`. The content of this temporary `prompt` field is sent to Azure’s OpenAI Service through the inference API to perform a completion. Using an ingest pipeline allows for immense flexibility, as you can change the pre-prompt to anything you like; this also lets you summarize documents, for example. Check out Elasticsearch open inference API adds support for OpenAI chat completions to learn how to build a summarization ingest pipeline!

We now send our documents containing questions through the question-answering pipeline by calling the reindex API:
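Here is a sketch of the reindex call, with the destination index `answers` chosen for illustration; a small batch `size` helps keep each pass within the provider’s rate limits:

```
POST _reindex
{
  "source": {
    "index": "questions",
    "size": 50
  },
  "dest": {
    "index": "answers",
    "pipeline": "question_answering_pipeline"
  }
}
```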

You'll get back a response with status `200 OK` similar to the following:
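Roughly like this, trimmed to the most relevant fields; timings and counts depend on your data:

```
{
  "took": 4654,
  "timed_out": false,
  "total": 3,
  "created": 3,
  "updated": 0,
  "batches": 1,
  "failures": []
}
```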

In a real-world setup you’ll probably use another ingestion mechanism to ingest your documents in an automated way. Check out our Adding data to Elasticsearch guide to learn more about the various options Elastic offers for ingesting data into Elasticsearch. We’re also committed to showcasing ingest mechanisms and providing guidance on bringing data into Elasticsearch using third-party tools. Take a look at Ingest Data from Snowflake to Elasticsearch using Meltano: A developer’s journey for an example of how to use Meltano for ingesting data.

You're now able to search for your pre-generated answers using the Search API:
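For example, a simple `match_all` query against the illustrative `answers` index returns everything the pipeline enriched:

```
POST answers/_search
{
  "query": {
    "match_all": {}
  }
}
```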

In the response you'll get back your pre-generated answers:
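Each hit carries the original question alongside its pre-generated answer; the response below is trimmed, and the answer text is illustrative:

```
{
  "hits": {
    "total": { "value": 3, "relation": "eq" },
    "hits": [
      {
        "_index": "answers",
        "_source": {
          "question": "What is Elasticsearch?",
          "answer": "Elasticsearch is a distributed search and analytics engine built on Apache Lucene..."
        }
      }
    ]
  }
}
```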

Pre-generating answers for frequently asked questions is particularly effective in reducing operational costs. By minimizing the need for on-the-fly response generation, you can significantly cut down on the computational resources required, such as token usage. Additionally, this method ensures that every user receives the same, precise information. Consistency is crucial, especially in fields requiring high reliability and accuracy, such as medical, legal, or technical support.

More to come!

We’re already working on adding support for more task types with Cohere, Google Vertex AI, and many more. Furthermore, we’re actively developing an intuitive UI in Kibana for managing inference endpoints. Lots of exciting stuff to come! Bookmark Elastic Search Labs now to keep up with Elastic’s innovations in the GenAI space!
