Elastic Playground: Using Elastic connectors to chat with your data

Learn how to use Elastic connectors and Playground to chat with your data. We'll start by using connectors to search for information in different sources.

Elasticsearch allows you to index data quickly and in a flexible manner. Try it free in the cloud or run it locally to see how easy indexing can be.

Elastic connectors make it easy to index and combine data from different sources to run unified searches. With the addition of Playground, you can set up a knowledge base that you can chat with and ask questions.

Connectors are a type of Elastic integration that are helpful for syncing data from different sources to an Elasticsearch index.

In this article, we'll see how to index a Confluence Wiki using the Elastic connector, configure an index to run semantic queries, and then use Playground to chat with your data.

Steps

  1. Configure the connector
  2. Prepare the index
  3. Chat with your data using Playground

Configure the connector

In our example, our Wiki works as a centralized repository for a hospital and contains info on:

  • Doctors' profiles: specialty, availability, contact info.
  • Patients' files: medical records and other relevant data.
  • Hospital guidelines: policies, emergency protocols, and instructions for staff.

We'll index the content from our Wiki using the Elasticsearch-managed Confluence connector.

The first step is to get your Atlassian API Key:

Configuring the Confluence connector

You can follow the steps here to guide you through the configuration:

  1. Access your Kibana instance and go to Search > Connectors.
  2. Click on Add a connector and select Confluence from the list.
  3. Name the new connector "hospital".
  4. Then click on the Create new index button.
  5. Click on Edit configurations and, for this example, set the data source to "Confluence Cloud". The required fields are:
    1. Confluence Cloud account email
    2. API Key
    3. Confluence URL
  6. Save the configuration and go to the next step.

By default, the connector will index:

  • Pages
  • Spaces
  • Blog Posts
  • Attachments

To make sure we only index the wiki, you need to use an advanced sync rule that includes only pages inside the space named "Hospital Health", identified by the key "HH".

You can check out additional examples here.
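As a sketch, an advanced sync rule restricting the sync to the "HH" space could look like the following (Confluence advanced sync rules take CQL queries; the exact space key comes from our example wiki):

```
[
  {
    "query": "space = HH"
  }
]
```

With this rule in place, pages outside the "Hospital Health" space are skipped during sync.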

Now, let's run a Full Content Sync to index our wiki.

Once completed, we can check the indexed documents on the "Documents" tab.

Preparing the index

With what we have so far, we could run full-text queries on our content. Since we want to ask questions rather than search for keywords, we now need semantic search.

For this purpose, we will use Elastic's ELSER model as the embeddings provider.

To configure this, use the Elasticsearch inference API.

Go to Kibana Dev Tools and copy this code to start the endpoint:
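A minimal version of that request, using the inference API's ELSER service, might look like this (the endpoint id `my-elser-endpoint` is a name of our choosing, and the allocation settings may vary by Elasticsearch version):

```
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  }
}
```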

Now the model is loading in the background. You might get a 502 Bad Gateway error if you haven't used the ELSER model before. To make sure the model is loading, check Machine Learning > Trained Models:

Let's add a semantic_text field using the UI. Go to the connector's page, select Index mappings, and click on Add Field.

Select "Semantic text" as field type. For this example, the reference field will be "body" and the field name content_semantic. Finally, select the inference endpoint we've just configured.

Before clicking on "Add field", check that your configuration looks similar to this:

Now click on "Save mapping":
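If you prefer Dev Tools over the UI, an equivalent mapping update might look like this (the index name `hospital` and the endpoint id `my-elser-endpoint` are illustrative; `copy_to` into a `semantic_text` field requires a recent Elasticsearch version):

```
PUT hospital/_mapping
{
  "properties": {
    "body": {
      "type": "text",
      "copy_to": "content_semantic"
    },
    "content_semantic": {
      "type": "semantic_text",
      "inference_id": "my-elser-endpoint"
    }
  }
}
```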

Once you've run the Full Content Sync from the UI, let's check everything is OK by running a semantic query:
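A sketch of such a query, using the `semantic` query type against the field we just mapped (the question text is only an illustration):

```
GET hospital/_search
{
  "query": {
    "semantic": {
      "field": "content_semantic",
      "query": "Who should I contact in case of an emergency?"
    }
  }
}
```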

The response should look something like this:

Chat with your data using Playground

What is Playground?

Playground is a low-code platform hosted in Kibana that allows you to easily create a RAG application and ask questions of your indices, regardless of whether they have embeddings.

Playground not only provides a chat UI with citations and full control over the queries, but also supports different LLMs to synthesize the answers.

You can read this article for a deeper insight and test the online demo to familiarize yourself with it.

Configure Playground

To begin, you only need credentials for one of the compatible LLM providers:

  • OpenAI (or any local model compatible with OpenAI API)
  • Amazon Bedrock
  • Google Gemini

When you open Playground, you have the option to configure the LLM provider and select the index with the documents you want to use as knowledge base.

For this example, we'll use OpenAI. You can check this link to learn how to get an API key.

Let's create our OpenAI connector by clicking Connect to an LLM > OpenAI and let's fill in the fields as in the image below:

To select the index we created using the Confluence connector, click on "Add data sources" and click on the index.

NOTE: You can select more than one index, if you want.

Now that we're done configuring, we can start asking the model questions.

Aside from choosing to include citations with the source document in your answers, you can also control which fields are sent to the LLM for use in the search.

The View Code window provides the Python code you need to integrate this into your apps.

Conclusion

In this article, we learned that connectors can be used both to search for information across different sources and as a knowledge base for Playground. We also saw how to easily deploy a RAG application to chat with your data without leaving the Elastic environment.

Frequently asked questions

What are Elastic Connectors?

Elastic connectors are a type of Elastic integration that are helpful for syncing data from different sources to an Elasticsearch index.
