How to Use Amazon Bedrock with Elasticsearch and Langchain

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon with a single API, along with a broad set of capabilities you need to build generative AI applications, simplifying development while maintaining privacy and security. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.

In this example, we will demonstrate how to split documents into passages, index these passages into Elasticsearch, and use Amazon Bedrock to answer questions based on the indexed data. This approach enhances the retrieval specificity and ensures comprehensive answers by leveraging relevant passages from the indexed documents.

1. Install packages and import modules

First, we need to install the required packages. Make sure Python 3.8.1 or later is installed.

!python3 -m pip install -qU langchain langchain-elasticsearch langchain_community boto3 tiktoken

Then we import the modules we will use:

# import modules
from getpass import getpass
from urllib.request import urlopen
from langchain_elasticsearch import ElasticsearchStore
from langchain_community.embeddings.bedrock import BedrockEmbeddings
from langchain_community.llms import Bedrock
from langchain.chains import RetrievalQA
import boto3
import json

Note: boto3 is the AWS SDK for Python and is required to call the Bedrock API.

2. Init Amazon Bedrock client

To authenticate with AWS, we can either configure credentials in the ~/.aws/config file or pass AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_REGION directly to the boto3 module.
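
If you go with the config-file approach (for example after running aws configure), boto3 resolves the credentials automatically and you can create the client without passing keys explicitly. A minimal sketch of that variant:

# Alternative: rely on credentials configured in ~/.aws/config / ~/.aws/credentials
bedrock_client = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")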

We're using the second approach for our example.

default_region = "us-east-1"
AWS_ACCESS_KEY = getpass("AWS Access key: ")
AWS_SECRET_KEY = getpass("AWS Secret key: ")
AWS_REGION = input(f"AWS Region [default: {default_region}]: ") or default_region

bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name=AWS_REGION,
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_KEY
)
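
Optionally, you can verify that the credentials work and see which base models your account has access to. Note that listing models uses the bedrock control-plane client rather than bedrock-runtime; this sanity check is not required for the rest of the example:

# Optional sanity check: list a few foundation models available to this account
bedrock = boto3.client(
    service_name="bedrock",
    region_name=AWS_REGION,
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_KEY,
)
for model in bedrock.list_foundation_models()["modelSummaries"][:5]:
    print(model["modelId"])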

3. Connect to Elasticsearch

ℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook. If you don't have an Elastic Cloud deployment, sign up here for a free trial.

Use your Elastic Cloud deployment’s Cloud ID and API key to connect to Elasticsearch. We will use ElasticsearchStore to connect to our Elastic Cloud deployment, which makes it easy to create the index and store the data. In the ElasticsearchStore instance, we set embedding to BedrockEmbeddings to embed the texts, and we specify the Elasticsearch index name that will be used in this example.

Getting Your Cloud ID

To find the Cloud ID for your deployment, log in to your Elastic Cloud account and select your deployment. The Cloud ID can be found on the deployment overview page. For detailed instructions, refer to the Elastic Cloud documentation on finding your Cloud ID.

Creating an API Key

To create an API key, navigate to the “Management” section of your Elastic Cloud deployment and select “API keys” under the “Security” tab. Follow the prompts to generate a new API key. For more information, see the Elastic Cloud documentation on creating an API key.

# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#finding-your-cloud-id
ELASTIC_CLOUD_ID = getpass("Elastic Cloud ID: ")

# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#creating-an-api-key
ELASTIC_API_KEY = getpass("Elastic API Key: ")

bedrock_embedding = BedrockEmbeddings(client=bedrock_client)

vector_store = ElasticsearchStore(
    es_cloud_id=ELASTIC_CLOUD_ID,
    es_api_key=ELASTIC_API_KEY,
    index_name="workplace_index",
    embedding=bedrock_embedding,
)
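
If you want to confirm the connection details before indexing anything, you can ping the deployment with the low-level Elasticsearch Python client (installed as a dependency of langchain-elasticsearch). This step is optional:

from elasticsearch import Elasticsearch

# Optional: check that the Cloud ID and API key are valid
es = Elasticsearch(cloud_id=ELASTIC_CLOUD_ID, api_key=ELASTIC_API_KEY)
print(es.info()["version"]["number"])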

4. Download the dataset

Let's download the sample dataset and deserialize the document.

url = "https://raw.githubusercontent.com/elastic/elasticsearch-labs/main/example-apps/chatbot-rag-app/data/data.json"

response = urlopen(url)

workplace_docs = json.loads(response.read())
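
As a quick check, you can inspect how many documents were loaded and which fields each one contains (the sample dataset includes fields such as name, summary, content, and rolePermissions):

# Quick look at the dataset
print(f"Loaded {len(workplace_docs)} documents")
print(workplace_docs[0].keys())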

5. Split documents into passages

We’ll chunk documents into passages in order to improve the retrieval specificity and to ensure that we can provide multiple passages within the context window of the final question answering prompt.

Here we are using a simple splitter, but LangChain also offers more advanced splitters that reduce the chance of context being lost.

from langchain.text_splitter import RecursiveCharacterTextSplitter

metadata = []
content = []

for doc in workplace_docs:
    content.append(doc["content"])
    metadata.append(
        {
            "name": doc["name"],
            "summary": doc["summary"],
            "rolePermissions": doc["rolePermissions"],
        }
    )

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=512, chunk_overlap=256
)
docs = text_splitter.create_documents(content, metadatas=metadata)
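
It can be helpful to check how many passages the splitter produced before indexing:

# How many passages did we get from the original documents?
print(f"Split {len(content)} documents into {len(docs)} passages")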

6. Index data into Elasticsearch

Next, we will index the data into Elasticsearch using ElasticsearchStore.from_documents. We will use the Cloud ID, API key, and index name values set in the previous steps.

documents = vector_store.from_documents(
    docs,
    es_cloud_id=ELASTIC_CLOUD_ID,
    es_api_key=ELASTIC_API_KEY,
    index_name="workplace_index",
    embedding=bedrock_embedding,
)
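
If you ran the optional connection check above, you can reuse the es client to confirm that the passages were indexed; the count should match the number of passages produced by the splitter:

# Optional: confirm the number of indexed passages
es.indices.refresh(index="workplace_index")
print(es.count(index="workplace_index")["count"])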

7. Init Amazon Bedrock LLM

Next, we will initialize the Amazon Bedrock LLM. In the Bedrock instance, we pass bedrock_client and a specific model_id, such as amazon.titan-text-express-v1, ai21.j2-ultra-v1, anthropic.claude-v2, or cohere.command-text-v14. You can see the list of available base models in the Amazon Bedrock User Guide.

default_model_id = "amazon.titan-text-express-v1"
AWS_MODEL_ID = input(f"AWS model [default: {default_model_id}]: ") or default_model_id
llm = Bedrock(
    client=bedrock_client,
    model_id=AWS_MODEL_ID
)
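
As a quick smoke test, you can send a standalone prompt to the model before wiring it into the retrieval chain:

# Optional: quick smoke test of the Bedrock LLM
print(llm.invoke("What is Amazon Bedrock?"))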

8. Asking a question

Now that the passages are stored in Elasticsearch and the LLM is initialized, we can ask a question and have it answered using the relevant passages.

retriever = vector_store.as_retriever()

qa = RetrievalQA.from_llm(
    llm=llm,
    retriever=retriever,
    return_source_documents=True
)

questions = [
    'What is the nasa sales team?',
    'What is our work from home policy?',
    'Does the company own my personal project?',
    'What job openings do we have?',
    'How does compensation work?'
]
question = questions[1]
print(f"Question: {question}\n")

ans = qa({"query": question})

print("\033[92m ---- Answer ---- \033[0m")
print(ans["result"] + "\n")
print("\033[94m ---- Sources ---- \033[0m")
for doc in ans["source_documents"]:
    print("Name: " + doc.metadata["name"])
    print("Content: " + doc.page_content)
    print("-------\n")

Example Output

For the question “What is our work from home policy?”, you might get:

 ---- Answer ----

This policy applies to all employees who are eligible for remote work as determined by their role and responsibilities. It is designed to allow employees to work from home full time while maintaining the same level of performance and collaboration as they would in the office.

 ---- Sources ----
Name: Work From Home Policy
Content: Effective: March 2020
...

Trying it out

Amazon Bedrock is a powerful tool that can be used in many ways. Try it out with different base models, different questions, and even different datasets to see how it performs. To learn more about Amazon Bedrock, check out the documentation.
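
For example, to try out different questions quickly, you can run all of the sample questions from the list above through the same chain:

# Run every sample question through the RetrievalQA chain
for q in questions:
    res = qa({"query": q})
    print(f"Question: {q}")
    print(f"Answer: {res['result']}\n")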

You can also run this example in Google Colab.
