Getting started with the Elastic AI Assistant for Observability and Amazon Bedrock

Follow this step-by-step process to get started with the Elastic AI Assistant for Observability and Amazon Bedrock.

Elastic recently released version 8.13, which includes the general availability of the Amazon Bedrock integration for the Elastic AI Assistant for Observability. This blog post walks through the step-by-step process of setting up the Elastic AI Assistant with Amazon Bedrock. Then, we’ll show you how to add content to the AI Assistant’s knowledge base, demonstrating how the power of Elasticsearch combined with Amazon Bedrock can supercharge the AI Assistant’s answers so that they are uniquely specific to your needs.

Managing applications and the infrastructure they run on requires advanced observability into the diverse types of data involved, such as logs, traces, profiles, and metrics. General-purpose generative AI large language models (LLMs) offer a new capability: human-readable guidance for your observability questions. However, they have limitations. When it comes to your application’s distinct observability data, such as real-time metrics, LLMs require additional context to provide answers that will actually help resolve issues. This is a limitation that the Elastic AI Assistant for Observability can uniquely solve.

Elastic Observability, serving as a central datastore for all the observability data flowing from your application, combined with the Elastic AI Assistant gives you the ability to generate a context window that informs an LLM’s responses and vastly improves the answers it provides. For example, when you ask the Elastic AI Assistant about a specific issue happening in your application, it gathers the relevant details, such as current errors captured from logs or a related runbook that your team has stored in the AI Assistant’s knowledge base. It then sends that information to the Amazon Bedrock LLM as a context window from which the model can better answer your observability questions.

Read on to follow the steps for setting up the Elastic AI Assistant for yourself.

Set up the Elastic AI Assistant for Observability: Create an Amazon Bedrock connector in Elastic Cloud

Start by creating an Elastic Cloud 8.13 deployment via the AWS Marketplace. If you’re a new user of Elastic Cloud, you can create a new deployment with a 7-day free trial.

Sign in to the Elastic Cloud deployment you’ve created. From the top level menu, select Stack Management.

Select Connectors.

Click the Create connector button.

Enable Amazon Bedrock model access

To populate the required connector settings, enable Amazon Bedrock model access in the AWS console using the following steps.

In a new browser tab, open Amazon Bedrock and click the Get started button.

Currently, access to the Amazon Bedrock foundation models must be requested through the Model access section of the AWS console.

Select Model access from the navigation menu.

To request access, select the foundation models that you want to access and click the Save Changes button. For this blog post, we will choose the Anthropic Claude models.

Once access is granted, the Manage model access page will indicate that access has been granted for the selected models.
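If you prefer the command line, you can also confirm which Claude models are offered in your region with the AWS CLI. This is a minimal sketch assuming AWS CLI v2 is installed and configured; the region and function name below are illustrative, and model access itself is still granted in the console as described above.

```shell
REGION="us-east-1"   # example region; Bedrock availability varies by region

list_claude_models() {
  # Lists the Anthropic model IDs offered in the region.
  # Requires the bedrock:ListFoundationModels permission.
  aws bedrock list-foundation-models \
    --region "$REGION" \
    --by-provider anthropic \
    --query 'modelSummaries[].modelId' --output text
}
# list_claude_models   # uncomment to run against your AWS account
```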

Create AWS IAM User

Create an IAM user, assign it a role with Amazon Bedrock full access, and generate an IAM access key and secret key in the console. If you already have an IAM user with an access key and secret key, you can use those existing credentials to access Amazon Bedrock.
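The same setup can be sketched with the AWS CLI. The user name here is a hypothetical example, and this assumes the CLI is configured with an identity that is allowed to manage IAM; adapt it to your account's conventions.

```shell
USER_NAME="bedrock-elastic-connector"   # hypothetical user name
POLICY_ARN="arn:aws:iam::aws:policy/AmazonBedrockFullAccess"

create_bedrock_user() {
  # Create the user and attach the AWS-managed Bedrock full-access policy.
  aws iam create-user --user-name "$USER_NAME"
  aws iam attach-user-policy --user-name "$USER_NAME" --policy-arn "$POLICY_ARN"
  # Print the access key pair; save both values for the Elastic connector.
  aws iam create-access-key --user-name "$USER_NAME" \
    --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text
}
# create_bedrock_user   # uncomment to run against your AWS account
```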

Configure Elastic connector to use Amazon Bedrock

Back in the Create connector flyout of your Elastic Cloud deployment, select the Amazon Bedrock connector.

Enter a Name of your choice for the connector, along with the Access Key and Secret that you generated in the previous step. Click the Save & test button to create the connector.

Within the Edit Connector flyout window, click the Run button to confirm that the connector configuration is valid and can successfully connect to your Amazon Bedrock instance.

You should see confirmation that the connector test was successful.
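For automation, the same connector can be created through Kibana's Connectors API rather than the UI. The sketch below reflects the general shape of that API; treat the endpoint URL, the `.bedrock` connector type, and the `config`/`secrets` field names as assumptions to verify against the docs for your Kibana version, and replace the placeholder URL, model ID, and credentials with your own.

```shell
# Placeholder Kibana endpoint; replace with your deployment's Kibana URL.
KIBANA_URL="https://my-deployment.kb.us-east-1.aws.found.io:9243"
PAYLOAD=$(cat <<'JSON'
{
  "name": "amazon-bedrock",
  "connector_type_id": ".bedrock",
  "config": {
    "apiUrl": "https://bedrock-runtime.us-east-1.amazonaws.com",
    "defaultModel": "anthropic.claude-3-sonnet-20240229-v1:0"
  },
  "secrets": {
    "accessKey": "<AWS_ACCESS_KEY_ID>",
    "secret": "<AWS_SECRET_ACCESS_KEY>"
  }
}
JSON
)

create_connector() {
  # kbn-xsrf header is required by Kibana's HTTP APIs.
  curl -s -u "elastic:<password>" \
    -H "kbn-xsrf: true" -H "Content-Type: application/json" \
    -X POST "$KIBANA_URL/api/actions/connector" -d "$PAYLOAD"
}
# create_connector   # uncomment to run against your deployment
```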

Add an example logs record

Now that the connector is configured, let's add a logs record to demonstrate how the Elastic AI Assistant can help you to better understand the diverse types of information contained within logs.

Use the Elastic Dev Tools to add a single logs record. Click the top-level menu and select Dev Tools.

Within the console area of Dev Tools, enter the following POST statement:

POST /logs-elastic_agent-default/_doc
{
  "message": "Status(StatusCode=\"BadGateway\", Detail=\"Error: The server encountered a temporary error and could not complete your request\").",
  "@timestamp": "2024-04-21T10:33:00.884Z",
  "log": {
    "level": "error"
  },
  "service": {
    "name": "proxyService"
  },
  "host": {
    "name": "appserver-2"
  }
}

Then run the POST command by clicking the green Run button.

You should see a 201 response confirming that the example logs record was successfully created.
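If you'd rather script this step than use Dev Tools, the same index request can be sent with curl. The Elasticsearch endpoint URL and credentials below are placeholders for your deployment's values.

```shell
# Placeholder Elasticsearch endpoint; replace with your deployment's URL.
ES_URL="https://my-deployment.es.us-east-1.aws.found.io:9243"
DOC=$(cat <<'JSON'
{
  "message": "Status(StatusCode=\"BadGateway\", Detail=\"Error: The server encountered a temporary error and could not complete your request\").",
  "@timestamp": "2024-04-21T10:33:00.884Z",
  "log": { "level": "error" },
  "service": { "name": "proxyService" },
  "host": { "name": "appserver-2" }
}
JSON
)

index_log() {
  # Index a single document into the logs data stream.
  curl -s -u "elastic:<password>" -H "Content-Type: application/json" \
    -X POST "$ES_URL/logs-elastic_agent-default/_doc" -d "$DOC"
}
# index_log   # uncomment to run; expect "result": "created" in the response
```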

Use the Elastic AI Assistant

Now that you have a log entry, let’s use the AI Assistant to see how it interacts with logs data. Click the top-level menu and select Observability.

Select Logs Explorer under Observability.

In the Logs Explorer search box, enter the text “badgateway” and press the Enter key to perform the search.

Click the View all matches button to include all search results.
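Under the hood, this UI search is equivalent to a query against the logs data stream. As a rough sketch, a query_string search like the one below would match the record, since the standard analyzer lowercases "BadGateway" to the single token "badgateway"; the endpoint URL and credentials are placeholders.

```shell
# Placeholder Elasticsearch endpoint; replace with your deployment's URL.
ES_URL="https://my-deployment.es.us-east-1.aws.found.io:9243"
QUERY=$(cat <<'JSON'
{
  "query": { "query_string": { "query": "badgateway" } }
}
JSON
)

search_logs() {
  # Search the logs data stream for the "badgateway" term.
  curl -s -u "elastic:<password>" -H "Content-Type: application/json" \
    "$ES_URL/logs-elastic_agent-default/_search" -d "$QUERY"
}
# search_logs   # uncomment to run against your deployment
```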

You should see the one log record that you previously inserted via Dev Tools. Click the expand icon in the actions column to see the log record’s details.

You should see the expanded view of the logs record. Let’s use the AI Assistant to summarize it. Click the What's this message? button.

We get a fairly generic answer back. Depending on the exception or error we're trying to analyze, this can still be really useful, but we can improve this response by adding additional documentation to the AI Assistant knowledge base.

Let’s add an entry in the AI Assistant’s knowledge base to improve its understanding of this specific logs message.

Click the AI Assistant button at the top right of the window.

Click the Install Knowledge base button.

Click the top-level menu and select Stack Management.

Then select AI Assistants.

Click Elastic AI Assistant for Observability.

Select the Knowledge base tab.

Click the New entry button and select Single entry.

Give it the Name “proxyservice” and enter the following text as the Contents:

I have the following runbook located on GitHub. Store this information in your knowledge base and always include the link to the runbook in your response if the topic is related to a bad gateway error.

Runbook Link:

Runbook Title: Handling 502 Bad Gateway Errors

Summary: This is likely an issue with Nginx proxy configuration

Body: This runbook provides instructions for diagnosing and resolving 502 Bad Gateway errors in your system.

Click Save to save the new knowledge base entry.

Now let’s go back to the Observability Logs Explorer. Click the top-level menu and select Observability.

Then select Logs Explorer.

Expand the same logs entry as you did previously and click the What’s this message? button.

The response you get now should be much more relevant.

Try out the Elastic AI Assistant with a knowledge base filled with your own data

Now you’ve seen the complete process of connecting the Elastic AI Assistant to Amazon Bedrock. You’ve also seen how to use the AI Assistant’s knowledge base to store custom remediation documentation like runbooks that the AI Assistant can leverage to generate more helpful responses. Steps like this can help you remediate issues more quickly when they happen. Try out the Elastic AI Assistant with your own logs and custom knowledge base.

Start a 7-day free trial by signing up via AWS Marketplace and quickly spin up a deployment in minutes on any of the Elastic Cloud regions on AWS around the world.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

In this blog post, we may have used or referred to third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use.

Elastic, Elasticsearch, ESRE, Elasticsearch Relevance Engine and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.