Prerequisites
- Elasticsearch 9.3+ (or Elastic Cloud Serverless)
- An Elasticsearch API key and your Kibana URL
- An application instrumented with Elastic APM: the RUM agent for frontend interactions (populates traces-apm-*) and the APM agent for backend errors (populates logs-apm.error-*)
- Cursor (version 2.6+) installed
The problem with two worlds
Application logs and code live in two separate worlds that don't talk to each other. To apply log insights to your application, you have to analyze the logs in one tool, then come back to the editor and apply your findings by hand.
The Model Context Protocol (MCP) changes this. MCP is an open standard that lets AI clients like Cursor connect to external tools and data sources through a standardized interface. Instead of your IDE only knowing about your local code, it can also talk to your Elasticsearch cluster, query your APM data, and reason about production behavior alongside your source files.
Elastic ships a built-in MCP server as part of Agent Builder. You define tools in Kibana, expose them via the MCP endpoint, and any MCP-compatible client can call them. Cursor supports MCP natively, which means you can set this up in minutes.
What we're building
We're working with an eCommerce search app instrumented with Elastic APM. The RUM JS agent tracks filter click interactions from the browser, stored in traces-apm-default. The Node.js APM agent captures backend errors, stored in logs-apm.error-default.
Two situations come up during development:
- Use case 1: The product team wants to simplify the search page. There are six filters but we don't know which ones users actually click. We need usage data to decide which to keep.
- Use case 2: Users report intermittent 500 errors on search. The errors are not constant and started two days ago. We need the error details to find the root cause.
To bring that data into Cursor, we'll build two Agent Builder tools in Kibana and connect them via the Elastic Agent Builder MCP Server:
- get_filter_usage: queries traces-apm-default for filter click events and returns a usage breakdown by filter name
- get_recent_errors: queries logs-apm.error-default for the most recent error groups for a given service, including the exception message and stack trace culprit
For a deeper look at the overall architecture, see the Agent Builder reference guide.
Setting up the Elastic MCP Server
Step 1: Create the Agent Builder tools
We create both tools via the Kibana Agent Builder API. Each tool is an ES|QL query with a name and description that Cursor uses to decide when to call it. The full implementation of the tools is in the following notebook.
Tool 1: get_filter_usage
The product team needs to know which filters users actually click before deciding which ones to remove. The query reads RUM interaction events from traces-apm-default and groups them by filter name:
{
  "id": "get_filter_usage",
  "type": "esql",
  "description": "Returns the usage count for each search filter in the ecommerce-search-ui service, sorted by most used first.",
  "configuration": {
    "query": "FROM traces-apm-default | WHERE service.name == \"ecommerce-search-ui\" | WHERE transaction.type == \"user-interaction\" | WHERE labels.filter_name IS NOT NULL | STATS count = COUNT(*) BY labels.filter_name | SORT count DESC"
  }
}
Tool 2: get_recent_errors
For the error debugging use case, we need to surface the most frequent recent errors for a service, along with where in the code they originate. STATS ... BY groups errors by their fingerprint (grouping_key), surfaces the exception message and the line of code that caused it (culprit), and ranks by frequency:
{
  "id": "get_recent_errors",
  "type": "esql",
  "description": "Returns the most frequent error groups for ecommerce-search-ui, ranked by occurrence count, with the exception message and code location.",
  "configuration": {
    "query": "FROM logs-apm.error-default | WHERE service.name == \"ecommerce-search-ui\" | WHERE processor.name == \"error\" | STATS count = COUNT(*) BY error.grouping_key, error.exception.0.message, error.culprit | SORT count DESC | LIMIT 5"
  }
}
Both tools are created with POST /api/agent_builder/tools. You can learn more about the Kibana API endpoints for Elastic Agent Builder here.
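As a sketch of what those API calls look like, the snippet below assembles the request pieces for one tool-creation call. This is a hypothetical helper, not code from the article's notebook: YOUR_KIBANA_URL and YOUR_API_KEY are placeholders, and actually sending the request would use an HTTP client such as requests.post. Note that Kibana requires the kbn-xsrf header on write requests.

```python
import json

# Hypothetical sketch: assemble the HTTP pieces for creating one
# Agent Builder tool via POST /api/agent_builder/tools.
# YOUR_KIBANA_URL and YOUR_API_KEY are placeholders for your deployment.

GET_FILTER_USAGE = {
    "id": "get_filter_usage",
    "type": "esql",
    "description": (
        "Returns the usage count for each search filter in the "
        "ecommerce-search-ui service, sorted by most used first."
    ),
    "configuration": {
        "query": (
            "FROM traces-apm-default "
            '| WHERE service.name == "ecommerce-search-ui" '
            '| WHERE transaction.type == "user-interaction" '
            "| WHERE labels.filter_name IS NOT NULL "
            "| STATS count = COUNT(*) BY labels.filter_name "
            "| SORT count DESC"
        )
    },
}

def build_tool_request(tool: dict) -> dict:
    """Build the method, URL, headers, and body for one tool-creation call."""
    return {
        "method": "POST",
        "url": "https://YOUR_KIBANA_URL/api/agent_builder/tools",
        "headers": {
            "Authorization": "ApiKey YOUR_API_KEY",
            "Content-Type": "application/json",
            "kbn-xsrf": "true",  # Kibana rejects write requests without this header
        },
        "body": json.dumps(tool),
    }

request = build_tool_request(GET_FILTER_USAGE)
```

The same helper works for the get_recent_errors payload; only the tool dictionary changes.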
Step 2: Connect to Cursor
Open ~/.cursor/mcp.json and add the Elastic server. For detailed information, see the Cursor documentation. The Agent Builder MCP endpoint uses Server-Sent Events (SSE) transport, so we connect via mcp-remote, a lightweight bridge that Cursor invokes as a local process:
{
  "mcpServers": {
    "elastic-agent-builder": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://YOUR_KIBANA_URL/api/agent_builder/mcp",
        "--header",
        "Authorization: ApiKey YOUR_API_KEY"
      ]
    }
  }
}
Replace YOUR_KIBANA_URL and YOUR_API_KEY with your values.
Restart Cursor, open the Agent panel, and confirm that get_filter_usage and get_recent_errors appear in the available tools list.
Use case 1: Data-driven UI optimization
The eCommerce search page has six filters: category, manufacturer, price range, customer gender, day of week, and region. The product team wants to simplify the UI by removing rarely used filters. Rather than guessing, we ask Cursor to check.
When you type a prompt in Cursor's Agent panel, the model sees the name and description of every connected MCP tool. It matches your intent to the best-fitting tool and calls it automatically. This is why the description field we set in Step 1 matters: it's what the model reads to decide which tool answers your question. If you are interested in learning more about Cursor's MCP tool management, see the Cursor documentation.
Open a Cursor chat and ask: "Show me how often each search filter is used." Cursor calls the tool and returns something like:
The category and manufacturer filters get most of the clicks. The bottom three filters (customer_gender, day_of_week, region) are rarely used.
Ask Cursor to act on this: "Based on this data, simplify the SearchFilters component. Keep the top 3 filters visible, collapse the others under a 'More filters' toggle."
Cursor opens src/components/SearchFilters.jsx, reads the current implementation, and proposes the change.
Before and after versions of the component are shown in the accompanying screenshots.
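The core of the restructuring can be sketched in a few lines: rank filters by click count and split them into an always-visible group and a collapsed group. This is an illustrative sketch, not the actual component code, and the usage counts are made-up numbers standing in for what get_filter_usage returns.

```python
# Hypothetical sketch of the change Cursor proposes: keep the top 3
# filters visible and collapse the rest behind a "More filters" toggle.
# The counts below are illustrative, not real production data.

usage = {
    "category": 812,
    "manufacturer": 503,
    "price_range": 342,
    "customer_gender": 31,
    "day_of_week": 12,
    "region": 9,
}

def split_filters(usage: dict, visible_count: int = 3):
    """Partition filter names into visible and collapsed groups by click count."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    return ranked[:visible_count], ranked[visible_count:]

visible, collapsed = split_filters(usage)
# visible   -> ["category", "manufacturer", "price_range"]
# collapsed -> ["customer_gender", "day_of_week", "region"]
```

In the real component, the visible group renders as before and the collapsed group moves inside the toggle.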
The entire loop took one chat prompt. The decision was backed by production data, not a team discussion about what users probably care about.
Use case 2: Production error debugging
A bug report comes in: intermittent 500 errors on the search endpoint. The errors started appearing two days ago but they're not constant. The developer opens Cursor and asks: "Show me what errors ecommerce-search-ui is throwing."
Cursor calls the tool and returns the error groups:
The error message is explicit: category is a text field and can't be used in a terms aggregation. The correct field is category.keyword.
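The fix itself is a one-field change in the aggregation body. The sketch below is hypothetical, in the spirit of the fix rather than the app's actual query code: the field names come from the error message, while the surrounding query shape is illustrative.

```python
# Hypothetical sketch of the fix: a terms aggregation must target a
# keyword field, so we aggregate on the "category.keyword" sub-field
# instead of the analyzed text field "category".

def category_facets_query(size: int = 10) -> dict:
    """Build an aggregation body for category facets on the search endpoint."""
    return {
        "size": 0,
        "aggs": {
            "categories": {
                "terms": {
                    # was "category": text fields can't back a terms aggregation
                    "field": "category.keyword",
                    "size": size,
                }
            }
        },
    }
```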
With APM data available alongside your code, the debugging session becomes a conversation: you describe the symptom, the agent pulls the relevant logs, and you work through what's happening together. You can ask follow-up questions, check whether the error correlates with a recent deployment, or ask which endpoints are most affected, all within the same context where you'll make the fix. If you want to go further, Elastic also provides pre-built observability tools in Agent Builder that you can use alongside custom tools like the ones we created here. For a complementary approach to AI-driven observability, see how to monitor web AI agents with OpenLIT and Elastic.
Conclusion
What we covered:
- How to create Agent Builder tools in Kibana that wrap APM data queries
- How to connect the Elastic Agent Builder MCP Server to Cursor with a few lines of JSON
- Using production telemetry to make a UI decision backed by real usage data
- Debugging a production error from the same window where you fix it
These two use cases are a starting point. The same pattern works for any data you have in Elasticsearch: performance metrics, A/B test results, audit logs, feature flag usage, user session data. Define the Agent Builder tool, connect it via MCP, and it becomes part of your development context in Cursor. For other examples of what's possible, see automating synthetic monitoring with MCP and agentic CI/CD deployment gates.