Transforming the employee search experience with AI
In this article
Explore the development and business impact of ElasticGPT, our generative AI employee assistant that helps employees find information across company data. Results we’ve seen since launch include:
- Rapid ROI: Built on our Search AI Platform to solve information silos, the initiative achieved full investment payback in two months.
- Significant productivity improvement: We are saving 63 hours per employee annually and have achieved a 98% employee satisfaction rate.
- Platform validation: The development of ElasticGPT validates our platform as a secure, scalable foundation for enterprise AI, proving it drives efficiency while mitigating the risk of shadow AI.
The goal
Improve workforce efficiency with AI
Companies and executives realize the value of generative AI for delivering workforce efficiencies. Over 95% of C-suite executives believe an AI assistant that retrieves and summarizes information for day-to-day tasks can deliver tangible business value. To reduce information silos and create efficiencies for our employees, our IT team built and launched ElasticGPT, a generative AI employee assistant powered by the Search AI Platform.
63 hours saved per employee per year
Two-month payback period
ElasticGPT enables employees to do more within the workday. It supports information retrieval and knowledge discovery, so employees can quickly find information and answers across our company data. Today, our 3,000+ employees use ElasticGPT to find answers to common queries. ElasticGPT has significantly streamlined information retrieval and increased productivity: employees save more than five hours per month, or 63 hours annually, which can be reallocated to critical tasks.
“Laying out our generative AI strategy around how we are going to enable the business and investing in building our base foundation is going to be crucial so we don’t have to deal with the mess we create.”
— Matthew Minetola, CIO, Elastic
The challenge
Finding information and shadow IT
Our workforce spans several business functions, from sales, engineering, and marketing to HR, IT, and legal. Each business unit needs access to company-wide information and resources to support day-to-day operations and decision-making. Several factors across our organization led us to implement generative AI:
- Information overload: We continuously produce information and data across our organization, making it difficult for employees to quickly find the most relevant information across various enterprise data sources and systems.
- Data accuracy: Company-wide information and resources were fragmented and siloed, which eroded data quality; employees often found outdated or inaccurate information.
- Redundant requests: Shared service teams, such as HR, IT, and legal, often addressed redundant requests, leading to inefficiencies and wasted resources.
- Shadow AI: Since the launch of ChatGPT, our workforce has been using various unauthorized generative AI technologies within the organization, potentially exposing sensitive data and creating security risks for the company.
- Responsible AI adoption: We needed to give employees the benefits of generative AI while maintaining security and protecting the confidentiality of company data.
- Technical debt: With various enterprise generative AI point solutions in the market, our IT team wanted to avoid investing in tools that locked them into specific technology and instead utilize a flexible, scalable, and sustainable platform that could serve as a central landing point for future generative AI use cases and innovation.
“Our workforce was struggling to quickly find information across our fragmented company information. We knew generative AI was the solution, but in order to build it right, we knew we had to invest in building a strong framework and strategy.”
— Jay Shah, Senior Director, Enterprise Applications, Elastic
The solution
Building a robust generative AI foundation to launch ElasticGPT
With the rise of generative AI, various cases of shadow AI, and the demand for quicker access to information, we knew we had to focus on enabling generative AI technology across the organization to improve employee efficiency.
Our IT team enables our business with technology initiatives that deliver business value. As customer zero, we knew that before enabling generative AI, it was critical to build a robust foundation that could scale as our generative AI use cases grow. This included:
- Data foundation: To build a reliable and accurate generative AI experience that is grounded in our proprietary data, we tackled the data quality of our sources while creating a governance process and framework that we will leverage as we add more data sources over time. For our MVP, we landed on two main data sources: 1) Our internal Confluence site to address queries on our company strategy, teams, products, processes, and more. 2) ServiceNow knowledge articles to address HR and IT policies and processes.
- Centralized infrastructure: As a part of building out ElasticGPT, we wanted to invest in shared infrastructure to gain flexibility and maintain multiple generative AI experiences in one environment. This ensures that we avoid future technical debt (the additional work that would result from choosing a solution that doesn’t scale to meet our needs) while promoting and streamlining access to generative AI across the company. Now we can easily add data sources and specialized models, integrate these technologies within our existing business applications, and quickly deploy new generative AI applications.
- Secure access to LLMs: As a Search AI company, we knew our employees were testing different LLMs and public generative AI experiences. To reduce the risk of shadow AI, within ElasticGPT we offer secure access to multiple LLMs so that our teams can leverage the latest and greatest technology. A sketch of this pattern follows this list.
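As a rough illustration of that last idea (not ElasticGPT’s actual implementation), a server-side gateway that exposes only approved models behind a single entry point might look like the following minimal Python sketch. The provider names, environment variables, and the route_chat helper are hypothetical.

```python
# Hypothetical sketch of a gateway that brokers access to approved LLMs only.
# Provider names, env vars, and route_chat() are illustrative, not taken from
# ElasticGPT's actual implementation.
import os

from openai import AzureOpenAI, OpenAI

# Each approved provider is configured once, server-side, so employees never
# handle raw API keys or paste company data into unsanctioned tools.
PROVIDERS = {
    "gpt-4o": lambda: AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_KEY"],
        api_version="2024-06-01",
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    ),
    "gpt-4o-mini": lambda: OpenAI(api_key=os.environ["OPENAI_KEY"]),
}


def route_chat(model: str, messages: list[dict]) -> str:
    """Send a chat request to an approved model; reject anything else."""
    if model not in PROVIDERS:
        raise ValueError(f"Model '{model}' is not on the approved list")
    client = PROVIDERS[model]()
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

A single choke point like this also makes it straightforward to log usage and apply the same access controls to every model employees are allowed to try.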
To deliver ElasticGPT, we utilized the Search AI Platform on Elastic Cloud because it provides access to a comprehensive set of capabilities for building and monitoring generative AI applications.
We took advantage of our enterprise connectors to ingest data into Elastic. We then used Elasticsearch as a vector database, so large volumes of data can be retrieved and passed to our LLM. To ensure that answers were grounded in our internal data, we used retrieval augmented generation (RAG).
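As a minimal sketch of what that vector store looks like, the snippet below creates an index with a dense_vector field and adds a document to it. The index and field names, embedding dimensions, and endpoint are assumptions; in practice, the enterprise connectors handle ingestion from Confluence and ServiceNow.

```python
# Illustrative sketch only: in ElasticGPT, enterprise connectors ingest the
# Confluence and ServiceNow content; index and field names here are made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="...")  # assumed endpoint

# A dense_vector field lets documents be retrieved with kNN (vector) search.
es.indices.create(
    index="elasticgpt-knowledge",
    mappings={
        "properties": {
            "title": {"type": "text"},
            "body": {"type": "text"},
            "body_vector": {"type": "dense_vector", "dims": 1536},
        }
    },
)

# Each ingested article is stored alongside an embedding of its body text.
es.index(
    index="elasticgpt-knowledge",
    document={
        "title": "Example knowledge article",
        "body": "Placeholder article text ...",
        "body_vector": [0.0] * 1536,  # a real embedding of the body goes here
    },
)
```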
The backbone of ElasticGPT is SmartSource, which combines RAG with GPT-4o accessed through our Microsoft Azure OpenAI license. With lightning-fast semantic search and vector search, we can efficiently retrieve the most relevant content for the context of the query and pass it to GPT-4o for a polished answer in milliseconds. In July 2025, we launched ElasticGPT as an app directly in Slack, our internal communication channel, to enable faster, seamless access to our GenAI assistant.
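At a high level, that retrieve-then-generate flow follows the standard RAG pattern. A minimal, self-contained sketch of the pattern (not SmartSource itself; index, field, and deployment names are assumptions, and the query embedding is assumed to be computed elsewhere) might look like this:

```python
# Minimal RAG sketch (illustrative only): retrieve relevant passages with
# vector search, then ask GPT-4o via Azure OpenAI to answer from that context.
# Index, field, and deployment names are assumptions, not ElasticGPT's code.
import os

from elasticsearch import Elasticsearch
from openai import AzureOpenAI

es = Elasticsearch("https://localhost:9200", api_key="...")  # assumed endpoint
llm = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-06-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)


def answer(question: str, query_vector: list[float]) -> str:
    """Retrieve top passages for the query embedding and generate an answer."""
    hits = es.search(
        index="elasticgpt-knowledge",
        knn={
            "field": "body_vector",
            "query_vector": query_vector,
            "k": 5,
            "num_candidates": 50,
        },
    )["hits"]["hits"]
    context = "\n\n".join(hit["_source"]["body"] for hit in hits)

    completion = llm.chat.completions.create(
        model="gpt-4o",  # Azure deployment name; assumed
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided company context.",
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return completion.choices[0].message.content
```

The key point of this pattern is that the model only sees passages retrieved from our own indexed sources, which is what keeps answers grounded in internal data.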
To build a secure generative AI application, we implemented access controls and SSO authentication for all employees to manage security and compliance with policies and regulations. To track performance and health, we implemented Elastic Observability and Kibana to capture real user monitoring (RUM) data and resolve bottlenecks impacting the user experience.
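RUM data is typically captured in the browser by Elastic's RUM agent. As one possible server-side counterpart (an assumption on our part, since the backend stack isn't described here), Elastic's APM Python agent can be wired into a Flask service roughly as follows, so request traces land next to the RUM data in Kibana:

```python
# Illustrative only: the article mentions Elastic Observability and RUM, not a
# specific backend framework; Flask and the APM Python agent are assumptions.
from flask import Flask
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)
app.config["ELASTIC_APM"] = {
    "SERVICE_NAME": "elasticgpt-backend",          # hypothetical service name
    "SERVER_URL": "https://apm.example.com:8200",  # assumed APM Server endpoint
    "ENVIRONMENT": "production",
}
apm = ElasticAPM(app)  # traces each request so bottlenecks surface in Kibana
```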
Multiple capabilities and solutions on the Elastic Search AI Platform play a role in developing ElasticGPT. With these comprehensive capabilities, ElasticGPT is not just a generative AI experience but a robust platform for generative AI innovation at Elastic. Our investment in a centralized platform approach enables us to scale and evolve faster as AI technology advances.
“ElasticGPT’s real purpose is to build a framework and foundation for each line of business and team to apply generative AI in their functional sense. Investing in a platform approach enables each of these organizations to start faster.”
— Jay Shah, Senior Director, Enterprise Applications, Elastic
Use cases
Providing employees with self-service answers
ElasticGPT’s use cases will continue to advance as we add new data and models for function-specific applications. Today, ElasticGPT supports the following use cases, with example prompts:
- HR, IT, legal helpdesk:
  - What are key aspects of Elastic’s company 401K policy?
  - How do I get access to Salesforce?
  - When is our December holiday party?
- Company assistant:
  - What did I miss on our last company all hands?
  - Where is our latest earnings presentation?
  - What is our sustainability plan?
- Sales assistant:
  - Can you help me write a prospecting email for a CISO?
  - What is our commission structure?
  - How do I create an account plan?
- Product enablement:
  - What new features are we launching in our next release?
  - What’s the difference between Elastic Cloud Hosted and Serverless?
  - What integrations does Elastic offer?
- Support assistant:
  - How can I upgrade our customer to the latest version of Elastic?
  - Do we have documentation on how to get started with Elastic?
  - How can I optimize performance for our customer's observability workload?
The results
ROI achieved within two months of launch
Within the first three months of launch, ElasticGPT has enabled employees to quickly find the information they need across our proprietary data sources, reclaiming more than five hours per month, or 63 hours per year, per employee. Weighing this gain in workforce efficiency against the cost of running ElasticGPT, including hosting the LLM and the labor costs of building the solution, our IT team earned back its investment within two months of launch. Additionally, based on user ratings of the accuracy and reliability of retrieved results, we’ve achieved a 98% employee satisfaction rate.
There are several other benefits of ElasticGPT:
- Improved reliability: As users continue to engage with ElasticGPT, we’ve observed a flywheel effect that improves the quality of ElasticGPT’s knowledge base. When an employee comes across outdated or incorrect information, they update the source content, making ElasticGPT more accurate and reliable.
- Reclaim shared service teams’ time: Before ElasticGPT, employees would contact their respective HR, IT, and legal partners to request support or information, or search various knowledge articles to find answers. With ElasticGPT, teams can now self-serve and access real-time answers to critical questions, reducing time spent by shared service teams to support individual requests.
- Reduced shadow AI risk: Enabling secure access to multiple LLMs allows our teams to try out and leverage various LLMs without compromising personal or proprietary data.
“Investing and building ElasticGPT goes beyond traditional bottom line value. We’re looking to drive productivity across our business lines and we’ve built the foundation of what’s to come.”
— Jay Shah, Senior Director, Enterprise Applications, Elastic
What’s next
We’re continuing to expand on ElasticGPT’s roadmap by investing in the following:
- Expanding our knowledge base: To further increase self-service information retrieval and discovery, we’re continuing to bring new proprietary data sources to ElasticGPT to expand our knowledge base.
- Improved chat experience: We are investing in updating our UI and experience to allow for even more seamless and simplified access to information.
- Increased performance and stability: As we launch new features, we’re updating our core technology to make ElasticGPT faster and more reliable.
- AI-assisted workflows: We’re developing AI-assisted workflows by bringing ElasticGPT to existing business applications to streamline access. For example, we’re embedding ElasticGPT, particularly the sales assistant capabilities, into Salesforce so our sellers can easily use generative AI to accelerate their sales cycle.
- Automation and agentic AI: As we look into the future, we know agentic AI and workflow automation will be critical to improve efficiency. We are excited to see how we can bring these capabilities into ElasticGPT.
This use case is based on our use of our own products and services. As such, certain typical costs, such as licensing fees, were not incurred. The results, savings, and fees presented are illustrative and for information only, and may not necessarily reflect the outcomes achievable by users under our standard commercial terms and applicable fees. While similar results may be possible, individual outcomes may vary significantly depending on numerous factors. No guarantees are made or implied.