Rethinking public sector AI: 5 shifts driving mission outcomes

The public sector is experiencing a tectonic operational shift. In the past, artificial intelligence (AI) has primarily served as a tool for individual productivity. Analysts used generative models to summarize documents, draft policy outlines, and schedule workflows.
These localized efficiencies were necessary to test the waters. Today, the conversation has changed. Public sector organizations are moving beyond isolated task assistance to orchestrating end-to-end organizational processes. Agentic AI is now applied to complex, multistage workflows that solve real-world problems, from improving citizen services to detecting fraud and risk.
This transition demands more than just new algorithms and different models. It requires a fundamental rethinking of data architecture, governance frameworks, and the relationship between human expertise and machine automation. IT leaders must now focus on how to securely connect distributed data to AI agents to drive measurable, scalable impact across their agencies.
1. Investing in AI that delivers measurable impact
Public sector agencies are committing significant portions of their operational budgets to AI, and they expect those investments to yield tangible results.
According to Massimiliano Claps, research director at IDC, agencies are reallocating significant innovation funding toward AI. "We collected survey data from over 600 public sector entities across the US, and 152 of them were from the federal government. They're telling us that in 65% of agencies we interviewed, they plan to allocate 11% or more of their IT budget to AI in 2026 and going forward," says Claps.1 As a result, government agencies must link their technology investments directly to improved service delivery and operational continuity.
Furthermore, roughly 80% of these agencies expect measurable value within 12 months, and they anticipate a 2x return within two years.1 These demanding timelines leave no room for data silos. IT leaders need a unified platform that delivers immediate visibility across their entire operational footprint.
2. Rethinking architecture for scalable impact
It’s easy to test AI by manually dropping a few PDF files into a chat interface. But how do you scale that to an entire organization's historical data?
Transforming fragmented data into an AI-ready foundation requires an open, flexible architecture that connects information across environments. As Dave Erickson, public sector distinguished architect at Elastic, notes, “Many organizations are thinking about how to ensure an open architecture, so they haven't created a new data silo oriented around a single cloud or single vendor.” He continues, “You have to think about that and keep it modular and open. OpenTelemetry is important because it gives a level of agnosticism.”
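The decoupling Erickson describes can be illustrated with a minimal, hypothetical Python sketch. This is not the OpenTelemetry API itself; it shows the pattern that standards like OpenTelemetry formalize, where instrumented code depends on a neutral interface rather than any one vendor's backend:

```python
from abc import ABC, abstractmethod


class SpanExporter(ABC):
    """Vendor-neutral export interface: instrumented code never
    imports a specific backend, only this abstraction."""

    @abstractmethod
    def export(self, span: dict) -> None: ...


class InMemoryExporter(SpanExporter):
    """Stand-in backend; a real deployment would swap in an exporter
    for its chosen observability or analytics platform."""

    def __init__(self):
        self.exported = []

    def export(self, span: dict) -> None:
        self.exported.append(span)


class Tracer:
    """Application code depends only on Tracer, so changing vendors
    means changing the exporter, not the instrumentation."""

    def __init__(self, exporter: SpanExporter):
        self.exporter = exporter

    def record(self, name: str, **attrs) -> None:
        self.exporter.export({"name": name, **attrs})


# The workflow names and attributes below are illustrative assumptions.
tracer = Tracer(InMemoryExporter())
tracer.record("case.triage", agency="benefits", duration_ms=42)
```

Because every span flows through the same interface, no new silo forms around a single cloud or vendor, which is the agnosticism the quote points to.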
This need for openness extends beyond architecture into how data is stored and accessed. “You can't just put data in static storage buckets and expect AI to magically derive insights,” adds James Garside, senior customer enterprise specialist at Elastic.
This architectural shift takes time and discipline. Reflecting on the United Kingdom’s journey, Garside adds, “We definitely took a little bit longer to implement ... to just make sure we got it right.”
To get the most out of that data, teams need to instantly and accurately search across petabytes of information. This level of performance demands an architecture engineered for high-speed search and retrieval at scale.
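As a concrete illustration, a retrieval request against such an architecture typically combines full-text relevance with exact-match filters. The sketch below assembles an Elasticsearch-style Query DSL body in Python; the field names and values are illustrative assumptions, not a prescribed schema:

```python
import json


def build_case_query(text: str, agency: str, size: int = 10) -> dict:
    """Build a Query DSL body that scores documents by full-text
    relevance ('match') while restricting results to one agency
    with a non-scoring exact-match 'filter'."""
    return {
        "size": size,
        "query": {
            "bool": {
                "must": [{"match": {"content": text}}],
                "filter": [{"term": {"agency": agency}}],
            }
        },
    }


# Hypothetical request: free-text search scoped to a single agency.
body = build_case_query("benefits eligibility appeal", agency="social-services")
print(json.dumps(body, indent=2))
```

Separating the scoring clause from the filter lets the engine cache the filter and skip scoring for excluded documents, which matters at petabyte scale.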
3. Redefining control from human in the loop to human in the lead
“Out of our recent US survey of 152 federal IT and mission executives, 0% told us that they want no human oversight,” Claps points out.1 “That’s a very telling story.”
As automation handles more complex processes, the role of the human operator evolves. The "human in the loop" model, where an analyst simply clicks an approval button, is giving way to a "human in the lead" approach. In this model, AI acts as a dedicated assistant, processing data at scale while the human sets the strategy and makes the final decision.
Erickson agrees. "There is a lot of context and institutional knowledge that comes from people. It is the role of AI to help automate something we already know how to do the right way," he notes. This keeps people in control, allowing them to guide and refine outcomes while ensuring critical institutional knowledge remains the guiding force behind every decision.
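One way to make "human in the lead" concrete in software is a gate where the agent can only propose and nothing executes without an explicit human decision. The sketch below is a minimal, hypothetical Python pattern under that assumption; the names and workflow are illustrative, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Proposal:
    """What the AI agent is allowed to produce: a recommendation
    with its rationale and confidence, never a completed action."""
    action: str
    rationale: str
    confidence: float


@dataclass
class HumanInTheLeadGate:
    """Execution requires an explicit human decision; the default,
    absent a named approver, is always no action."""
    audit_log: list = field(default_factory=list)

    def decide(self, proposal: Proposal, approved_by: Optional[str]) -> bool:
        executed = approved_by is not None
        self.audit_log.append({
            "action": proposal.action,
            "confidence": proposal.confidence,
            "approved_by": approved_by,
            "executed": executed,
        })
        return executed


gate = HumanInTheLeadGate()
p = Proposal("flag_claim_for_review", "amount deviates 4x from history", 0.93)
assert gate.decide(p, approved_by=None) is False       # no human, no action
assert gate.decide(p, approved_by="analyst.jones") is True
```

The audit log records every proposal and decision, preserving the institutional context the humans bring to each outcome.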
Want to learn what we got wrong about AI? Watch the on-demand webinar to hear more insights from IDC and Elastic experts.
5. Strategic autonomy and the sovereign AI imperative
As data becomes the critical fuel for agentic workflows, controlling where that data resides and who has access to it is paramount. Sovereign AI has emerged as a global priority for organizations handling sensitive or classified information.
According to IDC, 46% of federal entities surveyed currently use some form of sovereign AI, and another 38% plan to do so within the next 12 months.1 This reflects a growing recognition that sovereignty is not the same as self-sufficiency. It’s not about isolation; it’s about controlling where data is located, how it is exchanged, and who has access with what rights. To maintain strategic autonomy, IT leaders must control their technology stack, starting with the foundational data layer.
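That definition of sovereignty as control rather than isolation can be sketched as an access policy an agentic workflow must satisfy before touching a dataset. The Python below is a hypothetical illustration; the region and role names are assumptions, not real policy values:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ResidencyPolicy:
    """Declares where data may reside and who may use it. Data can
    still move and be shared, but only within these bounds."""
    allowed_regions: frozenset
    allowed_roles: frozenset


def may_access(policy: ResidencyPolicy, region: str, role: str) -> bool:
    """Gate an agent's data access: both the data's location and the
    requester's role must be explicitly permitted by the policy."""
    return region in policy.allowed_regions and role in policy.allowed_roles


# Illustrative policy for a sensitive dataset.
policy = ResidencyPolicy(
    allowed_regions=frozenset({"us-gov-east", "us-gov-west"}),
    allowed_roles=frozenset({"caseworker", "fraud-analyst"}),
)
assert may_access(policy, "us-gov-east", "caseworker") is True
assert may_access(policy, "eu-central", "caseworker") is False   # wrong region
```

Encoding residency and access rights as explicit, checkable policy keeps control in the organization's hands without cutting it off from exchange.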
Prioritizing your next steps for AI integration
The shift from individual productivity to mission impact is already underway. To keep pace, organizations must move beyond isolated experiments and focus on the architectural foundations that support AI at scale.
Start by auditing your current data landscape to identify silos that hinder real-time access. Establish governance frameworks that prioritize a human-in-the-lead operating model, ensuring your teams keep ultimate control over critical decisions. Finally, invest in a flexible platform that enables visibility and maintains strategic autonomy over your most sensitive information.
Connecting your distributed data to the experts who rely on it every day is key to unlocking the real value of AI. Learn how to build this foundation for mission success by tuning into our webinar.
1. IDC, "US Government and Education Buyer Intelligence Survey" (survey results, February 2026), N=152.
The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.