Unpacking the new US executive order on artificial intelligence


On Monday, October 30, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which at 117 pages is the longest executive order in history. The executive order (EO) aims to advance and regulate artificial intelligence (AI) in the US. This landmark order pulls together a number of priorities that influence not just the AI industry, but also society at large.

The EO is an essential step toward operationalizing AI for the benefit of the US government and society as a whole. AI has the potential to change the way everyday citizens interact with the government and can vastly improve the citizen experience. However, as the EO recognizes, with such promise come inherent challenges and risks. This EO strikes a healthy balance, offering guidance on mitigating risks while embracing AI's potential for the public good.

The AI executive order: What to know

A comprehensive foundation

This directive doesn't introduce new legal authorities. Instead, it uses existing tools to shape the AI industry, emphasizing the significance of federal purchasing decisions and current regulatory instruments. With Congress yet to pass specific AI legislation, this EO brings much-needed national coherence to AI policy, guiding the nation toward a more unified approach to AI. Congress now has an open field to codify parts of the EO or develop new legislation.

Broad application and integration

AI’s transformative power is evident across sectors. The EO encourages AI’s responsible use in areas like healthcare, education, trade, and housing. Throughout, the EO's message is clear: AI's potential and risks reach beyond technological spheres, influencing a range of industries.

Regulation, checks, and coordination

The EO takes a step toward setting standards to ensure data privacy, bolster cybersecurity, and prevent discrimination. It also aims to foster a balanced, competitive AI market and to shape industry dynamics. The timeline for implementation is fairly aggressive: agencies have between 90 and 240 days to comply with the EO's various directives. For instance, the Treasury Department has 150 days to produce a public report on how the banking sector can handle cyber risks related to AI, and all federal agencies have three months to determine AI risks in the industries where they hold existing authority. From there, agencies will create guidelines based on the National Institute of Standards and Technology's (NIST) AI Risk Management Framework.

Building on existing AI initiatives

The EO builds on previous initiatives, notably the Office of Science and Technology Policy's (OSTP) Blueprint for an AI Bill of Rights and NIST's AI Risk Management Framework (RMF). It also highlights AI research, including the introduction of the National AI Research Resource.

Cybersecurity and privacy

The rapid evolution of AI-enabled cyber weapons demands robust defenses. The EO recognizes this need, emphasizing both the proactive use of AI to enhance cybersecurity and the challenges that AI-driven cyberattacks present. To this end, companies, especially those developing the most powerful AI models, are required to inform the Department of Commerce of measures taken against espionage and other digital threats.

On the privacy front, the EO directs federal agencies to implement stringent privacy protections for data used in AI systems. This includes evaluating how much personally identifiable information agencies purchase and setting guidelines to minimize privacy risks associated with data collection, use, and deletion. Federal agencies are strongly encouraged to use advanced privacy-enhancing technologies to safeguard collected data. Technology that enables role-based and attribute-based access control to data and documents will likely be critical for ensuring appropriate privacy levels.
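To make that last point concrete, here is a minimal, hypothetical Python sketch of combining role-based and attribute-based checks to filter which documents a user may see. It is not drawn from the EO or from any particular product; the Document, User, and can_read names are illustrative assumptions only.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    """A record tagged with the roles and attributes required to view it."""
    doc_id: str
    content: str
    required_roles: frozenset = frozenset()        # e.g. {"caseworker"}
    required_attributes: frozenset = frozenset()   # e.g. {"clearance:pii"}


@dataclass(frozen=True)
class User:
    """A caller whose roles and attributes are checked against each document."""
    user_id: str
    roles: frozenset = frozenset()
    attributes: frozenset = frozenset()


def can_read(user: User, doc: Document) -> bool:
    # Grant access only when the user holds every required role (RBAC)
    # and every required attribute (ABAC) on the document.
    return (doc.required_roles <= user.roles
            and doc.required_attributes <= user.attributes)


def visible_documents(user: User, docs: list) -> list:
    # Filter the corpus down to the documents this user is allowed to see.
    return [d for d in docs if can_read(user, d)]


if __name__ == "__main__":
    corpus = [
        Document("1", "Aggregate, de-identified usage statistics"),
        Document("2", "Case file containing personally identifiable information",
                 required_roles=frozenset({"caseworker"}),
                 required_attributes=frozenset({"clearance:pii"})),
    ]
    analyst = User("analyst-01", roles=frozenset({"analyst"}))
    caseworker = User("cw-07", roles=frozenset({"caseworker"}),
                      attributes=frozenset({"clearance:pii"}))
    print([d.doc_id for d in visible_documents(analyst, corpus)])     # ['1']
    print([d.doc_id for d in visible_documents(caseworker, corpus)])  # ['1', '2']
```

In practice, checks like these are typically enforced by the data platform itself rather than in application code, so that every query path applies the same policy consistently.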

Competition, immigration, and workforce concerns

A central part of the EO is its treatment of competition. It calls for continuous monitoring of AI competition issues and encourages the Federal Trade Commission (FTC) to scrutinize firms displaying anti-competitive behavior. Recognizing the global nature of AI talent, the EO also proposes guidelines to ease the US tech industry's access to skilled AI professionals from abroad, including streamlining visa processes and positioning the US as an attractive destination for AI experts.

The implications of AI for the workforce are not overlooked. With concerns over job disruptions due to AI, the EO requires that the priorities of workers and labor unions be considered in AI-related policies.

Sector-specific directives

Whether it’s healthcare, copyright, housing, or telecommunications, the EO provides sector-specific instructions. For instance, the United States Patent and Trademark Office (USPTO) is directed to issue guidance on AI in patents, and the Department of Health and Human Services (HHS) is directed to establish an AI task force to ensure the technology's responsible use, especially in areas like drug safety and public health.

The AI industry's take

For companies that develop AI, the EO offers both guidance and caution. As the AI landscape evolves and regulations become more stringent, businesses must stay vigilant, particularly about mandates tied to compute thresholds that trigger reporting requirements and rules governing international ties.

President Biden's EO is a monumental stride forward, sketching a visionary yet practical plan for AI in the US. However, the EO doesn't exist in a vacuum. Governments around the world are grappling with how to regulate AI. This EO dovetails nicely with the ongoing G7 Hiroshima AI Process, complements the agenda for the UK's 2023 AI Safety Summit, and aligns with ongoing dialogues at the United Nations.

As we chart this dynamic terrain, all stakeholders, from policymakers to industries, need to proceed with both enthusiasm and prudence, ensuring the vast potential of AI is wielded judiciously.


The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

In this blog post, we may have used or referred to third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use. 

Elastic, Elasticsearch, ESRE, Elasticsearch Relevance Engine and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.