Summary
The Biden Administration has laid out a detailed plan for governing artificial intelligence (AI), focused both on how the federal government uses the technology and on its broader effects on society. The plan, anchored by Executive Order 14110 and the Blueprint for an AI Bill of Rights, aims to capture AI’s benefits while putting rules in place to reduce its risks. Its main goals are ensuring AI systems are safe and effective, protecting people from algorithmic discrimination, safeguarding data privacy, informing the public about how AI is used, and preserving human alternatives when needed. The administration also intends to work with other countries to promote responsible AI development worldwide.
FAQs
What is the main focus of the recent Executive Order and Memorandum on Artificial Intelligence (AI)?
The Executive Order and Memorandum aim to ensure the safe, secure, and trustworthy development and use of AI, particularly in the context of national security. They address a wide range of concerns: preventing the misuse of AI for malicious purposes, safeguarding American AI innovation from foreign threats, and promoting responsible AI development within the U.S. government.
What are “dual-use foundation models” and why are they of particular concern?
Dual-use foundation models are AI models trained on broad data, highly adaptable, and capable of high performance across a wide range of tasks. They are labeled “dual-use” because, alongside significant potential benefits, they pose serious risks to national security, economic security, and public health and safety. These risks include substantially lowering the barrier for non-experts to develop dangerous weapons, enabling automated discovery and exploitation of software vulnerabilities, permitting the evasion of human control or oversight through deception, and facilitating the self-replication or propagation of harmful AI systems.
How does the government plan to mitigate the risks associated with dual-use foundation models?
The government is implementing several strategies to manage risks. These include:
- Mandatory reporting requirements: Companies developing or possessing these models, or operating the large-scale computing clusters used to train them, must report that information to the Department of Commerce (the sketch after this list illustrates the reporting thresholds).
- AI red-teaming tests: Developers of the most powerful models must conduct rigorous red-team safety testing, and share the results with the federal government, to identify vulnerabilities and demonstrate that their systems are safe, secure, and trustworthy.
- Procurement screening for synthetic nucleic acids: To guard against misuse in bioweapons development, the government is establishing a framework for screening synthetic nucleic acid sequences, along with standards and incentives for synthesis providers.
- International collaboration: The U.S. will work with allies and international organizations to develop global standards and best practices for responsible AI development and use.
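For a sense of scale, the Executive Order’s interim technical thresholds (Section 4.2) cover models trained with more than 10^26 integer or floating-point operations (10^23 for models trained primarily on biological sequence data) and computing clusters with a theoretical maximum above 10^20 operations per second. The sketch below illustrates those cutoffs, assuming the interim values remain in force; the Department of Commerce may revise them, the cluster criterion also involves co-location and networking conditions omitted here, and the function names are hypothetical rather than part of any official tooling.

```python
# Illustrative check against EO 14110's interim reporting thresholds
# (Section 4.2). Values may be revised by the Department of Commerce;
# all names here are hypothetical, not official tooling.

TRAINING_OPS_THRESHOLD = 1e26         # total training compute, general models
BIO_TRAINING_OPS_THRESHOLD = 1e23     # models trained primarily on biological sequence data
CLUSTER_OPS_PER_SEC_THRESHOLD = 1e20  # theoretical peak of a training cluster

def model_report_required(training_ops: float, primarily_bio_data: bool) -> bool:
    """True if a model's total training compute exceeds the interim threshold."""
    limit = BIO_TRAINING_OPS_THRESHOLD if primarily_bio_data else TRAINING_OPS_THRESHOLD
    return training_ops > limit

def cluster_report_required(peak_ops_per_second: float) -> bool:
    """True if a cluster's theoretical peak exceeds the interim threshold.

    The order's full criterion also requires co-located machines linked by
    high-speed data center networking, which this toy check ignores.
    """
    return peak_ops_per_second > CLUSTER_OPS_PER_SEC_THRESHOLD

print(model_report_required(5e26, primarily_bio_data=False))  # True
print(model_report_required(9e22, primarily_bio_data=True))   # False
```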
What steps are being taken to protect American AI technology from foreign adversaries?
Recognizing that foreign states may attempt to steal or exploit American AI advancements, the government is prioritizing:
- Attracting and retaining AI talent: Recruiting and retaining individuals with expertise in AI and related fields, such as semiconductor technology.
- Investing in strategic AI technologies: Encouraging public and private investment in AI research and development, both at home and abroad.
- Enhancing intelligence priorities: The intelligence community will prioritize identifying and assessing foreign threats targeting the U.S. AI ecosystem, particularly in sectors like semiconductor production.
How will the government ensure the responsible use of AI in its own operations, particularly for national security purposes?
Several measures are being implemented:
- AI risk management framework: A framework will be established to guide the development, accreditation, acquisition, use, and evaluation of AI for national security purposes.
- Rigorous testing and evaluation: Both classified and unclassified testing will be conducted to assess AI systems’ capabilities, limitations, and potential risks, particularly in critical areas like cybersecurity, nuclear security, and biological and chemical threats.
- Monitoring and mitigation of risks: Agencies will be required to monitor, assess, and mitigate the risks tied to their use of AI, ensuring it is used in ways that respect human rights, civil liberties, and democratic values (a toy risk-tiering sketch follows this list).
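To make these obligations more concrete, OMB Memorandum M-24-10 (listed in the sources) requires minimum risk-management practices whenever a federal use of AI is “safety-impacting” or “rights-impacting,” and the national security framework applies similar scrutiny to high-impact uses. The toy sketch below shows one way an agency inventory tool might flag such use cases; the category names follow M-24-10, but the data model and triggering rules are hypothetical simplifications, not the memorandum’s actual criteria.

```python
# Toy sketch of flagging AI use cases for minimum risk-management practices,
# loosely modeled on OMB M-24-10's "safety-impacting" / "rights-impacting"
# categories. The fields and rules are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    controls_physical_system: bool   # e.g., vehicles, utilities, medical devices
    affects_individual_rights: bool  # e.g., benefits decisions, law enforcement
    human_review_available: bool     # can a person review or override outcomes?

def minimum_practices_triggered(uc: AIUseCase) -> list[str]:
    """Return the M-24-10-style categories this use case falls into."""
    categories = []
    if uc.controls_physical_system:
        categories.append("safety-impacting")
    if uc.affects_individual_rights:
        categories.append("rights-impacting")
    return categories

uc = AIUseCase(
    name="benefits eligibility screening",
    controls_physical_system=False,
    affects_individual_rights=True,
    human_review_available=True,
)
print(minimum_practices_triggered(uc))  # ['rights-impacting']
```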
What role does the National Institute of Standards and Technology (NIST) play in promoting trustworthy AI?
NIST plays a critical role in developing technical standards, benchmarks, and best practices for AI systems. It is tasked with:
- Developing benchmarks for AI system evaluation: NIST will develop benchmarks to assess the capabilities and limitations of AI systems in areas including science, mathematics, code generation, and reasoning (a minimal scoring sketch follows this list).
- Promoting voluntary AI system testing: NIST will encourage and facilitate voluntary testing of frontier AI models before their public release to assess potential national security threats.
- Providing guidance on AI risk management: NIST will collaborate with other agencies to develop best practices for mitigating AI risks, particularly in critical infrastructure.
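As a rough illustration of what such benchmarking involves, the sketch below scores a model’s answers against reference answers using exact-match accuracy, a common baseline metric for math and code-generation evaluations. It is not a NIST tool: the benchmark items and the `query_model` stub are hypothetical stand-ins for a real evaluation suite and the system under test.

```python
# Minimal exact-match benchmark harness, in the spirit of capability
# evaluations for math or code-generation tasks. Not a NIST tool;
# `query_model` is a hypothetical stand-in for the system under test.

BENCHMARK = [
    {"prompt": "What is 17 * 24?", "answer": "408"},
    {"prompt": "What is the derivative of x**3?", "answer": "3*x**2"},
]

def query_model(prompt: str) -> str:
    """Stand-in model call; replace with the real system being evaluated."""
    canned = {"What is 17 * 24?": "408"}
    return canned.get(prompt, "unknown")

def exact_match_accuracy(benchmark: list[dict]) -> float:
    """Fraction of items where the model's answer matches the reference exactly."""
    correct = sum(
        query_model(item["prompt"]).strip() == item["answer"]
        for item in benchmark
    )
    return correct / len(benchmark)

print(f"accuracy: {exact_match_accuracy(BENCHMARK):.2f}")  # 0.50 with the stub above
```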
How will the government promote innovation and competition in the AI industry?
Several initiatives aim to foster innovation and competition:
- Supporting the semiconductor industry: Recognizing that semiconductors are crucial for AI, the government will promote competition and innovation in this industry through initiatives like the CHIPS and Science Act.
- Encouraging public-private partnerships: Partnerships between government agencies, industry, academia, and civil society will be promoted to leverage expertise and resources.
- Clarifying intellectual property issues: The United States Patent and Trademark Office (USPTO) will provide guidance on AI and inventorship, particularly in the context of generative AI, to foster innovation and protect intellectual property rights.
How can the public stay informed about the government’s efforts related to AI?
The government is committed to transparency and will regularly:
- Publish reports on AI-related activities: Agencies will provide detailed reports on their actions in response to the Executive Order and Memorandum.
- Solicit public input: The government will seek input from the private sector, academia, civil society, and other stakeholders on various AI-related issues.
- Engage in public forums: The government will participate in public forums and discussions to raise awareness about AI and its implications.
Sources
- Blueprint for an AI Bill of Rights
- Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- FACT SHEET: Biden-Harris Administration Outlines Coordinated Approach to Harness Power of AI for U.S. National Security
- FACT SHEET: Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment
- FACT SHEET: Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence
- Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence
- Framework to Advance AI Governance and Risk Management in National Security
- OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence
- OMB Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government