U.S. Announces AI Export Controls and Diffusion Rule to Strengthen National Security and Economic Leadership
The U.S. government is taking a multi-faceted approach to safeguard its leadership in artificial intelligence (AI) technology while protecting national security. The Department of Commerce’s Bureau of Industry and Security (BIS) has introduced new regulations designed to control the export of advanced computing chips and AI model weights. These measures are intended to prevent adversaries from misusing AI technology for harmful purposes, such as developing weapons or launching cyberattacks, while ensuring that trusted allies and partners have access to cutting-edge innovations under secure conditions.
The initiative includes a series of export controls, including the new Artificial Intelligence Authorization and exceptions for low-volume exports and specific supply chains. By managing the export of dual-use technologies—those with both civilian and military applications—the U.S. aims to curb the potential misuse of AI systems while maintaining its competitive edge in the global AI race.
In tandem with these export controls, the Biden administration has introduced the Interim Final Rule on Artificial Intelligence Diffusion, a set of regulations aimed at streamlining licensing processes and reinforcing security standards. The rule is designed to facilitate the responsible diffusion of AI technology, particularly to trusted nations, while ensuring that U.S.-developed AI remains secure and out of the hands of adversaries.
The rule features six key provisions to enhance U.S. national security and economic strength:
No Restrictions for Key Allies: The rule allows chip sales to critical U.S. allies and partners without licensing requirements, ensuring that these countries, which have robust technology protection regimes, can access large-scale U.S. AI technology. These countries are: Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, the Republic of Korea, Spain, Sweden, Taiwan, the United Kingdom, and the United States.
Streamlined Licensing for Low-Risk Orders: Chip orders involving up to 1,700 advanced GPUs—typically placed by universities, medical institutions, and research organizations for non-threatening purposes—will no longer require licensing. This change accelerates the delivery of low-risk shipments globally.
Universal Verified End User (UVEU) Status: Trusted entities headquartered in close allies and partners can obtain UVEU status, allowing them to deploy up to 7% of their global AI computational capacity, including hundreds of thousands of chips, to various countries. This status strengthens U.S. leadership and ensures AI frontier training remains domestically controlled.
National Verified End User (NVEU) Status: Trusted entities located in non-adversarial countries can apply for NVEU status, enabling them to purchase up to 320,000 advanced GPUs over the next two years. This provision allows national entities to serve local and regional needs while minimizing diversion risks.
Caps for Non-VEU Entities: Non-verified entities in countries outside trusted allies can still purchase up to 50,000 advanced GPUs per country, ensuring U.S. technology remains available to foreign governments and businesses, but within controlled limits.
Government-to-Government Agreements: The U.S. aims to foster international cooperation through agreements with foreign governments that align with its export control, clean energy, and technology security policies. These arrangements could double the available chip caps for participating nations.
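Taken together, the six provisions amount to a tiered licensing scheme. As an illustration only (not legal or compliance guidance), the tiers described above could be sketched as a simple lookup; all function names are hypothetical and the thresholds are simplified from the figures in this article:

```python
# Illustrative sketch of the rule's tiered chip-export treatment, as summarized
# above. Names and logic are hypothetical simplifications, not legal guidance.

NO_LICENSE_ALLIES = {
    "Australia", "Belgium", "Canada", "Denmark", "Finland", "France",
    "Germany", "Ireland", "Italy", "Japan", "Netherlands", "New Zealand",
    "Norway", "Republic of Korea", "Spain", "Sweden", "Taiwan",
    "United Kingdom", "United States",
}

LOW_RISK_ORDER_GPUS = 1_700   # orders up to this size need no license
NVEU_CAP_GPUS = 320_000       # per trusted national entity, over two years
NON_VEU_CAP_GPUS = 50_000     # per country for non-verified entities

def licensing_tier(country: str, order_gpus: int, status: str = "none") -> str:
    """Return a rough licensing tier for a chip order (illustration only)."""
    if country in NO_LICENSE_ALLIES:
        return "no license required (key ally)"
    if order_gpus <= LOW_RISK_ORDER_GPUS:
        return "no license required (low-risk order)"
    if status == "UVEU":
        return "UVEU: up to 7% of global AI compute deployable abroad"
    if status == "NVEU":
        return f"NVEU: up to {NVEU_CAP_GPUS:,} GPUs over two years"
    return f"non-VEU: capped at {NON_VEU_CAP_GPUS:,} GPUs per country"

print(licensing_tier("Japan", 500_000))          # key-ally tier
print(licensing_tier("Brazil", 1_000))           # low-risk-order tier
print(licensing_tier("India", 200_000, "NVEU"))  # NVEU tier
```

Note that a government-to-government agreement would, per the rule, double the applicable country cap; that adjustment is omitted here for brevity.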
While the diffusion rule encourages the global spread of U.S. technology, it also imposes significant restrictions on countries of concern. These include measures to prevent adversaries from using advanced semiconductors for training AI models and restrict the transfer of model weights for closed-weight AI systems to non-trusted actors. The rule ensures these weights are securely stored and protected from illicit access, further safeguarding against national security risks.
The countries of concern are: Afghanistan, Belarus, Burma, Cambodia, Central African Republic, PRC (including Hong Kong and Macau), Democratic Republic of Congo, Cuba, Eritrea, Haiti, Iran, Iraq, North Korea, Lebanon, Libya, Nicaragua, Russia, Somalia, South Sudan, Syria, Venezuela, Zimbabwe.
These new regulations build on prior chip controls, such as those introduced in October 2022 and 2023, and reflect ongoing consultations with bipartisan Congressional members, industry stakeholders, and international allies. Together, these efforts aim to ensure that the U.S. remains at the forefront of AI innovation while minimizing risks associated with its misuse by adversarial actors.