AI race heats up: What U.S. export controls mean for Asia

In its final days in January 2025, the Biden administration expanded controls on AI technology exports with the release of two interim rules. Confusion and concern ensued among impacted businesses in the US and beyond.

Popularly known as the ‘AI diffusion rules’, they have an ambitiously wide application with the aim of controlling how and where advanced AI clusters can be deployed and how models can be trained on them (we will get to what that means later). Businesses are grappling with the implications.

What does it mean for a data centre operator setting up in Southeast Asia? A fintech looking to deploy a fraud detection model in Vietnam? Or a bank considering financing GPUs or a GPU-as-a-service business in one of Asia’s tightly restricted markets?

While nothing is certain, the expectation is that the Trump administration will take the controls further, increasingly shifting its focus to end users of AI technology and restricting access to US-originating cloud products by entities and individuals of concern.

Regardless of where your organisation is located, if you deal with advanced AI chips, model weights or cloud infrastructure originating from the US, you need to understand these rules.

In this insight, the first in our series on the future of AI competition in Asia, we explain the interim rules and how they might impact businesses. How did we get here? How do the rules work? Are there exceptions? 

“In an already turbulent economic and geopolitical environment, these interim rules are quietly reshaping the global AI landscape. Companies and investors across the AI ecosystem need to pay attention. The rules could seriously impact your business models, investments, supply chains and licensing strategies. There are also large pockets of opportunity.” – Daryl Cox | KWM Partner, Singapore

 

The story so far

US export controls on advanced AI chips are not new. Until now, they have focused on preventing specific adversaries of the US from gaining access to the computational power necessary for training sophisticated AI models.

The latest changes go further, both in scope and substance.

First, the geographical net has been cast wider and the circle of trust has narrowed: the controls now apply to countries and intermediaries that could serve as conduits for circumvention – ‘Tier 2’ countries. This is intended to address longstanding concerns that, despite prior restrictions, advanced AI chips have continued to reach adversaries through ‘grey routing’.

Second, the controls now target AI model weights, the essential output of AI training that determines the quality of an AI model and how it functions. The US government has formed the view that controlling access to chips alone is no longer sufficient - even if restricted countries cannot obtain high-performance chips directly, they could still benefit from pre-trained models developed using US-originated infrastructure.

The result is a far more complex and layered set of controls, with expanded compliance obligations for companies operating in multiple jurisdictions.

The US Bureau of Industry and Security (BIS) – the agency behind the export controls, tasked with advancing ‘national security, foreign policy, and economic objectives’ via strategic technology – has repeatedly cited US national security concerns as the primary driver of restrictions on the export of advanced AI technologies. BIS states that these technologies present significant risks to the US if acquired by its adversaries, particularly if used in military applications. By expanding the scope of AI export regulations, BIS aims to prevent the misuse of these technologies and safeguard US strategic interests.

Beyond national security, there is geopolitical strategy: limiting the ability of competitors to achieve AI breakthroughs while maintaining technological advantage. Technology leadership has become a pivotal source of geopolitical influence. By limiting access to advanced AI technologies, the US considers that it can preserve its advantage and influence.

Before we look at the history of AI diffusion controls, here are some key terms explained...


AI chips are the lifeblood of data centres and machine learning.  They are specialised hardware designed to process immense amounts of data and complex calculations used in applications, such as large language models, advanced robotics and image recognition systems.  The availability and performance of AI chips influence how quickly and effectively developers can train new AI models.  Crucially, AI chips underpin critical applications in defence (for example, drones, cybersecurity) and national infrastructure (for example, energy grids). 

For ease, we’ll refer to ‘AI chips’ – but only certain advanced AI chips are covered by the rules. When we talk about ‘AI chips’ in the context of the rules, we’re talking about those that are covered.

Model weights are parameters within a machine learning model that determine how inputs are transformed into outputs. These model weights are constantly adjusted during the training process to improve performance and accuracy. Controlling access to model weights is critical, as unrestricted access can allow users to alter or bypass safeguards that prevent the model from performing dangerous tasks, such as providing information on developing weapons.

ECCN stands for Export Control Classification Number.  An ECCN is an alphanumeric designation used in the Commerce Control List to identify items for export control purposes. It helps determine what kind of restrictions apply when you want to send goods, software or technology from the US to another country. 

Each ECCN tells you: (1) what the item is, (2) why it is controlled, and (3) whether you need a licence to export it to certain countries.
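
As a simple illustration of that structure, the sketch below breaks an ECCN into its category digit, product-group letter and entry number. The category and product-group labels reflect standard Commerce Control List conventions; the helper itself is illustrative only and not an official classification tool.

```python
# Illustrative breakdown of an ECCN (e.g. 3A090) into its parts: the first
# digit is the Commerce Control List category, the letter is the product
# group, and the remaining characters identify the specific entry.
# This helper is illustrative only, not an official classification tool.

CATEGORIES = {"3": "Electronics", "4": "Computers"}
PRODUCT_GROUPS = {"A": "Systems, equipment and components", "E": "Technology"}

def describe_eccn(eccn: str) -> str:
    category = CATEGORIES.get(eccn[0], "Other CCL category")
    group = PRODUCT_GROUPS.get(eccn[1], "Other product group")
    return f"{eccn}: category {eccn[0]} ({category}), group {eccn[1]} ({group}), entry {eccn[2:]}"

print(describe_eccn("3A090"))  # advanced AI chips covered by the rules
print(describe_eccn("4E091"))  # controlled AI model weights
```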

IaaS stands for Infrastructure as a Service. It's a form of cloud computing that provides virtualised computing resources over the internet. With IaaS, businesses can rent servers, storage and networking without the need for physical hardware. It offers scalability, flexibility and cost-effectiveness. Users manage the software and applications while the provider handles the infrastructure. This model supports companies in rapidly deploying and scaling their operations without hefty upfront costs.

GPU stands for graphics processing unit. Originally built for gaming and 3D graphics, GPUs are designed to handle many tasks at once, making them ideal for training AI models. GPUs are now widely used for tasks like AI training and data analysis because they can process large volumes of data faster than regular computer processors. Well known providers of GPUs include NVIDIA and AMD.

GPUaaS stands for GPU-as-a-service.  It is a form of IaaS where companies rent access to powerful GPUs over the internet, instead of buying and maintaining the hardware themselves. A common use case is training AI models, which requires intense computing power. Rather than investing in costly GPUs, companies rent GPU capacity to accelerate the training process.

TPP stands for total processing performance.  It measures the overall computational capability of a chip – how much work it can perform over a given period.  It reflects both the speed of individual processing units and the system’s ability to handle tasks in parallel.  A higher TPP means more work can be done, faster and at greater scale.
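
For readers who want the arithmetic: as we understand the EAR definition, TPP is calculated as 2 × the chip’s peak tera operations per second × the bit length of those operations. The sketch below shows that calculation with hypothetical chip figures; it is a simplified illustration, not an authoritative classification tool.

```python
# Simplified illustration of the TPP calculation as we understand the EAR
# definition (2 x peak TOPS x operation bit length). The chip figures are
# hypothetical placeholders, not published specifications.

def total_processing_performance(peak_tops: float, bit_length: int) -> float:
    """Approximate TPP: 2 x peak tera operations per second x bit length."""
    return 2 * peak_tops * bit_length

# Hypothetical accelerator: 300 peak TOPS at 16-bit precision
chip_tpp = total_processing_performance(peak_tops=300, bit_length=16)
print(f"Estimated TPP: {chip_tpp:,.0f}")  # 9,600
```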

VEU stands for Validated End User. Under the AI diffusion rules, a VEU is a pre-approved entity authorised to receive a larger volume of controlled AI exports without individual licences, though still subject to certain restrictions. For further details, see ‘VEU exception to allow activity in non-restricted locations’ below.

Tightening the net on AI tech - the history of AI export controls

And then…

Since the release of the interim rules, the AI landscape has continued to evolve rapidly.

DeepSeek, a Chinese AI company, released its 'R1' reasoning model on 20 January 2025 (just days after the interim rules were made). The model highlights the accelerating progress in AI inference capabilities and demonstrates that advanced reasoning can be achieved with greater efficiency, potentially reducing the reliance on high-end chips for certain inference workloads. While this development may ease immediate demand for advanced AI chips in some inference applications, its impact on the infrastructure required for training cutting-edge AI models may be limited. Training remains highly resource intensive, and controlling access to this layer appears to be a core strategic objective of US policy.

Geopolitical wrangling also continues. Just recently, the Trump administration imposed new licence requirements on NVIDIA’s H20 and AMD’s MI308 chips for export to restricted countries. These lower-performing AI chips were designed specifically for the Chinese market to fall outside earlier export controls, but are still considered by the US to be of concern. Rather than amending regulations, BIS issued direct notices to the companies – a mechanism it appears to be using more frequently to act quickly. We expect to see more ‘cat and mouse’ games like this being played.

The AI Diffusion Rules 101

The interim rules amend the Export Administration Regulations (EAR) enforced by BIS. At a high level, the interim rules significantly expand existing controls by introducing a global licensing regime for advanced AI chips and AI model weights, with some exceptions for close US allies. The restricted chips and model weights are identified by ECCNs, which are codes used to classify items based on their technical characteristics and the level of control required under US export regulations.

The restrictions – in one diagram

The rules are ‘interim’ - will they change?

The Framework Rule includes a 120-day comment period, and BIS may issue revisions based on feedback. So far, BIS has not issued any revisions.

Consultation for the Due Diligence Rule ended on 14 March 2025, and to date, BIS has not issued any comments on the feedback received.

Bloomberg reported on 25 March 2025 that the Trump administration may consider making amendments to ‘strengthen and streamline’ the interim rules – we are watching developments.

When does compliance start? The clock is ticking…

The Framework Rule immediately took effect on 13 January 2025, but has staggered compliance dates over the coming year:

  • Most provisions, including the controls on AI chips and model weights, require compliance from 15 May 2025, when the consultation period ends. There is a grace period for chips already en route before 15 May 2025, provided they are delivered by 16 June 2025. This carve-out has prompted a surge in last-minute shipments as companies look to move products ahead of the new restrictions taking full effect.
  • Some specific technical requirements have a delayed compliance date of 15 January 2026.

The Due Diligence Rule took effect earlier, with compliance required from 31 January 2025. It applies only to the specific EAR provisions it updates and does not alter the timeline for any other rules.

The details: restrictions

Global licence requirement for AI chips

Under the Framework Rule, certain AI chips are now subject to a global licence requirement.  This means a licence is required for any export, re-export or in-country transfer of AI chips to any country, subject to certain licence exceptions. 

This represents a marked expansion of the controls on these AI chips, which were previously restricted primarily for exports to China and certain other embargoed jurisdictions.

Specifically, the AI chips covered are ECCNs 3A090.a, 4A090.a and related .z items, such as NVIDIA's A100, A800 and H100 chips.

Global licence requirement for model weights

For the first time, certain AI model weights require licences for export, re-export or in-country transfer (even within the US), unless the model weights are publicly available.

Specifically, this applies to AI models that have undergone exceptionally advanced and extensive training using vast amounts of cutting-edge computing resources – in technical terms, model weights trained using 10^26 computational operations or more (ECCN 4E091).
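
To get a feel for the scale of the 10^26 threshold, a widely used (non-regulatory) rule of thumb estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that approximation to illustrative figures; neither the heuristic nor the numbers come from the rules themselves.

```python
# Rough illustration of the 10^26-operation threshold using the common
# "compute ~ 6 x parameters x tokens" heuristic. The heuristic and the
# model figures below are illustrative assumptions, not part of the rules.

THRESHOLD_OPS = 1e26  # ECCN 4E091 threshold described above

def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in operations."""
    return 6 * parameters * tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 15 trillion tokens
ops = estimated_training_ops(parameters=1e12, tokens=15e12)
print(f"Estimated training operations: {ops:.1e}")                  # 9.0e+25
print(f"At or above the 10^26 threshold: {ops >= THRESHOLD_OPS}")   # False
```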

Notably, the rules cover AI model weights trained outside the US if the model relies on US-controlled inputs.

End users in Tier 1 countries (see below) are eligible for licence exemptions, provided that specific security measures are instituted to reduce the risk that the AI model weights are shared with restricted entities.

Prohibitions on IaaS to train advanced AI models

Entities with VEU status, including their parent and subsidiary companies, are prohibited from providing IaaS products to train advanced AI models outside of Tier 1 countries without BIS authorisation. However, fine-tuning is allowed, provided it accounts for no more than 25% of the original model’s training. API and IaaS access for AI inference remains permitted.
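
As a rough illustration of the 25% fine-tuning allowance, the sketch below compares a planned fine-tuning run against the compute used to train the original model. All compute figures are assumptions for illustration; the rule itself is expressed as a share of the original model's training.

```python
# Illustrative check of the 25% fine-tuning allowance described above.
# Both compute figures are hypothetical assumptions.

ORIGINAL_TRAINING_OPS = 2e26   # assumed compute used to train the base model
FINE_TUNE_SHARE_LIMIT = 0.25   # fine-tuning must stay within 25% of that figure

planned_fine_tune_ops = 4e25   # assumed compute for the planned fine-tune

within_allowance = planned_fine_tune_ops <= FINE_TUNE_SHARE_LIMIT * ORIGINAL_TRAINING_OPS
print(f"Fine-tune within the 25% allowance: {within_allowance}")  # True (4e25 <= 5e25)
```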

Due diligence ‘red flag’

BIS has also included a new due diligence ‘red flag’, cautioning that a US IaaS provider offering IaaS products to train an advanced AI model for a US subsidiary of a Tier 2 or Tier 3 country-headquartered entity:

  • ‘creates a substantial risk’ that the trained model weights will be diverted to the subsidiary’s ultimate parent company, and
  • could result in the IaaS provider having ‘aided and abetted’ a violation of the export controls.

BIS recommends that these IaaS providers conduct extra due diligence on such customers to verify the use of the model weights and ensure proper licences are in place.

The details: location-based 'tiers'

The restrictions and exceptions that apply to an export or transfer of AI technology depend on the location of the recipient and deployment. These range from the highest level of restrictions (banned Tier 3 jurisdictions) to the lowest (allied Tier 1 jurisdictions). Exports to Tier 1 locations are considered to pose a low risk of diversion to, and misuse by, US adversaries in Tier 3 jurisdictions.

The details: exceptions

AIA Exception to allow activity in Tier 1 locations

Entities located and headquartered in the US or Tier 1 jurisdictions (and which do not have an ultimate parent company headquartered outside of these jurisdictions) are eligible for the AIA exception. This is the effective ‘get out of global restrictions’ card that allows activity, as shown in the table above, provided the exporter receives a certification from the ultimate consignee that it will not:

  • re-export the AI chips and model weights to a non-Tier 1 country or to any prohibited end user as designated by the EAR, and
  • use the AI chips or model weights to provide IaaS for training an AI model using those model weights to any entity that is headquartered outside a Tier 1 country, has an ultimate parent company headquartered outside a Tier 1 country, or is located outside a Tier 1 country.

This provides the licensing exemption.

Any exports exceeding 253,000,000 TPP require certification to be provided to BIS prior to export.
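
For a sense of scale, the short sketch below converts the 253,000,000 TPP notification threshold into an approximate number of chips. The per-chip TPP figure is an illustrative assumption, not a published specification, so the result is indicative only.

```python
# Illustrative only: convert the 253,000,000 TPP notification threshold into an
# approximate chip count. PER_CHIP_TPP is an assumed placeholder value, not a
# published specification for any particular accelerator.

NOTIFICATION_THRESHOLD_TPP = 253_000_000
PER_CHIP_TPP = 16_000  # hypothetical TPP for a single advanced AI chip

approx_chips = NOTIFICATION_THRESHOLD_TPP / PER_CHIP_TPP
print(f"Approximate chips before BIS certification is required: {approx_chips:,.0f}")
```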

VEU exception to allow activity in non-restricted locations

To streamline AI chip exports to trusted foreign entities, eligible data centres can receive AI chips under a general authorisation, eliminating the requirement for individual export licences.  To qualify as a VEU, data centres must undergo a rigorous assessment that involves consideration of several factors, including:

  • physical and cyber security measures
  • adherence to the geographical allocation quotas for advanced AI chips
  • commitments to prevent transfers of AI chips to restricted persons or jurisdictions, and
  • assurances of no ties to military end users.

There are two tiers of VEU status: Universal VEU (UVEU) and National VEU (NVEU).

Low processing performance exception

Allows limited exports of AI chips to a single ultimate consignee per year, up to 26,900,000 TPP, with strict restrictions on redistribution. This exception does not permit in-country transfers. Exporters must obtain a signed certification from the recipient confirming compliance with the TPP cap and Framework Rule. Exporters and consignees must report shipments to BIS, especially if they approach or hit the annual limit, as illustrated in the sketch below.
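
As a rough illustration of how a consignee's annual 26,900,000 TPP allowance might be monitored, the sketch below keeps a running total of shipped TPP and flags when the cap is reached. The shipment figures are hypothetical and the approach is a simplified illustration, not a prescribed compliance method.

```python
# Simplified illustration of tracking cumulative TPP shipped to a single
# ultimate consignee against the 26,900,000 TPP annual cap. All shipment
# figures are hypothetical.

ANNUAL_CAP_TPP = 26_900_000
shipments_tpp = [9_000_000, 10_000_000, 8_500_000]  # assumed shipments this year

cumulative = 0
for i, tpp in enumerate(shipments_tpp, start=1):
    cumulative += tpp
    remaining = ANNUAL_CAP_TPP - cumulative
    print(f"Shipment {i}: cumulative {cumulative:,} TPP ({max(remaining, 0):,} TPP remaining)")
    if remaining < 0:
        print("Annual cap exceeded - this shipment falls outside the exception")
```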

Advanced compute manufacturing exception

Permits exports of AI chips to ‘private-sector end users’ located outside of (and not headquartered in) a Tier 3 country, if such end users are engaged in the development, production, or storage of AI chips.  Notably, this exception does not authorise training an AI model as a permitted end use.

What next?

We anticipate a growing focus on IaaS (particularly GPUaaS), building on existing restrictions that effectively bar IaaS providers in non-Tier 1 countries from training advanced AI models through the VEU framework. In practice, to train advanced AI models, companies need a sufficient supply of advanced AI chips – access that will increasingly depend on obtaining VEU status.  However, VEU status comes with strict conditions, including a prohibition on using those chips to train controlled models, effectively closing off this route for companies seeking to scale frontier AI development.

While it is difficult to be certain where the Trump administration will take the export controls, they are unlikely to be loosened. Future measures may further tighten controls on how cloud and AI resources are provisioned and accessed across borders, particularly in or from high-risk jurisdictions. As the regulatory perimeter expands, companies will need to ensure that their cloud and AI operations, partnerships, financiers and customer base remain aligned with emerging compliance expectations and risks.

Data centre operators and AI businesses in Asia can proactively adapt by securing access to alternative compute resources, implementing compliance measures, and ensuring contractual and regulatory readiness. Understand the risks and adapt. As the regulatory environment evolves, those who prepare now will be best positioned to navigate future AI trade restrictions while maintaining operational resilience and competitive advantage.

How to prepare for AI trade restrictions – three actions to take now

  • Secure access to alternative compute resources
  • Implement compliance measures
  • Ensure contractual & regulatory readiness

In our ‘Future of AI competition in Asia’ series, our team will explore how the rules apply to specific scenarios, and we will share developments when they happen. Stay tuned.

Subscribe to the KWM View from Asia for updates and upcoming webinars.

