
The US AI-related Rules and their China Implications By Subramanyam Sridharan

Image Courtesy: Emmanuel-Pernot


Article 12/2025



Synopsis

Three areas of technology define the geopolitical race in the current new Cold War: Space, Quantum Computing, and Artificial Intelligence. Xi Jinping’s determined thrust to displace the US from its top perch has provoked a serious trade and technology-denial war in retaliation, ending three decades of benign US policy, from the 1970s through the early 2000s, of transferring technology and aiding China’s industrial growth, followed by nearly a decade of slowly fraying relations. Trump declared the war openly in his first term and, contrary to popular perception, the Biden administration further tightened the technology denials, even though it did less on the tariff front. The incumbent Trump administration has already hit the ground running with various pronouncements and action plans vis-à-vis the ‘technology war’ with China.


The recent challenge posed by China’s DeepSeek chatbot AI model to those of established American companies such as OpenAI is a case in point. The current competition is in building ‘AI Agents’, which go well beyond mere natural-language processing by being capable of problem-solving and decision-making. In early March 2025, a little-known Chinese company, Butterfly Effect, released its agent, Manus, claiming it to be superior to OpenAI’s Deep Research agent. While these Chinese claims are not uniformly true, there is nevertheless a huge challenge that the US AI industry faces from China. What has really caught one’s attention, therefore, is the series of actions taken by the just-concluded Biden administration against China, especially in its last few months in office, in the field of Artificial Intelligence (AI). We analyze the restrictions, the reasons, and the resultant implications for China.


Introduction

Apart from retaining its global leadership role in AI, the US is engaged in five aspects of controlling the proliferation of AI to China through these laws. Firstly, it is ensuring that advanced semiconductor devices are not accessible to China (through the CHIPS and Science Act, export controls etc.); secondly, it is firewalling AI systems in American allies and partner countries against Chinese exploitation (through the Enhancing National Frameworks for Overseas Restriction of Critical Exports [ENFORCE] Act); thirdly, it is enforcing controls worldwide so that technologies and products that would enable China to make the advanced semiconductor devices needed for AI do not reach it (the Foreign Direct Product Rule); fourthly, it is on-shoring the manufacturing of these devices, which are today made in the foundries of Taiwan and South Korea; and, fifthly, it is investing heavily in AI data centres and research to overcome the Chinese challenge.


The spate of new rules announced, especially since October 7, 2022, enforces these objectives. Since China is the largest market for semiconductors and semiconductor-making equipment, US attempts have been directed at hitting hard at that core. Semiconductors are manufactured through a worldwide division of labour: semiconductor-equipment OEMs (US companies, ASML of the Netherlands, Tokyo Electron of Japan); Electronic Design Automation (EDA) tool makers (largely US companies); chip designers (largely US and Taiwanese companies, plus the Huawei subsidiary HiSilicon); and foundries, which actually convert the designs into chips (largely in Taiwan, South Korea, the US and China). Containment of China therefore requires a global effort by the US. Chip-makers can run captive fabs, like Intel, Micron and Texas Instruments (all US) or Samsung and SK Hynix (both South Korea), or operate as pure-play foundries serving fabless designers, like GlobalFoundries (US), TSMC (Taiwan) and SMIC (China). This ecosystem will help in understanding the US actions, as the attempt here is to decode these rules and US moves.


AI, which takes the frontiers of computing to a new level where computers can do certain things hitherto possible only for human beings, is built on two separate but interlinked processes: training and inference. Once trained, AI models become useful to end-users through inference, which is the use case for an AI model. The more widely and deeply an AI model is trained, the more accurate its inferences will be. To achieve these two aspects, AI relies on three important elements: algorithms based on complex mathematical models, computing power, and memory. While mathematical models can be developed by anyone with talent, the other two require technologies in which the US is still the leader, which is why it is restricting their availability to China. Since training consumes the most significant computing resources, and since the US leads in innovative computing, the US endeavour has lately been to cut off Chinese access to US computing innovations. At the same time, the proliferation of AI capabilities, also known as ‘diffusion’, is seen as a Chinese challenge to US supremacy, necessitating curbs on such deployments too.


AI Models

A brief understanding of AI models, and the hardware necessary to realize them, is needed to understand these US AI rules.


AI today is still in a very formative stage. Today’s AI processing is based on ‘training’, where a huge set of multimodal data – text, speech, images, video – is input to the AI model, which then produces an output, called the ‘actual output’, to be compared against the desired ‘target output’. The gap between the two determines the accuracy of the underlying AI model. The iterative process of training reduces or eliminates this gap so that the AI model, the Artificial Neural Network (ANN), accurately mimics the functions of the brain (the Biological Neural Network, BNN). Training mimics our own brain functions, where repeated practice is essential for learning something new, like a new language or a trade. Such training establishes neural pathways among the brain’s billions of neurons, which form up to one quadrillion connections, so that the task becomes easier later; neural networks thus eventually become functional networks. This is exactly what an ANN attempts to do. At its most basic, an ANN is trained with labelled (or ‘categorized’) data. ‘Deep learning’, which uses many-layered networks, can also be trained on unstructured data (as in ChatGPT); it is itself a subset of the broader field of Machine Learning (ML), whose simpler methods are not as capable and accurate on such tasks. With the development of the ‘transformer’ model of machine learning, which understands ‘context’ and ‘meaning’, processing unlabelled (uncategorized) data has become possible on a larger and faster scale.


In a BNN, the neuron-to-neuron connection happens over a gap between neurons, called the synapse, across which a neurotransmitter (a chemical) sends signals. In an ANN, the neurons are represented as nodes, and there are multiple layers of these nodes between the input and output nodes. Each node has a threshold value, a transfer function and weights on its inputs. When the weighted sum of all input signals to a node exceeds the threshold value, the node applies its transfer function (also known as an activation function) and passes the result to the next node. Several types of transfer functions, such as a simple ‘unit step’, linear, or Gaussian function, can be employed to generate the output.
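A single ANN node as just described (inputs weighted and summed, compared against a threshold, then passed through a transfer function) can be written as a short Python sketch; the weights and threshold below are illustrative values, not drawn from any real model:

```python
import math

# Toy artificial neuron: a weighted sum of input signals is compared
# against a threshold, and a transfer (activation) function is applied.

def unit_step(x):
    # Simple 'Unit Step' transfer function: fire fully or not at all
    return 1.0 if x >= 0 else 0.0

def linear(x):
    # Linear transfer function: pass the signal through unchanged
    return x

def gaussian(x):
    # Gaussian transfer function: strongest response near zero
    return math.exp(-x * x)

def neuron(inputs, weights, threshold, transfer=unit_step):
    s = sum(i * w for i, w in zip(inputs, weights))  # weighted sum
    return transfer(s - threshold)                    # activate past threshold

out = neuron([1.0, 0.5], [0.8, 0.4], threshold=0.5)  # weighted sum = 1.0, fires
```

Stacking many such nodes in layers, and adjusting the weights during training, is all an ANN fundamentally does.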


Alongside the basic ANN model, which is well suited for natural language processing (NLP), there are other foundational AI models, like the Convolutional Neural Network (CNN) for computer vision. However, all these models need voluminous data and repetitive training. Training is a compute-intensive phase where AI accelerators, such as Graphics Processing Units (GPUs), matrix-manipulation processors known as Tensor Processing Units (TPUs), and Neural Processing Units (NPUs), are crucial. Once trained, the model can be put to regular use by end users, a process known as ‘inference’. Inference requires relatively less processing power, though reinforcement learning would periodically be needed.


AI Hardware and Software

Since our brains have parallel-processing and associative-memory capabilities, AI processors and their memory units have to mimic them too. Parallel processing involves handling multiple inputs at the same time, while associative memory involves searching memory in parallel to retrieve the correct information. Because of the big data involved, the processors must be capable of hyper-scalability, that is, the ability to provide more computing power as demand dynamically increases.


Among the various processors, GPUs, especially the Nvidia AI chips (A100 and H100), though originally designed for gaming, have become dominant for reasons of processing power, hyper-scalability, tensor manipulation (tensors are matrices with more than two dimensions), and parallel and distributed processing capabilities. Nvidia’s AI chips hold over 80% market share worldwide, a share the company does not want to lose to US government bans, which has often led to tensions between it and the US administration. AMD, Intel, IBM, Qualcomm et al. are other American manufacturers of AI chips, while Amazon and Google design their own custom AI chips.


‘Deep learning’ involves ‘pattern recognition’ in input data. ML has relied heavily on GPUs over the years, especially the industry-dominant Nvidia-built GPUs. The Nvidia H100, which uses 4nm technology, has 80 billion transistors, over 15,000 cores, 80 GB of memory and 50 MB of cache on every chip, with a memory bandwidth of up to 3.4 Terabytes/sec. The Nvidia GPUs have not only improved their architecture incrementally but also reduced energy consumption, an important factor for AI data centres, whose power requirements can easily run to hundreds of megawatts (MW). Elon Musk’s xAI constructed its Colossus data centre with 200,000 Nvidia H100 GPUs to train the company’s Grok models, consuming some 250 MW of power.
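A back-of-the-envelope check shows why such facilities draw hundreds of megawatts. Assuming roughly 700 W per H100 (a commonly cited board power) and an illustrative facility overhead factor, the GPU load of a 200,000-GPU cluster alone lands in the same range as the figure cited above:

```python
# Back-of-the-envelope estimate of AI data-centre power draw.
# 700 W per H100 is a commonly cited board power; the PUE
# (power usage effectiveness) overhead factor is an assumption
# covering cooling, networking and host CPUs.
gpus = 200_000
gpu_watts = 700
pue = 1.4  # illustrative overhead factor, not a measured figure

it_load_mw = gpus * gpu_watts / 1e6   # GPU load alone, in megawatts
facility_mw = it_load_mw * pue        # estimated total facility draw

print(f"GPU load: {it_load_mw:.0f} MW, facility: {facility_mw:.0f} MW")
```

The GPU load alone comes to 140 MW, and with overhead the estimate approaches the reported consumption, which is why the next generation of AI data centres is being planned in gigawatts.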


One of the last Executive Orders by Biden before he relinquished the Presidency was to establish three AI data centres each by the US Department of Defense (DoD) and the Department of Energy (DoE), based on green energy and operational before c. 2027. The energy requirements would be in gigawatts (GW). This is meant to ensure overwhelming superiority over China in AI for years to come, especially as AI moves towards the pinnacle of Artificial General Intelligence (AGI), which would allow a machine to possess human-like cognitive abilities so as to face unfamiliar or ‘untrained’ inputs and act. China’s DeepSeek, Alibaba, and ByteDance are also pursuing AGI. Just recently, OpenAI, Oracle and Japan’s SoftBank announced an investment of USD 500B in the ‘Stargate’ AI infrastructure in the US to ‘solidify American AI leadership and national security’.


How Has China Outwitted US Laws?

The Chinese, in order to defeat the US restrictions, are doing two things. Since computing infrastructure has traditionally been China’s weakest link, Chinese AI companies use global capability centres (GCCs) in many countries, or cloud services such as AWS or Microsoft Azure, to access AI servers abroad for ‘training’ their models. The new US laws are also an attempt to curb such Chinese access to US technologies. Hyper-scalers like Microsoft Azure and Amazon Web Services (AWS) have offered access to the AI models of OpenAI et al. and rented out Nvidia H100 servers through their clouds, which the Chinese have been able to access. The Chinese company 01.AI exploited Meta’s Large Language Model Meta AI, LLaMA (Large Language Models, or LLMs, are foundational models for language processing), when it suddenly sprang up in November 2023. Some other US AI companies, like Meta and Elon Musk’s xAI, run their own proprietary data centres with cloud access. Therefore, the US Commerce Department’s Bureau of Industry and Security (BIS) introduced a ‘Validated End User’ (VEU) scheme in September 2024 to validate the usage of these data centres.


Because of the dominance of the Nvidia GPUs in AI, the new US laws use these GPUs as the basic units of computing power on which restrictions are applied to three groups of countries: allies, friends, and others.


The Biden administration announced four sets of rules since October 7, 2022, that directly impact China’s progress in AI. While the Chinese quest for knowledge is age-old (recall Fa-Hien and Xuanzang carting away literature from India), the modern world does not tolerate such practices as IP theft, reverse engineering and illegal trade. These rules restricted certain AI chips made with American technology from being exported to China. They also restricted the sale of US semiconductor manufacturing equipment to any Chinese firm. China called these measures ‘technological hegemony’. On October 7, 2024, the Biden administration placed further restrictions requiring prior US government approval for any American investment in AI, semiconductors, microelectronics, and quantum computing in China and the Hong Kong SAR.


The US determination to stop Chinese AI progress by denying China computing power started in earnest in c. 2018, when the Trump administration persuaded the Netherlands not to export extreme ultraviolet (EUV) lithography equipment manufactured by its company ASML (Advanced Semiconductor Materials Lithography). These machines are essential for making A100/H100-class GPUs, which need extraordinarily narrow circuits on silicon wafers. Under Pres. Trump’s trade demands, China had pledged to buy an additional USD 200B worth of US goods over fiscals 2020 and 2021. Though China only nominally improved its imports from the US, the one area where it bought the most was ‘semiconductor and chip-making equipment’. This prompted the Trump administration to later place restrictions on such sales, including sales of equipment even to American allies, for fear that Huawei could benefit from them, especially in its worldwide 5G infrastructure roll-out. Since other equipment manufacturers, product designers and foundries could help China, the US imposed sales restrictions on chip-making equipment under the Foreign Direct Product (FDP) rule by August 2020. A month later, the Chinese chip manufacturer Semiconductor Manufacturing International Corporation (SMIC) was added to the ‘Entity List’.


On October 7, 2022, BIS announced new rules to restrict China’s ability to ‘obtain advanced computing chips, develop and maintain supercomputers, and manufacture advanced semiconductors.’ The BIS stopped the export of Nvidia’s A100 and H100 chips to China, including any foreign-made products containing these chips. Countries like Malaysia, Japan and Indonesia made products using the Nvidia chips that ended up in China through Hong Kong, where the Nvidia chips were removed and used in Chinese AI servers. Organized smuggling through countries such as the UAE, Malaysia, and Singapore has also been reported.


Meanwhile, Nvidia has gone a level higher with its faster H200 and Blackwell chips, the latter built particularly for LLM needs. The Chinese AI company DeepSeek released its DeepSeek R1 model on January 20, 2025, coinciding with the day Trump took office. R1 is better than, or at least comparable in performance to, OpenAI’s reasoning model o1, a general-purpose LLM that followed GPT-4o, unlike ChatGPT, which is tailored as a conversational tool on top of the GPT models. DeepSeek is supposed to have exploited the loophole and bought 20,000 of the restricted chips, though it claims it used only 2,048 Nvidia H800 chips. A corollary of DeepSeek’s success was the immediate and historic crash in the price of Nvidia shares, because the smart algorithms and techniques used by DeepSeek are supposed to have reduced the number of Nvidia chips needed, irrespective of whether they were H200, H100 or H800. These smart algorithms are claimed to have enabled DeepSeek to offer its model at a fraction of the cost of o1, though there is also another opinion that the Chinese are simply undercutting the American companies. While GPT-3 is said to have cost only USD 3M to develop in c. 2020, ChatGPT is said to have cost USD 100M three years later. Similarly, while Google’s PaLM cost about USD 12M to develop in c. 2022, its successor Gemini cost USD 200M.


Two days after the R1 release, ByteDance updated its existing model and claimed that it, too, outperformed o1. About ten days later, on the Chinese New Year day of January 29, 2025, Alibaba released its Qwen 2.5-Max, which it claimed surpassed the highly acclaimed DeepSeek-V3. Such bunched releases are in line with the usual AI industry practice, as when Google’s Gemini in December 2023 followed OpenAI’s GPT-4 of March that year. While DeepSeek R1 adds reasoning (inference-time) capabilities, V3 was an update to the earlier V2 chatbot-like Natural Language Processing (NLP) model, using the new Mixture-of-Experts paradigm.


In October 2024, Taiwan’s TSMC informed the US of a potential breach of earlier US rules by Huawei, which had acquired banned AI chips in the tens of thousands through its front company Sophgo. In November 2024, BIS found that the Huawei Ascend 910B chips (manufactured by the Chinese chip-maker Semiconductor Manufacturing International Corporation, or SMIC), which China claimed achieved 80% of the efficiency of Nvidia’s A100 chips while ‘training’ AI models, actually contained dies that Taiwan’s TSMC had manufactured for Sophgo. Immediately afterwards, BIS directed TSMC to stop its most advanced AI chip sales to China. The SMIC-manufactured Ascend 910B chips were made using older Deep Ultraviolet (DUV) lithography machines, and BIS therefore banned those also in the next iteration of rules in early December.


The Final Act of Biden

The BIS announced a further package of rules on December 2, 2024, the third in the last half of the Biden administration, which it said were aimed ‘to further impair the People’s Republic of China’s (PRC) capability . . . in artificial intelligence (AI) and advanced computing’. The rules control ‘24 types of semiconductor manufacturing equipment and 3 types of software tools for developing or producing semiconductors; new controls on high-bandwidth memory (HBM)’. They also added 140 Chinese companies, including Huawei’s semiconductor facilities, to the export-control Entity List (EL) maintained by the US Department of Commerce. This was the third major US restriction since c. 2022.


The Nvidia A100 chips essentially have two components inside: the graphics processor, also known as the AI accelerator, and High Bandwidth Memory (HBM). HBM is a generation ahead of earlier Synchronous Dynamic Random Access Memory (SDRAM) such as GDDR (Graphics Double Data Rate) and suits the high memory requirements of AI processors: it offers low latency (faster data retrieval), parallel access, lower power needs and scalability, all important requirements for AI. Advanced HBM chips are made only by Micron (US) and the South Korean Samsung and SK Hynix (which makes them in the TSMC foundry in Taiwan), and the December 2, 2024 restrictions stopped their sale to China. However, US lawmakers criticized the December 2 notification for allowing the export of certain HBM items to China under the ‘License Exception Restricted Fabrication Facility’. They also criticized the exclusion from the EL of SMIC and two Huawei-linked firms, SwaySure Technology and Shenzhen Pengxinwei Technology. This shows the depth of US concerns about China in the AI arena.


The fourth and last act of the exiting Biden administration came on January 13, 2025, when it announced an interim final rule, the ‘Export Control Framework for the Diffusion of Artificial Intelligence’, loosely called the ‘AI Diffusion Rule’. The US said, “Over the past decade, AI models have shown striking performance improvements across many domains, giving everyday people increased access to tools that previously required specialized skills. As models continue to improve, this increased access will enable malicious actors to engage in activities that pose profound risks to U.S. national security and foreign policy, including enabling the development of chemical or biological weapons; supporting powerful offensive cyber operations; and further aiding human rights abuses, including mass surveillance.”


The ‘AI Diffusion Rule’ divides the world into three tiers. Tier 1 is the US plus 18 US security allies and partners (Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, Taiwan, and the UK), which face no restrictions. Tier 3 is China, Russia, Iran, North Korea, and Venezuela, with stringent export controls. Tier 2 is the rest of the world, which receives managed and mediated access to AI chips: Tier-2 countries can import only 50,000 advanced AI chips through 2027; governments that sign commitments with Washington to align their export-control, energy, and technology-security arrangements with the United States can double their cap, to 100,000 chips; and, in some cases, countries may be exempted for orders of up to 1,700 H100-equivalent chips. The rule also stipulates that Tier-1 companies can ‘train’ their AI models only in secure facilities located in the US or other Tier-1 countries.


Obviously, Nvidia dislikes the curbs on lucrative sales to China and has issued a statement heavily criticizing the ‘AI Diffusion Rule’. Earlier, the US Commerce Secretary had said that these restrictions affect only a ‘small fraction of chip exports to China’. It remains to be seen how Trump, who is supposed to be industry-friendly, will react. Though he has reversed several decisions of the Biden government within the first month of his second term, he has not done anything about this specific rule. On an earlier occasion, when Biden banned the sale of A100 and H100 chips to China, Nvidia tried to circumvent the ban by making slightly modified A800 and H800 chips, which were good enough for Chinese AI requirements. This led to the October 17, 2023 sanctions by the Biden administration, stopping the export of these derivative chips to China as well. China’s Tencent immediately announced that it had stockpiled enough chips to continue developing its AI models for several generations. The US ban also extended to Nvidia’s DGX and HGX server systems used in AI deep learning. Nvidia then designed the China-specific H20 chip, which complied with all the new US restrictions just as the H800 had complied with the earlier ones. In raw computational power, the H20 is far weaker than the H100, but it has a bigger on-chip memory (96 GB compared to the H100’s 80 GB) and a higher memory bandwidth (4.0 TB/s compared to the H100’s 3.4 TB/s). The Chinese DeepSeek runs on H20 chips, and the chip’s sales have skyrocketed in China. The Trump administration has determined, since early April 2025, that the sale of H20s to China requires a licence, but China is already estimated to have accumulated a million H20 chips.


Conclusion

The effect of these technology denials and restrictions will not be felt for a few years, for at least two reasons: one, China has amassed the now-banned AI chips in sufficient numbers to keep research going at current levels; and two, the need for more numerous and even more powerful AI accelerators will not arise until greater processing requirements emerge as the models mature in a few years’ time. Even if DeepSeek used some clever algorithms to overcome its hardware limitations, it has been alleged that it used the ‘distillation’ process, in which a ‘larger’ US-based LLM is used to train a more ‘specialized’ model at a fraction of the cost and time. US lawmakers have demanded that Pres. Trump implement two further restrictions on China: one, to bar the acquisition of any system based on Chinese AI models; and two, to constrict the flow of chips to China through friendly countries. Pres. Trump has also removed, on day one of his new term, the AI guardrails announced by Pres. Biden regarding safety requirements, such as US government approval before deploying AI in certain areas. China has begun to use AI, especially DeepSeek R1, as a tool in its international diplomacy. Very soon, it will weaponize AI as well. China is also frantically demanding relaxation of the US restrictions and insisting on global-level efforts at AI governance. The coming months will be interesting to watch for how Trump pursues AI policy vis-à-vis China.


(Subramanyam Sridharan is the Distinguished Member, C3S. The views expressed are those of the author and do not reflect the views of C3S.)
