China is turning to supercomputing technology to advance its artificial intelligence (AI) capabilities due to ongoing U.S. sanctions on advanced semiconductors and chip-making equipment, SCMP reported.
As U.S. restrictions continue to impede China's access to cutting-edge GPUs, experts believe that leveraging supercomputers could help overcome these obstacles and support the development of large language models (LLMs), which are crucial for AI advancements.
Zhang Yunquan, a researcher at the Institute of Computing Technology under the Chinese Academy of Sciences (CAS), emphasized that the supercomputing systems China has developed over the past decade are essential for training LLMs.
These systems offer an alternative to the traditional data center setups that require extensive arrays of GPUs, often numbering between 10,000 and 100,000, to handle the intensive computing needs of AI models.
US Sanctions on China's Tech
While various attempts have been made to advance LLM development, Chen Runsheng, another CAS researcher, stresses that further progress in the basic theories of computing is needed to make LLMs more efficient and to reduce the energy they consume during training and operation.
In addition to supercomputing advancements, Chinese firms are focusing on optimizing their infrastructure. Tencent Holdings, for example, has developed its Xingmai HPC network, capable of supporting computing clusters with over 100,000 GPUs.
The push for advanced computing platforms comes as China's AI progress is constrained by limited GPU options due to U.S. sanctions, which have barred major GPU supplier Nvidia from delivering its latest technology to China.
In response, VCPost reported that Nvidia is working on a modified version of its flagship AI chips that complies with current U.S. export controls.