The rapid rise and evolution in artificial intelligence (AI) constitutes one of the most transformative technological developments in human history.
Navigating the “New Normal”
Developments in AI present marvelous opportunities by accelerating the processing, organization, and application of data, enriching our lives through greater efficiency, precision, and scale. From self-driving cars to automated composition of text, from traffic and infrastructure management to innovation in biotechnology and the health sciences, there is much that AI has to offer humanity.
Yet the rapid surge in AI also presents daunting challenges for countries across the world, and can pose serious threats to nations and individuals alike. These range from infringements on privacy and security and algorithm-based discrimination against minority populations to flawed decision-making processes that omit core aspects of human societal interests. AI can access vast volumes of data and process them in a split second. It creates levels of complexity that are increasingly unexplainable even to the most knowledgeable scientific experts. We are left with the uneasy feeling that cutting-edge AI is generating risks that can be neither quantified nor identified.
Governments around the world have realized the need to respond promptly to such risks. In 2022, the White House issued a Blueprint for an AI Bill of Rights. Building on its principles, it recently issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. This landmark document encompasses regulations on public and private sector research on AI and on commercial and non-commercial uses of AI, as well as the initiation of targeted programs aimed at addressing “vulnerabilities in critical software” and “how agencies collect and use commercially available information”.
China has also moved to manage the risks of AI. In July 2023, after several rounds of consultation and revision, the Cyberspace Administration of China released the Interim Measures for Administration of Generative Artificial Intelligence Services, which took effect in mid-August. The measures provide guidelines for generative AI-based products offered by foreign and domestic firms: they demand responsible data training and annotation, hold AI designers and disseminators to account, and uphold the country’s national and data security interests.
Both China and the US have followed in the footsteps of the European Union, which announced the world’s first comprehensive AI law in June this year – a set of rules oriented around individual rights, including the rights to privacy and security, as well as to knowledge and understanding of AI.
AI in the era of great power rivalry
What remains lacking at present is alignment and coordination across different nation states and spheres of regulation. As Anu Bradford argues in Digital Empires (2023), the US, China, and the EU each have distinctive approaches to digital governance. These divergent regulatory models are presently engaged in intense competition, rather than cooperation, with one another. Up until recently, Washington had placed significant faith in the ability of the private tech sector to self-regulate and impose guardrails or brakes on AI development. Beijing has long combined a desire to keep up with, if not eventually overtake, the US on digital and advanced technologies, with a doctrinal commitment to prevent any technology from posing a challenge to the authority and stability of the regime.
Neither side appears structurally willing to communicate or collaborate on AI at present. Both China and the US view the race over AI as deeply intertwined with their broader great power rivalry over global tech supremacy. As my colleague Angela Zhang at the University of Hong Kong puts it so eloquently, continuous American efforts at constraining China’s technological developments have only “encourage[d] a whole-of-society effort [in China] to achieve technological self-sufficiency”. As the AI race heats up, American politicians and big tech firms have turned to inflating the ‘China threat’. They advocate against restrictions that might handicap America’s ability to take on a serious rival. They view the rivalry as much more serious than Cold War competition with the Soviet Union, or even Japan in the 1980s, when it comes to the rapidity and breadth of technological advances.
There is a dangerous lack of trust between the two sides of the Pacific. Cynicism is apparent across all dimensions, including conversations over AI and technology. Ryan Fedasiuk writes that “retired Chinese military leaders […] view the US Defense Department’s AI ethics principles and broader approach to ‘responsible AI’ as bad-faith efforts to skirt multilateral negotiations” concerning regulations of autonomous weapons. Many US observers seem convinced that the Chinese state’s AI policymaking and decision-making remains within a black box. They are alarmed by what they view as the omnipresent development and deployment of civilian-military dual use AI by the Chinese authorities and state-owned enterprises. Such concerns feed into a fundamental wariness of disclosure or engagement with Chinese counterparts.
The problem of Sino-American non-cooperation
AI not only amplifies the stakes involved when it comes to the geopolitical antagonism between China and the US – it could also contribute towards further deterioration in bilateral relations. The current climate gives rise to two critical sets of problems.
First, increasing data fragmentation means that Chinese and American actors are training their AI on different data sets. China’s strict data rules, overseen by the newly established National Data Bureau, have prompted multinationals to ‘silo’ their data between their ‘on-shore’ and ‘off-shore’ operations. Further blacklists on training data for generative AI models within China, as well as US sanctions on some of the largest data-owning entities in China – SenseTime, for example – are accelerating data decoupling between the two economies. This, in turn, could produce AI systems that behave with very different, even conflicting, values and judgments in areas where China and the US have historically come into conflict.
The misalignment between Chinese and American AI means that a military, economic, or financial confrontation between the two countries could lead to uncontrolled escalation through generative AI applied to defense and offense. Short of an actual conflict, the algorithms could create undue ethnic and citizenship bias against individual users. Consider, for one, an AI-powered credit score algorithm at an American bank that penalizes prospective borrowers deemed to be connected with the Chinese state, who are then judged to be hostile to American interests because of the inputs fed into the training process.
A failure to calibrate the deployment of AI in military contexts could result in heightened risks. These might stem from the compression of time policymakers have to determine critical decisions, as well as a surge in misinformation and deep fakes generated automatically by AI-powered generative systems on both sides.
Second, we must beware of a race to the bottom in the hyper-competitive ethos undergirding the technology-defense communities across China and the US. Both Beijing and Washington view control over both advanced generative AI and discriminative AI (AI that is used to classify and predict data, as opposed to creating new data) as instrumental to their ability to project power and influence across the world. Politicians and think-tank specialists have taken to portraying an AI race as key to the balance of power between China and the US. This view implicitly reinforces a zero-sum worldview on bilateral relations: America’s gains are framed as China’s losses, and vice versa.
In reality, unpredictable risks stem both from the drive to succeed at any cost and from the view that sharing information and insights with the other party undermines one’s own position. Both attitudes also create headwinds against managing risks responsibly. Concerns about espionage and infiltration by scientists and researchers from the other side – partially legitimate but largely overblown – have made collaboration between Chinese and American AI scientists nearly impossible, especially if they are under state employment or receiving state funding. Given the heavy involvement of the Chinese state in scientific R&D, the crackdown on research by Chinese nationals or ethnic Chinese in the US has precipitated the exodus of thousands of top scientists. China and the US have much to learn from one another on AI safety, regulation, and commercialisation. Such exchanges cannot occur if neither side possesses a minimum of trust at the working level.
China and the US can work together. Here is how.
On October 26, the United Nations Secretary-General announced the creation of a new AI Advisory Body on risks, opportunities, and international governance of AI. Two Chinese academics and three American academics sit on this body. Such admirable multilateral efforts, however, cannot substitute for bilateral engagement and dialogue.
First and foremost, the business sectors of China and the US must play a role. Closed-door, high-level, and representative conversations between private players (“Track 2 dialogues” in diplomacy-speak) could help to align Chinese and American AI and digital companies in their views of AI risks and AI regulation.
During Californian Governor Gavin Newsom’s visit to the University of Hong Kong in October, a new technology-driven San Francisco Bay and Greater Bay Area collaboration was announced. There is no reason why Hong Kong cannot play an instrumental role in hosting a China-US AI Cooperation Summit. Indeed, with longstanding university-to-university ties between Hong Kong and California, and the depth of capital and funding eager to support relevant discussions, Hong Kong is well-placed to host top-level exchanges and dialogues between corporate, academic, and intellectual leaders on AI governance and management.
Building on goodwill and trust between the Chinese and American corporate sectors, Beijing and Washington must then continue the conversation at the government-to-government level. Both the Chinese and US administrations should appoint an envoy – as they did on climate change, with Special Representative Xie Zhenhua and Special Envoy John Kerry – in charge of international cooperation over AI.
These two envoys should in turn lead teams of interlocutors, including academics, regulators, officials, and business representatives, in discussing areas in which greater alignment between China and the US can be accomplished. The end goal for these discussions should be a communique delineating binding, enforceable, and credible guarantees concerning legislation and regulation on AI development, research, usage, and dissemination.
Such a communique has parallels with the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) signed in 1968, but only to a degree. Nuclear weapons remain, to this day, owned and wielded predominantly by a small number of state actors. Artificial Intelligence, on the other hand, is considerably more accessible, easily weaponized, and influenced by the actions of non-state actors.
Beijing and Washington would benefit from comparing notes on how to handle collaboration with their respective private sectors. The Chinese state engages in significantly more oversight and supervision of enterprises involved in AI development, yet it stands to gain from speaking with the architects of the US Executive Order unveiled last month. More concretely, the least the Chinese and American parties can and should do is agree upon limits as to where and how AI should be deployed in military settings. This would help ensure that we do not compound the existential risks of nuclear war with those of artificial intelligence, as some have warned.
That Chinese representatives from governmental agencies and the private sector attended the Bletchley Park conference on AI in the United Kingdom earlier this month was a reassuring sign. That, on the sidelines of the Asia Pacific Economic Cooperation (APEC) summit in San Francisco, Presidents Xi Jinping and Joe Biden discussed the need for concrete limits on where and how to deploy artificial intelligence in military contexts was also most encouraging. Yet these must be taken as the starting point of deeper and further dialogues between the executive and private sectors on both sides of the Pacific.
Looking ahead to 2024, the Sino-American relationship will remain volatile, with a US presidential election impending in November. Yet it is never too late to act responsibly, and never too early for great powers to pre-emptively manage clearly foreseeable risks.
Dr. Brian Wong is an Assistant Professor in Philosophy at the University of Hong Kong. He teaches and researches on non-democratic regimes, reparations for historical and contemporary injustices, AI ethics and political philosophy. A Rhodes Scholar holding a DPhil in Politics from Balliol College, University of Oxford, Brian advises multinational corporations and private equity firms on macro-geopolitical risks in Asia.