
The EU and China Are Taking the Lead on AI Regulation – The U.S. Must Not Be Left Behind

By Rishab Bailey, Research Director, Digital Trade Policy & Jeanne Huang, Associate Professor, University of Sydney

The last few years have seen unprecedented development of the artificial intelligence (AI) sector around the world, with the release of large language model-based tools such as ChatGPT and Bard capturing public attention. However, there is also increasing evidence that the unregulated development of AI could expose citizens and consumers to a range of harms, including the spread of misinformation and fraudulent content, breaches of privacy, and the exacerbation of bias and discrimination in critical public and private services.

This has led policymakers all over the world to consider implementing public interest regulation of AI systems, with China and the European Union (EU) among the first jurisdictions to do so. The U.S. lags in comparison, with no comprehensive federal regulation of AI. At the federal level, the U.S. has sought to regulate the AI ecosystem through executive instruments (notably the Biden administration’s Executive Order on AI and Blueprint for an AI Bill of Rights) as well as interventions by various sectoral regulators such as the Federal Communications Commission. While a number of laws have been proposed at the federal level, it is primarily states that have implemented laws governing various aspects of the AI ecosystem, leading to a patchwork regulatory framework.

A briefing note published recently by the Digital Trade Alliance examines how China and the EU have approached AI regulation. While the two jurisdictions have adopted very different methods of regulation, each has implemented AI-related rules as part of a broader regulatory framework aimed at making the digital ecosystem fairer for businesses and safer for consumers.

The EU’s AI Act, which comes into effect in August this year, provides a comprehensive approach to regulating the AI ecosystem. The Act, which applies to both private and public sector use of AI systems, imposes obligations based on the perceived risk of certain types of AI models and use cases. It bars AI systems that it identifies as posing “unacceptable risks,” while imposing a series of risk-mitigation, security, transparency, and accountability obligations on systems designated as “high risk.” Systems that pose “limited” or “minimal” risks are relatively unregulated, with transparency obligations imposed on the former. General Purpose AI (GPAI) systems are also regulated under the law, with greater obligations imposed on those that pose a “systemic risk.” The Act establishes new institutions at the EU level charged with oversight and the development of standards, while existing consumer protection and other market surveillance authorities are tasked with enforcement at the national level.

In comparison to the EU, the Chinese government appears to still be at a “testing” stage insofar as regulation of the AI ecosystem is concerned. While it has issued three administrative regulations governing various types of AI, none has yet been enacted as a statute, and administrative regulations, while binding, do not carry the same legal weight as statutes.

The Chinese AI framework comprises three regulations that govern specific types of AI systems: generative AI services, deep synthesis services, and algorithmic recommendation systems. Each primarily focuses on assigning and clarifying service providers’ liability for violations of Chinese civil and criminal law enabled by use of the AI service in question. Systems must also be designed to be transparent and trustworthy and to protect consumers from manipulation and deception.

Comparing the two approaches, the Chinese framework focuses on assigning responsibility and dealing with the harms that specific types of AI systems can cause. It is more reactive and limited than the EU’s approach, which aims to be broader and more forward-looking. While the EU approach may appear preferable prima facie, each has benefits and drawbacks.

The Chinese method allows the government to intervene whenever needed. To date, only specific commercial, user-facing AI services have been the subject of regulation. While the Chinese framework recognizes problems such as AI discrimination and bias, it largely leaves it to service providers and the government to scrutinize and act on problematic content. The EU approach leaves more room for interpretation as to its scope, though critically it also recognizes various fundamental rights for users and provides them with more agency. That said, the EU approach has also been criticized as moving away from a “rights-based approach,” as exemplified by the General Data Protection Regulation (the EU’s landmark privacy law), to a “risk-based approach,” which foregrounds innovation and the commercial development of AI technologies over the protection of fundamental human rights.

The EU approach is less intrusive than the Chinese approach in that it attempts to implement proportionate, light-touch regulation (for instance, by permitting self-assessment of AI systems). However, the EU framework does bar the use of certain AI systems, whereas the Chinese framework does not. That said, the EU’s AI Act will likely impose higher compliance costs connected to documentation, reporting requirements, and the like, and it makes greater demands of regulatory capacity; in this specific respect, the Chinese framework is far lighter.

Crucially, the European AI law applies to both the private and public sectors, whereas the primary object of regulation in China is the private sector. This could be concerning in contexts such as rights protection and access to benefits, particularly given the user surveillance mechanisms built into the Chinese regulatory framework.

As more and more jurisdictions seek to regulate the AI ecosystem in the public interest, the models developed in China and, particularly, the EU are likely to be of great interest to governments around the world. We have already seen the EU’s privacy regulation influence countries worldwide to adopt similar regulatory frameworks. This convergence has both economic and political benefits.

Implementing clear legal frameworks can promote user acceptance and adoption of AI systems (including by making those systems safer), while also serving business interests by ensuring legal certainty. Regulation can likewise ensure that AI systems do not undercut hard-won fundamental rights and consumer protections.

AI is a profoundly impactful technology that is likely to influence every sector of our economies. It is therefore vital that legislators get ahead of the issue and regulate to protect consumer interests before Pandora’s box is fully opened. While some may argue for refraining from regulating what is still a nascent industry, this may not be a preferable option: there is already considerable evidence of the harms that large-scale adoption of AI systems could cause in several critical areas. Innovation does not have to come at the cost of consumer and public safety and welfare. Finally, it must also be kept in mind that regulatory compatibility between two jurisdictions can make it easier for companies from one jurisdiction to access markets in the other.

While every country should adopt rules tailored to its own circumstances and public interest priorities, the EU and China show that it is practicable to adopt and implement AI regulation designed to protect consumers, address bias, and advance safety. Countries that do not proactively regulate AI may risk the welfare of their people.