Q&A with the creators of China’s expert draft AI law
On April 16th this year, a group of 20 leading scholars and experts, working on a government-funded research project under the Chinese Academy of Social Sciences, released a draft for an Artificial Intelligence Law. The draft law, alongside translations into English, French, and Spanish, can be found here. A first version was published in mid-2023 (and commented on by experts here). Expert law drafts like these are typically first steps towards legislative action; however, the degree to which legislators choose to rely on and work with expert drafts has varied in the past. We talked to the drafters about the thinking behind making a law on what is possibly the most complex and consequential subject of our decade – artificial intelligence.
Is there a need for an Artificial Intelligence Law in China?
China now needs legislation specifically for artificial intelligence (AI). AI has the potential to significantly enhance productivity by empowering various industries and creating production methods and business models that were previously unimaginable. However, it also poses disruptive threats: recent generative AI, for example, allows individuals to easily generate complex images or even videos through natural language interaction, which clearly exacerbates issues such as misinformation and polarisation online. Additionally, AI's growing demand for training data and computing power raises concerns about intellectual property protection and fair resource allocation.
Therefore, dedicated legislation is needed both to promote the development of AI, maximising its benefits, and to mitigate its threats. Such legislation would establish institutional arrangements at a macro level to stabilise the expectations of the government, the market, society, and other stakeholders.
In fact, as early as May 2023, the State Council of China released the “Legislative Work Plan of the State Council for 2023,” which proposed the preparation of a draft AI law for review by the Standing Committee of the National People’s Congress. In response, the Standing Committee’s Legislative Plan, released in September 2023, indicated that it would “promote scientific and technological innovation and the healthy development of AI, improve the legal system concerning foreign affairs, and formulate, revise, repeal, and interpret relevant laws, or make relevant decisions as needed by the National People’s Congress and its Standing Committee.”
What are the main societal issues this law addresses?
The overall approach is to both promote the development of the AI industry and address its potential negative impacts.
(1) The issue of how the artificial intelligence industry should develop:
To address the development of AI, it is necessary to formulate specialised development plans, proactively establish computing power infrastructure, design personal information protection systems compatible with AI development, and take measures to encourage more data to be made available for AI training. Additionally, legislation should encourage universities to cultivate specialised AI talent, require all levels of government to allocate specific portions of their budgets to support AI development, and establish special tax incentives for organisations engaged in AI research and development. In particular, we encourage the establishment of AI special zones at the national level; with authorisation from the National People’s Congress, these zones could enact specialised legislation, including provisions that flexibly adapt national legislation.
(2) The potential negative impacts of artificial intelligence technology:
This includes concerns about the security of AI use and about whether AI can be understood by its users. It also involves efforts to prevent AI from discriminating against individuals and from causing other potential ethical harms.
Who is the drafting team – did you consult practitioners, officials in regulatory agencies, lawyers, or judges in the drafting process?
Our drafting team comprises legal scholars, industry professionals, and personnel from international organisations such as ISO and from government research institutions [a list of participants can be found in the draft law].
We extensively solicited input from practitioners and maintained constant communication with regulatory authorities. Court rulings were particularly insightful for us, especially in drafting Article 70, which addresses the balance between AI development and intellectual property protection when AI-generated content infringes upon others’ intellectual property rights. This was directly inspired by rulings from the Guangzhou Internet Court regarding disputes triggered by AI-generated content.
What are some of the key principles that appear in this law and why did you include them?
Firstly, in governing AI, we specifically stipulate the principle of coordinated development and security, implementing inclusive and prudent supervision. This draws on our country’s valuable experience in developing emerging industries. The development of emerging industries inherently carries uncertainties and often pushes beyond the boundaries of existing legal systems. By emphasising security as a premise, we aim to keep regulatory intervention cautious, thus preserving greater space for the development of emerging industries empowered by AI.
Secondly, concerning the purpose of AI governance, we explicitly define the principle of human-centeredness. Specifically, it aims to “ensure that humans can always supervise and control artificial intelligence, with the ultimate goal of promoting human well-being.” This serves as the anchor value for all regulatory activities.
Thirdly, as an extension of the human-centric principle, we affirm the principles of fairness and equality. This requires that the use of AI must not result in unjust discrimination against specific individuals or organisations. Additionally, it should fully consider the needs of minors, the elderly, and people with disabilities to prevent AI development from exacerbating the digital divide. Furthermore, we emphasise the principles of transparency, interpretability, and accountability, requiring developers to provide explanations of the purpose, principles, and effects of the AI they develop and use. Organisations or individuals involved in the development, provision, and use of AI should also be accountable for its actions.
Finally, human applications of AI must be sustainable; hence we establish the green principle, which mandates the development of energy-efficient and emission-reducing technologies for AI applications.
We see that other nations continuously iterate laws and policies, promote public computing power and data sharing, and establish intellectual property rights systems compatible with AI development. AI regulation necessitates extensive international cooperation as AI concerns the well-being and security of all humanity. Principles of open collaboration and joint formulation of international norms and standards around AI are thus included.
What legal rights and obligations are put on the table, and to what extent are they new within China’s overall legal framework?
As users of AI technology, each of us is relatively disadvantaged compared to AI developers and providers. To bridge this power disparity, the AI model law prominently emphasises the obligations of AI developers and providers. These include obligations to ensure safety, manage vulnerabilities, offer remedial channels, and notify subjects upon discovering risks, as well as obligations to establish risk management systems and to conduct ethical reviews, among others.
We also establish a negative list system for the obligations of AI developers and providers. For AI placed on the negative list, and for AI serving as foundation models, these obligations are further supplemented and strengthened. The negative list, whose contents are regularly updated, is drawn up by the national AI regulatory authority based on the importance of the AI and on the potential harm to national security, public interests, social stability, environmental protection, and so on, that could be caused by attacks, tampering, destruction, or illegal theft and utilisation.
How can subjects raise rights violations, what procedures and mechanisms are there? What role can administrative litigation play, are there other, more feasible channels?
In answering this question, we can distinguish two types of “subjects.” One type is AI developers and providers. According to Article 72 of the AI model law, if they disagree with administrative acts such as administrative licensing decisions or penalties issued by administrative agencies, they can initiate administrative reconsideration or administrative litigation. The other type includes other subjects, such as AI product users or third parties affected by AI – for example, those whose intellectual property rights have been infringed by AI products. If these subjects are dissatisfied with the aforementioned administrative acts and the acts bear directly on their own rights and interests, they also have the right to initiate administrative reconsideration or litigation, for instance to demand stricter supervision by administrative agencies. Of course, if the rights of these subjects are infringed upon, they can also, under the provisions of Article 70, file civil lawsuits to claim compensation.
It is important to emphasise that promoting AI development is a key objective of this law. Therefore, we have also established a system of compliance-based exemptions for AI developers and providers. Regarding civil liability in cases where AI infringes upon others’ intellectual property rights, under Article 70, as long as the provider of the infringing AI product can prove that it has labelled the AI-generated content, notified users through user agreements not to infringe upon others’ intellectual property rights, and established internal rules for intellectual property protection and a complaint mechanism, the provider may not be held jointly liable with the developer. Regarding administrative liability, if the internal compliance programmes of AI developers and providers are assessed as meeting effectiveness standards, under Article 75 they may be exempted from punishment or punished leniently. Similar provisions apply to criminal liability.
What other laws, regulations, or standards did you draw on when drafting this law and how does it fit in with existing legislation on AI like the 2023 draft measures on generative AI and the 2022 provisions on deep synthesis?
China’s internet regulatory agencies have been issuing regulatory documents related to AI and data in recent years. These regulatory arrangements are exploratory in nature and provide valuable experience for future specialised AI legislation. Our drafting of the AI model law weaves in these regulations. For example, the requirement for marking AI-generated content originates from regulatory documents concerning generative AI issued in 2023.
Compared to existing regulations related to AI, the AI model law is more systematic. If we consider existing AI-related regulatory documents as “patches” where a document is issued to address a specific problem, then the AI model law is akin to a “tailored suit” designed based on the characteristics of AI technology. It aims to provide systematic and structural solutions for AI governance.
What legislation from other countries did you consult, what do you think China’s AI law can learn from laws elsewhere?
We have referenced legislation and regulatory policies related to AI from the United States, the European Union, and the United Kingdom. These countries universally emphasise the threats posed by AI while remaining committed to its vigorous development. Concerning AI threats, issues such as discrimination, misinformation, and intellectual property are widely acknowledged. In terms of regulatory strategies, we have drawn to some extent on the EU’s tiered classification management of AI and on its institutional design for open-source AI.
Legislators globally deal with similar issues when it comes to regulating AI. What contribution can the ideas in this model law bring to the frontiers of AI legislation?
In my opinion, the AI model law primarily demonstrates innovation in both intellectual property protection mechanisms and open-source governance.
Regarding intellectual property protection, Article 10 of the AI model law emphasises the need to protect intellectual property rights in the AI field while also advocating for statutory licensing and fair use systems compatible with AI development, reflecting a balance between rights protection and technological advancement. It authorises the national intellectual property regulatory authority to establish supporting systems, addressing the fair and reasonable distribution of the benefits arising from AI technology development. Article 71 further specifies the allocation of civil liability in cases of AI infringement of intellectual property rights. In principle, both AI providers and users bear joint liability for such infringement. However, the law also exempts AI providers from liability if they (1) appropriately label AI-generated content; (2) notify users through user agreements not to infringe on intellectual property rights; (3) establish intellectual property protection systems, with measures such as warnings or sanctions for infringement; and (4) establish an intellectual property complaint mechanism. These provisions encourage AI providers to invest more in compliance by reducing their legal liability, thus seeking a balance between technological development and rights protection. It is worth noting that this legislative design draws to some extent on recent judgements of the Guangzhou Internet Court.
Regarding open-source governance, we believe that open-sourcing will benefit the further development of AI technology. The law therefore encourages the open-sourcing of AI technology as a policy orientation. The state encourages the establishment of open-source development platforms and open-source AI foundations, and promoting the secure and compliant application of open-source software projects is recognised as a goal the law should foster. Specific encouragement policies include Article 21, which encourages governments at all levels and state-owned enterprises to procure open-source AI products that comply with national standards; Article 22, which authorises the State Council to establish special tax incentives for open-source AI; and Article 59, which stipulates that the national AI regulatory authority should develop specific compliance guidelines for open-source AI developers. Article 71 also provides liability exemptions where AI is provided as an open-source resource. Specifically, for individual code modules of an AI system, as long as their functionality and security risks are clearly disclosed, no legal liability is borne for damage they cause. For the AI system as a whole, as long as the provider can prove that a sufficiently strict governance system is in place and the relevant security governance measures have been implemented, liability can be mitigated or waived.
Which state organs are relevant actors in the legal framework you envision?
Given the broad scope of AI governance, we recommend establishing a dedicated national-level AI regulatory agency. Additionally, we suggest setting up corresponding AI regulatory agencies within provincial-level governments and some municipal governments.
What does the path forward look like – what do you hope to achieve by publishing this model law?
This is the second version of the AI model law. We aim to propose a comprehensive legal framework that provides guidance for legislators advancing related legislation, helps the market and industry anticipate developments, and harnesses the wisdom of scholars from around the world to collectively improve the governance system surrounding AI.
We thank Tianhao Chen from Tsinghua University and the drafting team for sharing their thoughts.