Technology is racing ahead. And the music you hear fills you with dread: is it really the voice you know, or just an AI putting on a show? AI songs stand at the crossroads of innovation and controversy. Recently, AI-generated songs that clone the real voices of celebrities have sparked intense debate globally. In China, songs mimicking the voice of famous Chinese singer Stefanie Sun have become particularly controversial. Fans create tracks in her cloned voice, songs she has never actually sung, and share them on social media platforms without her consent. A popular tool for this is “SO-VITS-SVC,” an open-source program that can clone celebrity voices, enabling anyone to train an AI model that can “sing” in the cloned voice.
Against this backdrop, the article first examines whether current Chinese law is resilient enough to adapt to new technology by granting celebrities control rights over these AI songs. In many jurisdictions, a person’s voice is seen as part of their identity and deserves protection. In China, it is widely accepted that a person’s unique voice is part of their identity and entails certain personality interests, especially those related to dignity. However, scholars disagree on whether the law should recognize a standalone right to voice or merely the personality interests connected to it. The PRC Civil Code, promulgated in 2020, took a small step in protecting individuals’ voices by acknowledging personality interests in unique voices rather than creating a separate right to voice. This distinction between standalone rights and recognized personality interests is significant under the Chinese civil law system, as rights typically receive more systematic and extensive protection than personality interests. By interpreting the Civil Code, this article concludes that it is feasible to construe the relevant provisions in a way that grants celebrities control rights over AI songs.
However, beyond this doctrinal and descriptive analysis, the article delves into the larger theoretical question of whether, and if so when, celebrities should be allowed to control these AI songs. Should we adopt an interpretation of the Civil Code that clearly favours celebrities? I draw on several theories, including the incentive rationale, economic efficiency, labor theory, individual liberty and dignity interests, consumer welfare, and dilution theory, to answer this question. Most of these theories do not justify giving celebrities full control over AI songs created from their voices. For example, while utilitarianism provides reasons for allowing individuals to control their own voices, doubts remain as to whether identity holders should receive all the benefits derived from their voices. Labor theory acknowledges the contribution of voice holders to AI songs but also emphasizes the contributions of other market participants, making absolute control questionable. Consumer protection is one potential justification for celebrities to control their voices in AI songs, as it could prevent consumer confusion over a song’s authenticity. However, such confusion is not typically an issue in the AI songs context, and there are more direct ways to address any potential ambiguity over a song’s creator. Some scholars invoke dilution theory to justify control rights over AI songs, arguing that such rights prevent the weakening of the association between celebrities and their voices. Yet this article doubts whether AI songs actually cause such dilution in practice. Ultimately, the only plausible justification lies in dignitary interests, which may support a voice holder’s limited, but not absolute, control over AI songs.
None of these theories provides strong reasons to interpret the relevant provisions of the PRC Civil Code in the manner most favourable to artists, which is where the doctrinal analysis above would lead. In China, where laws emphasize dignitary interests, policymakers might naturally want to expand personality rights. While the dignitary interests of artists deserve consideration, reference to the other theories can help balance the many different interests involved. It is recommended that policymakers consider all of these theories instead of focusing on just one or two.
Building on the doctrinal view and the discussion of relevant theories, the article then puts forward a short proposal for policymakers, intended to initiate debate on future legislation. First, a general right to control AI songs is recommended to protect artists’ dignity and liberty. Second, while decision-makers may be inclined to grant broad control rights as a way to reward artists’ invested labor, they should also consider the contributions of other participants and design more balanced, qualified rights. Third, to prevent consumer confusion over the authenticity of songs, policymakers can implement more direct measures, such as requiring platforms or content uploaders to clearly label AI-generated products, rather than establishing new control rights. Finally, any general right to control AI songs based on dignitary interests should also take the public interest into account, incorporating exceptions for selected situations. Building on these insights, the article further proposes specific reform suggestions for the PRC Civil Code.
The question of whether celebrities should have rights to control AI songs is just one of many challenges policymakers face regarding personality rights in the new technological age. This article warns against the trend in China of carelessly broadening the scope of personality rights whenever issues arise from new technology. It recommends that decision-makers consider different theories and ideas when addressing new legal and technological issues, so as to reach more balanced solutions.
The paper “Is Chinese Law Well-Prepared for AI Songs? A Note of Caution on the Over-Expansion of Personality Rights” is published in the Cardozo Arts & Entertainment Law Journal Vol. 42(2), 2024 (SSRN draft available here). The author thanks Kaijing XU, a JD student at CityU School of Law, for research assistance in preparing this post. Yang Chen is an assistant professor at the City University of Hong Kong. He received an LL.B from China University of Political Science and Law, an LL.M from the London School of Economics, and another LL.M and an SJD from the University of Pennsylvania Carey Law School. Yang works primarily in the areas of intellectual property law, with a particular interest in trade secrets law and the right of publicity. He also researches trademark law and copyright law. His work has appeared in journals such as the Columbia Journal of Law and the Arts, the University of Pittsburgh Law Review, and the University of Pennsylvania Journal of Business Law.
Q&A with the creators of China’s expert draft AI law
On April 16th this year, a group of 20 leading scholars and experts, working as part of a government-funded research project under the Chinese Academy of Social Sciences, released a draft Artificial Intelligence Law. The draft law, alongside translations into English, French, and Spanish, can be found here. A first version was published mid-2023 (and commented on by experts here). Expert law drafts like these are typically first steps towards legislative action; however, the degree to which legislators choose to rely on and work with expert drafts has varied in the past. We talked to the drafters about the thinking behind making a law on possibly the most complex and consequential subject of our decade: artificial intelligence.
Is there a need for an Artificial Intelligence Law in China?
China now needs legislation specifically for artificial intelligence (AI). AI has the potential to significantly enhance productivity by empowering various industries and creating production methods and business models that were previously unimaginable. However, it also poses disruptive threats, such as the recent development of generative AI, which allows individuals to easily generate complex images or even videos through natural language interaction. This clearly exacerbates issues such as misinformation and polarisation online. Additionally, the increasing demand for training data and computing power by AI raises concerns about intellectual property protection and fair resource allocation.
Therefore, there is a need for dedicated legislation that both promotes the development of AI, maximising its benefits, and mitigates its threats. Such legislation would establish institutional arrangements at a macro level to stabilise the expectations of the government, the market, society, and other stakeholders.
In fact, as early as May 2023, the State Council of China released the “Legislative Work Plan of the State Council for 2023,” which proposed the preparation of a draft AI law for review by the Standing Committee of the National People’s Congress. In response, in its Legislative Plan released in September 2023, the Standing Committee indicated that it will “promote scientific and technological innovation and the healthy development of AI, improve the legal system concerning foreign affairs, and formulate, revise, repeal, and interpret relevant laws, or make relevant decisions as needed by the National People’s Congress and its Standing Committee.”
What are the main societal issues this law addresses?
The overall approach is to both promote the development of the AI industry and address its potential negative impacts.
(1) The issue of how the artificial intelligence industry should develop:
To address the development of AI, it is necessary to formulate specialized development plans, proactively build computing power infrastructure, design personal information protection systems compatible with AI development, and take measures to encourage more data to be used in AI training. Additionally, legislation should encourage universities to cultivate specialized AI talent, require all levels of government to allocate specific portions of their budgets to support AI development, and establish special tax incentives for AI research and development. In particular, we encourage the establishment of AI special zones at the national level; authorized by the National People’s Congress, these zones could enact specialized legislation, including flexible adaptations of national legislation.
(2) The potential negative impacts of artificial intelligence technology:
This includes concerns regarding the security of AI usage and how AI can be understood by its users. It also involves efforts to prevent AI from discriminating against individuals and causing other potential ethical harms.
Who is the drafting team – did you consult practitioners, officials in regulatory agencies, lawyers or judges in the drafting process?
Our drafting team comprises legal scholars, industry professionals, personnel from international organizations such as ISO, and government research institutions [a list of participants can be found in the draft law].
We extensively solicited input from practitioners and maintained constant communication with regulatory authorities. Court rulings were particularly insightful for us, especially in drafting Article 70, which addresses the balance between AI development and intellectual property protection when AI-generated content infringes upon others’ intellectual property rights. This was directly inspired by rulings from the Guangzhou Internet Court regarding disputes triggered by AI-generated content.
What are some of the key principles that appear in this law and why did you include them?
Firstly, in governing AI, we specifically stipulate the principle of coordinating development and security, implementing inclusive and prudent supervision. This draws on our country’s valuable experience in developing emerging industries. The development of emerging industries inherently carries uncertainties and often pushes beyond existing legal frameworks. By emphasising security as the premise, we aim to keep regulatory intervention cautious, thereby preserving greater space for the development of emerging industries empowered by AI.
Secondly, concerning the purpose of AI governance, we explicitly define the principle of human-centeredness. Specifically, it aims to “ensure that humans can always supervise and control artificial intelligence, with the ultimate goal of promoting human well-being.” This serves as the anchor value for all regulatory activities.
Thirdly, as an extension of the human-centric principle, we affirm the principles of fairness and equality. This requires that the use of AI must not result in unjust discrimination against specific individuals or organisations. Additionally, it should fully consider the needs of minors, the elderly, and people with disabilities to prevent AI development from exacerbating the digital divide. Furthermore, we emphasise the principles of transparency, interpretability, and accountability, requiring developers to provide explanations of the purpose, principles, and effects of the AI they develop and use. Organisations or individuals involved in the development, provision, and use of AI should also be accountable for its actions.
Finally, human applications of AI must be sustainable; hence, we establish the green principle, which mandates the development of energy-efficient and emission-reducing technologies for AI applications.
We see that other nations continuously iterate laws and policies, promote public computing power and data sharing, and establish intellectual property rights systems compatible with AI development. AI regulation necessitates extensive international cooperation as AI concerns the well-being and security of all humanity. Principles of open collaboration and joint formulation of international norms and standards around AI are thus included.
What legal rights and obligations are put on the table, and to what extent are they new in China’s overall legal framework?
As users of AI technology, each of us is relatively disadvantaged compared to AI developers and providers. To bridge this power disparity, the AI model law prominently emphasises the obligations of AI developers and providers. These include obligations to ensure safety, manage vulnerabilities, offer remedial channels, and notify affected subjects when risks are discovered, as well as obligations to establish risk management systems and conduct ethical reviews, among others.
We also establish a negative list system for the obligations of AI developers and providers. For AI placed on the negative list and for AI serving as foundation models, these obligations are further supplemented and strengthened. The negative list, whose contents are regularly updated, is drawn up by the national AI regulatory authority based on the importance of the AI and on the potential harm to national security, public interests, social stability, environmental protection, and so on, that could be caused by attacks, tampering, destruction, or illegal theft and utilisation.
How can subjects raise rights violations, and what procedures and mechanisms are there? What role can administrative litigation play, and are there other, more feasible channels?
In answering this question, we can distinguish two types of “subjects.” One type is AI developers and providers. According to Article 72 of the AI model law, if they disagree with administrative acts such as administrative licensing or penalties issued by administrative agencies, they can initiate administrative reconsideration or administrative litigation. The other type comprises other subjects, such as users of AI products or third parties affected by AI, for example those whose intellectual property rights have been infringed by AI products. If these subjects are dissatisfied with the aforementioned administrative acts and have a direct interest at stake, they also have the right to initiate administrative reconsideration or litigation, for instance to demand stricter supervision by administrative agencies. Of course, if the rights of these subjects are infringed, they can also file civil lawsuits to claim compensation under Article 70.
It is important to emphasize that promoting AI development is a key objective of this law. Therefore, we have also established a compliance-and-exemption system for AI developers and providers. Regarding civil liability, in cases where AI infringes upon others’ intellectual property rights, Article 70 provides that the provider of the infringing AI product may not be held jointly liable with the developer if the provider can prove that it labeled the AI-generated content, notified users through user agreements not to infringe upon others’ intellectual property rights, and established internal rules for intellectual property protection and a complaint mechanism. Regarding administrative liability, if the internal compliance programs of AI developers and providers are assessed as meeting effectiveness standards, they may, under Article 75, be exempted from punishment or given lenient punishment. Similar provisions apply to criminal liability.
China’s internet regulatory agencies have been issuing regulatory documents related to AI and data in recent years. These regulatory arrangements are exploratory in nature and provide valuable experience for future specialised AI legislation. Our drafting of the AI model law weaves in these regulations. For example, the requirement for marking AI-generated content originates from regulatory documents concerning generative AI issued in 2023.
Compared to existing regulations related to AI, the AI model law is more systematic. If we consider existing AI-related regulatory documents as “patches” where a document is issued to address a specific problem, then the AI model law is akin to a “tailored suit” designed based on the characteristics of AI technology. It aims to provide systematic and structural solutions for AI governance.
What legislation from other countries did you consult, what do you think China’s AI law can learn from laws elsewhere?
We have referenced AI-related legislation and regulatory policies from the United States, the European Union, and the United Kingdom. These jurisdictions universally emphasise the threats posed by AI while remaining committed to its vigorous development. Concerning AI threats, issues such as discrimination, misinformation, and intellectual property are widely acknowledged. In terms of regulatory strategy, we have drawn some inspiration from the EU’s tiered classification management of AI and its institutional design for open-source AI.
Legislators globally deal with similar issues when it comes to regulating AI. What contribution can the ideas in this model law bring to the frontiers of AI legislation?
In my opinion, the AI model law primarily demonstrates innovation in both intellectual property protection mechanisms and open-source governance.
Regarding intellectual property protection, Article 10 of the AI model law emphasises the need to protect intellectual property rights in the AI field while also advocating the development of statutory licensing and fair use systems compatible with AI development. This reflects the balance between rights protection and technological advancement. It authorises the national intellectual property regulatory authority to establish supporting systems, addressing the fair and reasonable distribution of the benefits arising from AI technology development. Article 71 further specifies the allocation of civil liability in cases where AI infringes intellectual property rights. In principle, both AI providers and users bear joint liability for such infringement. However, the law also exempts AI providers from liability if they (1) appropriately label AI-generated content; (2) notify users through user agreements not to infringe intellectual property rights; (3) establish intellectual property protection systems, with measures such as warnings or punishments for infringement; and (4) establish an intellectual property complaint mechanism. These provisions encourage AI providers to invest in compliance in exchange for reduced legal liability, thus seeking a balance between technological development and rights protection. It is worth noting that this legislative design draws to some extent on recent judgments of the Guangzhou Internet Court.
Regarding open-source governance, we believe that open source will benefit the further development of AI technology. The law therefore adopts the encouragement of open-sourcing AI technology as a policy orientation. The state encourages the establishment of open-source development platforms and open-source AI foundations, and promoting secure and compliant applications of open-source software projects is recognised as a goal the law should foster. Specific encouragement policies include Article 21, which encourages governments at all levels and state-owned enterprises to procure open-source AI products that comply with national standards; Article 22, which authorises the State Council to establish special tax incentives for open-source AI; and Article 59, which stipulates that the national AI regulatory authority should develop specific compliance guidelines for open-source AI developers. Article 71 also provides liability exemptions for open-source AI. Specifically, for individual code modules of an AI system, no legal liability is borne for damages caused by them as long as their functionality and security risks are clearly disclosed. For the AI system itself, liability can be mitigated or waived as long as the provider can prove that a sufficiently strict governance system is in place and relevant security governance measures have been implemented.
Which state organs are relevant actors in the legal framework you envision?
Given the broad scope of AI governance, we recommend establishing a dedicated national-level AI regulatory agency. Additionally, we suggest setting up corresponding AI regulatory agencies within provincial-level governments and some municipal governments.
What does the path forward look like, and what do you hope to achieve by publishing this model law?
This is the second version of an AI model law. We aim to propose a comprehensive legal framework that provides guidance for legislators to advance related legislation, anticipates market and industry development, and seeks to harness the wisdom of scholars from around the world to collectively improve the governance system surrounding AI.
We thank Tianhao Chen from Tsinghua University and the drafting team for sharing their thoughts.
In recent years, China has emerged as a pioneer in formulating some of the earliest and most comprehensive legislation regulating recommendation algorithms, deepfakes, and generative AI services. This has left the impression that China stands at the forefront of global AI regulation. Matt Sheehan, a highly regarded expert on Chinese AI policy, suggests that the U.S. can gain valuable insights from China’s approach to AI governance. At the same time, industry observers view Beijing’s regulatory approach as a potential obstacle to Chinese innovation. Such concerns are not unwarranted. From 2020 to 2022, China undertook a sweeping crackdown on its tech firms. The erratic nature of Chinese tech policy has unnerved investors, with the severe and unintended consequence of deterring investment and entry into the consumer tech business.
However, this perception that China stands at the forefront of AI regulation fails to account for the intricate dynamics of the Chinese political economy. Authoritarian states face a dual challenge with emerging technologies: these technologies can empower civil society on one hand, while enhancing government surveillance capabilities and strengthening social stability on the other. Furthermore, technological advancement is crucial for economic growth and national competitiveness. To balance the need for stability with the desire to foster innovation, China has adopted a bifurcated approach to AI regulation: strict information control juxtaposed with industry-friendly regulation. This approach reflects the complex utility function of the Chinese Communist Party, which seeks legitimacy through multiple sources including growth, stability, and nationalism.
Yet striking a balance between regulation and innovation is far from easy. The Chinese government assumes multiple roles in the AI ecosystem: policymaker, investor, supplier, customer, and regulator. Given this extensive involvement, it lacks a strong commitment to regulating the industry. Moreover, although AI can cause many social harms, these have not yet evolved into immediate threats to social and political stability, and AI safety risks remain speculative despite warnings from experts. The Chinese government also recognizes the economic benefits AI promises amidst the intense Sino-US tech rivalry. The tightening of US export restrictions, which hinders Chinese AI firms’ access to advanced chips, has only intensified this competitive pressure, thereby diminishing the government’s incentive for strict regulation.
The Chinese government also faces significant constraints in imposing strict regulation on AI. China’s tech crackdown of 2020-2022 demonstrated that harsh regulatory measures can generate strong repercussions in the market. Since early 2023, the Chinese economy has entered a slump, and the government’s focus has shifted towards revitalizing the economy and boosting market confidence. Consequently, despite appearances of proactive intervention, Chinese regulators have focused on fostering AI growth. The regulatory rules being adopted send strong pro-growth signals while attempting to facilitate stakeholder coordination to advance AI development. This close integration of industrial policy and law is a defining feature of Chinese AI regulation.
Understanding the nuances of China’s AI regulatory strategy is crucial not only for predicting the trajectory of its technological development but also for assessing its implications for the global tech rivalry. Major jurisdictions, including both the U.S. and the EU, are actively exploring comprehensive AI regulatory frameworks, as exemplified by the EU’s AI Act and Biden’s executive order. Leading US AI firms are embroiled in various litigations and face mounting pressure to negotiate licenses with media companies for the use of their content as training data. In contrast, China’s comparatively relaxed regulatory environment may offer its AI firms a short-term competitive advantage over their EU and U.S. counterparts.
Meanwhile, China’s approach could give rise to serious regulatory lag. This risk is aggravated by China’s weak market conditions, poor legal institutions, and tightly coupled political system, potentially allowing latent risks to escalate into AI-related crises. For example, the Chinese government is mobilizing a “whole of society” approach to push forward AI development without necessarily taking effective precautionary measures. Under such a command-and-control strategy, by the time the full impact of AI harms becomes apparent to top policymakers, it could be too late for effective reversal or mitigation. The dynamic complexity of China’s AI regulation therefore underscores the urgent need for increased international dialogue and collaboration with the country to tackle the safety challenges of AI regulation.