
European Chinese Law Research Hub



Is Chinese Law Prepared for AI Songs?

5. December 2024
A paper by Yang CHEN
Credit: Free Malaysia Today

Technology is racing ahead, and the music you hear fills you with dread: is it really the voice you know, or just an AI putting on a show? AI songs stand at the crossroads of innovation and controversy. Recently, AI-generated songs that clone the real voices of celebrities have sparked intense debate globally. In China, songs mimicking the voice of the famous Chinese singer Stefanie Sun have become particularly controversial. Fans create tracks in her style, despite her never having sung them, and share them on social media platforms without her consent. A popular tool for this is “SO-VITS-SVC,” an open-source program that can clone celebrity voices, enabling anyone to create an AI model that can “sing” in the trained voice.

Against this backdrop, the article first examines whether current Chinese law is resilient enough to adapt to new technology in granting celebrities control rights over these AI songs. In many jurisdictions, a person’s voice is seen as part of their identity and deserves protection. In China, it is widely accepted that a person’s unique voice is part of their identity, which entails certain personality interests, especially those related to dignity. However, scholars disagree on whether the law should give a separate right to voice or just recognize the personality interests connected to it. The PRC Civil Code, promulgated in 2020, took a small step in protecting individuals’ voices by acknowledging personality interests in unique voices, rather than creating a separate right to voice. This distinction between standalone rights and recognized personality interests is significant under the Chinese civil law system, as rights typically receive more systematic and extensive protection than personality interests. By interpreting the Civil Code, this article concludes that it is feasible to construe the relevant provisions in a way that grants celebrities control rights over AI songs.

However, beyond this doctrinal and descriptive analysis, the article delves into the larger theoretical question of whether, and if so when, celebrities should be allowed to control these AI songs. Should we adopt an interpretation of the Civil Code that is clearly favourable to celebrities? I draw on several theories, including the incentive rationale, economic efficiency, labor theory, individual liberty and dignity interests, consumer welfare, and dilution theory, to answer the question. Most of these theories do not justify giving celebrities full control over AI songs created on the basis of their works. For example, while utilitarianism provides reasons for allowing individuals to control their own voices, doubts remain as to whether identity holders should receive all the benefits derived from their voices. Labor theory acknowledges the contribution of voice holders to AI songs but also emphasizes the contributions of other market participants, making absolute control questionable. Consumer protection is one potential justification for celebrities to control their voices in AI songs, as it could prevent consumer confusion over a song’s authenticity. However, confusion is not typically an issue in the AI songs context, and there are more direct ways to address any potential ambiguity over a song’s creator. Some scholars invoke dilution theory to justify control rights over AI songs, arguing that such rights prevent the weakening of the association between celebrities and their voices. Yet this article doubts whether AI songs actually cause such dilution in practice. Ultimately, the only plausible justification lies in dignitary interests, which may support a creator’s limited but not absolute control over AI songs.

None of these theories provides strong reasons to interpret the relevant provisions of the PRC Civil Code in a manner strongly favourable to artists, which is where the purely doctrinal analysis above would lead. In China, where the law emphasizes dignitary interests, policymakers might naturally want to expand personality rights. However, while artists’ dignitary interests are important, reference to other theories can help balance the many different interests involved. Policymakers are therefore advised to consider all these theories rather than focusing on just one or two.

Building on the doctrinal view and the discussion of relevant theories, the article then puts forward a short proposal for policymakers, intended as a starting point for debates on future legislation. First, a general right to control AI songs is recommended to protect artists’ dignity and liberty. Second, while decision-makers may be inclined to grant broad control rights as a reward for artists’ invested labor, they should also consider the contributions of other participants and design more balanced, qualified rights. Third, to prevent consumer confusion over the authenticity of songs, policymakers can adopt more direct measures, such as requiring platforms or content uploaders to clearly label AI-generated products, rather than establishing new control rights. Finally, any general right to control AI songs based on dignitary interests should also take the public interest into account, incorporating exceptions for selected situations. Building on these insights, the article further proposes specific reform suggestions for the PRC Civil Code.

The question of whether celebrities should have rights to control AI songs is just one of many challenges policymakers face regarding personality rights in the new technological age. This article warns against the trend of carelessly broadening the scope of personality rights in China whenever new technology raises problems. It recommends that decision-makers consider different theories and ideas when addressing new legal and technological issues, so as to arrive at a more balanced solution.

The paper “Is Chinese Law Well-Prepared for AI Songs? A Note of Caution on the Over-Expansion of Personality Rights” is published in the Cardozo Arts & Entertainment Law Journal Vol. 42(2), 2024 (SSRN draft available here). The author thanks Kaijing XU, a JD student at CityU School of Law, for the research assistance in preparing this post.
Yang Chen is an assistant professor at the City University of Hong Kong. He received an LL.B from China University of Political Science and Law, an LL.M from the London School of Economics, and another LL.M and an SJD from the University of Pennsylvania Carey Law School. Yang works primarily in intellectual property law, with a particular interest in trade secrets law and the right of publicity. He also researches trademark law and copyright law. His work has appeared in several journals, including the Columbia Journal of Law and the Arts, the University of Pittsburgh Law Review, and the University of Pennsylvania Journal of Business Law.


Making the Private Public: Regulating Content Moderation

22. December 2023
A paper by Baiyang Xiao
Capture of the video installation “Unerasable Characters II” by Winnie Soon: Drawing on the Weiboscope database, she designed software that visualizes Weibo posts that have been erased on a daily basis during the pandemic. Exhibit “Data Relations“, Australian Centre for Contemporary Art, Melbourne

Internet service providers (ISPs) globally are increasingly legally obliged to monitor and regulate content on their services. In general, such obligations may emanate from explicit legislative mandates, such as Article 17 of the EU’s Directive on Copyright in the Digital Single Market, or from the imposition of strict liability for user-generated content by judicial authorities, which effectively requires intermediaries to actively monitor and moderate illegal content in order to avoid liability. China has implemented a dual-track legal mechanism for content moderation that emphasizes the distinction between public and private law. Specifically, ISPs are exempted from monitoring obligations under private law, while public law explicitly imposes monitoring obligations on ISPs, requiring them to act as gatekeepers with a responsibility towards the public interest. This study explains what legal measures China has adopted to serve the needs of content control and compares this framework with the EU’s regulatory approach.

What is the current legal framework for content moderation?

On the one hand, Chinese jurisprudence has reached a consensus that the prohibition on general monitoring obligations applies in the private sphere, while leaving room for monitoring obligations in cases of a specific nature. In its authoritative interpretation of Article 1197 of the Civil Code, the Legislative Affairs Commission referred to international practice and clarified that ‘ISPs that provide technical services are not subject to general monitoring obligations,’ but did not preclude the possibility of monitoring obligations of a specific nature. Moreover, the Supreme People’s Court (SPC) has clarified that a court shall not find an ISP at fault merely because it failed to proactively monitor a user’s infringement. In another Guiding Opinion, the SPC explicitly stated that ‘[courts shall] not impose a general obligation of prior review and a relatively high degree of duty of care upon the ISPs […].’

On the other hand, under public law, ISPs are required to review, monitor, and inspect information whose dissemination is prohibited by laws and administrative regulations. When they ‘discover’ illegal content disseminated on their services, they must fulfil their proactive monitoring obligations by taking measures to prevent its transmission. In addition to technical filtering mechanisms, platforms must employ trained personnel to conduct human review of uploaded content. Otherwise, they face penalties for failing to perform their monitoring obligations. Unsurprisingly, the scope of monitoring is comprehensive: ISPs are required to monitor almost all online content in accordance with various laws, administrative regulations, and even ‘relevant state provisions.’

How did online platforms implement legal rules in practice?

Law enforcement agencies make full use of platforms’ advantages in discovering, identifying, and handling illegal content, entrusting ISPs to engage proactively in collateral censorship through private ordering. Platforms’ house rules thus act as a critical supplement to state legislation by restricting otherwise legal content or activities. In practice, these house rules classify all illegal, harmful, and undesirable content as prohibited content, ignoring the distinction between prohibited and undesirable content drawn in the relevant administrative regulations. Major Chinese platforms have in fact adopted a crafty approach, introducing even vaguer and more abstract concepts to explain the ambiguous language of the legislation, further undermining the predictability of their house rules. Although commentators voice concerns about the legal uncertainty arising from ambiguous rules, platforms frame them as ‘flexible’. With their expansive monitoring and erratic, opaque decision-making, mega-platforms exercise much stronger control over the flow of information, regardless of the serious consequences for users’ fundamental rights.

On the one hand, the broad T&Cs and Community Guidelines leave platforms vast space to moderate content through alternative mechanisms that are often neither transparent nor subject to external oversight. Within this frame, platforms adopt diverse moderation measures, both preventive (ex ante) and reactive (ex post). Reactive measures, such as region- and service-specific restrictions, are employed to control the availability, visibility, and accessibility of certain content, or to restrict users’ ability to provide information, whether independently or in response to government mandates. Preventive moderation, which makes publication contingent on the prior consent of a designated public authority, usually takes the form of automated filtering of content before it is published.

On the other hand, platforms extend the scope of content moderation through the substantial quasi-legislative power their house rules confer. By introducing ever more uncertain concepts to elaborate on vague terms in public law, they further diminish the predictability and transparency of those rules. Under this paternalistic regime, politically heterodox speech, as well as lawful speech that violates widely held social norms and moral beliefs or platforms’ infrastructural values, is removed or blocked in practice.

Lacking systematic and institutional constraints, these constantly expanding content moderation practices are quasi-legislative (T&Cs and Community Guidelines), quasi-executive (content moderation measures), and quasi-judicial (determinations of what is illegal and harmful). Evidently, under the top-down collateral censorship mechanism, platforms adopt ever stricter moderation measures and further extend the scope of monitoring to eliminate potential uncertainties and risks. Such practices further empower platforms, giving them greater control over both the moderation technologies used and the norms governing acceptable online content.

How did Chinese courts interpret content moderation in judicial practice?

Public law monitoring obligations encompass not only content that violates public law norms, but also content that violates private law norms. In judicial practice, the public law monitoring obligation is often interpreted as a duty of care.1 Courts thus find that an ISP has failed to fulfil its duty of care where it fails to perform its public law monitoring obligations against illegal online content. The logic behind this reasoning is that, by virtue of their public law monitoring obligation, ISPs are presumed to have a corresponding monitoring obligation under private law. More importantly, courts have implied that platforms should bear civil liability if they fail to perform their public law monitoring obligations.

In addition, fulfilling public law monitoring obligations may expose platforms to civil liability due to their actual knowledge of the existence of infringing content. In other cases, courts have ruled that platforms risk losing their safe harbor protection if they take proactive measures to address illegal and harmful content.2 In certain exceptional circumstances, the level of duty of care required of ISPs may be significantly elevated. For example, an ISP providing information storage space services is deemed to have constructive knowledge of a user’s infringement of the right of communication to the public on information networks if, on its own initiative, it substantially accesses the disputed content of popular movies and TV series or establishes a dedicated ranking for them. The reasoning in such decisions implies that, since ISPs must fulfil their public law monitoring obligations, they should also be aware of potential copyright infringement in the content they monitor.

Platforms therefore face a dilemma. If they fail to fulfil the monitoring obligation set by public law, they are deemed to have contributed to the infringement and must assume administrative liability. At the same time, fulfilling that obligation requires ex ante monitoring of uploaded content, which means they acquire constructive knowledge of any infringing content and may thus bear a higher duty of care. Where infringing content appears on a platform, the platform is likely to be deemed to have knowledge of it and be held liable. In particular, law enforcement agencies are prone to ‘results-oriented’ reasoning, presuming that ISPs have failed to fulfil their monitoring obligations whenever infringing content surfaces.

Overall, the regulation of content moderation serves as a ‘policy lever’ that public authorities use to gain control over the big tech powerhouses. At the same time, platforms are vested with a potent power that has substantially suppressed not only illicit but also ‘lawful but awful’ online content. However, this has accelerated the fragmentation of online law enforcement and fuelled reliance on algorithmic recommendation and filtering systems. In the long run, excessively vague rules, inconsistent enforcement, and over-reliance on algorithms will render the expansive collateral censorship of online content an inevitable failure, since it burdens ISPs with significant compliance costs and impairs freedom of expression, access to information, and media pluralism at large.

The paper ‘Making the private public: Regulating content moderation under Chinese law’ was published in the Computer Law & Security Review. Baiyang Xiao is a PhD candidate at the University of Szeged, Institute of Comparative Law and Legal Theory, and a scholarship holder at the Max Planck Institute for Innovation and Competition. His main research interests are copyright law, intermediary liability, and AI governance in comparative perspective.

  1. E.g. (2004)苏中民三初字第098号民事判决书; (2008)穗中法民三终字第119号民事判决书
  2. E.g. (2021)京73民终220号民事判决书; (2019)京0491民初16240号民事判决书
