European Chinese Law Research Hub

When Algorithms Meet Justice: A Deep Dive into AI-assisted Criminal Proceedings in China

By Wanqiang Wu and Xifen Lin
This is contribution #4 in our series SMART COURTS AND SMART GOVERNANCE IN CHINA, an outcome of our workshop held in July 2025 at Cologne University.

Picture a prosecutor in Shanghai opening their computer to review a theft case. Instead of manually searching through thousands of precedents, they turn to an AI system that instantly analyzes the evidence and recommends similar cases from a database of millions. Within minutes, the system suggests whether to detain the suspect and even predicts the likely sentence. This is not science fiction anymore; it is the reality of China’s “206” system, arguably one of the most ambitious experiments in AI-assisted criminal justice worldwide.

The Rise of Smart Justice in China

Since 2016, China has rapidly embraced what it calls “Smart Justice” (智慧司法), integrating artificial intelligence throughout its legal system. The Shanghai “206” system is a frontline pioneer of this transformation. Unlike AI applications in “Western” courts that focus on specific tasks like risk assessment, China’s system attempts something far more comprehensive: it assists judges and prosecutors across the entire criminal process, from pre-trial detention decisions to sentencing recommendations.

What makes this particularly fascinating is not just the technology itself, but the institutional context. China’s judicial system, operating without the constraints of judicial review traditions found in many “Western” courts, has adopted AI technologies with remarkable speed and minimal resistance. The AI-assisted system now processes hundreds of thousands of cases annually, and the “206” system is expected to be used in all criminal cases in Shanghai.

How Does It Actually Work?

The “206” system’s core feature is its Similar Case Recommendation function, which operates like a sophisticated legal search engine on steroids. When a prosecutor inputs case details, the system uses deep learning algorithms trained on past verdicts to identify patterns and recommend outcomes. It considers over 50 variables, from the suspect’s employment history to whether victims have forgiven the accused, to generate sentence recommendations.
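The underlying retrieval logic can be pictured, in highly simplified form, as a nearest-neighbor search over case feature vectors. The sketch below is purely illustrative and is not the actual “206” system, which the article describes as using deep learning over 50+ variables; the feature choices, case data, and function names here are invented for the example.

```python
import math

def euclidean(a, b):
    """Distance between two case feature vectors (smaller = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend_similar(new_case, past_cases, top_k=3):
    """Return the top_k past cases whose features are closest to the new case."""
    return sorted(
        past_cases,
        key=lambda c: euclidean(new_case["features"], c["features"]),
    )[:top_k]

# Toy precedent database. Features (all hypothetical):
# [amount stolen in 1000s CNY, prior convictions, victim forgave accused (0/1)]
past = [
    {"id": "case_A", "features": [5, 0, 1], "sentence_months": 6},
    {"id": "case_B", "features": [50, 2, 0], "sentence_months": 36},
    {"id": "case_C", "features": [6, 1, 1], "sentence_months": 9},
]

new = {"features": [7, 0, 1]}
for c in recommend_similar(new, past, top_k=2):
    print(c["id"], c["sentence_months"])
```

A real system would scale and weight the features (raw Euclidean distance lets large-valued variables like monetary amounts dominate) and learn the similarity metric from past verdicts rather than hard-coding it, but the basic pattern of ranking precedents by feature similarity is the same.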

The system does not just help with individual decisions. For example, in plea leniency cases (which now account for 87% of criminal cases in China), prosecutors may display the AI’s predictions to negotiate with defendants. “Look,” they might say, “based on our AI system, you’re likely facing 3-5 years of imprisonment. If you plead guilty now, we can recommend the lower end according to the law.” The system has changed the dynamics of criminal justice negotiations.

The Good, The Bad, and The Algorithmic

Our research, building on exclusive data from Shanghai’s procuratorate, reveals a complex picture. On the positive side, the data suggest genuine improvements: cases processed with AI assistance took 23% less time to complete, and sentencing recommendations made with AI support were accepted by judges 75.8% of the time, compared with 65.6% for recommendations made without AI support. The system appears to reduce arbitrary detention and increase consistency in sentencing.

However, three critical concerns emerged from our fieldwork. First, the anchoring effect: when prosecutors see the AI’s recommendation first, it becomes very difficult for them to deviate from it, even when the specifics of the case warrant a different sentence. Once the recommended number appears on the screen, prosecutors tend to ascribe an authority to it that is not always warranted.

Second, accountability avoidance: the system’s complexity creates a perfect excuse for passing the buck should errors occur in sentencing. If something goes wrong, was it the algorithm’s fault? The fault of the system developer? The fault of the prosecutor or judge who relied on the system? This diffusion of responsibility poses serious challenges to China’s judicial accountability reforms.

Third, and perhaps most troubling, is the compression of the right to a proper defense. Defense lawyers have no access to the system, cannot challenge its algorithms, and often do not even know that it is being used. While prosecutors wield sophisticated AI tools funded by public money, defendants and their lawyers are left in the dark. This amplifies the already existing problem of an unequal playing field in criminal justice.

Lessons for the Global Legal Community

What can the rest of the world learn from China’s bold experiment? First, procedural design matters more than technological sophistication. Our research suggests that many of the risks associated with AI in criminal justice are not inherent to the technology but arise from how it is implemented. Simple procedural safeguards, like requiring judges to form initial opinions before consulting AI, or ensuring that the defense has access to algorithmic tools, could mitigate many concerns.

Second, transparency isn’t optional. The closed nature of China’s system, where algorithms operate as black boxes hidden behind the trade-secret claims of technology providers, undermines procedural justice. Any jurisdiction considering adopting AI for prosecution or adjudication must grapple with balancing technological innovation against fundamental legal principles like the right to a fair defense.

Finally, China’s experience confirms a widely observed paradox: AI systems designed to reduce human bias and increase consistency may actually entrench existing patterns of injustice if they are trained on historical data reflecting those very biases. The algorithm does not innovate; it replicates and amplifies patterns that have previously manifested.

Looking Ahead

As courts worldwide grapple with backlogs and inconsistencies, China’s aggressive adoption of AI offers both inspiration and cautionary tales. The technology clearly has potential to improve efficiency and consistency in criminal justice. But our research suggests that without careful attention to procedural safeguards, transparency, and equal access, AI applications risk creating a two-tiered system of justice where algorithmic efficiency trumps fundamental fairness. The question is not whether AI will transform criminal justice, but whether we can design systems that harness AI’s benefits while preserving the procedural protections that define justice itself.

The full article titled Access to technology, access to justice: China’s artificial intelligence application in criminal proceedings can be accessed here.

Wanqiang (Aiden) Wu is a Yat-sen Postdoctoral Fellow at Sun Yat-sen University Law School. He received his Ph.D. in Criminal Procedure Law (Cum Laude) from Shanghai Jiao Tong University in 2025. His research focuses on criminal procedure, empirical legal studies, and the intersection of technology and criminal justice in China. He has published extensively on China’s procuratorial system and judicial reforms in leading journals including Modern China, Hong Kong Law Journal, and International Journal of Law, Crime and Justice. Contact him via email.

Xifen Lin is Professor of Law and Vice Dean at KoGuan School of Law, Shanghai Jiao Tong University. Professor Lin is a leading scholar in Chinese criminal procedure and judicial reform, with particular expertise in prosecutorial systems and empirical legal studies. Contact him via email.

