China has released version 2.0 of its Artificial Intelligence (AI) Safety Governance Framework, a significant update aimed at bolstering the nation’s approach to managing the rapidly evolving field of AI. This new iteration, announced on Monday, September 15, 2025, builds upon the original framework launched in September 2024 and seeks to strengthen AI risk assessment, control, and safeguards to ensure the sector’s steady and responsible growth.
The updated framework, developed under the guidance of the Cyberspace Administration of China (CAC) and led by the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC), incorporates recent AI advancements and addresses emerging risks. Key enhancements include refining risk categories, optimizing risk grading strategies, and dynamically updating prevention and governance measures. Framework 2.0 provides more detailed guidance on classifying AI safety risks, developing technological countermeasures, and implementing comprehensive governance strategies.
Addressing Emerging Risks and Enhancing Governance
Version 2.0 specifically addresses secondary risks associated with AI applications and introduces four new governance measures. These measures are designed to foster collaboration among developers, providers, users, regulators, and civil society, thereby improving the overall regulatory landscape. The framework also establishes clearer principles for assessing AI risks and developing trustworthy AI, emphasizing a people-centered approach and the principle of developing AI for good. It classifies AI safety risks into two overarching categories: inherent risks from the technology itself and risks posed by its application, with a focus on proactively identifying and mitigating these dangers throughout the AI lifecycle.
Alignment with Global Trends and International Cooperation
China’s AI Safety Governance Framework 2.0 is designed to align with global AI development trends. The framework encourages international cooperation, promotes ethical standards, and advocates for the equitable distribution of AI benefits worldwide. This initiative is part of China’s broader strategy to foster a safe, trustworthy, and controllable AI ecosystem and to build a collaborative governance system that spans borders, sectors, and industries.
Premier Li Qiang has previously emphasized the need for global coordination in AI governance, proposing the establishment of an international AI cooperation organization to facilitate dialogue and share technological advancements, particularly with developing nations. This reflects China’s ambition to play a leading role in shaping the global AI governance landscape, balancing technological innovation with robust safety and ethical considerations.
Background and Broader AI Strategy
The initial AI Safety Governance Framework was released in September 2024, stemming from China’s Global AI Governance Initiative launched in October 2023. The country has been actively developing its AI governance, aiming to be a global leader in AI by 2030. Recent policy developments, such as the “AI Plus” action plan and measures for labeling AI-generated content, further underscore China’s comprehensive approach to integrating and regulating artificial intelligence across various facets of society and the economy. This evolving regulatory environment highlights China’s commitment to navigating the opportunities and challenges presented by rapidly advancing AI technology.