BEIJING — China’s cyber regulator has released draft regulations aimed at tightening oversight of artificial intelligence services that simulate human personalities and engage in emotional interactions with users. Issued on Saturday by the Cyberspace Administration of China, the rules target AI products designed to mirror human thinking patterns, communication styles, and personality traits across text, audio, and video formats.
The proposed framework requires service providers to take full responsibility for safety throughout a product’s lifecycle, including establishing robust systems for algorithm review, data security, and the protection of personal information. Crucially, the draft mandates that providers monitor user states to assess emotional dependence: if a user shows signs of addiction or extreme emotional distress, the provider must take necessary measures to intervene.
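The draft does not prescribe how such monitoring should work. The sketch below is one hypothetical way a provider might flag possible emotional dependence from usage data; the UsageSnapshot fields, thresholds, and intervention wording are assumptions for illustration, not anything specified in the regulations.

```python
from dataclasses import dataclass

# Purely illustrative thresholds -- the draft rules do not specify numeric limits.
MAX_DAILY_MINUTES = 240
MAX_SESSIONS_PER_DAY = 20


@dataclass
class UsageSnapshot:
    """Aggregated usage signals a provider might already collect per user."""
    minutes_today: int
    sessions_today: int
    late_night_sessions_this_week: int
    consecutive_days_active: int


def dependence_signals(usage: UsageSnapshot) -> list[str]:
    """Return human-readable flags suggesting possible emotional dependence."""
    flags = []
    if usage.minutes_today > MAX_DAILY_MINUTES:
        flags.append("excessive daily usage")
    if usage.sessions_today > MAX_SESSIONS_PER_DAY:
        flags.append("compulsive session frequency")
    if usage.late_night_sessions_this_week >= 3:
        flags.append("repeated late-night use")
    if usage.consecutive_days_active >= 30 and usage.minutes_today > 120:
        flags.append("long unbroken streak of heavy use")
    return flags


def maybe_intervene(usage: UsageSnapshot) -> str | None:
    """When two or more flags accumulate, return an intervention prompt to show the user."""
    flags = dependence_signals(usage)
    if len(flags) >= 2:
        return (
            "We've noticed heavy use recently (" + ", ".join(flags) + "). "
            "Consider taking a break; support from people around you matters too."
        )
    return None


if __name__ == "__main__":
    snapshot = UsageSnapshot(
        minutes_today=310,
        sessions_today=27,
        late_night_sessions_this_week=4,
        consecutive_days_active=45,
    )
    print(maybe_intervene(snapshot) or "No intervention needed.")
```

In practice the regulation leaves the choice of signals and thresholds to providers, subject to regulator review.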
These measures also reinforce Beijing’s strict content standards. AI services are prohibited from generating material that endangers national security, spreads rumors, or promotes violence and obscenity. By implementing these requirements, Beijing seeks to mitigate the psychological risks associated with human-like AI while shaping the ethical rollout of consumer-facing technology.
Analysis: The Rise of the “Synthetic Companion” and Regulatory Pushback
The rapid proliferation of “companion AI”—software designed to provide emotional support or simulate romantic partners—has created a new frontier for digital ethics. While these tools can ease loneliness, they also carry significant psychological risks, including digital addiction and the erosion of real-world social skills. Apps such as Replika and Character.ai, for example, have seen millions of users form deep, sometimes obsessive, bonds with chatbots.
China’s draft rules represent one of the world’s first proactive attempts to codify “emotional safety” in AI. By requiring companies to detect “extreme emotions,” the government is essentially demanding that algorithms be programmed with a “safety valve” to prevent psychological harm. This moves beyond traditional data privacy into the realm of mental health regulation, forcing developers to balance user engagement with the responsibility of preventing “algorithmic dependency.”
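The “safety valve” could take many technical forms. The sketch below assumes a hypothetical keyword screen (detect_extreme_distress) standing in for a trained affect classifier, wrapped by a safety_valve function that replaces a companion bot’s reply with a wellbeing referral when distress is detected; none of these names, phrases, or behaviors come from the draft rules.

```python
import re

# Illustrative keyword screen only -- a real deployment would use a trained
# affect classifier, human escalation paths, and locally appropriate resources.
DISTRESS_PATTERNS = [
    r"\bcan'?t go on\b",
    r"\bhopeless\b",
    r"\bhurt myself\b",
    r"\bno reason to live\b",
]

WELLBEING_MESSAGE = (
    "It sounds like you're going through something very difficult. "
    "You deserve support from a real person -- please consider reaching out "
    "to someone you trust or a local support hotline."
)


def detect_extreme_distress(message: str) -> bool:
    """Crude stand-in for an emotion classifier: flag obvious distress phrases."""
    return any(re.search(p, message, re.IGNORECASE) for p in DISTRESS_PATTERNS)


def safety_valve(user_message: str, model_reply: str) -> str:
    """Swap the companion bot's reply for a wellbeing referral when distress is detected."""
    if detect_extreme_distress(user_message):
        return WELLBEING_MESSAGE
    return model_reply


if __name__ == "__main__":
    print(safety_valve("I feel hopeless and can't go on",
                       "Tell me more about your day!"))
```

The design question the rules force onto developers is where to place this kind of check: inside the model, as a wrapper around it, or in a separate monitoring service that can escalate to human reviewers.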