Chinese Regulators Reviewing AI Models for Socialist Values
Chinese regulators are conducting a comprehensive review of AI companies and their large language models to ensure they align with "core socialist values," as reported by the Financial Times. The Cyberspace Administration of China (CAC), the government's primary internet regulator, is overseeing this review, which includes both tech giants like ByteDance and Alibaba, as well as smaller startups.
During the review process, CAC officials will test the AI models' responses to a range of questions, particularly those touching on politically sensitive topics and President Xi Jinping. They will also evaluate the models' training data and safety measures. An anonymous source at a Hangzhou-based AI company said their model failed the initial review, requiring months of adjustments and guesswork before it gained approval.
The CAC's efforts reflect Beijing's balancing act: catching up with the United States in advanced AI while closely monitoring the technology's development to ensure compliance with strict internet censorship policies. To this end, China was among the first countries to establish regulations governing generative artificial intelligence. Compliance requires "security filtering," which has proved challenging because Chinese large language models (LLMs) are predominantly trained on vast amounts of English-language content.
According to the Financial Times report, problematic information is eliminated from AI model training data, resulting in the creation of a database containing sensitive words and phrases. Consequently, many of China's most popular chatbots have been known to avoid answering questions on sensitive subjects such as the 1989 Tiananmen Square protests. Nevertheless, during CAC testing, there are limits on the number of questions that LLMs can outright decline. Therefore, it is crucial for these models to be capable of generating "politically correct answers" to sensitive inquiries.
To mitigate the risk of generating potentially harmful content, AI experts working on chatbots in China explained that an additional layer is integrated into the system. This layer replaces problematic responses in real-time, offering a safeguard against any inadvertent generation of objectionable material.
In short, the Chinese government's review process aims to align AI models with socialist values while enforcing adherence to censorship policies and societal sensitivities. The rapid pace of AI development poses an ongoing challenge for regulators as they try to balance innovation against those established norms.