By Micah Musser, Georgetown University
In March, new regulations took effect in China requiring companies that deploy recommendation algorithms to file details about those algorithms with the Cyberspace Administration of China (CAC). In August, the CAC published summaries of 30 recommendation algorithms used by some of China’s largest tech companies. The release sparked a round of flawed commentary on China’s unprecedented attempt to regulate some types of artificial intelligence (AI), commentary that has largely framed the goals of the regulation in maximalist terms without acknowledging its other possible functions.
In particular, much of this commentary mischaracterizes the actual impact of the regulation. Media outlets from Bloomberg News to the BBC and CNBC have all speculated that, despite the superficial nature of the recently publicized information, the regulation may have compelled companies to share sensitive proprietary information about their algorithms with the government, such as their source code, “business secrets,” or “inner workings.” This coverage reflects a widespread assumption that the core function of the new regulations is to provide a government pretext for vacuuming up detailed technical information from tech companies, including ByteDance, which owns TikTok.
This assumption seems unwarranted. One review of the portals through which companies are required to file their algorithms suggests that many key pieces of requested information are optional, answered with multiple-choice questions, or described in 500 characters or less. This is hardly enough information for the government to access the “secret sauce” behind these recommendation algorithms. While the Chinese government may demand access to companies’ closely guarded algorithms in the future—and would have the authority to do so under its 2015 National Security Law—the implementation of the current AI regulation does not appear to push toward that goal.