INSIGHTS | Beijing AI Principles: A step in the right direction, but still not enough


A version of this first appeared in our members-only newsletter on June 8, 2019. Freely available on our site now, it soon won’t be. Become a member and read it first.

On May 28, the Beijing Academy of Artificial Intelligence (BAAI) released the “Beijing AI Principles,” an outline to guide the research and development, implementation, and governance of AI. Endorsed by Peking University; Tsinghua University; the Chinese Academy of Sciences’ Institute of Automation and Institute of Computing Technology; and companies such as Baidu, Alibaba, and Tencent, the principles are the latest global entry in a long list of statements about what AI is and should be. On the surface, the document is a bland and unsurprising take on AI ethics, but it actually pushes forward the global discussion on what AI should look like.

Bottom line: Say what you will about the current tension between the US and China, but the fact remains that the Western-built world order is slowly eroding and China is steadily filling in the gaps. While the principles were not written or endorsed by the State Council (the country’s chief administrative authority), BAAI is backed by the Ministry of Science and Technology, which very likely approved the statement for publication.

Because it’s not a central government document, it does not carry the full weight of the leadership, but it is a step towards an official stance on ethical AI. It is the first public sector declaration of AI principles to come out of China, bringing Beijing into a conversation dominated by Western voices.

However, like other principles published globally before it, the document doesn’t do enough to address real-world development and use cases. Instead of fluffy, feel-good utterances we can all agree on, the global AI community needs to go beyond words and give us concrete examples of how AI can represent our highest values.

A brief timeline

For a full list of statements and position papers on AI principles, check out Linking Artificial Intelligence Principles, a project started by one of the key authors of the Beijing AI Principles, Yi Zeng.

Chinese voices: As with AI research, there has been a substantial asymmetry in statements about AI ethics: many Chinese researchers and practitioners speak English and know the English-language statements, but few outside China are aware of Tencent’s and Baidu’s positions. Neither company has published its principles, and most statements from their representatives have been made in Chinese for domestic consumption. By publishing in English, the Beijing AI Principles invite the rest of the world into an open discussion with researchers, academics, and entrepreneurs in China about how AI should be developed and implemented.

It’s different this time: While still lacking concrete steps for implementation, the Beijing Principles have more substance than the recent OECD document. Consisting of five principles (all single sentences) and five suggestions to governments (all single sentences), the OECD principles on AI are bland and mirror almost every other government document on the topic. The Beijing AI Principles, on the other hand, outline specific domains (research, use, and governance) and even acknowledge the risks of Artificial General Intelligence and Superintelligence.

The most comprehensive public discussion of AI principles out of China actually comes from Tencent, a company which has been criticized in China for lacking a comprehensive AI strategy. At the 2018 Peking-Stanford University Internet Law and Public Policy conference, Jason Si gave a speech outlining, in quite specific terms, how Tencent approaches AI development. Even he, however, doesn’t go into enough detail for any reasonable skeptic to believe in the company’s principles without seeing them in action.

The dark side: The Beijing AI Principles were likely written in good faith, but it is hard to take them seriously given what we know about how China currently uses AI. Domestic policy gives little credence to Western notions of either limiting the power of the state over the individual or avoiding systems of variable rights by race.

As has been widely covered by Western media outlets, the Chinese government has been paying special attention to people of certain ethnicities living in certain regions. Afraid of losing control, the government has created mass surveillance systems in a bid to ensure stability in these areas.

In August 2018, researchers funded by the National Natural Science Foundation of China and the China Education & Research Network Innovation Project published a paper in English detailing AI models that could identify ethnicity. China’s facial recognition companies have had a hard time with non-Han Chinese faces due to a lack of data, but this goes far beyond wanting to verify someone’s identity. Indeed, the application of AI here isn’t about who someone is, but which group they belong to, for the purposes of profiling. And it isn’t only in this context: China’s smart city push also shows AI’s power to monitor individual citizens.

The government’s Skynet and Sharp Eyes programs make it clear that policy prioritizes watching, tracking, and monitoring people inside China’s borders. Anytime you hear “smart city” in China, you can guarantee that video surveillance and facial recognition are key components.

Almost every single tech giant is involved too.

Alibaba has long been developing its own smart city solutions in Hangzhou and, in May 2019, brought that system to Shanghai, including 1,100 biometric facial recognition cameras. Huawei has been developing smart city hardware for many years, including smart cameras used for image and facial recognition. Baidu is helping to build Xiong’an, China’s high-tech powered “second capital.” Ping An, an insurance company that has pivoted into tech, is working with the Hong Kong government to help build the city’s e-ID system.

End of the fluffy cycle

We’ve got enough [AI principles]. We need to start getting into the details. We need to get real about the conversations we should be having.

—Chris Byrd, research fellow at the Future of Humanity Institute at Oxford University

The bad news is that the Beijing AI Principles are yet another list of feel-good sentiments about problems in AI development. The good news is that almost everyone and their mother has published one by now: the territory is marked, and it’s time for some real exploration. What all AI principles to date lack are concrete, technical details about how to create artificial intelligence systems that actually adhere to our stated values. Not only is this possible (contrary to popular belief, non-engineers can understand how technology works if they take the time to ask), but it is becoming increasingly necessary as AI is integrated into the fabric of our societies.

The Beijing AI Principles were written by a team with a good track record, and there’s every reason to believe they mean what they say. AI ethics is another area where China could set the global agenda if it wanted to. There is a real gap in the discussion that China could try to bridge, not just because it wants the power, but also because the country’s leadership recognizes that there are real problems to be solved. Say what you will about China’s domestic policies, but with AI, as with carbon, if you don’t have Chinese buy-in, you don’t really have an effective norm. The world should be eager to have China contributing to the conversation.
