China’s ambitions to lead the development of AI technology have received considerable attention from Western media, policymakers, and tech leaders alike. Tech leaders continue to debate whether China’s aspirations for global leadership in technical AI research are feasible, but much less attention has been paid to China’s bid to lead the ethical conversations that surround AI technologies. Chinese researchers have proven that they can produce world-class breakthroughs in theoretical AI, and Chinese companies have proven that they can produce world-class innovations in the commercial sector. Can China prove that it can lead the world in ethical AI?

Missing from the table

As we discussed at a panel on AI ethics in China during TechNode’s Emerge conference last week, the ethical challenges posed by AI technologies are substantial. Ultimately, creative leadership on AI ethics may be at least as impactful as technical work on algorithms. Without careful attention, AI systems may fail to correct for bias in their input datasets, leading to inequitable treatment and possibly magnifying existing social inequalities. Current AI systems can be brittle, performing well in the great majority of cases, only to dramatically fail when faced with an unexpected situation. Even when the behaviors of a trained AI system are understood, its internal decision-making procedures often remain uninterpretable and opaque even to the researchers that designed the system. As companies and entrepreneurs develop new tools for automation, technological unemployment may threaten to send shockwaves throughout the global economy.

Addressing any one of these challenges will be a herculean feat, and will require expert input from a diverse range of technologists, economists, ethicists, and policymakers. If current forecasts for the impact of AI are accurate, creating ethical AI systems can be viewed as a grand challenge for the 21st century.

Early efforts at AI governance have met with a common challenge: most policymakers are ill-prepared to regulate technology that they don’t fully understand. However, actors in the private, academic, and non-profit sectors have stepped up to the plate, creating new frameworks to aggregate the diverse expertise necessary to craft sensible policy on AI.

In 2016, a group of AI researchers from some of the world’s largest technology companies founded the Partnership on AI, a multi-stakeholder consortium intended to promote dialogue among industry-leading AI researchers on the safe and ethical deployment of AI systems. In 2017, the Future of Life Institute released a set of ethical principles for AI, the culmination of an intensive conference of multi-disciplinary experts on AI and its potential impact. More recently, in April 2019, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems released the first edition of its Ethically Aligned Design standards for AI systems.

However promising, the reach of these efforts is limited by a lack of Chinese engagement. Despite the importance of Chinese companies in the development of AI, the Partnership on AI was founded without any Chinese members, gaining its first—Baidu—only in 2018. Chinese researchers were heavily underrepresented at the 2017 conference that produced the Beneficial AI principles, and in the IEEE working groups that designed the Ethically Aligned Design standards. Action is needed if China is to make good on its aspirations to lead the world in the ethics of AI—and engaging China in the formation of AI ethics principles is key to arriving at a framework that China will acknowledge and respect.

Getting companies involved

So, can China achieve ethical AI? Yes, it can. And it can benefit in the process—but only if the private-sector companies that are spearheading the deployment of AI technologies are allowed to take the lead. Three key features of the AI ethics landscape inform this corporation-centered strategy:

First, governments, including the Chinese government, don’t have the technical expertise of private sector companies. It is difficult to address the ethical implications of an AI technology without a solid understanding of the inner workings of the system. As AI applications become more common, and better understood, governments will build more capacity to regulate these systems, but at least in the interim private actors are an invaluable source of the technical know-how necessary to address the challenges of AI ethics. In some cases, novel technical approaches may create solutions to otherwise intractable ethical problems: for example, the application of homomorphic encryption techniques to the collection of user data, which can allow companies to perform AI-assisted analysis on user activity without collecting any non-encrypted information about the users themselves.
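To make the homomorphic-encryption point concrete, here is a toy sketch of a Paillier-style additively homomorphic scheme in Python. Everything here is illustrative: the tiny primes, message values, and function names are my own choices, not any company’s actual system, and real deployments would use a vetted cryptographic library with 2048-bit keys. The point is simply that a server can combine encrypted values—say, aggregating user statistics—without ever seeing the underlying data.

```python
import math
import random

def keygen(p=2003, q=2819):
    """Generate a Paillier keypair. Demo primes only -- far too small for real use."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1
    return (n, n + 1), (lam, mu, n)  # (public key, private key)

def encrypt(pub, m):
    """Encrypt plaintext m (0 <= m < n) with fresh randomness r."""
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    """Recover the plaintext using the private key."""
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)

# The additive homomorphism: multiplying ciphertexts adds the plaintexts.
# A server holding only c1 and c2 can compute the encrypted sum without
# ever learning 17, 25, or 42.
total = (c1 * c2) % (pub[0] ** 2)
assert decrypt(priv, total) == 17 + 25
```

This only supports addition on encrypted values; fully homomorphic schemes, which support arbitrary computation, remain far more expensive, which is why practical privacy-preserving analytics typically combine partial homomorphism with other techniques.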

Second, encouraging companies to self-police on AI ethics will avoid loopholes and other implementation challenges in the future. As the primary implementers and leading beneficiaries of commercialized AI technologies, private-sector companies will shape every aspect of AI’s social impact through their behavior. Rather than relying on governments to respond to problematic applications of AI as they occur, placing the onus of responsibility on companies can create more robust, proactive norms, encouraging companies to consider the implications of their AI products before they are released into the world.

This approach also sidesteps the familiar corporate strategy of shirking responsibility by begging for regulation. No company wants to be regulated. Calls for regulation are a way to avoid blame and, in some cases, to encourage a regulatory environment that is favorable to the company’s interests. Government regulations on AI ethics, if imperfectly crafted, may allow loopholes for companies to avoid legal responsibility for harms caused by their systems, or may stifle competition by smaller actors that are less capable of bearing the costs of compliance.

Finally, in the current global context, China has a special incentive to rely on companies to lead its AI ethics agenda—the persistent global concern about blurred boundaries between Chinese companies and the party-state. A key point of contention in the US-led campaign to block Huawei’s access to international 5G network infrastructure is the allegation that the company cannot refuse requests by Chinese authorities to access or modify traffic on their networks.

Given the advantages to company-led AI governance outlined above, AI ethics is a chance for Chinese companies to prove that they can set their own agenda on how their technologies are used, and a chance for the party-state to prove that it can stay out of the way. If China wants to be a leader on the global stage, in AI and beyond, it needs to assure others that its private sector can be truly private. AI ethics is an excellent opportunity for China to prove this to the world, and to truly realize its vision of leading the world in AI—as a technology and as a force for social change.

All opinions are my own and not the views of my employer.

Chris Byrd is a GovAI research fellow at the Future of Humanity Institute, and a graduate researcher at Tsinghua University and Johns Hopkins SAIS.
