Cross-sector collaboration is key to achieving ethical AI: Chris Byrd


Private and public sector actors should cooperate internationally to develop a framework for the ethical implementation of artificial intelligence (AI). “If we ignore those options for constructive dialogue and cooperation because there are other things where it is harder to make progress, then we are doing ourselves a disservice, collectively,” said Chris Byrd, research fellow at the Future of Humanity Institute at Oxford University, at last week’s Emerge by TechNode conference in Shanghai.

Despite the distinct problems China may face compared with the rest of the world, there is a lot of overlap. Byrd pointed to the example of algorithm bias: China has a more ethnically homogeneous population, so bias is stronger in the initial data sets. This doesn’t mean that nothing can be done, merely that more legwork is required to find data points representing ethnic minorities, much as US companies must do.

These common points present an opportunity to learn from one another. However, Chinese AI companies and relevant institutions have not been as involved in the global conversation, in part because the West hasn’t made serious attempts to include them, Byrd said in an interview after the AI panel. This is slowly changing; Baidu, for example, was the first Chinese company to join the Partnership on AI, a global industry consortium seeking to establish best practices in the field.

China has some advantages in implementing policy because it has a more unified system, according to Byrd. At the same time, each of the problematic implications of AI must be treated as its own topic; algorithm bias, job loss, and safety require different kinds of solutions and thinking.

“Governments are slightly out of their depth when it comes to emerging technologies,” Byrd said. Those with the technical skills to understand how the technology will be used don’t know how to solve governance problems, and vice versa. To construct laws and regulations that foster AI without unforeseen negative effects, the two sides must work together.