Google has been recognized as a leader driving the global AI revolution, but the tools it offers to developers might not be as safe as many thought. A security team at China’s social and gaming giant Tencent recently claimed (in Chinese) that it had found a “significant security loophole” in Google’s machine-learning platform TensorFlow, and that developers writing code with the platform are vulnerable to malicious attacks.
“Simply put, if the design professionals happen to be using the vulnerable component when coding a robot, it’s likely that the hacker can control the robot through that loophole. This is very scary. So far we have only made a small step in security for AI. We look forward to making AI better and safer with the help of more technical talents,” says Yang Yong, head of Blade, a team under Tencent’s security division.
If vulnerable code is incorporated into an AI application such as facial recognition, a hacker can gain full control over the system, steal the designer’s model, invade users’ privacy, and cause even more serious damage, Yang adds.
In 2015, Google unveiled TensorFlow, a free, open-source machine-learning platform intended to simplify AI programming. Blade discovered the security vulnerabilities while conducting code reviews of TensorFlow and has reported the matter to Google, which officially opened its AI center in Beijing less than a week ago.
This isn’t the first time Chinese hackers have found safety flaws in overseas companies’ products. In 2014, security company Qihoo 360 claimed (in Chinese) it had gained control of some Tesla Model S functions, including the locks, horn, flashing lights, and sunroof. This July, Tencent’s renowned Keen Security Lab managed to remotely hack a Tesla for the second year in a row. The lab reported all related exploits to Tesla.