Google has been recognized as a leader driving the global AI revolution, but the tools it offers developers may not be as safe as many thought. A security team at China’s social and gaming giant Tencent recently claimed (in Chinese) that it had found a “significant security loophole” in Google’s machine-learning platform TensorFlow, and that programmers who write code with the platform are vulnerable to malicious attacks.

“Simply put, if a developer happens to be using the vulnerable component when coding a robot, it’s likely that a hacker can take control of the robot through that loophole. This is very scary. So far we have taken only a small step in AI security. We look forward to making AI better and safer with the help of more technical talent,” says Yang Yong, head of Blade, a team under Tencent’s security division.

If unsafe code is built into an AI application such as facial recognition, a hacker can gain full control of the system, steal the design model from its creator, invade users’ privacy, and cause even more serious damage, Yang adds.
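The attack Yang describes hinges on developers pulling in untrusted components or model files. One common, general-purpose mitigation (not a fix the Tencent team prescribes, and with hypothetical file and digest names) is to verify a downloaded artifact against a published checksum before loading it:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(path: str, expected_digest: str) -> bool:
    """Load a model or component file only if its digest matches
    a value published by the vendor over a trusted channel."""
    return sha256_of(path) == expected_digest
```

A check like this does not patch the loophole itself, but it blocks one delivery route: an attacker swapping a tampered model or library for the one the developer intended to use.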


Rita Liao

Telling the uncommon China stories through tech. I can be reached at ritacyliao [at] gmail [dot] com.