A team of researchers from the Network Security Lab (NSL) at the University of Washington demonstrated this week that a determined adversary can trick Google's Cloud Vision API into misclassifying submitted images.

This research comes as AI-based image classification systems are growing in popularity and being adopted by more online services, which use them to catch and block abusive, adult, violent, or otherwise inappropriate images before they are published on live sites.

Although the service is built on a complex machine-learning algorithm, the research team says it found a simple method of deceiving Google's Cloud Vision service.

Google Cloud Vision API vulnerable to trivial attack

Their attack relies on adding small amounts of "noise" to an image. The noise level varied from 10% to 30%, but that was enough to deceive Google's AI while keeping the images clearly recognizable to human observers.


Adding noise to an image is a trivial procedure that doesn't require advanced technical skills, only an image editing application.
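The paper's exact noise type and parameters aren't reproduced here, but the general idea can also be sketched in a few lines of Python. The impulse (salt-and-pepper style) noise and the 20% corruption fraction below are assumptions for illustration, not the researchers' exact setup:

```python
# Sketch only: corrupt a fraction of pixels with random values.
# Noise type and fraction are illustrative assumptions, not the paper's settings.
import numpy as np
from PIL import Image

def add_impulse_noise(path_in: str, path_out: str, fraction: float = 0.2) -> None:
    """Overwrite a given fraction of pixels with random values and save the result."""
    img = np.array(Image.open(path_in).convert("RGB"))
    mask = np.random.rand(*img.shape[:2]) < fraction           # pixels to corrupt
    noise = np.random.randint(0, 256, size=img.shape, dtype=np.uint8)
    img[mask] = noise[mask]                                     # replace selected pixels
    Image.fromarray(img).save(path_out)

# Example: corrupt ~20% of pixels, roughly the middle of the 10-30% range cited above.
# add_impulse_noise("original.jpg", "noisy.jpg", fraction=0.2)
```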

Researchers argue that image hosting sites wouldn't be protected against terrorist groups looking to host violent or propaganda images, even if the sites implemented Google's highly touted Cloud Vision API, which is advertised as a way to catch such content.

Furthermore, Google's own image search, which also uses this API, may misclassify abusive images and show them as suggestions alongside other images, helping spread violent or adult content to unintended audiences.

There's an easy fix

Google engineers don't need to panic, though. The research team says the fix for this issue is as trivial as the attack itself.

To prevent noise-addition attacks, Google only needs to run a basic "noise filter" before its image classification algorithm. Tests carried out by the research team showed the effectiveness of such a filter, with the Google Cloud Vision API once again classifying content in the proper categories.
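The article doesn't tie the fix to a particular filter; a median filter is one common choice against impulse noise, and the sketch below (assuming Pillow is available) shows where such a pre-filtering step would sit, before the image is handed to the classifier:

```python
# Sketch only: a simple denoising pass applied before submitting the image
# to the classifier. The median filter is an illustrative choice; the exact
# filter the researchers tested is not specified in the article.
from PIL import Image, ImageFilter

def denoise(path_in: str, path_out: str, size: int = 3) -> None:
    """Apply a median filter, which removes isolated noisy pixels."""
    img = Image.open(path_in).convert("RGB")
    img.filter(ImageFilter.MedianFilter(size=size)).save(path_out)

# Example: clean the image first, then send "cleaned.jpg" to the classification API.
# denoise("noisy.jpg", "cleaned.jpg")
```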


The full research paper is entitled "Google's Cloud Vision API Is Not Robust To Noise" and was authored by the same team of researchers who previously found a way to fool Google's new Cloud Video Intelligence API in a similar manner.

That attack relied on inserting the same image every two seconds into a video; in the end, Google's video classification AI tagged the video based on the repeating image rather than the video's actual content.
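For context, the general shape of that earlier video attack can be sketched as follows. The use of OpenCV, the MP4 codec, and replacing (rather than inserting additional) frames every two seconds are assumptions for illustration:

```python
# Sketch only: swap in a fixed image once every two seconds of video,
# roughly the pattern described for the Cloud Video Intelligence attack.
import cv2

def inject_image(video_in: str, image_path: str, video_out: str) -> None:
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    inject = cv2.resize(cv2.imread(image_path), (w, h))
    step = max(1, int(fps * 2))              # one injected frame every ~2 seconds
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(inject if i % step == 0 else frame)
        i += 1
    cap.release()
    out.release()
```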
