Security Experts Find Gaps in AI Camera Systems

Joint research by Australian and South Korean institutions has revealed gaps in the security offered by artificial intelligence-powered cameras, ZDNet reported. The flaws could allow attackers to exploit, bypass, or infiltrate object detection cameras.

Researchers from the Commonwealth Scientific and Industrial Research Organization’s (CSIRO) Data61 and the Australian Cyber Security Cooperative Research Centre (CSCRC), in cooperation with Sungkyunkwan University in South Korea, demonstrated gaps in the YOLO system.

YOLO, or You Only Look Once, is a popular object detection algorithm applied to security cameras to automatically determine the presence of a subject. The researchers’ experiments, however, exposed its flaws.


The researchers used a “trigger” to test the reliability of the system. Using a red beanie for the first round of testing, they showed that a subject not wearing the beanie was detected by the YOLO camera, while the same subject wearing it was not.

The same result was observed with two people wearing the same shirt in different colors.
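A test of this kind could, in principle, be scripted as below. The `run_detector` helper and the image file names are hypothetical placeholders standing in for a real YOLO inference call; the sketch only illustrates the with-trigger versus without-trigger comparison the researchers describe.

```python
from pathlib import Path

def run_detector(image_path: Path) -> list[str]:
    """Placeholder for a real YOLO inference call; returns detected class labels.
    Here it simply pretends the person is missed whenever the file name mentions
    the trigger garment, so the comparison loop below can run on its own."""
    return [] if "beanie" in image_path.name else ["person"]

# Pairs of images of the same subject, without and with the trigger garment.
# File names are invented for illustration.
test_pairs = [
    (Path("subject_no_beanie.jpg"), Path("subject_red_beanie.jpg")),
]

for clean_img, trigger_img in test_pairs:
    clean_hits = run_detector(clean_img)
    trigger_hits = run_detector(trigger_img)
    print(f"{clean_img.name}: person detected = {'person' in clean_hits}")
    print(f"{trigger_img.name}: person detected = {'person' in trigger_hits}")
```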

Towards Data Science explained that the YOLO algorithm works not by scanning distinct regions of an image, but by predicting classes and bounding boxes for the entire image in a single pass. The system “splits the image into cells,” with each cell in charge of predicting bounding boxes.

According to the article, the system chooses “the class with the maximum probability” and assigns it to that grid cell, allowing the algorithm to determine the object.
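As a rough illustration of that grid logic, the NumPy sketch below assumes a toy 3x3 grid where each cell carries a box confidence and a set of class probabilities; the shapes, class names, and threshold are invented for the example and are not the real YOLO output format.

```python
import numpy as np

# Toy stand-in for a YOLO-style output: a 3x3 grid of cells, each with
# a box confidence score and a probability for each of three classes.
CLASSES = ["person", "car", "dog"]
rng = np.random.default_rng(0)

box_confidence = rng.random((3, 3))                      # one confidence per cell
class_probs = rng.random((3, 3, len(CLASSES)))
class_probs /= class_probs.sum(axis=-1, keepdims=True)   # normalize per cell

# Each cell "chooses the class with the maximum probability" ...
best_class = class_probs.argmax(axis=-1)
best_prob = class_probs.max(axis=-1)

# ... and a detection is kept only if confidence * class probability
# clears a threshold, mimicking how low-scoring cells are discarded.
scores = box_confidence * best_prob
THRESHOLD = 0.5
for row in range(3):
    for col in range(3):
        if scores[row, col] >= THRESHOLD:
            print(f"cell ({row},{col}): {CLASSES[best_class[row, col]]} "
                  f"score={scores[row, col]:.2f}")
```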

However, Data61 cybersecurity researcher Sharif Abuadbba said, “The problem with artificial intelligence, despite its effectiveness and ability to recognize so many things, is it’s adversarial in nature.”

AI systems are often tested only in a narrow range of environments, which limits their exposure to more varied situations. Models that have not been trained to handle diverse scenarios can pose serious security risks, like the one the team demonstrated.

Abuadbba highlighted the need for organizations to create their own datasets and train their models on their own data.
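To make that recommendation concrete, the sketch below shows one way an organization might fine-tune a detector on its own footage using the open-source Ultralytics YOLO package; the package choice, dataset file, and hyperparameters are assumptions for illustration, not something specified by the researchers.

```python
# Minimal fine-tuning sketch with the Ultralytics YOLO package.
# "my_site_cameras.yaml" is a placeholder for a dataset description file
# pointing at the organization's own labeled camera footage.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # start from a pretrained checkpoint
model.train(data="my_site_cameras.yaml",    # placeholder: organization's own dataset
            epochs=50, imgsz=640)
metrics = model.val()                       # evaluate on the organization's validation split
print(metrics)
```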

A similar issue was observed by Twitter users after the social media platform launched its AI-powered image preview cropping tool. According to users, the system was “automatically favoring white faces over someone who was Black.”

The issue was brought to light by Colin Madland, who had noticed a similar problem with Zoom when the video conferencing program erased his Black colleague while applying a virtual background.
