Breaking AIs to make them better

Image recognition AIs are powerful but inflexible: they cannot recognize images they were not trained on. In Raw Zero-Shot learning, researchers give these image recognition AIs a variety of data and observe the patterns in their responses. The research team hopes this methodology can help improve the robustness of future AI. Credit: Hiroko Uchida.

Today’s artificial intelligence systems used for image recognition are incredibly powerful with enormous potential for commercial applications. Nevertheless, today’s artificial neural networks – the deep learning algorithms that enable image recognition – have one huge flaw: they can be easily broken by images that have been modified even slightly.

This lack of robustness is a major hurdle for researchers looking to build better AIs. However, exactly why this phenomenon occurs, and the underlying mechanisms behind it, remain largely unknown.

With the aim of one day fixing these shortcomings, researchers from Kyushu University’s Faculty of Information Science and Electrical Engineering have published in PLOS ONE a method called “Raw Zero-Shot” that assesses how neural networks deal with elements unknown to them. The results could help researchers identify common features that make AIs non-robust and develop ways to address the problem.

“There are a range of real-world applications for neural networks for image recognition, including self-driving cars and diagnostic tools in healthcare,” explains Danilo Vasconcellos Vargas, who led the study. “No matter how well the AI is trained, it can fail with even a small change to an image.”

In practice, image recognition AIs are “trained” on many sample images before being asked to identify one. For example, if you want an AI to identify ducks, you must first train it on many images of ducks.
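To make that training step concrete, here is a minimal sketch, not the paper’s code, of the standard supervised loop: the network is fit to labeled examples and therefore only knows the classes it has been shown. It assumes PyTorch, and the random tensors below are stand-ins for real duck photos.

```python
import torch
import torch.nn as nn

# Toy classifier with 2 output classes (e.g., "duck" / "not duck").
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in training set: random tensors in place of real photos.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)  # fit the labeled examples
    loss.backward()
    opt.step()
```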

Yet even the best-trained AIs can be misled. Researchers have found that an image can be manipulated in such a way that an AI cannot accurately identify it, even though it appears unchanged to the human eye. Changing as little as a single pixel can cause confusion.
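The single-pixel fragility can be illustrated with a hedged sketch like the one below, which randomly rewrites one pixel at a time and checks whether the model’s answer flips. The published one-pixel attack from Vargas and colleagues uses differential evolution; the random search here is a deliberate simplification, and `model` and `image` are assumed to already exist.

```python
import random
import torch

def one_pixel_flip(model, image, trials=500):
    """Randomly rewrite single pixels of a (3, H, W) image and
    return a copy that changes the predicted class, or None."""
    base = model(image.unsqueeze(0)).argmax().item()
    _, h, w = image.shape
    for _ in range(trials):
        x, y = random.randrange(h), random.randrange(w)
        candidate = image.clone()
        candidate[:, x, y] = torch.rand(3)  # one new RGB value
        if model(candidate.unsqueeze(0)).argmax().item() != base:
            return candidate                # prediction flipped
    return None                             # no confusing pixel found
```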

To better understand why this happens, the team set out to examine various image recognition AIs in the hope of identifying patterns in how they behave when confronted with elements they were not trained on, i.e., elements unknown to the AI.

“If you give an image to an AI, it will try to tell you what it is, whether that answer is correct or not. So we took today’s 12 most common AIs and applied a new method called ‘Raw Zero-Shot Learning’,” continues Vargas. “Basically, we gave the AIs a series of images with no hints or training. Our hypothesis was that there would be correlations in how they responded. They would be wrong, but similarly wrong.”
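A minimal sketch of that protocol, under the assumption that two of torchvision’s off-the-shelf ImageNet classifiers stand in for the twelve networks in the study: each model is shown the same images, with no hints or fine-tuning, and its answers are simply recorded.

```python
import torch
from torchvision import models

# Two pretrained classifiers stand in for the study's twelve AIs.
nets = {
    "resnet18": models.resnet18(weights="IMAGENET1K_V1").eval(),
    "vgg11": models.vgg11(weights="IMAGENET1K_V1").eval(),
}

def raw_zero_shot_answers(batch):
    """Record each model's predicted class for images it was never
    trained on; the answers will be wrong, but comparable."""
    with torch.no_grad():
        return {name: net(batch).argmax(dim=1) for name, net in nets.items()}

unknown = torch.randn(8, 3, 224, 224)  # stand-in for out-of-domain images
print(raw_zero_shot_answers(unknown))   # every net answers, right or not
```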

What they found was just that. In all cases, the image recognition AI would provide an answer, and the answers – although wrong – would be consistent, i.e. they would clump together. The density of each cluster would indicate how the AI processed the unknown images based on its fundamental knowledge of different images.
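One hedged way to quantify that clumping, not necessarily the paper’s exact metric, is to cluster a model’s output vectors for the unknown images and score density as the mean distance to the assigned cluster center, where smaller means denser:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_density(outputs, k=3):
    """outputs: (n_images, n_classes) array of softmax/logit vectors.
    Returns mean distance to assigned centroid; lower = denser."""
    km = KMeans(n_clusters=k, n_init=10).fit(outputs)
    centers = km.cluster_centers_[km.labels_]
    return float(np.linalg.norm(outputs - centers, axis=1).mean())

vecs = np.random.rand(100, 10)  # stand-in output vectors
print(cluster_density(vecs))
```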

“If we understand what the AI was doing and what it learned when processing unknown images, we can use that same understanding to analyze why AIs break when faced with images with some pixel changes or minor adjustments,” Vargas said. “Using the knowledge we have gained to solve one problem by applying it to another but related problem is known as transferability.”

The team noted that Capsule Networks, also known as CapsNet, produced the densest clusters, giving them the best transferability among the neural networks tested. They think this is due to the dynamic nature of CapsNet.

“While current AIs are accurate, they lack the robustness for further application. We need to understand what the problem is and why it occurs. In this work, we have shown a possible strategy to study these problems,” Vargas says.

“Instead of just focusing on accuracy, we need to explore ways to improve robustness and flexibility. Then maybe we can develop a real artificial intelligence.”


More information:
Shashank Kotyan et al, Transferability of features for neural networks links to adversarial attacks and defences, PLOS ONE (2022). DOI: 10.1371/journal.pone.0266060

Provided by Kyushu University

Citation: Breaking AIs to make them better (2022, June 30) retrieved June 30, 2022 from https://techxplore.com/news/2022-06-ais.html

This document is subject to copyright. Apart from fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
