Researchers improve robustness of biometric algorithms, image recognition applications

Artificial intelligence isn’t just about pattern recognition accuracy. Algorithms must also be able to handle unexpected inputs and keep working.

With that in mind, researchers at Kyushu University say they’ve created a new way to improve the robustness of facial recognition algorithms — the so-called raw zero-shot method.

Lead researcher Danilo Vasconcellos Vargas says too much emphasis is placed on accuracy and too little attention is paid to how AI performs outside the lab.

“We need to explore ways to improve robustness and flexibility,” Vargas says. “Then maybe we can develop a real artificial intelligence.”

Described in an article in the scientific journal PLOS ONE, the raw zero-shot method is designed to assess how neural networks react to unknown inputs. This could be beneficial in understanding how generative adversarial networks can be used to defeat biometric algorithms and other AI systems.

“There are a range of real-world applications for neural networks for image recognition, including self-driving cars and diagnostic tools in healthcare,” said Vargas, of Kyushu’s Faculty of Information Science and Electrical Engineering.

“No matter how well the AI is trained, it can fail with even a small change in an image,” he says. And of course, the quality of the data sets is paramount for proper training of machine learning algorithms.

In fact, highly accurate algorithms are sometimes defeated by perturbations that are imperceptible to the human eye.

To understand the problems associated with image recognition failures, the Kyushu researchers applied the raw zero-shot method to 12 artificial intelligence algorithms.

“If you give an image to an AI, it will try to tell you what it is, regardless of whether that answer is correct or not,” Vargas explains.

“Basically, we gave the AIs a series of images with no hints or training. Our hypothesis was that there would be correlations in how they responded. They would be wrong, but similarly wrong.”
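To make the setup concrete, here is a minimal Python sketch of the raw zero-shot idea as described above: two classifiers are shown inputs from a distribution neither was trained on, and we check whether their (inevitably wrong) answers agree. The stub models and random data below are illustrative stand-ins, not the paper’s actual networks or images.

```python
import numpy as np

rng = np.random.default_rng(0)

class StubClassifier:
    """Stand-in for a trained network: a fixed random linear map + softmax."""
    def __init__(self, n_features=64, n_classes=10):
        self.w = rng.normal(size=(n_features, n_classes))

    def predict(self, x):
        logits = x @ self.w
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

# "Unknown" inputs: samples from a distribution neither model was built for.
unknown = rng.normal(size=(200, 64))

model_a, model_b = StubClassifier(), StubClassifier()
preds_a, preds_b = model_a.predict(unknown), model_b.predict(unknown)

# Both sets of answers are meaningless -- the question is whether the two
# models are *similarly* wrong, i.e. whether they pick the same labels.
agreement = (preds_a.argmax(axis=1) == preds_b.argmax(axis=1)).mean()
print(f"label agreement on unknown inputs: {agreement:.2f}")
```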

The rationale behind the research was to understand how the AI reacted when processing unknown images. The method can then be used to analyze why algorithms break when faced with changes to a single pixel.
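The single-pixel fragility mentioned here can be probed with a very simple test: change one pixel at random and see whether the predicted label flips. The toy brightness classifier below is purely hypothetical; Vargas’s earlier one-pixel attack work uses real networks and a search strategy rather than random edits.

```python
import numpy as np

def one_pixel_perturb(image, x, y, value):
    """Return a copy of `image` with a single pixel set to `value`."""
    out = image.copy()
    out[y, x] = value
    return out

def flip_rate(classifier, image, trials=500, seed=0):
    """Fraction of random one-pixel edits that change the predicted label."""
    rng = np.random.default_rng(seed)
    base = classifier(image)
    h, w = image.shape[:2]
    flips = 0
    for _ in range(trials):
        x, y = rng.integers(w), rng.integers(h)
        if classifier(one_pixel_perturb(image, x, y, rng.random())) != base:
            flips += 1
    return flips / trials

# Toy classifier: labels an image "bright" or "dark" by its mean intensity.
toy = lambda img: "bright" if img.mean() > 0.5 else "dark"

# An image sitting just above the decision boundary is easy to flip.
image = np.full((8, 8), 0.501)
print(f"one-pixel flip rate: {flip_rate(toy, image):.2f}")
```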

Of the algorithms analyzed by Vargas’s team, Capsule Networks (commonly called CapsNet) reportedly produced the densest clusters of outputs, suggesting the best transferability of problem-solving knowledge between neural networks.
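One way to picture “cluster density” here: collect a network’s softmax outputs on the unknown inputs, group them by predicted label, and measure how tightly each group packs around its centroid. The metric below is an assumption chosen for illustration; the paper’s actual analysis may use a different measure.

```python
import numpy as np

def mean_cluster_spread(preds):
    """Average distance of softmax outputs to their cluster centroid,
    grouping outputs by predicted label. Smaller = denser clusters."""
    labels = preds.argmax(axis=1)
    spreads = []
    for lbl in np.unique(labels):
        group = preds[labels == lbl]
        if len(group) > 1:
            centroid = group.mean(axis=0)
            spreads.append(np.linalg.norm(group - centroid, axis=1).mean())
    return float(np.mean(spreads))

# e.g. compare the two stub models from the earlier sketch:
# mean_cluster_spread(preds_a) vs. mean_cluster_spread(preds_b)
```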

“While today’s AIs are accurate, they lack the robustness for further utility. We need to understand what the problem is and why it is happening. In this work, we have shown a possible strategy to study these problems,” he adds.

The study results come weeks after Kyushu University published another biometrics-focused paper, on breath analysis as a potential chemical biometric identifier.

Article Topics

adversarial attack | AI | biometrics | image recognition | machine learning
