5G networks vulnerable to adversarial ML attacks

A research paper published this week has called into question security measures on 5G networks.

A team of academic researchers from the University of Liechtenstein claimed that a surprisingly simple network-flooding strategy could allow an uninformed attacker to disrupt traffic on next-generation networks, even ones with sophisticated defenses. The key to the attacks, according to the research team, is the use of an adversarial machine learning (ML) technique that does not rely on any prior knowledge or exploration of the target network.

In a research paper published on July 4, the team described how the shift to 5G networks has enabled a new class of adversarial machine learning attacks. The paper, titled "Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples", was written by Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova and Pavel Laskov.

As 5G networks are deployed and more devices begin using them to move traffic, current practices for managing network packets no longer keep up. To compensate, the researchers noted, many carriers plan to use machine learning models that can better sort and prioritize traffic.
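The kind of traffic-prioritization model the researchers describe can be sketched in a few lines. The following is purely illustrative: the feature names, traffic classes and centroid values are assumptions invented for this sketch, not taken from any real carrier deployment or from the paper itself. It shows a nearest-centroid classifier assigning a priority class to a flow from its average packet size and packet rate.

```python
# Toy sketch of ML-based traffic prioritization (illustrative only).
# Features, classes and centroid values are invented for this example.

def classify_flow(avg_packet_bytes, pkts_per_second):
    """Assign a traffic class by distance to hand-picked centroids."""
    centroids = {
        "voice": (160.0, 50.0),    # small packets, steady rate
        "video": (1200.0, 90.0),   # large packets, high rate
        "bulk":  (1400.0, 20.0),   # large packets, bursty/low rate
    }

    def dist(c):
        dx = (avg_packet_bytes - c[0]) / 1500.0  # normalize by MTU
        dy = (pkts_per_second - c[1]) / 100.0
        return dx * dx + dy * dy

    return min(centroids, key=lambda k: dist(centroids[k]))

print(classify_flow(150.0, 48.0))    # small, steady packets -> voice
print(classify_flow(1300.0, 85.0))   # large, fast packets   -> video
```

A carrier's model would of course be trained on real traffic features rather than hand-picked centroids, but the decision structure — map flow statistics to a priority class — is the part the attack targets.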

Those machine learning models proved to be the attackers' weak point: by confusing the models and shifting their priorities, an attacker can tamper with how traffic is handled. The researchers suggested that flooding the network with garbage traffic, a technique known as a "myopic attack", could disable a 5G mobile installation.

The basic idea, the researchers wrote, lies in making small changes to the input data. Something as simple as sending a data packet padded with extra junk data feeds the machine learning model unexpected information. Over time, those poisoned requests can alter the model's behavior, thwarting legitimate network traffic and ultimately slowing or stopping the flow of data.
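The padding trick can be illustrated against a toy classifier of the kind sketched above. Everything here is an assumption for illustration (the two classes, the centroid values, the 1 KB of padding); the paper's actual models and features differ. The point is only that inflating a flow's average packet size is enough to push it across a decision boundary, so high-priority traffic gets treated as low-priority bulk data.

```python
# Illustrative adversarial perturbation: padding packets until a toy
# classifier flips its decision. All numbers are invented for this sketch.

def classify_flow(avg_packet_bytes, pkts_per_second):
    """Nearest-centroid toy classifier with two invented classes."""
    centroids = {"voice": (160.0, 50.0), "bulk": (1400.0, 20.0)}

    def dist(c):
        dx = (avg_packet_bytes - c[0]) / 1500.0
        dy = (pkts_per_second - c[1]) / 100.0
        return dx * dx + dy * dy

    return min(centroids, key=lambda k: dist(centroids[k]))

# A legitimate voice-like flow is classified as high-priority traffic.
print(classify_flow(160.0, 50.0))           # -> voice

# The attacker pads every packet with ~1 KB of junk: the same flow now
# looks like bulk transfer traffic and gets deprioritized.
print(classify_flow(160.0 + 1000.0, 50.0))  # -> bulk
```

Note that the perturbation needs no knowledge of the model's internals: the attacker only appends data to packets they already send, which matches the paper's claim of a low barrier to entry.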

While real-world results would depend on the type of 5G network and the machine learning model deployed, the team's lab tests produced striking results. In five of the six experiments conducted, the network was knocked offline using a technique that required no knowledge of the carrier, its infrastructure or its machine learning technology.

“It’s just necessary to add unwanted data to the network packets,” Apruzzese told SearchSecurity. “Indeed, [one example] focuses on a model that is agnostic of the real payload of network packets.”

The long-term effects of the attacks are relatively mild, but by causing service outages and slowing network traffic, they would certainly pose a problem for anyone trying to use the target network.

More important, the team said, is that the study underscores the need for better models to test and address vulnerabilities in the machine learning systems that future networks will deploy in the wild.

"The 5G paradigm enables a new class of adversarial ML attacks with a low barrier to entry, which cannot be formalized with existing adversarial ML threat models," the team wrote. "In addition, such vulnerabilities should be proactively assessed."

Adversarial machine learning and artificial intelligence have been concerns within the infosec community for some time. While the number of attacks in the wild is believed to be extremely small, many experts have warned that algorithmic models could be vulnerable to poisoned data and manipulation by threat actors.
