Natural Language Processing and “Mindful” AI Allow for More Sophisticated Attacks from Bad Bots

The evolution of attacks from humans to bots

In the past few years of my cybersecurity career, I have been fortunate to work with professionals who have researched and developed new solutions for detecting and preventing sophisticated cyberattacks. Initially, these attacks were launched by humans and later by advanced bad bots. I felt like I had seen it all, or so I thought…

In my current position at Imperva’s Innovation Office, our team had to make a drastic mindset shift. Rather than developing new cyber defenses for today’s threats, we were tasked with analyzing and researching trends outside the current cybersecurity landscape to predict and prepare for tomorrow’s threats.


Today, most bad bots mask themselves and attempt to interact with applications the way a legitimate user would, making them harder to detect and block. Bad bots are used by a wide range of malicious operators: competitors operating in a gray area, attackers looking to make a profit, and even hostile governments. There are many types of bot attacks; most involve high volumes of traffic, while others operate at lower volumes and are designed to target a specific audience.
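To illustrate why this masking works, here is a toy sketch of the kind of signal-based scoring that traditional bot detection relies on. Everything in it (field names, markers, thresholds) is hypothetical and for illustration only:

```python
# Toy signal-based bot scoring. All field names, markers, and
# thresholds are hypothetical and for illustration only.

HEADLESS_MARKERS = ("headlesschrome", "phantomjs", "python-requests", "curl")

def naive_bot_score(request: dict) -> int:
    """Score a parsed request-log entry; higher means more bot-like."""
    score = 0
    ua = request.get("user_agent", "").lower()
    if any(marker in ua for marker in HEADLESS_MARKERS):
        score += 3  # automation framework named in the User-Agent
    if not request.get("accepts_cookies", True):
        score += 2  # real browsers keep session cookies
    if request.get("requests_per_minute", 0) > 120:
        score += 2  # humans rarely sustain this request rate
    if not request.get("executed_javascript", True):
        score += 1  # many simple bots never execute page JS
    return score

obvious_bot = {"user_agent": "python-requests/2.28", "accepts_cookies": False,
               "requests_per_minute": 300, "executed_javascript": False}
print(naive_bot_score(obvious_bot))  # 8; a well-masked bot scores ~0
```

A sophisticated bad bot defeats every one of these checks by running a real browser, spoofing a common User-Agent, keeping cookies, executing JavaScript, and pacing its requests at human-like rates.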

Bad bots: what do they do?

Bad bots are software applications that perform automated tasks with malicious intent. They are programmed and controlled to perform activities such as web scraping, competitive data mining, personal and financial data harvesting, digital asset theft, brute-force login, digital advertising fraud, denial of service (DoS), inventory denial, spam, transaction fraud, and more.

In this post, we will focus on how bad bots can evolve to perform new kinds of criminal behavior; for example, behavioral attacks specifically designed to facilitate competitive data mining, personal and financial data collection, transaction fraud, and digital asset theft.

How Bad Bots Are Harming Businesses Today

Here are some examples of how bad bots are used to harm companies today:

Price scraping – Competitors scrape your prices to beat you in the market. You lose sales because your competitor wins the price-driven search, and customer lifetime value deteriorates.
Content scraping – Proprietary content is your business. When others steal your content, they act as parasites robbing you of your efforts, and duplicate content hurts your SEO rankings.
Account takeover – Bad actors test stolen credentials against your site. When they succeed, the consequences include account lockouts, financial fraud, and increased customer complaints that hurt customer loyalty and future revenue.
Account creation – Cybercriminals create free accounts that are used to spam messages or amplify propaganda, and they exploit any new-account promotional credits (e.g., cash, points, free plays).
Credit card fraud – Criminals test credit card numbers to identify missing information (e.g., expiration date, CVV). This hurts the company’s fraud score and increases the customer service costs of processing fraudulent chargebacks.
Gift card balance checking – Fraudsters drain money from gift cards that carry a balance, damaging customer reputation and costing future sales (a simple velocity-check sketch for spotting this kind of card abuse follows this list).
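As promised above, here is a minimal velocity-check sketch for spotting card-testing and balance-checking bots. It assumes a parsed payment log; the field names and thresholds are hypothetical, and production fraud systems add device fingerprinting, ML scoring, and much more:

```python
# Minimal velocity check for card-testing bots. Field names and
# thresholds are hypothetical, chosen only to illustrate the idea.
from collections import defaultdict

WINDOW_SECONDS = 600    # look-back window
MAX_DISTINCT_CARDS = 5  # humans rarely try this many cards this fast

def flag_card_testers(events):
    """events: iterable of (timestamp, source_ip, card_fingerprint)."""
    seen = defaultdict(list)  # ip -> [(ts, card), ...]
    flagged = set()
    for ts, ip, card in sorted(events):
        seen[ip].append((ts, card))
        # keep only attempts that fall inside the time window
        seen[ip] = [(t, c) for t, c in seen[ip] if ts - t <= WINDOW_SECONDS]
        if len({c for _, c in seen[ip]}) > MAX_DISTINCT_CARDS:
            flagged.add(ip)
    return flagged

events = [(i, "203.0.113.7", f"card-{i}") for i in range(10)]  # 10 cards in 10s
print(flag_card_testers(events))  # {'203.0.113.7'}
```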

For a comprehensive look at how bad bots harm businesses, download the 2022 Imperva Bad Bot Report.

Where can bad bots go?

The evolution and progress made in machine learning (ML) and artificial intelligence (AI) is remarkable, and when used for good purposes, these technologies have proved indispensable in improving our lives in many ways.

Advanced chatbot AI brings psychological, behavioral, and social engineering factors into play. Bad AI bots can learn and mimic the language and behavior patterns of a target user, which in turn can be used to win blind trust in their malicious requests. Unfortunately, bad bot operators are quickly adopting these technologies to develop new malicious campaigns that incorporate machine intelligence in ways never seen before. In recent years, chatbots have gained significant momentum in consumer-facing activities such as sales, customer service, and relationship management.

We are already seeing malicious operators, inspired by legitimate companies, adopt these technologies, abuse them, and demonstrate the potential harm they can cause.

A notable example of this is Tay, a bot created by Microsoft. Tay was designed to mimic the language patterns of an American teenage girl and to learn from interacting with human Twitter users.

Natural language processing (NLP), a machine learning technology, was the foundation of Tay. It was the first bot that understood the text, data, and social patterns presented during social interactions and then responded with custom text semantics of its own. That means a bad bot can now adapt to text or speech data and to the social and behavioral patterns of the victim it communicates with.

In Tay’s case, some users on Twitter began tweeting politically incorrect sentences, teaching it incendiary messages revolving around common themes across the internet. As a result, Tay started spreading racist and sexually offensive messages in response to tweets from other users.
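To make the “learn and mimic” idea concrete, here is a toy bigram Markov chain in Python. Tay’s NLP was far more sophisticated, but the sketch shows the underlying principle: a bot can absorb a target’s style from raw text and replay it.

```python
# Toy illustration of learning to mimic text patterns: a bigram
# Markov chain trained on a target's messages. Real NLP models are
# far more capable; this only demonstrates the principle.
import random
from collections import defaultdict

def train(corpus: str):
    """Map each word to the words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def mimic(chain, seed: str, length: int = 12) -> str:
    """Generate text in the learned style, starting from a seed word."""
    word, out = seed, [seed]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

target_messages = "hey can you send the invoice today ? hey did you send it ?"
chain = train(target_messages)
print(mimic(chain, "hey"))  # e.g. "hey can you send it ?"
```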

How AI makes a bot evil

Denial of service (DoS)

Malicious operators can train AI/ML to learn the language patterns of specific audiences and then mass-message an organization’s resources, whether human or digital, to confuse or overwhelm customer-facing services.
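One hedged defensive sketch: because AI-generated floods are often paraphrased rather than copy-pasted, exact-match filters miss them, but simple similarity measures can still surface bursts of near-duplicate messages. The threshold below is illustrative, not tuned:

```python
# Spotting coordinated message floods via near-duplicate detection.
# Jaccard similarity over word sets catches paraphrased repeats that
# exact-match filters miss. The 0.7 threshold is illustrative only.

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def count_near_duplicates(messages, threshold=0.7):
    """Count incoming messages that closely match an earlier one."""
    hits, seen = 0, []
    for msg in messages:
        if any(jaccard(msg, old) >= threshold for old in seen):
            hits += 1
        seen.append(msg)
    return hits

inbox = [
    "please reset my account it is locked",
    "my account is locked please reset it",  # paraphrase, same words
    "can I change my shipping address",
]
print(count_near_duplicates(inbox))  # 1 suspicious near duplicate
```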

Sabotage of corporate and brand reputation

During several recent election seasons, national security agencies and social application providers identified networks of human-like chatbots with fabricated online identities that spread false claims about candidates before the vote. With enough chatbots backed by “Mindful” AI, more advanced techniques could be used to effectively destroy competitors and brands.

Coupon Guessing and Scraping

Criminals who collect commissions from affiliate programs use bad bots to guess or scrape marketing coupons from legitimate marketing partners. These bots hit websites at high volume, degrade their performance, and abuse the campaigns for which the coupons were intended. NLP can be used to guess coupon codes, especially if they are event-related or contain a textual pattern that a “mindful” NLP model can predict.
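A short sketch makes the point about predictability. Event-related codes live in a tiny search space that a bot can enumerate, whereas randomly issued codes do not; the campaign words below are hypothetical examples:

```python
# Why pattern-based coupon codes are guessable, and how to issue
# unguessable ones instead. The example patterns are hypothetical.
import secrets
import string

# Event-related codes like "SUMMER10" combine a handful of campaign
# words with a handful of suffixes: a trivially enumerable space.
words = ["SUMMER", "WINTER", "VIP", "WELCOME", "SAVE"]
suffixes = ["5", "10", "15", "20", "2022"]
guessable = [w + s for w in words for s in suffixes]
print(len(guessable))  # 25 candidates, trivial to brute-force

# A 12-character random code over a 32-symbol alphabet has 32**12
# (about 1.2e18) possibilities, far beyond brute-force guessing.
ALPHABET = string.ascii_uppercase + "234567"  # 32 unambiguous symbols

def issue_coupon() -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(12))

print(issue_coupon())  # e.g. "K7Q2ZP4MWX3A"
```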

A Hostile Takeover of Legitimate Chatbots

In June 2018, Ticketmaster disclosed a breach caused by a modified version of its Inbenta-supplied customer support chatbot. Names, addresses, email addresses, telephone numbers, payment details, and Ticketmaster login details of some 40,000 customers were accessed and stolen.
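One partial mitigation for embedded third-party scripts is Subresource Integrity (SRI), which makes the browser refuse to run a script whose content no longer matches a pinned hash. Here is a minimal sketch of computing that hash in Python; note that SRI only helps if you pin a known version, and chatbot vendors that update their script in place would break the pin:

```python
# Minimal sketch of a Subresource Integrity (SRI) hash. A browser given
# this hash refuses to run the script if its bytes have been modified.
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# In practice you would hash the pinned copy of the vendor's script.
pinned_script = b"console.log('chatbot v1.2.3');"  # hypothetical content
print(sri_hash(pinned_script))

# Embed the result in the page, for example:
# <script src="https://cdn.example.com/chatbot.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```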

Now imagine what these “legitimate” bots could do next.

Impersonation

Tinder is a dating app with about five million daily users. Tinder has warned that the service has been “invaded by bots” posing as humans. Those bots are usually programmed to impersonate women and ask victims to provide their payment card details for various purposes.

These well-known attacks can inspire malicious operators to take things to the next level: establishing trust-based relationships with business users and consumers via email, other messaging applications, or even social applications (shadow IT), and then extracting valuable assets that can be exploited.

Gaming Fraud

Gaming bots are used by cheaters to gain unfair competitive advantages in multiplayer games. There are many types of cheating bots, such as farming bots, pre-recorded macros and, the most common example, the “aimbot,” which automatically aims for a player in a shooting game.

In some cases, these bots are used to make a profit. In 2019, it was estimated that the game industry lost about $29 billion in revenue due to cheaters.

Conclusion

Cybersecurity is on the cusp of a major shift in its challenges. This shift may require developing the ability to successfully mitigate cyberthreats driven by mindful bad bots. Cybersecurity vendors will have to design new detection and mitigation technologies, because simply identifying and classifying attackers’ reputations, text patterns, and intentions is no longer good enough. As malicious operators adopt new NLP technologies that enable personalized, trust-based communications, security vendors must act as well, and sooner is better.

Machines are about to interact with victims and gain their trust by abusing their own language style and social and behavioral patterns, as well as those of their peers. It is fair to predict that a new generation of “Mindful” NLP technologies will be used in more sophisticated ways to make a profit and do damage.

Note: This article refers to users who are the target of malicious interactions with “Mindful” NLP bad bots. The same principles can apply in a different context: applications, their APIs, and how they can be abused by “Mindful” machine language processing (MLP) bad bots.
