9 ways: How hackers attack with machine learning

Machine learning (ML) and artificial intelligence (AI) are becoming core technologies for threat detection and response. The ability to adapt to changing threat scenarios on the fly can give security teams an advantage. However, cybercriminals are also using machine learning and AI to scale up their attacks, bypass security controls, and find new vulnerabilities – all at an unprecedented rate and with potentially devastating consequences.

We present the nine most common ways criminal attackers take advantage of machine learning technology.

1. Spam

Defenders have relied on machine learning to spot spam for decades, as Fernando Montenegro, analyst at Omdia, notes: “Spam prevention is the number one use case for machine learning.”

If a spam filter works on predefined rules or produces some kind of score, attackers can potentially exploit it to make their own attacks more successful, the analyst warns: “If you just experiment long enough, you can reconstruct the underlying model and craft a custom attack that bypasses it.” And spam filters are not the only vulnerable systems. According to Montenegro, any security score or other output a security product exposes can potentially be misused: “Not everyone has this problem, but if you are not careful, helpful output can form the basis for malicious activities.”
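
To illustrate the probing idea Montenegro describes, here is a minimal sketch. It assumes a black-box filter that returns a spam score; the `spam_score` function, token weights, and threshold below are hypothetical stand-ins, not any real product’s API.

```python
# Minimal sketch: probing a black-box spam filter that returns a score.
# `spam_score` is a hypothetical stand-in for the real filter being queried.

SPAMMY = {"free": 0.4, "winner": 0.3, "click": 0.2, "urgent": 0.25}

def spam_score(text: str) -> float:
    """Toy stand-in for a rule/score-based filter (score >= 0.5 => spam)."""
    return sum(w for token, w in SPAMMY.items() if token in text.lower())

def probe_token_weights(baseline: str) -> dict:
    """Estimate each token's contribution by removing it and re-querying."""
    base = spam_score(baseline)
    weights = {}
    for token in baseline.lower().split():
        variant = " ".join(t for t in baseline.split() if t.lower() != token)
        weights[token] = base - spam_score(variant)  # score drop = token weight
    return weights

message = "URGENT winner click here for your FREE prize"
print(probe_token_weights(message))
# An attacker would iteratively drop or replace the highest-weight tokens
# until the reconstructed score falls below the filter's decision threshold.
```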

2. Optimized Phishing

Attackers don’t just use ML-based security tools to test whether their messages can slip past spam filters. They also use machine learning to create those emails in the first place, as Adam Malone, a partner at consulting firm EY, explains: “They advertise machine learning-based services on criminal forums and use them to craft better phishing emails and generate fake personas for fraud campaigns. Unfortunately, this is usually not just marketing – these criminal ML services really do work better.”

Using machine learning, attackers can creatively optimize phishing emails so that they are not flagged as spam and drive as much engagement as possible in the form of clicks. According to the consultant, cybercriminals do not limit themselves to the email text: “With the help of AI, realistic-looking photos, social media profiles, and other materials can be created to make the communication appear as legitimate as possible.”
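
As a rough illustration of that optimization loop, the sketch below trains a toy spam classifier locally and then picks the candidate message it scores as least spam-like. The training data and candidates are invented; a real attacker would test against the actual filters they want to evade.

```python
# Sketch: selecting the phishing variant least likely to be flagged,
# using a locally trained stand-in classifier (toy data, hypothetical).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["win free money now", "claim your prize",
               "meeting notes attached", "quarterly report draft"]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

candidates = [
    "You won! Claim your free prize now",
    "Updated invoice attached, please review",
    "Action needed: confirm the quarterly report draft",
]
# classes_ is sorted, so index 1 is the predicted spam probability.
best = min(candidates, key=lambda t: clf.predict_proba([t])[0][1])
print(best)
```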

3. Cracking passwords

Cybercriminals also use machine learning to crack passwords, as Malone explains: “This is evident from numerous systems designed to guess passwords with impressive frequency and success rates. Cybercriminals are building much better dictionaries and are becoming increasingly adept at cracking stolen password hashes.”

Criminals also use machine learning to identify security controls and “guess” passwords with fewer attempts, increasing their likelihood of success, the consultant adds.
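
A minimal sketch of the idea behind such ML-assisted guessing: train a character-level Markov model on a leaked wordlist and sample likelier candidates from it instead of brute-forcing blindly. The wordlist here is a tiny invented toy; real tools such as PassGAN use far larger corpora and neural models.

```python
# Sketch: a character-level Markov model trained on leaked passwords
# to generate likelier guesses than brute force (toy wordlist, hypothetical).
import random
from collections import defaultdict

leaked = ["password1", "passw0rd", "sunshine1", "dragon123", "letmein1"]

# Count character transitions, including start (^) and end ($) markers.
transitions = defaultdict(list)
for pw in leaked:
    chars = ["^"] + list(pw) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def generate_guess(max_len: int = 16) -> str:
    guess, current = [], "^"
    while len(guess) < max_len:
        current = random.choice(transitions[current])
        if current == "$":
            break
        guess.append(current)
    return "".join(guess)

random.seed(7)
print([generate_guess() for _ in range(5)])
```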

4. Deepfakes

Today’s deepfake tools create deceptively realistic video or audio files, some of which are difficult to expose as fakes. “The ability to simulate a person’s voice or face is very useful for attackers,” says Montenegro. In fact, several high-profile cases have come to light in recent years in which deepfakes cost companies millions of dollars.

More and more criminal actors are turning to AI to create realistic-looking photos, user profiles, and phishing emails that make their messages appear more believable. It’s a lucrative business: according to the FBI, business email compromise campaigns have caused more than $43 billion in damages since 2016.

5. Neutralize security tools

Many current security tools use some form of artificial intelligence or machine learning. Antivirus solutions, for example, increasingly look beyond basic signatures for suspicious behavior.

“All systems that are available online – especially open source ones – can be exploited by cybercriminals,” says Murat Kantarcioglu, a computer science professor at the University of Texas. Attackers can run their malware against such tools and tweak it until it evades detection: “AI models have a lot of blind spots.”
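
A minimal sketch of that evasion loop, using a locally trained stand-in detector on synthetic features: the sample is nudged against the model’s weight vector until the verdict flips. Everything here is hypothetical; real attackers typically approximate this direction through repeated black-box queries rather than reading the weights directly.

```python
# Sketch: iteratively perturbing a malware feature vector until a locally
# trained stand-in detector flips its verdict (synthetic features, hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = "malicious" in this toy setup

detector = LogisticRegression().fit(X, y)

sample = np.array([1.5, 1.5, 0.0, 0.0, 0.0])  # initially flagged as malicious
step = 0.1
while detector.predict([sample])[0] == 1:
    # Move against the model's weight vector to lower the decision score.
    sample -= step * detector.coef_[0]
print("evaded:", detector.predict([sample])[0] == 0)
```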

6. Reconnaissance

Cybercriminals can use machine learning for reconnaissance: to analyze a target’s traffic patterns, defenses, and potential vulnerabilities. However, this is not easy to pull off, as Kantarcioglu explains: “It takes real skill to make use of AI. In my opinion, it is mainly state-sponsored actors who use such techniques.”
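
One hedged illustration of what such reconnaissance could look like: clustering scan observations so that outlier hosts, which may be less hardened, stand out. The features and data below are entirely synthetic.

```python
# Sketch: clustering scan/traffic observations to surface outlier hosts
# that may be less defended (synthetic data; the features are hypothetical).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Columns: open-port count, mean response time, TLS version (toy encoding)
hosts = np.vstack([
    rng.normal([5, 0.2, 1.2], 0.1, size=(50, 3)),  # typical hardened hosts
    np.array([[40, 1.5, 1.0]]),                    # one unusual, exposed host
])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(hosts)
print("outlier hosts:", np.where(labels == -1)[0])  # -1 = noise/outlier
```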

However, if the technology is eventually commercialized and offered as a service in the cybercrime underground, it could become available to a much wider audience, says Allie Mellen, an analyst at Forrester: “This could also happen if a nation-state threat actor develops a particular machine learning toolkit and releases it to the criminal community. But the barriers to entry remain high: attackers who want to use such tools need machine learning expertise.”

7. Autonomous Agents

If a company realizes it is under attack and cuts off internet access to the affected systems, the malware may be unable to connect to its command-and-control (C2) servers for instructions.

“Cybercriminals want to counter this with intelligent machine learning models that keep the malware functioning even when direct control is impossible. For ‘conventional’ criminal hackers, however, this is not yet relevant,” says Kantarcioglu, cautiously giving the all-clear.
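
Conceptually, such an “autonomous” implant might look like the hedged sketch below: if the C2 server cannot be reached, a local decision policy (here a trivial rule-based stand-in for an embedded model) chooses the next action. The URL, observations, and actions are all invented for illustration.

```python
# Sketch of the fallback idea Kantarcioglu describes: consult a local
# policy when the C2 server is unreachable. All names are hypothetical.
import urllib.request

C2_URL = "https://example.invalid/commands"  # hypothetical, unreachable

def fetch_remote_command():
    try:
        with urllib.request.urlopen(C2_URL, timeout=2) as resp:
            return resp.read().decode()
    except OSError:
        return None  # no direct control available

def local_policy(observations: dict) -> str:
    """Stand-in for an embedded ML model choosing the next action."""
    if observations.get("analysis_tools_running"):
        return "sleep"        # lie low when the environment looks hostile
    return "continue_task"    # otherwise keep operating autonomously

command = fetch_remote_command()
if command is None:
    command = local_policy({"analysis_tools_running": False})
print(command)
```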

8. AI Poisoning

Attackers can trick ML models by feeding them new information – that is, by manipulating the training data: “The datasets can be intentionally falsified, for example,” says Alexey Rubtsov, senior research associate at the Global Risk Institute.

This is comparable to the way Microsoft’s chatbot Tay was “taught” to use racist language in 2016. The same approach can be used to teach a system that a certain type of malware is safe or that certain bot behavior is perfectly normal, Rubtsov said.
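
The effect is easy to demonstrate on synthetic data: in the hedged sketch below, flipping the labels of a slice of “malicious” training samples measurably degrades the resulting detector. Dataset and model are toy stand-ins.

```python
# Sketch of label-flipping poisoning: corrupting part of the training
# data degrades the resulting model (synthetic data, hypothetical setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)  # 1 = "malicious"

clean = LogisticRegression().fit(X, y)

# Attacker flips labels on malicious samples so they look benign.
y_poisoned = y.copy()
malicious_idx = np.where(y == 1)[0][:100]
y_poisoned[malicious_idx] = 0

poisoned = LogisticRegression().fit(X, y_poisoned)

X_test = rng.normal(size=(200, 4))
y_test = (X_test[:, 0] > 0).astype(int)
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```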

9. AI fuzzing

Legitimate software developers and penetration testers use fuzzing solutions to generate random inputs that crash-test systems or uncover vulnerabilities. Here, too, machine learning is increasingly used, for example to generate more targeted, better-structured inputs. This makes fuzzing tools useful for companies, but also for cybercriminals.
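
At its core, feedback-guided fuzzing keeps the mutated inputs that reach new program states and mutates them further. The sketch below shows that loop against an invented toy parser with a planted bug; depending on the random seed, it may take more iterations to trip it.

```python
# Sketch of feedback-guided fuzzing: mutations that reach new states are
# kept and mutated further. The target function is a hypothetical toy.
import random

def target(data: bytes) -> set:
    """Toy parser; returns the set of code 'states' an input reaches."""
    states = {0}
    if data.startswith(b"FUZ"):
        states.add(1)
        if len(data) > 8:
            states.add(2)
            if b"!" in data:
                raise ValueError("crash: unhandled token")  # the planted bug
    return states

def mutate(data: bytes) -> bytes:
    b = bytearray(data)
    b[random.randrange(len(b))] = random.randrange(256)
    if random.random() < 0.3:
        b.append(random.randrange(256))
    return bytes(b)

random.seed(3)
corpus, seen = [b"FUZAAAA"], set()
for _ in range(20000):
    candidate = mutate(random.choice(corpus))
    try:
        new_states = target(candidate) - seen
    except ValueError:
        print("crash found with input:", candidate)
        break
    if new_states:            # feedback: keep inputs that reach new states
        seen |= new_states
        corpus.append(candidate)
```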

“That’s why basic cybersecurity hygiene in the form of patching, anti-phishing training, and micro-segmentation remains critical,” says Forrester analyst Mellen. “It’s important to put up multiple obstacles, not just one that attackers can eventually turn to their advantage.”

Moreover, investing in machine learning requires a high level of expertise, which is scarce at the moment. And attackers usually have simpler, easier options, as Mellen notes: “There is plenty of low-hanging fruit and there are other ways to make money without using ML and AI for cyberattacks. In my experience, criminal hackers do not use these techniques in the vast majority of cases. But that could change in the future as companies continue to improve their defenses and as criminals and nation-states keep investing in cyberattacks.” (FM)

This post is based on an article from our US sister publication, CSO Online.

