7 ways AI and ML are helping and hurting cybersecurity

By Andrey Shklyarov, Dmitry Vyrostkov

In the right hands, AI and ML can enrich our cyber defenses. In the wrong hands, they can do significant damage.

Artificial intelligence (AI) and machine learning (ML) are part of our everyday lives today, and so is cybersecurity. In the right hands, AI/ML can uncover vulnerabilities and reduce incident response times. However, in the hands of cybercriminals, they can also cause significant damage.

Here are seven positive and seven negative impacts of AI/ML on cybersecurity.

7 Positive Impacts of AI/ML on Cybersecurity

Fraud and anomaly detection: This is the most common way AI tools are used in cybersecurity. Composite AI fraud-detection engines show excellent results in detecting complicated fraud patterns, and their advanced analytics dashboards provide comprehensive incident details. Fraud detection is an especially important area within the broader field of anomaly detection.
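As a minimal sketch of the anomaly-detection idea, the example below trains scikit-learn’s IsolationForest on synthetic “normal” transactions and flags an outlier. The features (amount, hour, merchant risk) are invented for illustration, not a real fraud schema.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.lognormal(3.5, 0.5, 1000),   # typical amounts
    rng.normal(14, 3, 1000),         # mostly daytime activity
    rng.uniform(0, 0.3, 1000),       # low-risk merchants
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score a suspicious transaction: huge amount, 3 a.m., risky merchant.
suspicious = np.array([[5000.0, 3.0, 0.9]])
print(model.predict(suspicious))  # -1 => flagged as an anomaly
```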

Email spam filtering: Learned filtering rules flag messages containing suspicious words and patterns, helping identify dangerous emails. Spam filters protect email users and reduce the time spent reviewing unwanted correspondence.
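A toy version of the classic ML approach behind spam filtering is a bag-of-words naive Bayes classifier; the sketch below uses scikit-learn with a handful of invented training emails.

```python
# Toy spam classifier: bag-of-words features + multinomial naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize, claim your reward now",
    "urgent: verify your account password immediately",
    "meeting moved to 3pm, agenda attached",
    "quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["claim your free reward"]))        # likely [1]
print(clf.predict(["draft agenda for the meeting"]))  # likely [0]
```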

Botnet detection: Supervised and unsupervised ML algorithms not only facilitate detection but also help prevent sophisticated bot attacks. They identify patterns in user behavior to catch previously unseen attacks while keeping the false-positive rate extremely low.
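As an unsupervised sketch (assuming scikit-learn and invented per-host flow statistics), clustering network-behavior features with DBSCAN leaves bot-like hosts outside the dense clusters of normal behavior:

```python
# Unsupervised botnet-detection sketch: cluster per-host network-flow
# statistics; hosts outside dense "normal" clusters are investigation
# candidates. All features and distributions are invented.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# [flows_per_minute, mean_bytes_per_flow, distinct_destinations]
normal_hosts = rng.normal([20, 800, 15], [5, 200, 4], size=(200, 3))
bot_hosts = rng.normal([300, 120, 400], [30, 20, 50], size=(5, 3))  # beaconing bots
X = StandardScaler().fit_transform(np.vstack([normal_hosts, bot_hosts]))

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)
print(np.where(labels == -1)[0])  # indices labeled as outliers (likely the bots)
```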

Vulnerability management: Vulnerabilities can be difficult to manage, whether manually or with conventional tools, but AI systems make the task easier. AI tools scan baseline user behavior, endpoints, servers, and even dark-web discussions to identify code vulnerabilities and predict attacks.
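One hypothetical way ML assists here is risk-based prioritization: scoring open vulnerabilities by predicted likelihood of exploitation. The sketch below is illustrative only; the features, training data, and CVE names are invented.

```python
# Hypothetical risk-prioritization sketch: rank open vulnerabilities by a
# model-predicted probability of exploitation.
from sklearn.linear_model import LogisticRegression

# Features: [cvss_score, exploit_code_public (0/1), asset_internet_facing (0/1)]
X_train = [[9.8, 1, 1], [7.5, 1, 0], [5.3, 0, 0], [4.0, 0, 1], [9.1, 0, 0], [6.5, 1, 1]]
y_train = [1, 1, 0, 0, 0, 1]  # 1 = later exploited in the wild (invented labels)

model = LogisticRegression().fit(X_train, y_train)

open_vulns = {"CVE-A": [9.0, 1, 1], "CVE-B": [6.1, 0, 0]}  # placeholder names
ranked = sorted(open_vulns,
                key=lambda cve: model.predict_proba([open_vulns[cve]])[0, 1],
                reverse=True)
print(ranked)  # patch the highest-risk entry first
```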

Anti-malware: AI helps antivirus software distinguish good files from bad ones and makes it possible to identify new forms of malware even if they’ve never been seen before. Fully replacing traditional techniques with AI-based methods can speed up detection, but it also increases the number of false positives; combining traditional methods with AI yields the strongest detection rates.
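A hybrid detector might check a known-bad signature set first and fall back to an ML classifier over static file features. This is a sketch under invented hashes and features (the one “known bad” hash below is simply the SHA-256 of an empty byte string, so the demo is self-contained):

```python
# Hybrid anti-malware sketch: fast signature lookup, then ML fallback.
import hashlib
from sklearn.ensemble import RandomForestClassifier

KNOWN_BAD_SHA256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

# Static features: [file_entropy, imports_count, is_packed (0/1)] -- invented
X_train = [[7.9, 3, 1], [7.5, 5, 1], [4.2, 120, 0], [5.1, 80, 0]]
y_train = [1, 1, 0, 0]  # 1 = malicious
ml_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def classify(content: bytes, features: list[float]) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_BAD_SHA256:            # fast signature path
        return "malicious (signature)"
    if ml_model.predict([features])[0] == 1:  # ML catches unseen variants
        return "malicious (ML)"
    return "clean"

print(classify(b"", [7.8, 4, 1]))  # empty input matches the sample hash
```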

Data leak prevention: AI helps recognize specific types of data in text and non-text documents. Trainable classifiers can be taught to recognize many kinds of sensitive information, and suitable recognition algorithms can search for that data in images, voice recordings, or videos.
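On the simpler, rule-based end of DLP, a regex pre-filter plus a Luhn checksum catches likely credit-card numbers with few false positives; real engines layer trained classifiers on top for free-form sensitive content. A self-contained sketch:

```python
# DLP sketch: regex pre-filter + Luhn checksum for credit-card numbers.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(number: str) -> bool:
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    hits = []
    for match in CARD_RE.finditer(text):
        candidate = re.sub(r"[ -]", "", match.group())
        if luhn_ok(candidate):  # checksum cuts false positives
            hits.append(candidate)
    return hits

print(find_card_numbers("Invoice: pay with 4111 1111 1111 1111 by Friday."))
# ['4111111111111111'] -- a well-known test number that passes Luhn
```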

SIEM and SOAR: Security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools can leverage ML to improve data automation and intelligence collection, identify suspicious behavior patterns, and automate responses based on that input.
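A hypothetical SOAR-style playbook step might map an ML-scored SIEM alert to an automated response. The alert schema and response actions below are invented; real platforms expose these through their own APIs.

```python
# Hypothetical SOAR-style playbook step: route an ML-scored alert to an
# automated response based on risk thresholds.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    rule: str
    risk_score: float  # e.g., from an ML behavior model, 0.0-1.0

def respond(alert: Alert) -> str:
    if alert.risk_score >= 0.9:
        return f"isolate-host {alert.source_ip}"  # contain immediately
    if alert.risk_score >= 0.6:
        return f"open-ticket {alert.rule}"        # route to human triage
    return "log-only"                             # below action threshold

print(respond(Alert("10.0.0.12", "impossible-travel-login", 0.93)))
# isolate-host 10.0.0.12
```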

AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analysis, and most of the technology areas outlined in Gartner’s Impact Radar for Security. In fact, it’s hard to imagine a modern security tool without some kind of AI/ML magic in it.

7 Negative Impacts of AI/ML on Cybersecurity

Collecting data: Using social engineering and other techniques, cybercriminals use ML to profile victims more precisely and then exploit that information to accelerate attacks. For example, in 2018, WordPress websites suffered mass ML-driven botnet infections that gave hackers access to users’ personal data.

Ransomware: Ransomware is experiencing an unfortunate renaissance. There are many examples of criminal success stories; one of the worst incidents resulted in the six-day shutdown of Colonial Pipeline and the payment of a $4.4 million ransom.

Spam, phishing, and spear phishing: ML algorithms can create fake messages that look like real ones and aim to steal users’ credentials. In a Black Hat presentation, John Seymour and Philip Tully described how an ML algorithm generated viral tweets containing fake phishing links that were four times more effective than human-crafted phishing messages.
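Defenders can turn the same ML machinery around. As a deliberately tiny, illustrative counterpart (not the attack technique itself), the sketch below trains a logistic-regression classifier on invented URL features to flag likely phishing links:

```python
# Toy phishing-URL classifier: the defensive flip side of ML-generated
# phishing. Feature choices and training URLs are invented.
from sklearn.linear_model import LogisticRegression

def url_features(url: str) -> list[float]:
    return [
        float(len(url)),                   # phishing URLs tend to be long
        float(url.count(".")),             # many subdomains
        float("@" in url or "-" in url),   # common obfuscation characters
        float(url.startswith("https")),    # legitimate sites mostly use TLS
    ]

train_urls = [
    ("http://secure-login.paypa1.example-verify.com/update", 1),
    ("http://account-verify.bank.example.ru/@login", 1),
    ("https://github.com/openssl/openssl", 0),
    ("https://docs.python.org/3/library/re.html", 0),
]
X = [url_features(u) for u, _ in train_urls]
y = [label for _, label in train_urls]

clf = LogisticRegression().fit(X, y)
print(clf.predict([url_features("http://login-verify.example-bank.com/@reset")]))
# likely [1] -- flagged as phishing
```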

Fakes: In voice phishing, scammers use ML-generated deepfake audio to launch more convincing attacks. Modern algorithms like Baidu’s “Deep Voice” need only a few seconds of a person’s voice to reproduce their speech, accent, and tone.

Malware: ML can help malware stay hidden by tracking node and endpoint behavior and generating patterns that mimic legitimate traffic on the victim’s network. It can also build a self-destruct mechanism into malware that increases the speed of an attack, and algorithms can be trained to exfiltrate data faster than a human could, making attacks much more difficult to stop.

Passwords and CAPTCHAs: Neural network-based software can now defeat CAPTCHA-style human-verification systems with ease. ML also lets cybercriminals analyze huge password datasets to guess passwords more effectively; PassGAN, for example, uses an ML algorithm to guess passwords more accurately than common password-cracking tools that rely on traditional techniques.
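The defensive flip side of the same statistics is guessability scoring. The sketch below fits a character-level Markov model on a tiny stand-in for a leaked-password corpus and scores how predictable a candidate password is, roughly the idea behind modern strength meters:

```python
# Character-level Markov model over leaked passwords: a higher
# log-probability means "more guessable". The corpus is a tiny stand-in.
import math
from collections import defaultdict

leaked = ["password", "password1", "letmein", "qwerty", "dragon123"]

counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip("^" + pw, pw + "$"):  # ^ / $ mark start / end
        counts[a][b] += 1

def guessability(pw: str) -> float:
    """Log-probability of pw under the bigram model (add-one smoothing)."""
    logp = 0.0
    for a, b in zip("^" + pw, pw + "$"):
        total = sum(counts[a].values())
        logp += math.log((counts[a][b] + 1) / (total + 95))  # ~95 printable chars
    return logp

print(guessability("password2"))   # high (predictable)
print(guessability("x9#Tq!vL2m"))  # much lower (hard to guess)
```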

Attacks on AI/ML itself: Misuse of the algorithms that lie at the heart of healthcare, the military, and other high-value sectors could lead to catastrophe. The Berryville Institute of Machine Learning’s Architectural Risk Analysis of Machine Learning Systems catalogs taxonomies of known attacks on ML and walks through an architectural risk analysis of ML algorithms. Security engineers need to learn how to secure ML algorithms at every stage of their lifecycle.
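To make the threat concrete, here is a minimal adversarial-evasion sketch in the spirit of the fast gradient sign method (FGSM): against a toy linear detector with made-up weights, a small targeted perturbation flips a “malicious” verdict to “benign”.

```python
# Minimal adversarial-evasion sketch (FGSM-style): a small, targeted input
# perturbation flips a linear detector's decision. Toy weights and features.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # weights of a "trained" linear detector
b = -0.2

def score(x: np.ndarray) -> float:
    """Sigmoid probability that x is malicious."""
    return float(1 / (1 + np.exp(-(w @ x + b))))

x = np.array([0.8, 0.1, 0.5])     # flagged as malicious (score > 0.5)
print(f"before: {score(x):.3f}")  # ~0.741

# The gradient of the score w.r.t. the input points along w, so stepping
# against sign(w) drives the score down -- the core idea behind FGSM.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)
print(f"after:  {score(x_adv):.3f}")  # ~0.366: the sample now evades detection
```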

It’s easy to see why AI/ML is attracting so much attention. The most effective way to combat these increasingly stealthy cyberattacks is to harness AI’s defensive potential. The corporate world needs to realize how powerful ML can be at detecting anomalies (e.g., in traffic patterns or human error). With the right countermeasures, potential damage can be prevented or drastically reduced.

Overall, AI/ML is of great value in protecting against cyber threats. Some governments and companies are using or discussing the use of AI/ML to combat cybercriminals. While concerns about privacy and ethics surrounding AI/ML are valid, governments must ensure that AI/ML regulations do not prevent companies from using AI/ML for protection. Because, as we all know, cybercriminals don’t follow the rules.

About the authors

Andrey Shklyarov joined DataArt in 2016 as Chief Compliance Officer. He has more than 25 years of experience in the IT industry. He started his career as a software developer and has since held many roles: leading projects, managing programs in the medical device industry, building quality and safety management systems, overseeing the adoption of agile methodologies, leading a software delivery center, and running a corporate compliance program. Andrey holds a master’s degree in Computer Science from Kharkiv National University of Radio Electronics.

Dmitry Vyrostkov joined DataArt in 2006 as a software developer and team leader. In 2012, he founded the DataArt Security Competence, a team of security experts that advises customers and helps DataArt’s development teams implement security best practices; in 2019, the group generated over $1 million in security services. Dmitry promotes the group’s services to internal and external audiences, coordinates sales activities, projects, and resources, and monitors the quality of services and deliverables. Before joining DataArt, he worked as a developer and team leader at Relex, one of the leading software development companies in Voronezh. Dmitry holds an MS degree in Applied Mathematics, Informatics and Mechanics from Voronezh State University.
