AI: A Passage for Hackers

If nothing else worries you, consider a world where AI runs the hacking. In 2016, data scientists trained an ML-powered program to mimic human users tweeting around the hashtag #Pokemon, to demonstrate how internet users can be tricked by software that understands language. To assess how convincing the ML-powered software was, it sent each target a benign message containing a link; close to one-third of the targeted people clicked it. That is far higher than the 5% to 10% success rate typical of ‘robotic’ phishing messages, which try to lure users into clicking malicious links that deploy malware, steal sensitive information, or compromise computer systems and networks. The AI-powered system comes close to the roughly 40% hit rate of spearphishing messages crafted for a particular person. “Whereas spearphishing is manually handcrafted and takes a couple of minutes per target, the ML-trained approach is nearly as accurate yet it is automated. What if it could be deployed at a larger scale?” says John Seymour, a top data expert at ZeroFOX.

Thankfully, this was just an experiment, but it shows that bad actors can leverage AI for their malicious activities. In fact, they may already be using it, although this has not yet been proven. In July 2017, hundreds of top cybersecurity experts met in Las Vegas, Nevada to discuss the matter and the threats that emerging technologies are posing to the industry. In a mini poll held during the conference, participants were asked whether they expect threat actors to use AI in the future. An astounding 62% affirmed that they expect hackers to deploy AI for offensive purposes in the coming years. “Bad actors have been applying AI as ammunition for quite some time. It makes absolute sense, since threat actors have a challenge of scale: attacking as many computer users as possible, launching many threats and hitting as many victims as they can, while simultaneously trying to minimize risk,” says Brian Wallace, a senior data expert at Cylance Inc. AI and ML technologies, he added, can help decide who, when, and what to attack.

Ways in which hackers use artificial intelligence to attack computer users

Generally, when we discuss artificial intelligence (AI) and machine learning (ML), we tend to focus on the positive side: how these technologies help us fight cyber threats. For instance, AI-powered technologies help users and organizations stay safe by blocking cyber threats like ransomware, malware, and phishing, and by detecting vulnerabilities. Nevertheless, there is another side to these technologies: a dark side, where hackers leverage them to create more advanced threats. Threat actors have devised more sophisticated techniques to spoof, spy, commit fraud, and cause damage using artificial intelligence, as discussed in this post.
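
As a small illustration of that defensive side, here is a minimal sketch of a “good ML” text classifier that flags phishing-style wording. The handful of sample messages, the TF-IDF plus Naive Bayes pipeline, and the tiny training set are all illustrative assumptions; real filters are trained on millions of labelled emails.

```python
# Minimal sketch: a toy "good ML" phishing-text classifier.
# The sample messages and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, hand-made training set (assumption: real systems use millions of labelled emails).
messages = [
    "Your account has been locked, verify your password here immediately",
    "Congratulations, you won a prize, click this link to claim it",
    "Invoice attached, please open the document to confirm payment",
    "Meeting moved to 3pm, see you in the usual room",
    "Here are the notes from yesterday's project review",
    "Lunch on Friday? Let me know if that works for you",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features plus Naive Bayes: a classic, simple text-classification baseline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Score a new, unseen message and print the estimated phishing probability.
incoming = ["Urgent: confirm your password to avoid account suspension"]
print(model.predict_proba(incoming)[0][1])
```

Naive Bayes over TF-IDF features is only a baseline, but it shows how little code is needed to put ML on the defensive side of the fight described below.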

  1. Use of AI in social engineering 

Cybercriminals use social engineering techniques to trick and persuade internet users into disclosing valuable, sensitive data or taking specific actions, such as making fraudulent money transfers, downloading malicious files, or installing malware on their computers. Machine learning amplifies what attackers can do by enabling them to gather critical information about organizations, employees, and business associates quickly and easily. In other words, ML makes the reconnaissance behind social engineering faster and more precise.

  2. Use of AI to spread spam, phishing, and spearphishing attacks

Spam, phishing, and spearphishing are cybersecurity threats that prey on human inattention and gullibility. Machine learning is leveraged in such instances to train AI to craft messages that closely resemble genuine ones. For instance, hackers may use AI algorithms to learn the patterns of automated messages sent by companies such as Motorola or Microsoft and then design fake emails that resemble the real ones.

  3. Use of AI in spoofing and impersonation

Hackers use spoofing and impersonation tactics to pose as known brands, organizations, or people in order to scam other brands, organizations, or people. Using AI technologies, cybercriminals can analyze many aspects of a target in depth. For instance, a hacker may impersonate the CEO of a reputable company, send malicious emails to customers, and profit from the scheme. To do so, the hacker can leverage AI algorithms to learn how that CEO writes and then create fake texts, fake voice recordings, or even fake photos and videos.

  4. Use of AI to spread ransomware, spyware, and malware

Cybercriminals use ransomware, spyware, and other malware to steal, modify, compromise, or damage critical data and infrastructure. Often, these threats are spread through malicious email attachments and links. Hackers use AI and ML algorithms to develop increasingly clever malware that can adapt to, or ‘dodge’, even advanced protection systems. Data experts regard this phenomenon as a battle between bad ML and good ML.
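
To make the “good ML” half of that battle concrete, below is a minimal sketch of an unsupervised anomaly detector watching simple per-process behaviour features; the feature choices and every number in it are invented for illustration and are not drawn from any real detection product.

```python
# Minimal sketch of the defensive ("good ML") side: an unsupervised anomaly detector
# over toy per-process features. All numbers below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed features per process: [files touched per minute, outbound connections, CPU %].
normal_behaviour = np.array([
    [3, 1, 5], [4, 0, 7], [2, 2, 6], [5, 1, 4], [3, 1, 8], [4, 2, 5],
])

# Fit the detector on behaviour considered normal; contamination is a guessed tuning knob.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_behaviour)

# A ransomware-like burst (many file writes, unusual network activity) stands out.
suspicious = np.array([[400, 30, 95]])
print(detector.predict(suspicious))  # -1 means "anomalous", 1 means "looks normal"
```

The appeal of this behavioural approach is that it does not need a signature for a specific malware family: anything that deviates sharply from the learned baseline gets flagged, which is exactly what evasive, ML-assisted malware tries to avoid.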

  5. Use of AI for vulnerability discovery

Hackers are increasingly using AI algorithms to discover bugs and flaws in software and computer systems quickly and efficiently. Vulnerabilities that would take human attackers months or years to find can be identified in minutes using AI-powered hacking techniques.
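
To give a feel for automated bug hunting without touching any real target, here is a minimal sketch of the simplest possible fuzzing loop run against a deliberately buggy toy parser; both the parser and the purely random input strategy are assumptions for illustration, and far cruder than the ML-guided fuzzers this section alludes to.

```python
# Minimal sketch: brute-force fuzzing of a deliberately buggy toy parser.
# Real AI-assisted fuzzers learn which inputs are promising; this one is purely random.
import random
import string

def toy_parser(data: str) -> int:
    """Deliberately buggy example target: crashes on one specific rare input pattern."""
    if len(data) > 3 and data[0] == "A" and data[1] == "B":
        raise ValueError("parser crash: unhandled 'AB' prefix")
    return len(data)

random.seed(0)
for attempt in range(100_000):
    # Generate a short random input; smarter fuzzers mutate known-good samples instead.
    candidate = "".join(random.choices(string.ascii_uppercase, k=5))
    try:
        toy_parser(candidate)
    except ValueError as crash:
        print(f"crash found after {attempt + 1} attempts with input {candidate!r}: {crash}")
        break
```

An ML-guided fuzzer would replace the random generator with a model that learns which inputs are most likely to reach unexplored code paths, which is what makes vulnerability discovery so much faster than blind guessing.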

  6. Use of AI in CAPTCHAs and passwords

AI algorithms are being used to break CAPTCHAs and passwords. ML enables hackers to train bots that can bypass these security barriers. AI-enabled techniques also help attackers guess login credentials, making brute-force attacks more effective.

  7. Use of AI in bots and automation

AI and ML enable hackers to automate parts or entire phases of an attack. For instance, DDoS attacks are executed using networks of compromised ‘zombie’ machines, known as botnets, which often rely on algorithms to coordinate the attack and increase its severity.

Conclusion 

This post is not meant to discourage anyone or sell fear, but to inform internet users, organizations, and anyone interested that AI and ML are also employed by cybercriminals to advance their malicious activities. At the same time, these same technologies power some of the best cybersecurity tools and solutions for detecting, preventing, mitigating, and removing cyber threats.
