Using AI-enhanced malware, researchers disrupt algorithms used in antimalware


Tech pundits and government sources alike are encouraged by the potential of artificial intelligence (AI) to enhance many aspects of cybersecurity, especially its ability to perceive and learn, which should finally allow defenders to act proactively rather than reactively.

However, those keeping an eye on the dark side of the internet are deeply concerned that, seeing the benefits, cybercriminals are already fully invested in AI and are hard at work incorporating machine learning into their malware platforms.

SEE: Cybersecurity in 2017: A roundup of predictions (Tech Pro Research)

The dark side of AI

The 2016 Business Insider article Artificial intelligence-powered malware is coming and it is going to be terrifying offers insight into how AI will benefit cybercriminals. In the article, columnist Rob Price interviews Dave Palmer, a seasoned cybersecurity expert and director of technology at Darktrace, a company known for its AI-based cybersecurity platforms. Palmer seems convinced that it is only a matter of time, if it has not already happened, before AI-supported malware makes its debut.

Ransomware is one area, Palmer suggests, where AI will be a huge benefit to cybercriminals. He explains to Price that having machine intelligence allows smart ransomware to coordinate with other instances of ransomware and attack in concert, overloading a victim’s defenses.

Palmer also believes using AI will make it easier for cybercriminals to ransom IoT-style devices: “I’m convinced, we’ll see the extortion of assets as well as data—factory equipment, MRI scanners in hospitals, and retail equipment—stuff that you would pay to have back online because you cannot function as a business without it. Data is one thing and you can back that up, but if your machine stops working, then you are not making any money.”

Phishing is another area where Palmer feels online fraudsters will benefit from using AI. AI platforms capable of mimicking a person's writing style already exist; Palmer suggests it will not be long before the online criminal element develops AI-based malware designed to rifle through a target's emails and documents, learn the victim's writing style, and use that knowledge to craft phishing correspondence indistinguishable from the real thing.

SEE: How AI-powered cyberattacks will make fighting hackers even harder (ZDNet)

Why not attack cybersecurity directly?

One area of likely interest to cybercriminals that Price and Palmer did not discuss involves attacking AI algorithms directly—in particular, ones associated with production cybersecurity platforms. For that discussion, we turn to Weiwei Hu and Ying Tan, researchers at Peking University’s School of Electronics Engineering and Computer Science.

In the introduction of their research paper Generating Adversarial Malware Examples for Black Box Attacks Based on GAN (PDF), Hu and Tan write, “Most researchers focused their efforts on improving the detection performance of such algorithms [algorithms that augment malware detection], but ignored the robustness of these algorithms.”

“Many machine learning algorithms are very vulnerable to intentional attacks,” add the researchers. “Machine-learning based malware detection algorithms cannot be used in real-world applications if they are easily to be bypassed by some adversarial techniques.”

Hu and Tan came to this conclusion based on research by Szegedy et al., who were able to bypass malware-detection algorithms using altered inputs (adversarial examples) that maximized the classifier's errors, leaving the detection algorithm unable to spot the malware.
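To give a rough sense of the idea, the sketch below perturbs a malware feature vector so that a toy logistic-regression "detector" flips its verdict. The detector, its weights, and the feature values are all made up for illustration, and the signed gradient step shown here is only in the spirit of the adversarial examples Szegedy et al. describe, not their exact procedure; real malware features also cannot be changed arbitrarily without breaking the program, which is part of what motivates Hu and Tan's approach.

```python
# Hypothetical sketch: nudge a malware feature vector so a toy
# logistic-regression "detector" misclassifies it. Weights and
# features are invented for illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy detector: score > 0.5 means "malware".
w = np.array([2.0, -1.0, 3.0, 0.5])   # assumed learned weights
b = -1.0

x = np.array([1.0, 0.0, 1.0, 1.0])    # feature vector of a malware sample
print("original score:", sigmoid(w @ x + b))      # ~0.99 -> flagged

# The gradient of the "malware" score with respect to the input tells us
# which direction to move each feature to lower the score.
s = sigmoid(w @ x + b)
grad = s * (1 - s) * w
x_adv = x - 0.8 * np.sign(grad)       # small signed step against the gradient

print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.33 -> slips past
```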

The two researchers then built on the work of Szegedy et al. by proposing a generative neural network that alters original malware samples to produce adversarial examples. Hu and Tan explain, “The intrinsic non-linear structure of neural networks enables them to generate more complex and flexible adversarial examples to fool the target model.”
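A minimal sketch of such a generator is shown below, assuming the binary feature vectors (for example, which API calls a program imports) that the paper works with; the layer sizes and dimensions are assumptions, not values from the paper. The key design point is that the generator only adds features to the original sample, never removes them, so the malware keeps functioning.

```python
# Hypothetical MalGAN-style generator sketch: takes a malware feature
# vector plus random noise and proposes extra features to ADD.
# Dimensions and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

FEATURE_DIM = 128   # e.g. one bit per API call the program imports (assumed)
NOISE_DIM = 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM + NOISE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, FEATURE_DIM),
            nn.Sigmoid(),            # per-feature probability of adding it
        )

    def forward(self, malware, noise):
        out = self.net(torch.cat([malware, noise], dim=1))
        # Only add features (element-wise max with the original), never
        # remove them, so the sample's malicious functionality is preserved.
        return torch.maximum(malware, out)

gen = Generator()
malware = torch.randint(0, 2, (1, FEATURE_DIM)).float()   # toy binary sample
noise = torch.rand(1, NOISE_DIM)
adv = gen(malware, noise)   # continuous while training; binarized at test time
```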

The researchers named the generative neural network MalGAN and then worked out how to train its generator to create adversarial examples able to fool malware detectors. “Experimental results show that the generated adversarial examples are able to effectively bypass the malware detector,” explain Hu and Tan.
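The training loop below continues the generator sketch above (reusing gen, FEATURE_DIM, and NOISE_DIM) to show the black-box setup: a substitute detector is fitted to the target detector's verdicts, and the generator is then trained to drive that substitute's "malware" score toward zero. The data source training_batches and the function black_box_predict are placeholders, not code from the paper.

```python
# Hypothetical training sketch: the generator learns to fool a substitute
# detector that mimics the black-box malware detector's labels.
# training_batches and black_box_predict are placeholders.
import torch
import torch.nn as nn

substitute = nn.Sequential(                 # stand-in for the real detector
    nn.Linear(FEATURE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(substitute.parameters(), lr=1e-3)
bce = nn.BCELoss()

for malware, benign in training_batches:    # assumed batches of feature vectors
    # 1) Fit the substitute to the black box's verdicts on current samples
    #    (black_box_predict is assumed to return 0/1 labels as a float tensor).
    adv = gen(malware, torch.rand(len(malware), NOISE_DIM)).detach()
    samples = torch.cat([adv, benign])
    labels = black_box_predict(samples)
    opt_d.zero_grad()
    bce(substitute(samples), labels).backward()
    opt_d.step()

    # 2) Train the generator to push the substitute's "malware" score to 0.
    adv = gen(malware, torch.rand(len(malware), NOISE_DIM))
    opt_g.zero_grad()
    bce(substitute(adv), torch.zeros(len(malware), 1)).backward()
    opt_g.step()
```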

The MalGAN generator controls the probability distribution of the adversarial examples. By retraining MalGAN, malware authors can change that distribution often enough that the malware detector can neither keep up nor learn stable malware patterns from it.

The explanation is a bit simplistic, but if history is any indication, the bad guys are already developing algorithms similar to MalGAN that create adversarial examples capable of bypassing malware-detection algorithms—more or less using AI to defeat AI.


