Sunday, August 12, 2018

IBM embeds AI in scary new malware

By Nick Flaherty www.flaherty.co.uk

Researchers at IBM have embedded machine learning into malware to create a scary new class of attack tool.

Every new era of computing has served attackers with new capabilities and vulnerabilities. The PC era saw malware threats emerging from viruses and worms, and the security industry responded with antivirus software. In the web era, attacks such as cross-site request forgery (CSRF) and cross-site scripting (XSS) challenged web applications. Now, in the cloud, analytics, mobile and social (CAMS) era, advanced persistent threats (APTs) are at the top of the agenda.

The team at IBM Research has been looking at the shift to machine learning and AI as the next major progression in IT, pointing out that cybercriminals are also studying AI in order to use it to their advantage and weaponize it.

The team developed its own malware, called DeepLocker, to better understand how existing AI models can be combined with current malware techniques to create a particularly challenging new breed of highly targeted, evasive attack tool. This class of AI-powered malware conceals its intent until it reaches a specific victim, unleashing its malicious action as soon as the AI model identifies the target through indicators such as facial recognition, geolocation or voice recognition.

DeepLocker is designed to be stealthy: it can spread to millions of systems undetected, lying dormant until the precise moment it recognizes its specific target.

In the 1990s, malware authors started to encrypt the malicious payload (using so-called packers), such that the malicious code would only be observable when it was decrypted into memory before its execution. The security industry responded with dynamic malware analysis, building initial versions of malware sandboxes, such as virtualized systems, in which suspicious executables (called samples) are run, their activities monitored and their nature deemed benign or malicious.
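
The packer idea is simple to sketch. Below is a minimal, deliberately benign illustration, assuming a toy XOR cipher, a hard-coded key and a harmless print statement as the "payload" — none of which match any real packer's scheme. The principle it shows is the one that defeated static scanning: nothing readable on disk, plaintext only in memory.

```python
# A benign sketch of a 1990s-style packer: the payload is stored encrypted
# and only exists in plaintext in memory at run time. The XOR cipher,
# hard-coded key and print() payload are illustrative assumptions.

KEY = 0x5A

def pack(payload: bytes) -> bytes:
    """Encrypt the payload so static scanners never see it in the clear."""
    return bytes(b ^ KEY for b in payload)

def run_packed(packed: bytes) -> None:
    """Decrypt into memory immediately before execution."""
    plaintext = bytes(b ^ KEY for b in packed)
    exec(plaintext.decode())  # stands in for jumping to the unpacked code

run_packed(pack(b"print('benign stand-in for a payload')"))
```
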
The first forms of evasive malware that actively avoid analysis were later captured in the wild. For example, the malware used checks to identify whether it was running in a virtualized environment and whether other processes known to run in malware sandboxes were present. If any were found, the malware would stop executing its malicious payload in order to avoid analysis and keep its secrets encrypted. This approach is still prevalent today: a recent study found that 98 percent of the malware samples analyzed use evasive techniques to varying extents.
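
Those environment checks are equally easy to sketch. The indicators below — a few hostname hints and VirtualBox's default MAC prefix — are assumptions chosen for illustration; each real malware family carries its own list.

```python
# A hedged sketch of classic sandbox evasion: look for artifacts of a
# virtualized analysis environment and, if any are found, exit before the
# payload is ever decrypted. Indicator values are illustrative assumptions.
import platform
import uuid

SANDBOX_HOSTNAME_HINTS = ("sandbox", "analysis", "malware")
VBOX_MAC_OUI = 0x080027  # VirtualBox's default MAC address vendor prefix

def looks_like_sandbox() -> bool:
    if any(hint in platform.node().lower() for hint in SANDBOX_HOSTNAME_HINTS):
        return True
    if uuid.getnode() >> 24 == VBOX_MAC_OUI:  # top 24 bits = vendor OUI
        return True
    return False

if looks_like_sandbox():
    raise SystemExit  # keep the payload encrypted and reveal nothing
```
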
Although malware evasion keeps evolving, even very recent forms of targeted malware require predefined triggers that defenders can expose by examining the code, the packed code, configuration files or network activity. All of these triggers are observable to a skilled malware analyst with the appropriate tools.
DeepLocker takes a fundamentally different approach from any other current evasive and targeted malware: it hides its malicious payload in a benign carrier application, such as video-conferencing software, to avoid detection by most antivirus and malware scanners.

What is unique about DeepLocker is that its use of AI makes the “trigger conditions” that unlock the attack almost impossible to reverse engineer. The malicious payload is only unlocked if the intended target is reached, which DeepLocker achieves using a deep neural network (DNN) AI model.

The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying specific victims. Only then does the neural network produce the “key” needed to unlock the attack. DeepLocker can leverage several attributes to identify its target, including visual, audio, geolocation and system-level features. Because it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, it is extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target. In short, a stealthy, targeted attack needs to conceal two main components — the trigger condition(s) and the attack payload — and DeepLocker hides both.

DeepLocker leverages the “black-box” nature of the DNN AI model to conceal the trigger condition: a simple “if this, then that” check is transformed into a deep convolutional network that is very hard to decipher. In addition, the concealed trigger condition itself is converted into a “password” or “key” that is required to unlock the attack payload.
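
IBM has not published DeepLocker's internals, but the key-derivation idea can be sketched. The assumption below — not the researchers' actual scheme — is that the key is derived by quantizing and hashing the embedding vector a face-recognition DNN produces, so only an input close enough to the target's face reproduces the exact key; the XOR "cipher" and the magic-prefix check are further stand-ins.

```python
# Illustrative sketch (not IBM's published code): derive the payload key
# from a DNN's output so the trigger condition never appears as plaintext.
import hashlib

def derive_key(embedding: list[float]) -> bytes:
    # Quantize each embedding dimension to one bit, so nearby embeddings of
    # the same face map to identical bits, then hash the bits into a key.
    bits = "".join("1" if v > 0.0 else "0" for v in embedding)
    return hashlib.sha256(bits.encode()).digest()

def try_unlock(ciphertext: bytes, embedding: list[float]):
    # XOR stands in for a real cipher; a known prefix stands in for
    # authenticated decryption. Wrong face -> wrong key -> gibberish.
    key = derive_key(embedding)
    plaintext = bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))
    return plaintext if plaintext.startswith(b"PAYLOAD") else None
```

An analyst who recovers the model and the ciphertext still faces the problem described above: the hash cannot be inverted, and the space of candidate inputs — every possible face — cannot be enumerated.
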

Technically, this method allows three layers of attack concealment. That is, given a DeepLocker AI model alone, it is extremely difficult for malware analysts to figure out what class of target it is looking for. Is it after people’s faces or some other visual clues? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload?



To demonstrate the implications of DeepLocker’s capabilities, the researchers designed a proof of concept in which they camouflaged a well-known ransomware (WannaCry) in a benign video-conferencing application so that it remained undetected by malware analysis tools, including antivirus engines and malware sandboxes. As the triggering condition, they trained the AI model to recognize the face of a specific person; seeing that face unlocks the ransomware and executes it on the system.

Imagine that this video-conferencing application is distributed and downloaded by millions of people, a plausible scenario nowadays on many public platforms. When launched, the app would surreptitiously feed camera snapshots into the embedded AI model while otherwise behaving normally for all users. Only when the intended victim sits in front of the computer and uses the application would the malicious payload be secretly executed, the victim’s face serving as the preprogrammed key that unlocks it.
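
That scenario reduces to a periodic check, sketched below reusing the hypothetical try_unlock() from the earlier sketch; capture_frame(), face_model() and run_payload() are stubs standing in for a webcam grab, the embedded DNN and the payload launcher.

```python
# Hedged sketch of the carrier app's hidden check; none of this is
# DeepLocker's actual code.
def capture_frame():
    """Stub: would grab a webcam snapshot."""
    return None

def face_model(frame) -> list[float]:
    """Stub: would run the embedded face-recognition DNN on the frame."""
    return [0.0] * 128

def run_payload(payload: bytes) -> None:
    """Stub: would hand the decrypted payload to the OS."""

def tick(encrypted_payload: bytes) -> None:
    # Called periodically from the app's normal event loop: test the current
    # frame, stay silent on a miss, fire only when the target's face appears.
    embedding = face_model(capture_frame())
    payload = try_unlock(encrypted_payload, embedding)
    if payload is not None:
        run_payload(payload)
```
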

This is important as any number of AI models could be plugged in to find the intended victim, and different types of malware could be used as the “payload” that is hidden within the application.
While a class of malware like DeepLocker has not been seen in the wild to date, these AI tools are publicly available, as are the malware techniques being employed — so it’s only a matter of time before these tools are used by cybercriminals, if they are not already.

IBM Research has been studying AI-powered attacks and identified several new traits compared to traditional attacks. In particular, the increased evasiveness of AI-powered attacks challenges traditional rule-based security tools. AI can learn the rules and evade them. Moreover, AI enables new scales and speeds of attacks by acting autonomously and adaptively.

The team recommends focusing on the use of AI in detectors; going beyond rule-based security with reasoning and automation to enhance the effectiveness of security teams; and using cyber deception to misdirect and deactivate AI-powered attacks.

Additionally, it would be beneficial to monitor and analyze how apps behave across user devices, flagging events in which an app takes unexpected actions. This detection tactic could help identify such attacks in the future; a simple sketch of the idea follows.
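
The sketch below assumes a learned baseline of per-app actions; the event names and in-memory baseline store are invented for demonstration, and real endpoint tools would hook OS-level telemetry instead.

```python
# Illustrative behavioral monitoring: learn what actions an app normally
# performs, then flag anything outside that baseline.
from collections import defaultdict

baseline: defaultdict[str, set[str]] = defaultdict(set)

def learn(app: str, action: str) -> None:
    """Record an action as part of the app's normal behavior."""
    baseline[app].add(action)

def observe(app: str, action: str) -> None:
    """Flag any action not seen during the learning phase."""
    if action not in baseline[app]:
        print(f"ALERT: {app} performed unexpected action: {action}")

learn("videoconf.exe", "open_camera")
learn("videoconf.exe", "send_network_traffic")
observe("videoconf.exe", "open_camera")          # normal, no alert
observe("videoconf.exe", "encrypt_user_files")   # flagged
```
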
