IBM Research and the Looming Threat of AI-Powered Cyber-Attacks
IBM Research has a long history of studying how emerging technologies affect the threat landscape in order to determine what cybersecurity countermeasures might be needed to combat them; DeepLocker, its AI-powered malware Proof of Concept (PoC), falls within this history. AI has been rapidly democratized over the past few years as many AI tools have been open-sourced. Many people have started to leverage these tools, including “the bad guys”, who have been studying how to weaponize AI to their advantage.
The Challenges of Addressing AI-Powered Attacks
As Stoecklin himself says, even the AI community doesn’t fully understand how AI really works; with millions of parameters inside neural networks, securing AI is a huge challenge. Stoecklin is certain, however, that the security industry “needs to be more conscious that these AI attacks will be arriving”. Many security tools deployed today continue to rely on rules and signatures. The AI-powered attacks of the future will easily be able to learn those rules and bypass them. The security industry needs to move away from these legacy rules-based systems and rapidly embrace new techniques and technologies, such as cyber-deception. According to Stoecklin, “AI will need to be used to combat the AI-powered attacks coming”.
Existing AI Cybersecurity Methods
There is a range of existing AI cybersecurity methods, including Support Vector Machines, Random Forests and Neural Networks. A problem most of these share is that it is typically hard to understand how they make decisions and which actions those decisions trigger. For malware analysts this is problematic, as it means the program logic is not visible through analysis of the code.
At the recent Black Hat security conference, IBM Research presented DeepLocker to the cybersecurity community, demonstrating how the PoC can use AI techniques such as face recognition or speaker verification to identify a specific target system before unleashing its attack.
What is DeepLocker?
DeepLocker is IBM-developed malware that, by demonstrating the potential of AI-embedded attacks, opens up new possibilities for cyber-protection. It uses AI to conceal its malicious nature inside popular applications until it reaches a particular target, which it identifies through indicators such as facial recognition, voice recognition or geolocation. DeepLocker is unique because its use of AI makes it extremely difficult for security experts to determine the target ahead of time. The target unknowingly triggers the release of malicious code hidden in what appear to be safe carrier apps. In effect, a sniper-like, AI-fueled attacker keeps a careful eye on the target’s movements and launches its attack only at the very last moment. The stealthy PoC is primarily intended to demonstrate to the security community that it needs to prepare now for a new level of AI-powered attacks.
In a conversation with Sara Peters, Senior Editor at Dark Reading, at the recent Black Hat security conference, Marc Ph. Stoecklin, Principal Research Scientist and Manager of Cognitive Cybersecurity Intelligence at IBM Research, described the concept behind DeepLocker.
“We developed DeepLocker to demonstrate how existing AI technologies – open-source, available – can be combined very easily with existing malware attack types and patterns that are already being seen in the wild to create a new type of highly invasive and targeted attack”. DeepLocker indeed uses AI to conceal malicious content in innocent-looking applications in a sophisticated manner, withholding the payload until it reaches its target.
DeepLocker deploys a Deep Neural Network (DNN) AI model to conceal its attack payload in seemingly benign carrier apps, such as video conferencing applications. The payload is only unlocked if the intended target is reached.
Malware concealment is nothing new. In the late 80s and early 90s, the first polymorphic and metamorphic viruses were constructed to disrupt and destroy data. Malware authors were able to obfuscate and mutate payloads and thus evade antivirus systems. The antivirus industry gradually built up static code analysis and malware-analysis capabilities to combat this kind of attack.
In the 90s, malware authors began using encryption techniques, and the security industry responded with the countermeasure of dynamic malware analysis. Attackers then developed the first forms of evasive malware seen in the wild, i.e., malware that actively tries to elude detection. This approach continues to be prevalent.
Malware evasion has taken on increasingly sophisticated forms and now targets highly specific endpoints. The Stuxnet worm, for example, was built to seek out industrial control systems (ICS) from a single manufacturer running specific hardware and software configurations.
Nonetheless, however sophisticated malware evasion has become, cybersecurity defenders have to date been able to stay ahead of attackers and expose them by examining the code, packed code, configuration files or network activity.
The New Kinds of Danger that DeepLocker Reveals
DeepLocker is designed to be stealthy and fly under the radar, avoiding detection until the precise moment it identifies its target. Like nation-state malware, this type of malware is particularly dangerous because it could infect millions of systems while eluding detection. Unlike nation-state malware, it is a plausible threat in the civilian and commercial worlds.
At the Black Hat presentation, the IBM team pointed out that even though this kind of malware has not yet been seen in the wild, the tools to construct it are readily available, as are the malware techniques. “In fact,” said IBM, “we would not be surprised if this type of attack were already being deployed.”
In an August 8th blog post preceding the Black Hat presentation of DeepLocker, Stoecklin describes how this makes DeepLocker unique in the history of malware concealment, putting it into a class of its own.
“What is unique about DeepLocker is that the use of AI makes the ‘trigger conditions’ to unlock the attack almost impossible to reverse engineer”, writes Stoecklin. “The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model.” He adds, “As it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, this method would make it extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target.”
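The core idea behind this locking mechanism can be illustrated in a few lines of code. The sketch below is a deliberately simplified model, not IBM’s actual implementation: a hash function stands in for the DNN (in DeepLocker, a real neural network produces a stable output only for the intended target’s attributes, such as a face embedding), and a toy XOR cipher stands in for a proper encryption scheme. The point it demonstrates is the one Stoecklin describes: the decryption key is derived from the model’s output, so it is never stored in the binary and cannot be enumerated by an analyst.

```python
import hashlib

def target_embedding(features: bytes) -> bytes:
    # Stand-in for the DNN: in DeepLocker this would be a neural network
    # whose output is stable for the intended target's attributes and
    # unpredictable for anyone else's.
    return hashlib.sha256(b"model-weights|" + features).digest()

def derive_key(embedding: bytes) -> bytes:
    # The key is derived from the model output, never stored directly,
    # so static analysis of the binary cannot recover it.
    return hashlib.sha256(embedding).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only (XOR is its own inverse).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Attacker" side: lock the payload against the intended target's features.
target_features = b"alice-face-embedding"  # hypothetical target attributes
locked = xor_crypt(b"malicious payload",
                   derive_key(target_embedding(target_features)))

# "Victim" side: the payload only unlocks when observed features match.
def try_unlock(observed: bytes) -> bytes:
    return xor_crypt(locked, derive_key(target_embedding(observed)))

print(try_unlock(b"bob-face-embedding"))    # garbage bytes: wrong target
print(try_unlock(b"alice-face-embedding"))  # b'malicious payload'
```

Because the only way to recover the key is to present the exact inputs the model was trained to recognize, an analyst who cannot guess the target’s attributes cannot extract the payload, which is precisely what makes the trigger conditions so hard to reverse engineer.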
To demonstrate DeepLocker’s potential, the researchers presented a PoC in which WannaCry ransomware was concealed in a video conferencing app. The malware eluded detection by both antivirus engines and sandboxing.
The same warning has been voiced at a global level. At the Munich Security Conference earlier this year, Kersti Kaljulaid, President of Estonia, predicted a 50% chance that by the middle of the 21st century an AI system will exist that is capable of launching a lethal attack. First, “we need to understand the risks — what we’re afraid of,” said Kaljulaid, highlighting three of them: someone using AI disruptively; intelligence becoming widespread; and AI depleting energy reserves.
“Right now we know we want to give systems some right of auto decision-making when they have the necessary information to react,” Kaljulaid said. But once that is accomplished, “then we have the responsibility” to establish standards: the ability to apply reporting requirements to AI systems, or even to shut down systems that are deemed threatening. “The kind of standards gradually being put in place for cybersecurity need to apply to the AI world in exactly the same way,” she said.
Industry Response to DeepLocker
Hauke Gierow, Manager of Security Communications at G DATA Software, wrote a strongly worded blog post in response to IBM’s AI malware PoC. While conceding that the IBM researchers’ contribution in developing DeepLocker was “very valuable” in demonstrating the new arsenal of AI-powered attacks, Gierow also said, “the effects of these innovations must also be put into perspective”. He stresses that malware evading analysis “is not a new phenomenon by any stretch” and that the detection capabilities of IT security solutions, including G DATA’s own, have improved dramatically over recent years. Ralf Benzmüller, Executive Director of G DATA SecurityLabs, is quoted as pointing out: “It is possible to recognize the AI-based procedures and to create signatures for them. Modern security solutions [such as G DATA Total Security] also increasingly rely on behavior-based detection methods that would easily recognize DeepLocker.”
The G DATA Behavior Blocker works by checking whether specific suspicious actions are taking place on the device. In the case of ransomware, for instance, the software can detect whether a process has deleted a large number of shadow copies, which can otherwise be used to recover deleted data. When a process suddenly begins to encrypt lots of data without prior user input, the software will either stop the process or ask the user whether they really intend to encrypt the data.
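The logic of such behavior-based detection can be sketched schematically. The example below is an invented illustration, not G DATA’s implementation: the event names (`delete_shadow_copy`, `encrypt_file`, `user_input`) and thresholds are assumptions chosen to mirror the two heuristics described above, namely mass shadow-copy deletion and bulk encryption without user interaction.

```python
from dataclasses import dataclass

# Assumed thresholds, for illustration only.
SHADOW_COPY_LIMIT = 3     # shadow-copy deletions before flagging
ENCRYPT_BURST_LIMIT = 20  # files encrypted without user input before flagging

@dataclass
class ProcessState:
    shadow_deletes: int = 0
    files_encrypted: int = 0
    user_initiated: bool = False

def check_event(state: ProcessState, action: str) -> bool:
    """Update per-process counters and return True if behavior looks suspicious."""
    if action == "delete_shadow_copy":
        state.shadow_deletes += 1
    elif action == "encrypt_file":
        state.files_encrypted += 1
    elif action == "user_input":
        state.user_initiated = True
    return (state.shadow_deletes >= SHADOW_COPY_LIMIT
            or (state.files_encrypted >= ENCRYPT_BURST_LIMIT
                and not state.user_initiated))

# Simulated event stream: a few shadow-copy deletions followed by a burst
# of encryption with no user interaction (classic ransomware behavior).
state = ProcessState()
events = ["delete_shadow_copy"] * 2 + ["encrypt_file"] * 25
verdicts = [check_event(state, e) for e in events]
print(verdicts[-1])  # True: bulk encryption without user input was flagged
```

The key design point, and the reason Benzmüller argues such methods would catch DeepLocker, is that detection here depends on what the payload does once unlocked, not on recognizing the concealed code itself.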
Gierow concludes by stating, “the procedure shown is not a fundamental problem for the security industry”. He further quotes Benzmüller for added reassurance, “Should such malware actually appear in the wild, we can also detect it with customized signatures or new behaviour-based rules. In the G DATA SecurityLabs we have been using Machine Learning and Artificial Intelligence for years to detect harmful files. But you don’t need a new AI engine to fend off AI-based attacks.”
Similarly, Ilia Kolochenko, CEO at High-Tech Bridge, does not believe that DeepLocker is undetectable. “We are still pretty far from AI/ML hacking technologies that can outperform the brain of a criminal hacker. Of course, cybercriminals are already actively using machine learning and big data technologies to increase their overall effectiveness and efficiency. But,” he told Security Week, “it will not invent any substantially new hacking techniques or something beyond a new vector of exploitation or attack as all of those can be reliably mitigated by the existing defense technologies. Moreover, many cybersecurity companies also start leveraging machine learning with a lot of success, hindering cybercrime. Therefore, I see absolutely no reason for panic today.”