TriggerScope and LAVA at Oakland 2016
Triggered malware detection and large-scale bug injection

We have two papers at Oakland this year: TriggerScope, a technique for static detection of triggered malicious behavior in Android applications, and LAVA, a technique for injecting bugs into programs at scale.
Detecting Triggered Malware
Malware detection has been an area of intense research interest for quite some time, and many techniques have been proposed to tackle it. These approaches are often based on dynamic analysis: the sample runs inside an instrumented sandbox that collects and processes its execution traces for further analysis, e.g., behavioral detection. In the Android world, many dynamic sandboxes are in popular use, including Google’s Bouncer and Andrubis.
However, malware authors have responded by making detection more difficult through evasion and anti-debugging techniques. One class of evasion technique is to perform malicious activity only when a specific trigger condition is satisfied, e.g., at a specific time or place, or on receipt of a specific message. This can make dynamic detection extremely difficult, because the analysis must supply inputs that satisfy a narrow condition in order to observe the malicious behavior at all.
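To make this concrete, here is a minimal sketch of a time-based trigger (a classic logic bomb). It is written in C for brevity; real Android malware is of course Java code, and every name below is purely illustrative rather than taken from an actual sample:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for a security-sensitive operation
   (e.g., uploading the user's contacts to a remote server). */
static void exfiltrate_contacts(void) {
    puts("uploading contacts...");
}

/* The sensitive operation is guarded by a narrow predicate over the
   current date, so a sandbox run on any other day sees nothing
   suspicious and the sample appears benign. */
static void maybe_misbehave(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    if (t->tm_year + 1900 == 2016 && t->tm_mon + 1 == 5 && t->tm_mday == 23) {
        exfiltrate_contacts();
    }
}

int main(void) {
    maybe_misbehave();
    return 0;
}
```

Unless the sandbox happens to execute the sample on exactly that date, dynamic analysis observes only benign behavior.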
In contrast, a static analysis has the advantage that it can, in principle, reason about all possible executions over all possible inputs. The catch is that malicious behavior is difficult to define in a general way, and this is where triggered behavior becomes a very useful heuristic for identifying evasive malice.
TriggerScope is a static analysis for Android applications that builds on these two observations. It identifies control dependencies between suspicious trigger checks and security-sensitive operations, allowing it to flag likely malice that would be difficult to detect either dynamically or with prior static techniques. In our evaluation, we found that TriggerScope was able to automatically detect real instances of triggered malware, including RCSAndroid.
Injecting Bugs?
It might initially strike you as counter-intuitive that adding bugs to software could be useful for improving security. However, consider that while significant effort has been invested in developing techniques for finding vulnerabilities in software, it is still difficult to rigorously evaluate how effective these techniques are, and even more difficult to compare different techniques against one another.
A major reason for this difficulty is the scarcity of data sets with ground truth for the types and locations of vulnerabilities. To remedy this, we propose LAVA, a technique for injecting large numbers of bugs into source code automatically. LAVA uses dynamic taint analysis to identify DUAs (dead, uncomplicated, and available portions of input data) that can be repurposed as triggers for injected bugs. Our evaluation shows that LAVA is capable of injecting bugs in large numbers, and an initial comparison of vulnerability detection techniques on the injected bugs shows that much work remains in this field.
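To give a flavor of the idea, here is a simplified C sketch of the general pattern: a few input bytes the program does not otherwise use (a DUA) are captured where they are available, and later, at an attack point, compared against a magic value to conditionally corrupt memory. The generated code in the paper differs in its details; the names and the specific corruption below are only an illustration of the pattern:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative global standing in for the data-flow plumbing that
   carries the DUA from where it is seen to where it is used. */
static uint32_t dua_val = 0;

/* At the site where the DUA bytes are available (e.g., while parsing
   an input file), siphon off a copy. The bytes are "dead" (otherwise
   unused), "uncomplicated" (not transformed), and "available" here. */
static void record_dua(const unsigned char *input_bytes) {
    memcpy(&dua_val, input_bytes, sizeof dua_val);
}

/* Later, at an attack point, the DUA conditionally corrupts memory.
   Only an input carrying the magic value triggers the bug. */
static void attack_point(char *buf, size_t len) {
    size_t off = len - 1;
    if (dua_val == 0x6c617661u) {   /* "lava" as an arbitrary magic value */
        off += dua_val;             /* out-of-bounds write when triggered */
    }
    buf[off] = '\0';
}

int main(void) {
    unsigned char input[4] = {0, 0, 0, 0};  /* benign input: no magic bytes */
    char buf[16];

    record_dua(input);                /* taint-identified DUA site */
    attack_point(buf, sizeof buf);    /* attack point; safe for this input */
    return 0;
}
```

Because each injected bug is guarded by a known value at a known source location, the resulting corpus comes with exact ground truth: where the bug lives and an input that triggers it.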
We’re very excited by the possibilities of injecting bugs at scale, and believe that there is much more that can be done with this capability!