Realizable Universal Adversarial Perturbations for Malware

Published in arXiv preprint arXiv:2102.06747, 2022

Machine learning classification models are vulnerable to adversarial examples: effective input-specific perturbations that can manipulate the model's output. Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of such adversarial examples. Although UAPs have been explored in application domains beyond computer vision, little is known about their properties and implications in the specific context of realizable attacks, such as malware, where attackers must reason about satisfying challenging problem-space constraints. In this paper, we explore the challenges and strengths of UAPs in the context of malware classification. We generate sequences of problem-space transformations that induce UAPs in the corresponding feature-space embedding and evaluate their effectiveness across threat models that consider varying degrees of realistic attacker knowledge. Additionally, we propose adversarial training-based mitigations using knowledge derived from the problem-space transformations, and compare them against alternative feature-space defenses. Our experiments limit the effectiveness of a white-box Android evasion attack to approximately 20%, at the cost of 3% true positive rate (TPR) at a 1% false positive rate (FPR). We additionally show how our method can be adapted to more restrictive application domains such as Windows malware.
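
To make the feature-space notion of a UAP concrete, the sketch below computes a single perturbation that pushes many malware feature vectors across the decision boundary of a toy linear detector. This is an illustrative assumption rather than the paper's algorithm: the paper searches for sequences of problem-space (software) transformations whose feature-space effect approximates such a perturbation, and the data, classifier, and L-infinity budget `eps` here are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all values hypothetical): malware feature vectors in R^50 and a
# fixed linear detector f(x) = sign(w.x + b), where +1 means "malware".
d = 50
X = rng.normal(loc=0.5, size=(200, d))
w = rng.normal(size=d)
w /= np.linalg.norm(w)
b = 1.0 - float(np.median(X @ w))        # place the boundary so most samples are flagged
X = X[(X @ w + b) > 0]                   # keep only samples detected as malware

def evasion_rate(delta):
    """Fraction of malware samples pushed to the benign side by delta."""
    return float(np.mean((X + delta) @ w + b <= 0))

eps = 1.5                                # L_inf budget for the universal perturbation
delta = np.zeros(d)
for _ in range(10):                      # a few passes over the data
    for x in X:
        score = (x + delta) @ w + b
        if score > 0:                    # still detected: take the minimal step across
            delta = delta - (score + 1e-3) * w    # ||w|| = 1, so this crosses the boundary
            delta = np.clip(delta, -eps, eps)     # project back onto the L_inf ball
    if evasion_rate(delta) > 0.9:
        break

print(f"Evasion rate of the universal perturbation: {evasion_rate(delta):.2%}")
```

Because the same `delta` is reused for every sample, one successful search amortizes across the whole input distribution, which is what makes UAP-style attacks attractive to attackers operating at scale.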

Download article

Recommended citation: R. Labaca-Castro, L. Muñoz-González, F. Pendlebury, G. Dreo Rodosek, F. Pierazzi, L. Cavallaro: Realizable Universal Adversarial Perturbations for Malware. arXiv preprint arXiv:2102.06747, February 02, 2022.