Investigating Labelless Drift Adaptation for Malware Detection

Zeliang Kan, Feargus Pendlebury, Fabio Pierazzi, Lorenzo Cavallaro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The evolution of malware has long plagued machine learning-based detection systems, as malware authors develop innovative strategies to evade detection and chase profits. This induces concept drift as the test distribution diverges from the training distribution, causing performance decay that requires constant monitoring and adaptation.

In this work, we analyze the adaptation strategy used by DroidEvolver, a state-of-the-art learning system that self-updates using pseudo-labels to avoid the high overhead associated with obtaining new ground truth. After removing sources of experimental bias present in the original evaluation, we identify a number of flaws in the generation and integration of these pseudo-labels, leading to a rapid onset of performance degradation as the model poisons itself. We propose DroidEvolver++, a more robust variant of DroidEvolver, to address these issues and to highlight the role pseudo-labels play in mitigating concept drift. We test the tolerance of the adaptation strategy against different degrees of pseudo-label noise and propose the adoption of methods to ensure that only high-quality pseudo-labels are used for updates.

Ultimately, we conclude that the use of pseudo-labeling remains a promising solution to limitations on labeling capacity, but great care must be taken when designing update mechanisms to avoid negative feedback loops and self-poisoning, which have catastrophic effects on performance.
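
To make the idea of confidence-gated pseudo-label updates concrete, the following is a minimal illustrative sketch in Python. It is not the authors' DroidEvolver or DroidEvolver++ mechanism; the model choice (a scikit-learn SGDClassifier), the confidence threshold, and the synthetic data stream are all assumptions made purely for illustration. The point it demonstrates is the general pattern the abstract describes: a detector updates itself on its own predictions, but discards low-confidence pseudo-labels to limit self-poisoning.

# Illustrative sketch of confidence-gated pseudo-label self-updating.
# Assumptions (not from the paper): a linear classifier trained with SGD,
# a fixed 0.9 confidence cut-off, and a synthetic drifting data stream.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy initial labeled data, standing in for an initial malware/goodware training set.
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_train, y_train, classes=np.array([0, 1]))

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cut-off for "high-quality" pseudo-labels


def self_update(model, X_batch, threshold=CONFIDENCE_THRESHOLD):
    """Update the model on its own predictions, keeping only confident ones.

    Low-confidence predictions are discarded rather than fed back into the
    model, which is the kind of gating intended to limit self-poisoning.
    """
    proba = model.predict_proba(X_batch)
    confidence = proba.max(axis=1)
    keep = confidence >= threshold
    if keep.any():
        pseudo_labels = proba.argmax(axis=1)[keep]
        model.partial_fit(X_batch[keep], pseudo_labels)
    return int(keep.sum())


# Simulated stream of unlabeled test batches, drifting slightly away from training.
for t in range(5):
    X_batch = rng.normal(loc=0.1 * t, size=(50, 10))
    used = self_update(model, X_batch)
    print(f"batch {t}: updated on {used}/{len(X_batch)} confident pseudo-labels")

Without the threshold, every prediction (including wrong ones) would be fed back as training signal, which is the negative feedback loop the abstract warns against.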
Original language: English
Title of host publication: AISec '21
Subtitle of host publication: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security
Publisher: ACM
Pages: 123-134
Number of pages: 12
Publication status: Published - 15 Nov 2021
