Name and qualification of the project proponent: 
sb_p_2154486
Year: 
2020
Abstract: 

Algorithmic and hardware advancements in recent years have all but removed the barriers to entry for the adoption of machine learning (ML). As a result, ML-based applications have surged in popularity, with dedicated hardware now embedded in widespread consumer goods such as smartphones. Machine learning already enables many important applications, such as image recognition, natural language processing and learning systems, and forms the basis for near-future technologies such as automated driving.
While the uses and benefits of ML are clear, the security of the underlying algorithms is still not well understood. In recent years, researchers have shown that ML algorithms are more fragile than previously thought, demonstrating, for instance, how to force models to misclassify inputs or how to invade users' privacy by tricking collaborative learning algorithms. Given the critical role of ML models in sensitive tasks, these attacks can lead to disastrous consequences (consider, for instance, forcing an automated car to misclassify a pedestrian as a shadow, or a STOP sign as a priority sign). Given the increasingly widespread adoption of ML in critical applications and our scarce understanding of its security, further research in this area is clearly needed. This project aims to investigate the robustness of ML algorithms, exploring novel avenues of attack as well as countermeasures to increase the reliability of ML applications.

ERC: 
PE6_5
PE6_7
PE6_11
Research group members: 
sb_cp_is_2732575
Innovativeness: 

Given the ubiquity of ML in computer security and critical real-life applications, the proposed research will have a considerable impact. Evasion attacks and adversarial examples can have profound consequences when used against critical applications such as ML-based malware detection or self-driving cars. Current research on the subject suffers from drawbacks and limitations, both in terms of existing attacks and of existing countermeasures. In particular, many state-of-the-art evasion attacks such as [12,16], while extremely effective, alter input data to achieve evasion without considering potential behavioural requirements. For instance, when evading an ML-based malware detector, there are constraints on which modifications of the input are allowed: since the goal is to modify a malware sample so that it is classified as benign by the ML detector, the main constraint is that the modifications must not alter its intended behaviour (i.e., it must still perform its malicious tasks). Current state-of-the-art evasion techniques and their countermeasures do not consider such constraints on the input, with the exception of [15]. However, even in [15] the behaviour is modelled in a simplistic way that is only applicable to the restricted context considered in the paper. The general problem of evading ML-based malware detection is still an open research question, as is the design of appropriate countermeasures against such attacks.

While the lack of a general approach for modifying malware to evade detection could appear to be a positive result, it also means that no valid countermeasure against such an attack has been designed yet, and that all currently deployed systems would be vulnerable. In my research, I aim to prove the feasibility of such general evasion attacks, as well as to propose robust countermeasures. I plan to investigate the applicability of mimicry attacks to this particular category of ML evasion attacks. Mimicry attacks attempt to disguise malicious activity by mimicking the behaviour of benign applications. Applying mimicry attacks in the context of ML evasion should make it possible to preserve the required behaviour of the sample while leaving enough room for modifications that evade classification. In this context, Generative Adversarial Networks (GANs) are a good candidate for automating the task of modifying the malware sample until evasion is achieved.
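
To make the idea concrete, the following Python sketch illustrates a mimicry-style evasion under stated assumptions: the feature space is entirely synthetic, a LogisticRegression model stands in as a surrogate for the real detector, and a greedy loop replaces the GAN-based search; the names mimicry_evasion and the chosen feature indices are hypothetical and for illustration only. The key point it demonstrates is that features are only added, never removed, so the malicious behaviour of the sample is preserved by construction.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary feature vectors (e.g., imported APIs / permissions): 1 = feature present.
rng = np.random.default_rng(0)
n_features = 100
X_benign = (rng.random((500, n_features)) < 0.3).astype(int)
X_benign[:, 50:65] = (rng.random((500, 15)) < 0.9).astype(int)   # features typical of benign apps
X_malware = (rng.random((500, n_features)) < 0.3).astype(int)
X_malware[:, :5] = 1                                             # features typical of malware
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)                              # 0 = benign, 1 = malware

surrogate = LogisticRegression(max_iter=1000).fit(X, y)          # surrogate ML detector

def mimicry_evasion(sample, detector, max_additions=30):
    """Greedily switch on the features that push the detector's score towards 'benign'
    (features are only added, never removed, so the sample's behaviour is preserved)."""
    sample = sample.copy()
    candidates = np.argsort(detector.coef_[0])    # most benign-indicative features first
    for f in candidates[:max_additions]:
        if detector.predict(sample.reshape(1, -1))[0] == 0:
            break                                 # evasion achieved
        sample[f] = 1
    return sample

original = X_malware[0]
evasive = mimicry_evasion(original, surrogate)
print("label before:", surrogate.predict(original.reshape(1, -1))[0])   # expected: 1 (malware)
print("label after :", surrogate.predict(evasive.reshape(1, -1))[0])    # ideally: 0 (benign)

In the actual research, the generator of a GAN would replace the greedy loop, and the additive feature modifications would correspond to concrete, semantics-preserving transformations of the malware binary.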
Furthermore, after proving the feasibility and dangers of evading ML-based malware detection, my research will focus on practical countermeasures. The specifics of the countermeasures will depend heavily on the details of the attack, but some promising approaches are ensemble models, adversarial boosting [13] and distillation [17]. Using an ensemble of models would help prevent targeted evasion attacks against single ML detectors, requiring an attacker to find an evasive sample that forces the majority of the models in the ensemble to misclassify. Adversarial boosting can be used to augment the training set of ML detectors with evasive samples, decreasing the likelihood of misclassification at runtime. Finally, distillation has already been used in the context of deep neural networks to mitigate the effects of input perturbations designed to force the network to misclassify a sample; it would be interesting to study how distillation affects malware detection evasion attacks.
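
As a complementary sketch, the following Python example illustrates the adversarial boosting idea on the same kind of synthetic data as above; the add_benign_features transformation is a hypothetical stand-in for a real evasive modification. Evasive variants of known malware are appended to the training set, still labelled as malware, so that after retraining the same modifications no longer evade the detector.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_features = 100
X_benign = (rng.random((500, n_features)) < 0.3).astype(int)
X_benign[:, 50:65] = (rng.random((500, 15)) < 0.9).astype(int)   # features typical of benign apps
X_malware = (rng.random((500, n_features)) < 0.3).astype(int)
X_malware[:, :5] = 1                                             # features typical of malware
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)                              # 0 = benign, 1 = malware

def add_benign_features(samples):
    """Evasive variants: benign-typical features are switched on, nothing is removed."""
    variants = samples.copy()
    variants[:, 50:65] = 1
    return variants

detector = LogisticRegression(max_iter=1000).fit(X, y)
evasive = add_benign_features(X_malware)
print("evasion success rate before boosting:", 1 - detector.predict(evasive).mean())

# Adversarial boosting: append the evasive variants, still labelled as malware, and retrain.
X_boosted = np.vstack([X, add_benign_features(X_malware)])
y_boosted = np.concatenate([y, np.ones(len(X_malware), dtype=int)])
boosted = LogisticRegression(max_iter=1000).fit(X_boosted, y_boosted)
print("evasion success rate after boosting :", 1 - boosted.predict(evasive).mean())

An ensemble countermeasure could be prototyped in the same setting by training several heterogeneous detectors and taking a majority vote, so that an evasive sample must mislead most of them at once; distillation instead requires a neural detector and is left to the later stages of the project.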

Call code: 
2154486
