Towards Secure Machine Learning Systems

Year
2020
Proposer: -
Structure
ERC subsector of the project proposer
PE6_5
Research group members
Member / Category
Luigi Vincenzo Mancini / Reference supervisor
Abstract

Algorithmic and hardware advancements in recent years have all but removed barriers to entry for the adoption of machine learning (ML). As a result, ML-based applications have surged in popularity, with dedicated hardware embedded in widespread consumer goods such as smartphones. Machine learning already enables many important applications, such as image recognition and natural language processing, and is the basis for near-future technologies such as automated driving.
While the uses and benefits of ML are clear, the security of the underlying algorithms is still not well understood. In recent years, researchers have shown that ML algorithms are more fragile than previously thought, demonstrating, for instance, how to force models to misclassify inputs, or how to violate users' privacy by tricking collaborative learning algorithms. Given the critical role of ML models in sensitive tasks, these attacks can lead to disastrous consequences (consider, for instance, forcing an automated car to misclassify a pedestrian as a shadow, or a STOP sign as a priority-road sign). In light of the increasingly widespread adoption of ML in critical applications and our scarce understanding of its security, further research in this area is clearly needed. This project aims to investigate the robustness of ML algorithms, exploring novel avenues of attack as well as countermeasures to increase the reliability of ML applications.
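To make the notion of forcing a model to misclassify an input concrete, the sketch below shows a toy adversarial-example attack in the spirit of the fast gradient sign method. The linear classifier, its weights, and the perturbation budget `eps` are all hypothetical illustrations, not part of this project's proposed methods: for a linear score w·x, shifting each feature by eps against sign(w) lowers the score by eps per feature, enough to flip the predicted class of a nearby input.

```python
import numpy as np

# Hypothetical linear classifier: the class is the sign of w . x.
w = np.array([2.0, -1.0])

def predict(x):
    return 1 if w.dot(x) > 0 else -1

x = np.array([1.0, 1.0])   # clean input, classified as +1 (score = 1.0)
eps = 0.5                  # small perturbation budget per feature

# FGSM-style step for a linear model: nudge every feature by eps
# in the direction that decreases the class score, i.e. -sign(w).
x_adv = x - eps * np.sign(w)

print(predict(x))      # +1 on the clean input
print(predict(x_adv))  # -1: the small shift flips the prediction
```

For deep networks the same idea applies, with sign(w) replaced by the sign of the loss gradient with respect to the input; the perturbation can remain imperceptibly small while still changing the predicted label.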

ERC
PE6_5, PE6_7, PE6_11
Keywords:
PRIVACY AND SECURITY, ARTIFICIAL INTELLIGENCE, NEURAL NETWORKS

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma