Secure Machine Learning Systems
| Member | Category |
|---|---|
| Luigi Vincenzo Mancini | Reference tutor (Professor or Researcher affiliated with the same Department as the Proposer) |
The widespread adoption of Machine Learning (ML) has profoundly altered the computer science industry in recent years. Novel, advanced algorithms and cheap, powerful hardware make it easy even for smaller companies to adopt ML-based solutions and to ship ML models in widespread consumer goods such as smartphones. The success of machine learning is undeniable: ML models already outperform humans in tasks such as image recognition and voice recognition, as well as in complex games such as chess and Go. On the back of these successes, ML, and Neural Networks (NN) in particular, is currently being applied to critical applications such as automated driving, autonomous systems (e.g., drones, infrastructure monitoring), and cybersecurity.
While the benefits of ML are clear, there is still uncertainty about the security and suitability of ML techniques in critical applications. Indeed, researchers have repeatedly shown that ML models, and NNs in particular, are subject to a wide variety of attacks, with effects ranging from reduced effectiveness to complete destabilization of the system the NN is managing. These adversarial attacks can force models to misclassify inputs, resulting in wrong predictions or recognition (e.g., a car mistaking a pedestrian for a shadow); leak private information about users (e.g., in collaborative learning scenarios); poison the model and introduce triggers that enable stealthy attacks (e.g., poisoning of an ML-based cybersecurity system); and much more.
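To make the first class of attacks concrete, the sketch below shows a minimal untargeted evasion attack in the style of the Fast Gradient Sign Method (FGSM), which perturbs an input along the sign of the loss gradient to flip the model's prediction. This is an illustrative assumption, not a method specified in this proposal: the PyTorch model, random input, and epsilon value are all placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example by perturbing x in the direction
    of the sign of the loss gradient (FGSM-style, untargeted)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small, often imperceptible perturbation can change the prediction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demonstration on a randomly initialized classifier (placeholder setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # stand-in for an input image
y = model(x).argmax(dim=1)     # the model's clean prediction
x_adv = fgsm_attack(model, x, y)
print("clean prediction:", y.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Even in this toy setting, the attacker needs no knowledge beyond gradient access, which is why evasion attacks of this kind are a central concern for NNs deployed in critical systems.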
Given the increasingly widespread adoption of ML in critical systems and our limited understanding of its security in several key application scenarios, additional research in this direction is needed. This project aims to investigate the applicability of Neural Networks to critical systems such as cybersecurity, while taking into account their robustness to attacks.