Name and qualification of the project proponent: 
sb_p_2495070
Year: 
2021
Abstract: 

In the past two decades, AI systems have reached unprecedented levels of performance and are being deployed to help us understand the challenges we face. Yet the adoption of modern AI systems in many domains is still limited by the low interpretability of machine predictions, especially those of deep neural networks. Current approaches to making DNN outputs interpretable focus on weak forms of explainability that are unsatisfactory in many fields. With this project we want to investigate the explainability potential of Rationalist Learning, a new machine learning paradigm so far used for theoretical inductive problems, which we intend to test on real-world problems such as process mining and object detection in computer vision. Such an approach would guarantee strong explainability and could unlock deep learning-based AI in many fields.

ERC: 
PE6_7
PE6_11
SH4_9
Research group members: 
sb_cp_is_3626595
Innovativeness: 

The time has come to reflect deeply on the topic of explainability in AI. This need has grown stronger lately, given the demands of many real-world scenarios and the black-box nature of increasingly widespread deep neural networks. This state of affairs offers the right opportunity to formalize and clarify what we mean by an "interpretable model".
We believe that this reflection cannot be carried out by information technology alone. We cannot stop at the creation of the explanation object: an explanation is not a one-sided process carried out exclusively by the machine, which is a problematic assumption of current methods. Every explanatory act is bilateral and involves two subjects: the one who provides the explanation and the one who interprets it. The success of the process cannot be separated from a correct interpretation. It is therefore necessary to enrich the IT point of view with a semiotic one, which prescribes the necessary conditions for an adequate interpretation.
The problem of explainability has the potential to become an interdisciplinary research field in its own right, with clear objectives close to the needs of today's society, reachable through a collaboration between scientists and humanists: computer scientists and semioticians. The development of Rationalist Learning techniques can be one of the first lines of research in this new field.

Call code: 
2495070
