Advancing Deep Learning in medical applications through Explainability (ADeLE)

Year
2020
Principal Investigator: Carlo Mancini Terracciano - Researcher
ERC subfield of the project proponent
PE2_2
Research group members

Member | Category
Stefano Giagu | Permanent member of the research group
Marco Anile | Permanent member of the research group

Member | Position | Institution | Category
Cecilia Voena | Researcher | INFN | Other affiliated personnel (Sapienza or external), holder of a research fellowship
Barbara Caccia | Research Director | Istituto Superiore di Sanità | Other affiliated personnel (Sapienza or external), holder of a research fellowship
Silvia Pozzi | Researcher | Istituto Superiore di Sanità | Other affiliated personnel (Sapienza or external), holder of a research fellowship
Abstract

Applications of analysis techniques developed in Particle Physics to medical data are increasingly frequent. Among these, Artificial Intelligence (AI) methods, and Deep Learning (DL) in particular, have proven extremely successful across a wide range of application areas, including medical ones. However, their "black box" nature is a barrier to their adoption in clinical practice, where interpretability is essential. To address this, we propose to quantify the strengths and to highlight, and possibly overcome, the weaknesses of the available explainable AI (xAI) methods in different applicative contexts. Indeed, one aspect that has so far hindered substantial progress towards xAI is that the proposed solutions, to be effective, usually need to be tailored to a specific application and are not easily transferred to other domains. In ADeLE we will apply the same array of xAI techniques to several use cases, intentionally chosen to be heterogeneous with respect to data types, learning tasks, and scientific questions, in order to find solutions that are as general as possible. We will first benchmark the xAI algorithms on the segmentation of publicly available brain images; we will then apply them to the selected use cases, which are: estimating the probability of Spread Through Air Spaces (STAS) in pulmonary adenocarcinoma patients, to personalize treatment by suggesting the surgical resection depth in advance; improving the emulation of nuclear reaction models of interest for ion therapy; and, finally, exploring the possibility of improving the diagnosis of pulmonary diseases, such as the one caused by COVID-19, using ultrasound.
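As a minimal illustration of the class of post-hoc xAI techniques the project plans to benchmark (and not of the specific ADeLE pipeline), the sketch below computes a gradient-based saliency map for a hypothetical PyTorch classifier applied to a single-channel image, e.g. one slice of a brain scan. The model, input shapes, and variable names are placeholders introduced only for this example.

    import torch
    import torch.nn as nn

    # Hypothetical small CNN standing in for the DL model under study.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 2),
    )
    model.eval()

    # Placeholder single-channel 64x64 image (e.g. one slice of a brain MRI).
    image = torch.rand(1, 1, 64, 64, requires_grad=True)

    # Saliency map: gradient of the top class score with respect to the input
    # pixels; large absolute values mark pixels the prediction depends on most.
    scores = model(image)
    scores[0, scores.argmax()].backward()
    saliency = image.grad.abs().squeeze()   # 64x64 relevance map

    print(saliency.shape, float(saliency.max()))

Relevance maps of this kind are one type of xAI output whose faithfulness and transferability across the heterogeneous use cases would be assessed, alongside other explanation methods.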

ERC
PE2_2, PE6_11, LS7_3
Keywords:
MEDICAL PHYSICS, MACHINE LEARNING, DIAGNOSTIC IMAGING, NUCLEAR MEDICINE AND RADIOTHERAPY, BIG DATA
