Neural Network Interpretability in Bioinformatics

Year
2021
Proponent: Andrea Mastropietro - Research Fellow
ERC subsector of the project proponent
PE6_7
Research group members
Member: Aristidis Anagnostopoulos
Category: Reference tutor (professor or researcher affiliated with the same Department as the Proponent)
Abstract

Neural networks have almost always been treated as black boxes: models able to capture the highly nonlinear relationships that exist among inputs and lead to a correct output, with no clear way to interpret how such relationships are learnt. However, feature importance and input correlation are crucial in certain domains, such as bioinformatics and medicine, in which researchers are interested not only in obtaining a correct output, but also in understanding why the network produces such results. In this spirit, the aim of this project is to explore techniques for neural network interpretability and to apply them in the domain of bioinformatics, particularly in genomics, hopefully leading to new insights in both the neural network and bioinformatics domains, showing that deep learning can be an effective tool for genomics and that the obtained results can be interpreted, fulfilling the needs of medical doctors and geneticists.
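To make the notion of feature importance concrete, the sketch below illustrates one common attribution technique, input-times-gradient saliency, on a hypothetical toy classifier; the model, the dimensions, and the "gene" framing are purely illustrative assumptions, not the method proposed in this project.

```python
# Minimal sketch of input-times-gradient attribution on a toy network.
# All names and sizes are hypothetical; this only illustrates the idea
# of per-feature importance scores for a neural network prediction.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "gene expression" input: 1 sample, 20 features (e.g. genes).
n_genes = 20
x = torch.randn(1, n_genes, requires_grad=True)

# Small feed-forward model standing in for a real bioinformatics classifier.
model = nn.Sequential(
    nn.Linear(n_genes, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

# Forward pass, then gradient of the output score with respect to the input.
score = model(x).squeeze()
score.backward()

# Input-times-gradient attribution: a rough per-feature importance score.
attribution = (x * x.grad).detach().squeeze()

# Rank features (genes) by absolute attribution and print the top five.
ranking = attribution.abs().argsort(descending=True)
for idx in ranking[:5]:
    print(f"gene_{idx.item():02d}: attribution = {attribution[idx].item():+.4f}")
```

In practice, methods of this family (saliency maps, integrated gradients, layer-wise relevance propagation) would be candidates to explore; the example above is only meant to show what an interpretable per-feature output could look like.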

ERC
PE6_7, PE6_13
Keywords:
ARTIFICIAL INTELLIGENCE, BIOINFORMATICS, NEURAL NETWORKS, DATA MINING
