Neural Network Interpretability in Bioinformatics
Member | Category |
---|---|
Aristidis Anagnostopoulos | Reference Tutor (Professor or Researcher affiliated with the same Department as the Proposer) |
Neural networks have long been treated as black boxes: models able to capture the highly nonlinear relationships that exist between inputs and the correct output, while offering no clear interpretation of how those relationships are learnt. However, feature importance and input correlation are crucial in domains such as bioinformatics and medicine, where researchers are interested not only in obtaining a correct output, but also in understanding why the network produces that output. In this spirit, the aim of this project is to explore techniques for neural network interpretability and to apply them in the domain of bioinformatics, particularly genomics. We hope this will lead to new insights in both fields, showing that deep learning can be an effective tool for genomics and that its results can be interpreted, meeting the needs of medical doctors and geneticists.
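To illustrate the kind of technique the project would explore, the following is a minimal sketch of one common interpretability method, gradient-times-input saliency, applied to a toy genomics setting. The proposal does not commit to any specific method or architecture; the PyTorch CNN, the task, and the data below are entirely hypothetical, chosen only to show how per-position feature importance can be read off a sequence model.

```python
# Minimal sketch: gradient-times-input saliency for a toy genomics CNN.
# The model, task, and data are illustrative, not the project's actual setup.
import torch
import torch.nn as nn

# Toy convolutional classifier over one-hot-encoded DNA (A, C, G, T -> 4 channels).
model = nn.Sequential(
    nn.Conv1d(in_channels=4, out_channels=16, kernel_size=8),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 1),  # single logit, e.g. "regulatory region" vs. not
)

# One random 100-bp sequence, one-hot encoded: shape (batch, channels, length).
seq = torch.zeros(1, 4, 100)
seq[0, torch.randint(0, 4, (100,)), torch.arange(100)] = 1.0

# Saliency: gradient of the output logit with respect to the input sequence.
seq.requires_grad_(True)
logit = model(seq)
logit.backward()

# Per-position importance: gradient magnitude at the observed base
# (gradient * input keeps only the channel of the base actually present).
saliency = (seq.grad * seq).abs().sum(dim=1).squeeze()  # shape (100,)
top_positions = saliency.topk(5).indices.tolist()
print("Most influential positions:", sorted(top_positions))
```

In a genomics application, positions with high saliency would be candidate motifs or variants driving the prediction, which is exactly the kind of interpretable output that could be handed to geneticists for validation.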