Explainable Artificial Intelligence to learn and standardise decision making in safety-critical clinical domains, with applications to psychotropic drug-based treatments of mental illness during pregnancy and lactation and criminal forensic psychiatry

Year
2021
Principal Investigator: Toni Mancini - Associate Professor
ERC subsector of the Principal Investigator
PE6_7
Research group members
Member | Category
Giovanna Parmigiani | PhD student/research fellow/resident, non-structured member of the research group
Gloria Angeletti | Structured participant in the research project
Emanuele Panizzi | Structured participant in the research project
Enrico Tronci | Structured participant in the research project
Enrico Elio Del Prato | Structured participant in the research project
Member | Position | Department | Category
Lavinia De Chiara | M.D., PhD student | Department of Neurosciences, Mental Health and Sensory Organs, Sapienza | Other aggregate personnel (Sapienza or external), research scholarship holder
Enrico Bassetti | PhD student | Department of Computer Science, Sapienza Università di Roma | Other aggregate personnel (Sapienza or external), research scholarship holder
Abstract


Making the right decision in safety-critical clinical domains is a complex task of the utmost importance.

For example, choosing psychotropic drugs for mentally ill pregnant or lactating women (our first application) requires the careful weighing of many factors. A wrong choice can have severe consequences, e.g., embryo malformations, suicide, or infanticide.
Similarly, in the forensic field (our second application), psychiatric evaluations of a defendant's criminal accountability or social dangerousness involve a multi-factor assessment of the subject. A wrong assessment could result in the criminal punishment (e.g., imprisonment) of a mentally ill patient who deserves psychiatric care or, conversely, in the psychiatric treatment of a defendant who should be punished.

Despite their critical nature, such decisions are made by experts, usually working individually, in a context marked by a general lack of standards.

This project aims to support the quantitative modelling and validation both of the distinctive strategies of individual experts and of those capturing the aspects most agreed upon by the entire experts' panel.

Such strategies will be exploited to:

1. Analyse in silico the similarities and differences among our experts' decisions, support evidence-based discussion, and promote uniformity and self-awareness of personal biases;

2. Support the design and prioritisation of ethically principled prospective trials (e.g., experimental strategy vs. treatment as usual) to assess the quality of the learnt strategies, supporting the emergence of best practices and gold standards;

3. Power Decision Support Systems (DSSs) for evidence-based decision making in these fields.

Our hybrid approach combines knowledge-based AI, knowledge engineering, and machine learning with active and interactive learning to generate synthetic (i.e., virtual) patients, plus gamification to leverage the several domain experts in our interdisciplinary team.

ERC
PE6_7
Keywords:
ARTIFICIAL INTELLIGENCE, PSYCHIATRY, CLINICAL PSYCHOLOGY, FORENSIC PSYCHOLOGY

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma