Artificial intelligence (AI) and machine learning (ML) methods are receiving growing attention given their potential impact on diverse areas of the life sciences. In the forensic field, ML techniques have mainly been used in the area of neuroprediction, raising ethical and legal concerns about their questionable fairness, accountability and transparency. Although algorithmic risk assessments can be perceived as a means to overcome human bias, they often reflect prejudice and institutionalized bias, as AI is generally trained on data that may themselves reflect evaluators' biases. The majority of studies conducted on this topic so far have failed to isolate precisely which factors may be causing the bias, probably due to their naturalistic designs.
To overcome these ethical issues, this highly interdisciplinary project aims to extend a serious-gaming-based software platform (developed in a previous project) able to learn forensic decision strategies and identify cognitive biases via synthetic cases (Virtual Defendants).
Our unique team comprises forensic psychiatrists, coroners and a computer-science PhD student (plus two senior AI experts, Mancini and Tronci, as external members), allowing us to exploit:
1) Retrospective data available in 500 forensic psychiatric reports prepared in criminal proceedings.
2) Expert knowledge for all the relevant forensic domains. Expertise on forensic experts' decision-making and the insanity defense will be covered by a post-doc position.
3) Further expert knowledge acquired through synthetic cases (Virtual Defendants) generated by our AI-powered serious gaming platform.
Our envisioned computational approach builds on the experience in this area gained through coordinating the PAEON FP7 project (paeon.di.uniroma1.it; PI: Tronci; key person: Mancini), where computerized decision strategies for complex assisted-reproduction protocols were formalised and offered to practitioners within a decision support system.