Applications of analysis techniques developed in Particle Physics to medical data are increasingly frequent. Among these, Artificial Intelligence (AI) methods, and Deep Learning (DL) in particular, have proven extremely successful across a wide range of application areas, including medical ones. However, their "black box" nature is a barrier to their adoption in clinical practice, where interpretability is essential. To this aim, we propose to quantify the strengths and to highlight, and possibly resolve, the weaknesses of the available explainable AI methods in different applicative contexts. Indeed, one aspect that has so far hindered substantial progress towards explainable AI (xAI) is that a proposed solution, to be effective, usually needs to be tailored to a specific application and is not easily transferred to other domains. In ADeLE we will apply the same array of xAI techniques to several use cases, intentionally chosen to be heterogeneous with respect to the types of data, learning tasks, and scientific questions, in order to find solutions that are as general as possible. We will first benchmark the xAI algorithms on the segmentation of publicly available brain images, and then apply them to the selected use cases: estimating the probability of Spread Through Air Spaces (STAS) in pulmonary adenocarcinoma patients, to personalize treatment by suggesting the surgical resection depth in advance; improving the emulation of nuclear reaction models of interest for ion therapy; and, finally, exploring the possibility of improving the diagnosis of pulmonary diseases, such as the one caused by COVID-19, using ultrasound imaging.