
Explainable inference on sequential data via memory-tracking

In this paper we present a novel mechanism to obtain explanations that help us better understand network predictions when dealing with sequential data. Specifically, we adopt memory-based networks (Differentiable Neural Computers) to exploit their capability of storing data in memory and reusing it for inference. By tracking both the memory access at prediction time and the information stored by the network at each step of the input sequence, we can retrieve the most relevant input

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma
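The tracking idea sketched in the abstract can be illustrated with a minimal, hypothetical example: if we record how strongly each input timestep wrote to each memory slot, and which slots the network reads from at prediction time, we can score each input timestep by how much of its written content is read back. The function and array names below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of memory-tracking for explainability.
# write_weights[t, s] = how strongly input timestep t wrote to memory slot s
# read_weights[s]     = read attention over slots at prediction time
# Relevance of timestep t = sum over slots of (write strength * read strength).

def input_relevance(write_weights: np.ndarray, read_weights: np.ndarray) -> np.ndarray:
    """Score each input timestep by how much the prediction reads what it wrote."""
    return write_weights @ read_weights

# Toy example: 4 input steps, 3 memory slots.
write_weights = np.array([
    [0.9, 0.1, 0.0],   # step 0 mostly wrote slot 0
    [0.0, 0.8, 0.2],   # step 1 mostly wrote slot 1
    [0.1, 0.0, 0.9],   # step 2 mostly wrote slot 2
    [0.5, 0.5, 0.0],   # step 3 spread across slots 0 and 1
])
read_weights = np.array([0.0, 0.2, 0.8])  # the prediction reads mostly slot 2

scores = input_relevance(write_weights, read_weights)
most_relevant = int(np.argmax(scores))  # step 2 contributed most to this read
```

In this toy setting, step 2 receives the highest score because it wrote the slot the prediction attends to, which is the intuition behind retrieving "the most relevant input" from memory traces.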