LSTM

Exploiting Prediction Error Inconsistencies through LSTM-based Classifiers to Detect Deepfake Videos

The ability of artificial intelligence techniques to synthesize brand-new videos or to alter the facial expressions of existing ones has been effectively demonstrated in the literature. The identification of this new threat, generally known as Deepfake but encompassing several different techniques, is fundamental in multimedia forensics. In fact, this kind of manipulated information could undermine and easily distort public opinion about a certain person or a specific event.
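
The title points to classifying videos from temporal inconsistencies in per-frame prediction errors. As a hedged illustration only (not the authors' model; the error-feature extraction step and all dimensions are assumed), an LSTM classifier over a sequence of per-frame error features might look like:

```python
import torch
import torch.nn as nn

class ErrorSequenceClassifier(nn.Module):
    """Sketch: classify a video as real/fake from its per-frame error features."""
    def __init__(self, feat_dim=16, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)  # single real-vs-fake logit

    def forward(self, errors):
        # errors: (batch, n_frames, feat_dim) per-frame prediction-error features
        _, (h, _) = self.lstm(errors)
        return self.fc(h[-1])               # classify from the final hidden state

clf = ErrorSequenceClassifier()
errors = torch.randn(4, 30, 16)             # toy batch: 4 videos, 30 frames each
logits = clf(errors)                        # shape (4, 1)
```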

LSTMEmbed: learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories

While word embeddings are now a de facto standard representation of words in most NLP tasks, attention has recently been shifting towards vector representations that capture the different meanings, i.e., senses, of words. In this paper we explore the capabilities of a bidirectional LSTM model to learn representations of word senses from semantically annotated corpora. We show that using an architecture that is aware of word order, such as an LSTM, enables us to create better representations.
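
A minimal sketch of the idea (not the authors' code; vocabulary size, dimensions, and the training target are illustrative assumptions): a bidirectional LSTM reads a sense-annotated token sequence and is trained to predict a target vector for each token, so the learned input embeddings absorb order-aware context.

```python
import torch
import torch.nn as nn

class BiLSTMEmbedder(nn.Module):
    """Sketch: learn word/sense vectors with an order-aware bidirectional LSTM."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # learned sense/word vectors
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, emb_dim)  # map context to target space

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) indices into a sense-annotated vocabulary
        ctx, _ = self.lstm(self.embed(token_ids))
        return self.proj(ctx)                           # predicted target vectors

model = BiLSTMEmbedder(vocab_size=10000)
tokens = torch.randint(0, 10000, (4, 20))               # toy batch of annotated text
pred = model(tokens)                                    # (4, 20, 128)
loss = nn.functional.mse_loss(pred, torch.randn_like(pred))  # stand-in targets
loss.backward()
```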

Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets

In this work, we explore the correlation between people's trajectories and their head orientations. We argue that trajectory and head pose forecasting can be modelled as a joint problem. Recent approaches to trajectory forecasting leverage the short-term trajectories (a.k.a. tracklets) of pedestrians to predict their future paths. In addition, sociological cues, such as expected destination or pedestrian interaction, are often combined with tracklets.
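
To make the joint formulation concrete, here is a hedged sketch (not the paper's architecture; the feature layout and one-step prediction head are assumptions): a single LSTM consumes both tracklet coordinates and head-pose "vislet" features per time step and predicts both quantities for the next step.

```python
import torch
import torch.nn as nn

class JointForecaster(nn.Module):
    """Sketch: jointly forecast the next position and head pose from observations."""
    def __init__(self, hidden_dim=64):
        super().__init__()
        # input per time step: (x, y) position + (cos, sin) of the head pan angle
        self.encoder = nn.LSTM(4, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 4)  # next position + next head pose

    def forward(self, obs):
        # obs: (batch, obs_len, 4) observed tracklet + vislet features
        out, _ = self.encoder(obs)
        return self.head(out[:, -1])          # one-step joint prediction

model = JointForecaster()
obs = torch.randn(8, 10, 4)                   # toy batch: 10 observed steps each
next_state = model(obs)                       # (8, 4)
```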

2-D convolutional deep neural network for multivariate energy time series prediction

A novel deep learning approach is proposed in this paper for the multivariate prediction of energy time series. It combines Convolutional Neural Network and Long Short-Term Memory models, so that several correlated time series can be joined and filtered together while accounting for long-term dependencies across the whole input. The learning scheme can be viewed as a stacked deep neural network in which one or more layers are superposed, each feeding its output into the input of the subsequent layer.
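
A minimal sketch of such a stacked scheme (an assumption-laden illustration, not the paper's exact network; filter counts, window length, and number of series are made up): a 2-D convolution filters the (time × series) window jointly, and its output is fed into an LSTM that models the long-term dependencies.

```python
import torch
import torch.nn as nn

class ConvLSTMForecaster(nn.Module):
    """Sketch: 2-D convolution over correlated series, stacked with an LSTM."""
    def __init__(self, n_series=5, window=24, hidden_dim=64):
        super().__init__()
        # 2-D convolution filters the (time x series) window jointly
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 3), padding=1),
            nn.ReLU(),
        )
        # LSTM models long-term dependencies over the filtered sequence
        self.lstm = nn.LSTM(16 * n_series, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_series)  # next value of each series

    def forward(self, x):
        # x: (batch, 1, window, n_series)
        f = self.conv(x)                            # (batch, 16, window, n_series)
        b, c, t, s = f.shape
        f = f.permute(0, 2, 1, 3).reshape(b, t, c * s)
        h, _ = self.lstm(f)
        return self.out(h[:, -1])

model = ConvLSTMForecaster()
x = torch.randn(2, 1, 24, 5)  # toy batch: 24 time steps of 5 correlated series
print(model(x).shape)          # torch.Size([2, 5])
```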

Recurrent neural networks with flexible gates using kernel activation functions

Gated recurrent neural networks have achieved remarkable results in the analysis of sequential data. Inside these networks, gates are used to control the flow of information, making it possible to model even very long-term dependencies in the data. In this paper, we investigate whether the original gate equation (a linear projection followed by an element-wise sigmoid) can be improved. In particular, we design a more flexible architecture, with a small number of adaptable parameters, which is able to model a wider range of gating functions than the classical one.
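
One way such a flexible gate can be realized is with a kernel activation function: a fixed dictionary of points with a small vector of learnable mixing coefficients. The sketch below is an assumption-based illustration, not the paper's exact formulation; in particular, the final sigmoid is just one simple way to keep the gate output in (0, 1).

```python
import torch
import torch.nn as nn

class KernelGate(nn.Module):
    """Sketch of a flexible gate: a 1-D Gaussian kernel expansion over a fixed
    dictionary, with a small number of learnable mixing coefficients."""
    def __init__(self, dict_size=20, gamma=1.0):
        super().__init__()
        self.register_buffer("dictionary", torch.linspace(-3, 3, dict_size))
        self.alpha = nn.Parameter(torch.randn(dict_size) * 0.1)  # adaptable mixing
        self.gamma = gamma

    def forward(self, x):
        # Gaussian kernel between each pre-activation and every dictionary point
        k = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.dictionary) ** 2)
        return torch.sigmoid(k @ self.alpha)  # keep the gate output in (0, 1)

gate = KernelGate()
z = torch.randn(4, 32)   # pre-activations from the gate's linear projection
g = gate(z)              # flexible gate values in (0, 1), shape (4, 32)
```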
