Long short-term memory

A deep learning integrated Lee-Carter model

In the field of mortality forecasting, the Lee–Carter approach can be considered the milestone
among stochastic models. We could define a “Lee–Carter model family”
that embraces all developments of this model, including its first formulation (1992), which remains the
benchmark against which the performance of newer models is compared. In the Lee–Carter model, the kt parameter,
which describes the mortality trend over time, plays an important role in determining future mortality behavior.
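To make the role of kt concrete, the following is a minimal sketch (not the paper's implementation) of the classical Lee–Carter fit: ages-by-years log-mortality rates are decomposed as log m(x,t) ≈ a_x + b_x·k_t via SVD, and k_t is then extrapolated with a random walk with drift. The data matrix here is synthetic, purely for illustration.

```python
import numpy as np

# Synthetic log-mortality matrix: rows = ages x, columns = years t (illustrative only).
np.random.seed(0)
ages, years = 5, 20
log_m = (-8.0 + 0.08 * np.arange(ages)[:, None]
         - 0.02 * np.arange(years)[None, :]
         + 0.01 * np.random.randn(ages, years))

# Lee-Carter decomposition: log m(x,t) = a_x + b_x * k_t + eps.
a = log_m.mean(axis=1)                      # a_x: age-specific average level
centered = log_m - a[:, None]
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
b = U[:, 0]                                 # b_x: age sensitivity to the trend
k = S[0] * Vt[0]                            # k_t: time-varying mortality index

# Standard identifiability constraints: sum(b_x) = 1, sum(k_t) = 0.
scale = b.sum()
b, k = b / scale, k * scale
shift = k.mean()
k, a = k - shift, a + b * shift

# Forecast k_t with a random walk with drift, then rebuild log-rates.
drift = (k[-1] - k[0]) / (len(k) - 1)
k_future = k[-1] + drift * np.arange(1, 6)  # 5-year-ahead index path
log_m_future = a[:, None] + b[:, None] * k_future[None, :]
```

Because the forecast of every age-specific rate is driven by the single extrapolated index k_t, the quality of the k_t projection dominates the quality of the whole mortality forecast.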

Life expectancy and lifespan disparity forecasting: a long short-term memory approach

After World War II, developed countries experienced a constant decline in mortality. As a result, life expectancy has never stopped increasing, despite an evident deceleration in some developed countries, e.g., England, the USA, and Denmark. In this paper, we propose a new approach for forecasting life expectancy and lifespan disparity based on recurrent neural networks with long short-term memory.

Study and evaluation of QoS degradation costs in optical-NFV network environments with resource allocations based on long short-term memory prediction techniques

The paper investigates the effectiveness of a bandwidth prediction technique based on Long Short-Term Memory recurrent neural networks for resource allocation in Network Function Virtualization architectures in which the datacenters are interconnected by an Elastic Optical Network. In particular, we evaluate the under-provisioning costs that occur when fewer resources than needed are allocated, and we characterize the QoS penalty cost that the provider must pay because of the QoS degradation.
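The under-provisioning cost can be sketched with a simple hypothetical penalty model (an assumption for illustration, not the paper's exact cost function): in every interval where the allocated bandwidth, driven by the prediction, falls short of the actual demand, the provider pays a penalty proportional to the shortfall.

```python
# Hypothetical penalty model: cost is proportional to the bandwidth
# shortfall in each interval where the allocation (set from the
# prediction) is below the actual demand.

def underprovisioning_cost(predicted, actual, penalty_per_unit=1.0):
    """Sum the shortfalls (actual - predicted) over all intervals with a deficit."""
    return penalty_per_unit * sum(
        max(0.0, a - p) for p, a in zip(predicted, actual)
    )

# Example: LSTM-style forecast vs. observed demand (Gb/s) over 4 intervals.
predicted = [10.0, 12.0, 11.0, 9.0]
actual    = [11.0, 11.5, 13.0, 9.0]
cost = underprovisioning_cost(predicted, actual)  # shortfalls 1.0 + 0 + 2.0 + 0 = 3.0
```

Over-provisioned intervals contribute nothing here; a fuller model would also charge the idle capacity, which is the trade-off an accurate predictor is meant to balance.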

Recurrent neural network architectures

In this chapter, we present three different recurrent neural network architectures that we employ for the prediction of real-valued time series. All the models reviewed in this chapter can be trained through the previously discussed backpropagation-through-time procedure. First, we present the most basic version of recurrent neural networks, the Elman recurrent neural network. Then, we introduce two popular gated architectures: long short-term memory (LSTM) and the gated recurrent unit (GRU).
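The difference between the plain Elman network and the gated architectures is easiest to see in the cell update itself. Below is a minimal numpy sketch of one LSTM time step (weight shapes and initialization are illustrative assumptions, not a reference implementation): input, forget, and output gates modulate a persistent cell state, which is what the Elman network lacks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack the four blocks (i, f, o, g)."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b       # pre-activations for all four blocks, shape (4n,)
    i = sigmoid(z[0:n])              # input gate: how much new content to write
    f = sigmoid(z[n:2*n])            # forget gate: how much old cell state to keep
    o = sigmoid(z[2*n:3*n])          # output gate: how much cell state to expose
    g = np.tanh(z[3*n:4*n])          # candidate cell update
    c = f * c_prev + i * g           # new cell state (the long-term memory)
    h = o * np.tanh(c)               # new hidden state
    return h, c

# Run a length-5 random input sequence through one cell.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c, W, U, b)
```

An Elman step, by contrast, is the single line h = tanh(W @ x + U @ h_prev + b): with no additive cell state, gradients through long sequences vanish or explode much more easily.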

Separation of drum and bass from monaural tracks

In this paper, we propose a deep recurrent neural network (DRNN), based on the Long Short-Term Memory (LSTM) unit, for the separation of drum and bass sources from a monaural audio track. In particular, a single DRNN with a total of six hidden layers (three feedforward and three recurrent) is used for each original source to be separated. In this work, we limit our attention to the case of only two challenging sources: drum and bass. Experimental results show the effectiveness of the proposed approach with respect to another state-of-the-art method.
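A common final stage in DRNN-based separators (sketched here as an assumption about the pipeline, not the paper's exact method) is a time-frequency ratio mask: each network outputs a magnitude estimate for its source, and the mixture spectrogram is redistributed between the sources in proportion to those estimates.

```python
import numpy as np

def ratio_masks(est_drum, est_bass, eps=1e-8):
    """Soft masks proportional to each source's estimated magnitude (eps avoids 0/0)."""
    total = est_drum + est_bass + eps
    return est_drum / total, est_bass / total

# Toy magnitude spectrograms (freq bins x frames), illustrative values only.
rng = np.random.default_rng(1)
mix      = rng.uniform(1.0, 2.0, size=(6, 4))   # mixture magnitudes
est_drum = rng.uniform(0.1, 1.0, size=(6, 4))   # network output for drums
est_bass = rng.uniform(0.1, 1.0, size=(6, 4))   # network output for bass

m_drum, m_bass = ratio_masks(est_drum, est_bass)
drum = m_drum * mix
bass = m_bass * mix
```

Because the two masks sum to one in every time-frequency bin, the separated magnitudes add back up to the mixture, which keeps the reconstruction consistent regardless of estimation errors in the individual networks.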

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma