
Global optimization issues in deep network regression: an overview

The paper presents an overview of global issues in optimization methods for training feedforward
neural networks (FNN) in a regression setting. We first recall the learning optimization
paradigm for FNN and we briefly discuss global schemes for the joint choice of the network
topologies and of the network parameters. The main part of the paper focuses on the
core subproblem, which is the continuous unconstrained (regularized) weights optimization
problem, with the aim of reviewing global methods specifically arising both in multilayer …
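As context for the core subproblem described above, a generic regularized least-squares formulation (our notation, not taken from the paper) is:

\[
\min_{w \in \mathbb{R}^{n}} \; \frac{1}{2N} \sum_{i=1}^{N} \bigl( f(x_i; w) - y_i \bigr)^{2} \;+\; \frac{\lambda}{2} \, \| w \|_{2}^{2}
\]

Here \(f(x_i; w)\) is the FNN output for input \(x_i\), and \(\lambda \geq 0\) trades data fit against regularization; global methods are of interest because this objective is nonconvex in \(w\).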

Group sparse regularization for deep neural networks

In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation enabling their simultaneous parallel execution, using standard optimization routines.
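As a minimal sketch of the group-sparse regularization idea, assuming a group Lasso penalty where each neuron's incoming weight vector forms one group (the function name, PyTorch usage, and parameter values are our illustrative choices, not the authors' code):

```python
import torch

def group_lasso_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Group Lasso penalty: sum of the l2 norms of the rows of a weight
    matrix, so each output neuron's incoming weights form one group.
    Driving a whole group to zero effectively removes that neuron."""
    return weight.norm(p=2, dim=1).sum()

# Example: penalize the first layer of a small regression network.
layer = torch.nn.Linear(10, 32)
model = torch.nn.Sequential(layer, torch.nn.ReLU(), torch.nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)

lam = 1e-3  # regularization strength (illustrative value)
loss = torch.nn.functional.mse_loss(model(x), y) \
       + lam * group_lasso_penalty(layer.weight)
loss.backward()  # gradients now include the sparsity-inducing term
```

Grouping by columns of the first layer's weight matrix (dim=0) instead would tie each group to one input feature, which is how a group-level penalty can perform feature selection.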
