Regularization

Compressing deep-quaternion neural networks with targeted regularisation

In recent years, hypercomplex deep networks, such as complex-valued and quaternion-valued neural networks (QVNNs), have received renewed interest in the literature. They find applications in multiple fields, ranging from image reconstruction to 3D audio processing. Like their real-valued counterparts, quaternion neural networks require custom regularisation strategies to avoid overfitting. In addition, many real-world applications and embedded implementations call for sufficiently compact networks, with few weights and neurons.
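The compactness goal above amounts to a structured sparsity penalty that drives whole quaternion weights (all four components together) to zero. Below is a minimal sketch of such a group-lasso regulariser, assuming a PyTorch quaternion layer that stores its kernel as four real component tensors named r_weight, i_weight, j_weight, k_weight; that naming is an assumption for illustration, not a fixed API, and this is not necessarily the paper's exact regulariser.

```python
import torch

def quaternion_group_lasso(layers, lam=1e-4, eps=1e-8):
    """Group-lasso penalty tying the four components of each quaternion
    weight together, so entire quaternions can be pruned at once.

    `layers` is assumed to be an iterable of layer objects exposing the
    four real component tensors r_weight, i_weight, j_weight, k_weight
    (an illustrative convention, not a standard interface).
    """
    penalty = 0.0
    for layer in layers:
        # Stack components: shape (4, out_features, in_features)
        w = torch.stack([layer.r_weight, layer.i_weight,
                         layer.j_weight, layer.k_weight])
        # L2 norm over the 4 components: one value per quaternion weight
        group_norms = torch.sqrt((w ** 2).sum(dim=0) + eps)
        # L1 over the group norms encourages whole-quaternion sparsity
        penalty = penalty + group_norms.sum()
    return lam * penalty

# Usage inside a training step (sketch):
# loss = task_loss + quaternion_group_lasso(model.quaternion_layers)
```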

Priorconditioned CGLS-Based Quasi-MAP Estimate, Statistical Stopping Rule, and Ranking of Priors

We consider linear discrete ill-posed problems within the Bayesian framework, assuming a Gaussian additive noise model and a Gaussian prior whose covariance matrices may be known modulo multiplicative scaling factors. In that context, we propose a new pointwise estimator for the posterior density, the priorconditioned CGLS-based quasi-MAP (qMAP), as a computationally attractive approximation of the classical maximum a posteriori (MAP) estimate, in particular when the effective rank of the matrix A is much smaller than the dimension of the unknown.
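A hedged sketch of the underlying idea, not the paper's exact algorithm: with the prior precision factored as L^T L, the substitution w = Lx whitens the prior, CGLS is run on the transformed system (A L^{-1}) w ≈ b, and the iteration is stopped once the residual reaches the expected noise level. The discrepancy-style stopping test and the constant tau below are illustrative stand-ins for the statistical stopping rule of the paper.

```python
import numpy as np

def priorconditioned_cgls_qmap(A, b, L, sigma, tau=1.0, max_iter=200):
    """Sketch of a priorconditioned CGLS solver with early stopping.

    Model assumptions: b = A x + e, e ~ N(0, sigma^2 I), and a prior
    x ~ N(0, (L^T L)^{-1}) with L square and invertible, so that
    w = L x has an identity prior covariance.  CGLS is applied to
    (A L^{-1}) w ~= b and stopped once the residual norm reaches the
    expected noise level sqrt(m) * sigma.
    """
    m = b.size
    M = lambda w: A @ np.linalg.solve(L, w)       # forward: A L^{-1} w
    Mt = lambda r: np.linalg.solve(L.T, A.T @ r)  # adjoint: L^{-T} A^T r

    w = np.zeros(A.shape[1])
    r = b.copy()              # residual of the transformed system
    s = Mt(r)
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        q = M(p)
        alpha = gamma / (q @ q)
        w += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) <= tau * np.sqrt(m) * sigma:
            break             # residual at the noise level: stop early
        s = Mt(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return np.linalg.solve(L, w)   # map back: x = L^{-1} w
```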

Efficient continual learning in neural networks with embedding regularization

Continual learning of deep neural networks is a key requirement for scaling them up to more complex application scenarios and for achieving true lifelong learning of these architectures. Previous approaches to the problem have either progressively increased the size of the network, or regularized the network's behavior to keep it consistent with its behavior on previously observed tasks. In the latter case, it is essential to understand what type of information best represents this past behavior.
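As an illustration of the embedding-regularisation idea, here is a minimal sketch: embeddings of a handful of stored past samples, snapshotted at the end of the previous task, are pulled toward their current values through a quadratic penalty. The model.embed accessor and the structure of memory are assumptions made for the sketch, not the paper's implementation.

```python
import torch

def embedding_regularization(model, memory, lam=1.0):
    """Quadratic penalty keeping current embeddings of stored past
    samples close to the embeddings recorded when those samples were
    first seen.

    memory: list of (x, e_old) pairs, where e_old = model.embed(x)
            was snapshotted at the end of the previous task
            (illustrative structure, assumed for this sketch).
    """
    penalty = 0.0
    for x, e_old in memory:
        e_new = model.embed(x)   # current penultimate-layer embedding
        penalty = penalty + (e_new - e_old).pow(2).sum()
    return lam * penalty / max(len(memory), 1)

# Training on a new task (sketch):
# loss = task_loss + embedding_regularization(model, memory)
```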
