Bayesian learning

Bayesian Neural Networks with Maximum Mean Discrepancy regularization

Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set of values, which brings significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration. Because the resulting optimization problem is intractable, most BNNs are either sampled through Monte Carlo methods or trained by minimizing a suitable Evidence Lower BOund (ELBO) on a variational approximation.
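
As a concrete illustration of the variational route, the sketch below (a minimal PyTorch example, not the authors' code) trains a mean-field Bayesian linear layer by minimizing a data-fit term plus a sample-based Maximum Mean Discrepancy (MMD) estimate that pulls the weight posterior towards a standard normal prior, standing in for the usual KL term; the layer sizes, the single RBF bandwidth, the number of weight samples, and the regularization weight are illustrative assumptions.

    # Minimal sketch, not the authors' code: a mean-field variational Bayesian
    # linear layer trained with a data-fit term plus an MMD regularizer between
    # posterior and prior weight samples. Sizes and hyperparameters are assumed.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesianLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            # Variational posterior parameters: one Gaussian per weight/bias.
            self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
            self.b_mu = nn.Parameter(torch.zeros(out_features))
            self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

        def sample_weights(self):
            # Reparameterization trick: w = mu + softplus(rho) * eps.
            w_std = F.softplus(self.w_rho)
            b_std = F.softplus(self.b_rho)
            w = self.w_mu + w_std * torch.randn_like(w_std)
            b = self.b_mu + b_std * torch.randn_like(b_std)
            return w, b

        def forward(self, x):
            w, b = self.sample_weights()
            return F.linear(x, w, b)

    def rbf_mmd(x, y, bandwidth=1.0):
        # Biased MMD^2 estimate with a single RBF kernel (illustrative choice).
        def k(a, b):
            return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    # One training step: data-fit term plus MMD regularizer toward a N(0, 1) prior.
    layer = BayesianLinear(10, 1)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    x, y = torch.randn(32, 10), torch.randn(32, 1)

    opt.zero_grad()
    pred = layer(x)
    nll = F.mse_loss(pred, y)                            # Gaussian-likelihood proxy
    w_samples = torch.stack([layer.sample_weights()[0].flatten() for _ in range(8)])
    prior_samples = torch.randn_like(w_samples)          # samples from the N(0, 1) prior
    loss = nll + 0.1 * rbf_mmd(w_samples, prior_samples) # weight 0.1 is an assumption
    loss.backward()
    opt.step()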

Approximate Bayesian Network Formulation for the Rapid Loss Assessment of Real-World Infrastructure Systems

This paper proposes to learn an approximate Bayesian Network (BN) model from Monte Carlo simulations of an infrastructure system exposed to seismic hazard. Exploiting preliminary physical simulations has the twofold benefit of yielding a drastically simplified BN and of predicting complex system-level performance metrics. While the approximate BN cannot yield exact probabilities for predictive analyses, its use in backward analyses conditioned on evidenced variables gives promising results as a decision-support tool for post-earthquake rapid response.
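
As a toy illustration of this workflow, the sketch below (assuming the pgmpy library, not the paper's implementation) fits a drastically simplified discrete BN from a hypothetical table of Monte Carlo simulation outcomes and then runs a backward query given an evidenced damage state; the variable names, discretized states, and chain structure are illustrative assumptions.

    # Minimal sketch, not the paper's model: a tiny discrete Bayesian Network in
    # pgmpy (0.1.x class names assumed) whose conditional probability tables are
    # fitted from a hypothetical table of Monte Carlo simulation outcomes, then
    # queried backwards given an evidenced damage state.
    import pandas as pd
    from pgmpy.models import BayesianNetwork
    from pgmpy.estimators import MaximumLikelihoodEstimator
    from pgmpy.inference import VariableElimination

    # Hypothetical simulation table: one row per Monte Carlo run, discretized states.
    data = pd.DataFrame({
        "ground_motion": ["low", "high", "high", "low", "high", "low"],
        "bridge_damage": ["none", "severe", "minor", "none", "severe", "minor"],
        "network_loss":  ["low", "high", "low", "low", "high", "low"],
    })

    # Drastically simplified structure (assumed here; it could also be learned
    # from the simulated samples), with parameters estimated from the data.
    model = BayesianNetwork([("ground_motion", "bridge_damage"),
                             ("bridge_damage", "network_loss")])
    model.fit(data, estimator=MaximumLikelihoodEstimator)

    # Backward analysis: given the observed (evidenced) damage state after the
    # event, infer the distribution of the system-level loss metric.
    posterior = VariableElimination(model).query(
        variables=["network_loss"], evidence={"bridge_damage": "severe"})
    print(posterior)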