The continuous, pervasive collection of enormous amounts of data and the advances in computational power present major challenges to modern big data analytics. Modeling and processing these huge data sets require the solution of hard optimization problems whose features include huge dimensionality, high nonconvexity, and/or nondifferentiability of the functions involved.
To address such challenges, this project aims to analyze new algorithmic frameworks enabling the parallel, distributed, and asynchronous solution of hard nonconvex and possibly nondifferentiable optimization problems.
The new general framework for the asynchronous solution of the problems in the first two classes described above is considerably more complex and general than those used so far. This generality allows us to cover, in a mathematically sound and unified way, both lock-free and lock-based settings, parallel synchronous and asynchronous implementations (as well as serial ones), and both shared-memory and distributed (message-passing, with no shared memory) architectures, thus bringing a significant advancement with respect to results in the literature.
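To make the shared-memory setting concrete, the following is a minimal, purely illustrative Python sketch, not the project's actual algorithm, of asynchronous block-coordinate gradient descent; the function names, the test objective, and all parameters are hypothetical choices introduced here. Passing a lock yields the lock-based variant; passing lock=None yields the lock-free one, in which workers may read stale, inconsistently updated blocks.

```python
import threading
import numpy as np

# Hypothetical asynchronous block-coordinate gradient descent on the
# nonconvex test function f(x) = sum_i (x_i^2 + a_i * sin(x_i)), |a_i| <= 4.
# Workers repeatedly pick a random block, read the shared iterate (possibly
# while others are writing), and write an updated block back. Passing a lock
# gives the lock-based setting; lock=None gives the lock-free one.

def grad_block(x, idx, a):
    """Gradient of f restricted to the coordinates in idx."""
    return 2.0 * x[idx] + a[idx] * np.cos(x[idx])

def worker(x, a, blocks, steps, step_size, lock=None):
    rng = np.random.default_rng()
    for _ in range(steps):
        idx = blocks[rng.integers(len(blocks))]
        if lock is not None:
            with lock:  # consistent read-modify-write of the block
                x[idx] -= step_size * grad_block(x, idx, a)
        else:
            # Lock-free: the read below may see blocks mid-update by others.
            x[idx] = x[idx] - step_size * grad_block(x, idx, a)

if __name__ == "__main__":
    n, n_workers, block_size = 1000, 4, 50
    rng = np.random.default_rng(0)
    a = rng.uniform(-4.0, 4.0, n)        # makes f nonconvex coordinate-wise
    x = rng.standard_normal(n)           # shared iterate in one NumPy array
    blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
    lock = threading.Lock()              # use None instead for lock-free mode
    threads = [threading.Thread(target=worker,
                                args=(x, a, blocks, 2000, 0.1, lock))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("final objective:", float(np.sum(x**2 + a * np.sin(x))))
```

In a genuinely lock-free run, updates are computed from possibly stale iterates; the frameworks studied here aim to prove convergence precisely under such delays. (Note that CPython's global interpreter lock makes this sketch an emulation of asynchrony rather than a true data race.)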
As regards optimization problems with coupling constraints, such as the SVM training problem, the proposed algorithm would be the first parallel method using nonlinear separating surfaces (nonlinear kernels) within a model with a bias term, which enjoys good statistical properties.
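For concreteness, the coupling already appears in the standard dual of the kernel SVM training problem with a bias term: the bias induces a single linear equality constraint tying all dual variables together, which is exactly what rules out naive block-wise decomposition (dropping the bias removes the constraint, which is why many existing parallel methods address only the no-bias model).

```latex
% Standard dual of kernel SVM training with a bias term; K is the kernel,
% y_i in {-1,+1} the labels, C the regularization parameter. The single
% equality constraint couples all m dual variables.
\min_{\alpha \in \mathbb{R}^m} \;
  \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}
    \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
  \;-\; \sum_{i=1}^{m} \alpha_i
\quad \text{s.t.} \quad
  \sum_{i=1}^{m} y_i \alpha_i = 0, \qquad
  0 \le \alpha_i \le C,\; i = 1,\dots,m.
```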
From the theoretical point of view, the proposed algorithms improve on existing ones in that convergence can be proved under rather mild assumptions; convergence is still largely an open problem for parallel asynchronous algorithms. Practical implementations and testing will be pursued as well.