Kernel methods

Indefinite Topological Kernels

Topological Data Analysis (TDA) is a recent and growing branch of statistics devoted to the study of the shape of data. Motivated by the complexity of the objects that summarize the topology of data, we introduce a new topological kernel that extends the TDA toolbox to supervised learning. Exploiting the geodesic structure of the space of Persistence Diagrams, we define a geodesic kernel for Persistence Diagrams, characterize it, and show through an application that, despite not being positive semi-definite, it can be successfully used in regression tasks.
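As a rough illustration of how a kernel of this kind could be used, the sketch below builds an exponential-of-distance kernel between persistence diagrams (using an optimal-matching distance in the spirit of the Wasserstein distance, with points allowed to match the diagonal) and plugs the resulting Gram matrix into kernel ridge regression. The ground distance, bandwidth `h`, and regression setup are illustrative assumptions, not the construction from the paper; note that a kernel of this form is in general not positive semi-definite, which is precisely the situation the paper addresses.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def diagram_distance(X, Y, p=2.0):
    """Optimal-matching (Wasserstein-style) distance between two persistence
    diagrams given as (n, 2) arrays of (birth, death) points; unmatched points
    may be sent to their projection onto the diagonal."""
    n, m = len(X), len(Y)
    cost_diag_X = ((X[:, 1] - X[:, 0]) / 2.0) ** p   # cost of matching to the diagonal
    cost_diag_Y = ((Y[:, 1] - Y[:, 0]) / 2.0) ** p
    C = np.zeros((n + m, n + m))
    C[:n, :m] = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    big = C[:n, :m].sum() + cost_diag_X.sum() + cost_diag_Y.sum() + 1.0
    C[:n, m:] = big                                   # forbid all but the "own diagonal" slot
    C[:n, m:][np.arange(n), np.arange(n)] = cost_diag_X
    C[n:, :m] = big
    C[n:, :m][np.arange(m), np.arange(m)] = cost_diag_Y
    # bottom-right block (diagonal matched to diagonal) stays at zero cost
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum() ** (1.0 / p)

def gram_matrix(diagrams, h=1.0):
    """Exponential-of-distance kernel; in general NOT positive semi-definite."""
    K = np.zeros((len(diagrams), len(diagrams)))
    for i in range(len(diagrams)):
        for j in range(i, len(diagrams)):
            K[i, j] = K[j, i] = np.exp(-diagram_distance(diagrams[i], diagrams[j]) / h)
    return K

def random_diagram(rng, n_points=5):
    births = rng.uniform(0.0, 1.0, n_points)
    return np.column_stack([births, births + rng.uniform(0.0, 1.0, n_points)])

# Toy regression task: predict the total persistence of each diagram.
rng = np.random.default_rng(0)
diagrams = [random_diagram(rng) for _ in range(40)]
y = np.array([np.sum(D[:, 1] - D[:, 0]) for D in diagrams])

K = gram_matrix(diagrams, h=0.5)
lam = 1e-2
alpha = np.linalg.lstsq(K + lam * np.eye(len(K)), y, rcond=None)[0]  # kernel ridge regression fit
print("training residual:", np.linalg.norm(K @ alpha - y))
```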

(Hyper)graph kernels over simplicial complexes

Graph kernels are one of the mainstream approaches to measuring similarity between graphs, especially for pattern recognition and machine learning tasks. In turn, graphs have gained a lot of attention thanks to their capability of modelling several real-world phenomena, ranging from bioinformatics to social network analysis. Recently, however, attention has shifted towards hypergraphs, a generalization of plain graphs in which multi-way relations (beyond pairwise ones) can be considered.
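As a minimal, purely illustrative example of the idea (not one of the kernels proposed in the paper), a hypergraph or simplicial complex can be summarized by the histogram of its hyperedge cardinalities and two such histograms compared with a standard kernel; actual (hyper)graph kernels refine this by comparing richer substructures.

```python
from collections import Counter
import numpy as np

def cardinality_histogram(hyperedges, max_card=10):
    """Feature vector counting hyperedges (simplices) by cardinality."""
    counts = Counter(len(e) for e in hyperedges)
    return np.array([counts.get(k, 0) for k in range(1, max_card + 1)], dtype=float)

def histogram_intersection_kernel(h1, h2):
    """Histogram intersection kernel (positive semi-definite on count vectors)."""
    return np.minimum(h1, h2).sum()

# Two toy hypergraphs given as lists of hyperedges (sets of vertex ids).
hg_a = [{0, 1}, {1, 2}, {0, 1, 2}, {2, 3, 4}]
hg_b = [{0, 1}, {1, 2, 3}, {3, 4}]

k_ab = histogram_intersection_kernel(cardinality_histogram(hg_a),
                                     cardinality_histogram(hg_b))
print("kernel value:", k_ab)
```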

Modelling and recognition of protein contact networks by multiple kernel learning and dissimilarity representations

Multiple kernel learning is a paradigm that employs a properly constructed combination of kernel functions able to simultaneously analyse different data, or different representations of the same data. In this paper, we propose a hybrid classification system based on a linear combination of multiple kernels defined over multiple dissimilarity spaces. The core of the training procedure is the joint optimisation of the kernel weights and of the selection of representatives in the dissimilarity spaces.
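The sketch below shows the general structure only: samples are embedded in two dissimilarity spaces (distances to a set of prototypes under two different metrics), a kernel is computed in each space, and the kernels are linearly combined. The metrics, the prototype choice, and the fixed weights are placeholder assumptions; in the paper, kernel weights and representative selection are optimized jointly.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dissimilarity_representation(X, prototypes, metric="euclidean"):
    """Embed samples as vectors of dissimilarities to a set of prototypes."""
    return cdist(X, prototypes, metric=metric)

def rbf_gram(D, gamma=0.1):
    """RBF kernel computed in a dissimilarity space."""
    return np.exp(-gamma * cdist(D, D, metric="sqeuclidean"))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))              # stand-in for protein-contact-network descriptors
prototypes = X[rng.choice(len(X), size=5, replace=False)]

# Two dissimilarity spaces (two metrics), one kernel per space.
D1 = dissimilarity_representation(X, prototypes, metric="euclidean")
D2 = dissimilarity_representation(X, prototypes, metric="cityblock")
K1, K2 = rbf_gram(D1), rbf_gram(D2)

# Linear combination of kernels; weights are fixed by hand here, whereas the
# paper optimizes them jointly with the selection of representatives.
w = np.array([0.7, 0.3])
K = w[0] * K1 + w[1] * K2
```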

Privacy-preserving data mining for distributed medical scenarios

In this paper, we consider the application of data mining methods in medical contexts wherein the data to be analysed (e.g. records from different patients) is distributed among multiple clinical parties. Although inference procedures could provide meaningful medical information (such as an optimal clustering of the subjects), each party is forbidden from disclosing its local dataset to a centralized location, due to privacy concerns over sensitive portions of the data.
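One common pattern in this setting, shown below purely as a structural illustration and not as the protocol of the paper, is to exchange only aggregate statistics rather than raw records: in a distributed k-means-style update, each party sends per-cluster sums and counts to a coordinator, and no patient record ever leaves its party. This sketch adds no cryptographic protection (e.g. secure aggregation); it only illustrates the constraint.

```python
import numpy as np

def local_statistics(data, centroids):
    """Each party computes per-cluster sums and counts on its own records;
    only these aggregates (never the raw records) leave the party."""
    labels = np.argmin(np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=-1), axis=1)
    k, d = centroids.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        members = data[labels == j]
        sums[j], counts[j] = members.sum(axis=0), len(members)
    return sums, counts

def aggregate(stats):
    """A coordinator combines the aggregates and updates the centroids."""
    total_sums = sum(s for s, _ in stats)
    total_counts = sum(c for _, c in stats)
    return total_sums / np.maximum(total_counts, 1)[:, None]

rng = np.random.default_rng(2)
parties = [rng.normal(loc=m, size=(30, 2)) for m in (0.0, 3.0, 6.0)]  # three clinical parties
centroids = rng.normal(size=(3, 2))
for _ in range(10):
    centroids = aggregate([local_statistics(p, centroids) for p in parties])
print(centroids)
```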

Kafnets: Kernel-based non-parametric activation functions for neural networks

Neural networks are generally built by interleaving (adaptable) linear layers with (fixed) nonlinear activation functions. To increase their flexibility, several authors have proposed methods for adapting the activation functions themselves, endowing them with varying degrees of flexibility. None of these approaches, however, has gained wide acceptance in practice, and research on this topic remains open. In this paper, we introduce a novel family of flexible activation functions based on an inexpensive kernel expansion at every neuron.
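A minimal PyTorch sketch of this kind of per-neuron kernel expansion is given below: a Gaussian kernel is evaluated between each pre-activation and a fixed dictionary of points, and learnable mixing coefficients shape the resulting nonlinearity. The dictionary size, bandwidth `gamma`, and initialization are illustrative choices, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class KAF(nn.Module):
    """Kernel activation function: each unit's nonlinearity is a small kernel
    expansion over a fixed dictionary, with learnable mixing coefficients."""
    def __init__(self, num_units, dict_size=20, boundary=3.0, gamma=1.0):
        super().__init__()
        # Fixed dictionary of points shared by all units, uniform in [-boundary, boundary].
        self.register_buffer("dictionary", torch.linspace(-boundary, boundary, dict_size))
        self.gamma = gamma
        # One set of mixing coefficients per unit (the adaptable part).
        self.alpha = nn.Parameter(0.1 * torch.randn(num_units, dict_size))

    def forward(self, s):
        # s: (batch, num_units); Gaussian kernel between activations and dictionary points.
        k = torch.exp(-self.gamma * (s.unsqueeze(-1) - self.dictionary) ** 2)  # (batch, units, dict)
        return (k * self.alpha).sum(dim=-1)

# Usage: interleave linear layers with KAF nonlinearities.
model = nn.Sequential(nn.Linear(10, 32), KAF(32), nn.Linear(32, 1))
out = model(torch.randn(4, 10))
```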
