entropy

Thermodynamics of high frequency nonlinear vibrations

In the field of vibrations of complex structures, energy methods such as Statistical Energy Analysis (SEA), together with a series of mid-frequency methods, represent an important resource for computational analysis. All these methods are, in general, based on a linear formulation of the elastic problem. However, when nonlinearities are present, for example those related to clearances or to the stiffening of joints, these methods cannot, in principle, be applied.

Energy sharing between nonlinear structures by entropy modelling

Entropy, and therefore the second principle of thermodynamics, was originally proposed by the Sapienza group to complement energy methods in vibroacoustics. The idea is to introduce entropy alongside energy when building models of complex mechanical systems, especially in the field of high-frequency vibrations. In fact, energy is naturally involved in the first law of thermodynamics (the principle of energy conservation), but the first law carries no information about flow direction or about the "energy flow speed", which are instead governed by inequality principles. This information, however, is crucial.
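
As a purely illustrative sketch (with assumed notation, not taken from the group's publications), consider two coupled vibrating subsystems with energies E_1, E_2 and a vibrational "temperature" T_i (for example the modal energy). The first law gives only a balance, while a second-law-type inequality fixes the direction of the exchanged power:

\frac{dE_1}{dt} = \Pi_1^{\mathrm{inj}} - \Pi_1^{\mathrm{diss}} - \Pi_{12}
\qquad \text{(first law: energy balance, no flow direction)}

\Pi_{12}\left(\frac{1}{T_2} - \frac{1}{T_1}\right) \ge 0
\qquad \text{(second law: non-negative entropy production, power flows from the "hotter" to the "colder" subsystem)}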

Reconstructing nonparametric productivity networks

Network models provide a general representation of inter-connected system dynamics. This ability to connect systems has led to a proliferation of network models for economic productivity analysis, primarily estimated non-parametrically using Data Envelopment Analysis (DEA). While network DEA models can be used to measure system performance, they lack a statistical framework for inference, due in part to the complex structure of network processes.
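
To make the non-parametric building block concrete, the following is a minimal Python sketch, with made-up data and illustrative names, of a standard single-stage, input-oriented, constant-returns-to-scale DEA efficiency score solved as a linear program; the network extension and the inferential framework discussed in the paper are not reproduced here.

import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`. X: (m inputs, n DMUs), Y: (s outputs, n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: z = [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    # Input constraints:  X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -Y @ lam <= -y_o  (i.e. Y @ lam >= y_o)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]                             # theta* in (0, 1], 1 = efficient

X = np.array([[2.0, 4.0, 3.0], [1.0, 2.0, 5.0]])   # 2 inputs, 3 decision-making units
Y = np.array([[1.0, 2.0, 1.5]])                    # 1 output, 3 decision-making units
print([round(dea_efficiency(X, Y, j), 3) for j in range(3)])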

Bayesian Neural Networks with Maximum Mean Discrepancy regularization

Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, which gives them significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration. Because of the intractability of the resulting optimization problem, most BNNs are either sampled through Monte Carlo methods, or trained by minimizing a suitable Evidence Lower BOund (ELBO) on a variational approximation.
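
As a hedged illustration of the kind of discrepancy that can pull a variational posterior towards the prior, the minimal numpy sketch below computes a biased Maximum Mean Discrepancy estimate with a Gaussian kernel between weight samples; the function names, kernel and bandwidth choice are illustrative and not taken from the paper.

import numpy as np

def gaussian_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between sample sets x (n, d) and y (m, d)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian kernel matrix
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
w_post = 0.5 + 0.1 * rng.standard_normal((256, 8))   # samples from a variational posterior
w_prior = rng.standard_normal((256, 8))              # samples from a standard normal prior
print(gaussian_mmd2(w_post, w_prior))                # > 0, shrinks as the two distributions match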

Shaping Dimensions of Urban Complexity: The Role of Economic Structure and Socio-Demographic Local Contexts

Diversification in urban functions, a key component of urban complexity, was analysed using Pielou's evenness indices for 12 socioeconomic dimensions (economic structure, working classes, education, demographic structure by age, composition of non-native population by citizenship, distribution of personal incomes, land-use, land imperviousness, building use, vertical profile of buildings, building age, construction materials) at a local spatial scale in the Athens metropolitan region, Greece.
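
For reference, Pielou's evenness of S categories with shares p_i is J' = H' / ln(S), where H' is the Shannon entropy of the shares; the short Python sketch below, with made-up category counts, shows the computation for a single dimension.

import numpy as np

def pielou_evenness(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                              # shares of the S observed categories
    H = -(p * np.log(p)).sum()                          # Shannon entropy H'
    return H / np.log(p.size) if p.size > 1 else 0.0    # J' = 1 means perfectly even

print(pielou_evenness([40, 30, 20, 10]))   # fairly diversified local unit
print(pielou_evenness([97, 1, 1, 1]))      # highly specialised one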

From phylogenetic to functional originality: Guide through indices and new developments

In biodiversity studies a species is often classified as original when it has few closely related species, a definition that reflects its phylogenetic originality. More recently, studies have focussed on biological or functional traits that reflect the role(s) that species play within communities and ecosystems. This has led many studies to an alternative evaluation of species' originality: its functional originality. Most indices of species' originality were

Entropia: A Family of Entropy-Based Conformance Checking Measures for Process Mining

This paper presents a command-line tool, called Entropia, that implements a family of conformance checking measures for process mining founded on the notion of entropy from information theory. The measures allow quantifying classical non-deterministic and stochastic precision and recall quality criteria for process models automatically discovered from traces executed by IT systems and recorded in their event logs.
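
As a hedged sketch of the underlying idea (not of the tool's actual implementation or command-line interface), the entropy of a regular language can be estimated as the logarithm of the spectral radius of the adjacency matrix of an automaton accepting it, and precision and recall can then be expressed as ratios of the entropy of the intersection of the model and log languages to the entropies of the model and of the log, respectively; the tiny matrices below are made up.

import numpy as np

def language_entropy(adjacency):
    # topological entropy ~ log of the spectral radius of the automaton's
    # transition-count (adjacency) matrix
    return float(np.log(np.max(np.abs(np.linalg.eigvals(np.asarray(adjacency, dtype=float))))))

A_log   = [[1, 1], [1, 1]]   # made-up automaton of the event-log language
A_model = [[2, 1], [1, 1]]   # made-up automaton of the model language
A_both  = [[1, 1], [1, 0]]   # made-up automaton of their intersection

recall    = language_entropy(A_both) / language_entropy(A_log)
precision = language_entropy(A_both) / language_entropy(A_model)
print(round(recall, 3), round(precision, 3))   # ~0.694, ~0.5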

Monotone Precision and Recall Measures for Comparing Executions and Specifications of Dynamic Systems

The behavioural comparison of systems is an important concern of software engineering research. For example, the areas of specification discovery and specification mining are concerned with measuring the consistency between a collection of execution traces and a program specification. This problem is also tackled in process mining with the help of measures that describe the quality of a process specification automatically discovered from execution logs.

Bio-chemical data classification by dissimilarity representation and template selection

The identification and classification of bio-chemical substances are very important tasks in chemical, biological and forensic analysis. In this work we present a new strategy, combining two processes, to improve the accuracy of the supervised classification of this type of data obtained from different analytical techniques: first, a dissimilarity representation of the data and, second, the selection of templates to refine the representative samples in each class set. In order to evaluate the performance of our proposal, a comparative study of three approaches is presented.
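
The following is a minimal Python sketch of the two-step idea: (1) re-represent each sample by its dissimilarities to a representation set, and (2) keep only a few "templates" per class (here, the samples closest to their class medoid) as that set. The template-selection rule, the choice of k = 5, and the stand-in dataset are assumptions, not the paper's procedure.

import numpy as np
from sklearn.datasets import load_wine                 # stand-in for bio-chemical data
from sklearn.metrics import pairwise_distances
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

templates = []
for c in np.unique(ytr):
    Xc = Xtr[ytr == c]
    D = pairwise_distances(Xc)                         # within-class dissimilarities
    medoid = D.sum(axis=1).argmin()                    # most central sample of the class
    templates.extend(Xc[D[medoid].argsort()[:5]])      # k = 5 templates per class
templates = np.array(templates)

# Dissimilarity representation: each sample becomes its distance vector to the templates
Dtr = pairwise_distances(Xtr, templates)
Dte = pairwise_distances(Xte, templates)
clf = KNeighborsClassifier(n_neighbors=3).fit(Dtr, ytr)
print("accuracy:", round(clf.score(Dte, yte), 3))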

Weighty LBP: a new selection strategy of LBP codes depending on their information content

This paper presents a novel variation on the use of LBP codes. Similarly to Uniform LBP and Local Salient Patterns (LSP), it aims both at obtaining an effective texture description and at decreasing the length of the feature vectors, i.e., of the chains of LBP histograms. Instead of considering uniform codes, we consider the codes providing the highest “representativeness” power with respect to texture features. We identify this subset of codes through a generalized notion of entropy, which allows determining the most informative items in a homogeneous set.
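
The Python sketch below illustrates the general workflow on made-up data: compute LBP histograms for a set of texture patches, score each code by its contribution to the entropy of the pooled code distribution, and keep the top-k codes as the shortened descriptor. The scoring rule (plain Shannon entropy contributions) and the skimage-based pipeline are stand-ins, not the paper's generalized-entropy selection.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(img, P=8, R=1.0):
    codes = local_binary_pattern(img, P, R, method="default").astype(int)
    return np.bincount(codes.ravel(), minlength=2 ** P).astype(float)

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]       # stand-in texture patches
H = np.array([lbp_histogram(im) for im in images])       # (n_images, 256) code counts

pooled = H.sum(axis=0)
p = pooled / pooled.sum()                                # pooled code distribution
contrib = -p * np.log(np.clip(p, 1e-12, None))           # each code's entropy contribution
selected = np.argsort(contrib)[-32:]                     # keep the 32 "weightiest" codes

features = H[:, selected]                                # shortened feature vectors
print(features.shape)                                    # (20, 32)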
