An evaluation of entropy measures for microphone identification
Research findings have shown that microphones can be uniquely identified from audio recordings, since the physical features of the microphone components leave repeatable and distinguishable traces on the audio stream. This property can be exploited in security applications to identify a mobile phone through its built-in microphone. The problem is to determine an accurate but also efficient representation of the physical characteristics, which is not known a priori. Usually there is a trade-off between identification accuracy and the time required to perform the classification. Various approaches have been used in the literature to address this trade-off, ranging from handcrafted statistical features to the recent application of deep learning techniques. This paper evaluates the application of different entropy measures (Shannon Entropy, Permutation Entropy, Dispersion Entropy, Approximate Entropy, Sample Entropy, and Fuzzy Entropy) and their suitability for microphone classification. The analysis is validated against an experimental dataset of the built-in microphones of 34 mobile phones, stimulated by three different audio signals. The findings show that selected entropy measures can provide very high identification accuracy in comparison to other statistical features and that they can be robust against the presence of noise. This paper performs an extensive analysis based on filter-based feature selection methods to identify the most discriminating entropy measures and the related hyper-parameters (e.g., embedding dimension). Results on the trade-off between accuracy and classification time are also presented.
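To make the feature-extraction idea concrete, the following is a minimal sketch (not the paper's actual implementation) of one of the evaluated measures, permutation entropy, whose embedding dimension `m` and delay `tau` are examples of the hyper-parameters the paper tunes:

```python
# Hypothetical sketch of permutation entropy for a 1-D audio frame.
# The function name and defaults are illustrative, not taken from the paper.
import math
from collections import Counter

def permutation_entropy(signal, m=3, tau=1):
    """Normalized permutation entropy in [0, 1] (Bandt-Pompe style)."""
    n = len(signal) - (m - 1) * tau
    # Map each length-m delay window to its ordinal pattern (argsort order).
    patterns = Counter(
        tuple(sorted(range(m), key=lambda j: signal[i + j * tau]))
        for i in range(n)
    )
    # Shannon entropy of the pattern distribution, normalized by log(m!).
    probs = [count / n for count in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)
    return max(h, 0.0) / math.log(math.factorial(m))

# A monotone ramp yields a single ordinal pattern, so entropy is near 0;
# an alternating signal mixes patterns and scores much higher.
print(permutation_entropy([1, 2, 3, 4, 5, 6]))
print(permutation_entropy([0, 1, 0, 1, 0, 1], m=2))
```

In practice, one such scalar per entropy measure (and per hyper-parameter setting) is computed for each recording and fed to a classifier, which is why the choice of measures directly drives the accuracy/time trade-off discussed above.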