A nonuniform quantizer for hardware implementation of neural networks

04 Publication in conference proceedings
Altilio Rosa, Rosato Antonello, Panella Massimo
ISSN: 2474-9672

New trends in neural computation, now dealing with distributed learning on pervasive sensor networks and multiple sources of big data, necessitate computationally efficient techniques that can be implemented on simple and cheap hardware architectures. In this paper, a nonuniform quantization at the input layer of neural networks is introduced in order to optimize their implementation on hardware architectures based on finite-precision arithmetic. Namely, we propose a nonlinear A/D conversion of input signals that takes into account the actual structure of the data to be processed. The Random Vector Functional-Link network is adopted as the reference neural model, and a genetic optimization is used to determine the quantization levels. The proposed approach is assessed through several experimental results obtained on well-known benchmarks for the general problem of data regression.
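The core idea — choosing a nonuniform codebook of quantization levels so that the quantized inputs track the actual data distribution — can be sketched with a toy genetic search. This is a simplified illustration, not the paper's actual algorithm: the fitness function, population size, and the mean-crossover-plus-Gaussian-mutation operators are all assumptions chosen for brevity, and in the paper the levels would be optimized with respect to the downstream Random Vector Functional-Link regression error rather than plain reconstruction error.

```python
import random

def quantize(x, levels):
    """Map each sample to the nearest level of a nonuniform codebook."""
    return [min(levels, key=lambda q: abs(q - s)) for s in x]

def mse(x, levels):
    """Mean squared reconstruction error of the codebook on signal x."""
    xq = quantize(x, levels)
    return sum((s - q) ** 2 for s, q in zip(x, xq)) / len(x)

def genetic_levels(x, n_levels=4, pop_size=20, generations=50, seed=0):
    """Toy GA: evolve a set of quantization levels minimizing MSE on x."""
    rng = random.Random(seed)
    lo, hi = min(x), max(x)
    # Initial population: random sorted codebooks spanning the data range.
    pop = [sorted(rng.uniform(lo, hi) for _ in range(n_levels))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: mse(x, ind))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Mean crossover with small Gaussian mutation on each level.
            child = sorted((ai + bi) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                           for ai, bi in zip(a, b))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: mse(x, ind))
```

On data concentrated around a few modes, the evolved codebook places levels near the modes and therefore achieves a lower reconstruction error than a uniform codebook with the same number of levels — which is the motivation for a nonuniform A/D conversion at the network's input.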
