Distributed on-line learning for random-weight fuzzy neural networks
The Random-Weight Fuzzy Neural Network is an inference system in which the antecedent parameters of the fuzzy rules (i.e., the membership functions) are randomly generated, while the consequent parameters are estimated with a Regularized Least Squares algorithm. We propose an on-line learning algorithm for this model under the hypothesis that the training data are distributed across a network of interconnected agents. In particular, we assume that each agent receives its data as a stream of mini-batches: whenever a new chunk arrives, the agent updates its local estimate of the consequent parameters and, periodically, all agents agree on a common model through the Distributed Average Consensus protocol. The resulting algorithm is faster than a solution trained on a centralized dataset and does not rely on any coordination authority. Experimental results on well-known datasets validate our proposal.
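The scheme described above can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: the antecedent parameters (Gaussian membership centers and widths, names assumed here) are generated once at random, each agent accumulates Regularized Least Squares statistics from its mini-batch stream, and the periodic consensus step is idealized as an exact network-wide average (a real Distributed Average Consensus protocol would reach the same value through neighbor-to-neighbor exchanges only).

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_features(X, centers, widths):
    # Normalized Gaussian firing strengths for randomly generated
    # antecedents (one rule per center); a simplified feature map.
    d = X[:, None, :] - centers[None, :, :]
    act = np.exp(-np.sum((d / widths) ** 2, axis=2))
    return act / (act.sum(axis=1, keepdims=True) + 1e-12)

class Agent:
    """Keeps the sufficient statistics of a Regularized Least Squares
    problem and updates them on-line from incoming mini-batches."""
    def __init__(self, n_rules, lam=1e-2):
        self.A = lam * np.eye(n_rules)   # regularized Gram matrix
        self.b = np.zeros(n_rules)       # cross-correlation vector

    def update(self, H, y):
        # Fold a new chunk of data into the local statistics.
        self.A += H.T @ H
        self.b += H.T @ y

    @property
    def beta(self):
        # Consequent parameters solving the local RLS problem.
        return np.linalg.solve(self.A, self.b)

# Antecedent parameters, randomly generated and shared by all agents.
n_rules, dim = 10, 3
centers = rng.uniform(-1, 1, (n_rules, dim))
widths = np.full((n_rules, dim), 0.5)

agents = [Agent(n_rules) for _ in range(4)]
for step in range(5):
    for ag in agents:
        X = rng.uniform(-1, 1, (20, dim))
        y = np.sin(X.sum(axis=1))        # toy regression target
        ag.update(fuzzy_features(X, centers, widths), y)
    # Periodic consensus step, idealized here as an exact average of
    # the agents' statistics (no central coordinator is needed for
    # this in a true Distributed Average Consensus protocol).
    A_bar = sum(ag.A for ag in agents) / len(agents)
    b_bar = sum(ag.b for ag in agents) / len(agents)
    for ag in agents:
        ag.A, ag.b = A_bar.copy(), b_bar.copy()

# After consensus, every agent holds the same consequent estimate.
print(np.allclose(agents[0].beta, agents[1].beta))  # True
```

Averaging the statistics rather than the parameter vectors is one possible design choice; it keeps each agent's estimate equal to the RLS solution of the pooled data seen so far.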