Simplified Information Maximization for Improving Multi-Layered Neural Networks

Ryotaro Kamimura


Keywords: information maximization, multi-layered neural networks, information acquisition, information use, divergence


Information-theoretic methods have been widely used to describe various aspects of neural computing. However, their learning procedures have been complicated, and they have been applied only to relatively small networks and data sets. The present paper therefore proposes a new information-theoretic method that simplifies the learning procedures. The new method is characterized by the direct control of hidden neurons and by the separation of information maximization from error minimization. The method was applied to a bankruptcy data set. Results showed that the simplified method could sufficiently increase information content. In addition, generalization performance improved in direct proportion to the increase in information. With the new method, information-theoretic methods can be applied more easily to practical problems.
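The abstract does not give the paper's formulas, but in this line of work the information content of hidden neurons is commonly measured as the mutual information between input patterns and hidden-neuron firing, computed from normalized firing rates. The following is a minimal sketch under that assumption; the function name `hidden_information` and the specific normalization are illustrative, not taken from the paper.

```python
import numpy as np

def hidden_information(v, eps=1e-12):
    """Mutual information between input patterns and hidden neurons.

    v : (S, M) array of non-negative hidden-neuron outputs
        for S input patterns and M hidden neurons.

    Assumed formulation: p(j|s) is neuron j's firing rate for
    pattern s, normalized over neurons; p(j) is its average over
    patterns.  Information is the entropy of p(j) minus the mean
    conditional entropy of p(j|s).
    """
    p_js = v / (v.sum(axis=1, keepdims=True) + eps)   # p(j|s), rows sum to 1
    p_j = p_js.mean(axis=0)                           # marginal p(j)
    h_marginal = -np.sum(p_j * np.log(p_j + eps))     # H(p_j)
    h_cond = -np.mean(np.sum(p_js * np.log(p_js + eps), axis=1))
    return h_marginal - h_cond

# One dominant neuron per pattern yields high information;
# uniform activations yield information near zero.
peaked = np.eye(4) * 0.9 + 0.05
uniform = np.ones((4, 4))
```

Maximizing this quantity directly over the hidden activations, separately from minimizing the output error, would correspond to the separation of information maximization and error minimization described above.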