Artificial Intelligence Techniques - Novel Approaches & Practical Applications
Foundation of Computer Science USA
AIT - Number 2
2011
Authors: R. Manjula Devi, S. Kuppuswami
R. Manjula Devi, S. Kuppuswami. Speed Learning by Adaptive Skipping: Improving the Learning Rate of Artificial Neural Network through Adaptive Stochastic Sample Presentation. Artificial Intelligence Techniques - Novel Approaches & Practical Applications. AIT, 2 (2011), 36-39.
The basic idea of this paper is to increase the learning speed of an artificial neural network (ANN) without affecting the accuracy of the system. A new algorithm for dynamically reducing the number of input samples presented to the ANN is given, thereby increasing the rate of learning. This method, called Adaptive Skipping, can be used alongside any supervised learning algorithm. The training phase is the most crucial and time-consuming part of an ANN, and the rate at which the ANN learns is a major concern. Among the factors affecting the learning rate, the size of the training set (the number of input samples used to train an ANN for a specific application) is considered, and how the size of the training set affects the learning rate and accuracy of an ANN is discussed. Related work on reducing the training set is reviewed. The new Adaptive Skipping scheme, which dynamically determines how many epochs an input sample is to skip depending on the consecutive successful learning of that sample, is introduced. The algorithm and the steps to train an ANN using the new approach are given, and how the speedup of learning is tested is briefly discussed. The test results are also analyzed. Finally, future work and ideas in this area are discussed. The experiment is demonstrated with a simple ANN using Adaptive Skipping alongside standard Backpropagation for learning.
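The skipping rule summarized above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the callbacks `train_step` (one Backpropagation update on a sample) and `is_learned` (the per-sample success check) are hypothetical stand-ins, and the schedule of skipping a number of epochs equal to the sample's consecutive-success streak is an assumption about how the adaptive skip count might grow.

```python
def train_with_adaptive_skipping(samples, train_step, is_learned, epochs):
    """Sketch of Adaptive Skipping: a sample that keeps being learned
    successfully is skipped for a growing number of epochs, shrinking
    the effective training set presented in each epoch."""
    streak = {i: 0 for i in range(len(samples))}      # consecutive successes
    skip_until = {i: 0 for i in range(len(samples))}  # first epoch sample is shown again
    for epoch in range(epochs):
        for i, sample in enumerate(samples):
            if epoch < skip_until[i]:
                continue                      # sample sits out this epoch
            train_step(sample)                # e.g. one Backpropagation update
            if is_learned(sample):
                streak[i] += 1
                # assumed schedule: skip as many epochs as the current streak
                skip_until[i] = epoch + 1 + streak[i]
            else:
                streak[i] = 0                 # failure: present every epoch again
```

A sample that always fails its check is presented in every epoch, so accuracy-critical samples are never starved; only samples the network already handles are skipped.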