International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 183 - Number 50
Year of Publication: 2022
Authors: Shweta Agrawal, Ravishek Kumar Singh
DOI: 10.5120/ijca2022921908
Shweta Agrawal, Ravishek Kumar Singh. Recent Improvements of Gradient Descent Method for Optimization. International Journal of Computer Applications. 183, 50 (Feb 2022), 50-53. DOI=10.5120/ijca2022921908
Gradient descent is the most common method used for optimization. It is one of the optimization techniques applied when a machine learning model or algorithm is trained. The technique assumes a convex cost function and iteratively updates the parameters of that function to reduce the cost and find a local minimum. Gradient descent operates on functions that take more than one input variable, and it measures the variation in the weights with respect to the change in error. The purpose of the technique is to adjust a set of parameters until it finds the set that yields the minimum possible value of the loss function. In this paper we introduce common gradient descent optimization techniques, discuss their challenges, and show how these lead to the derivation of their update rules. The paper also provides the advantages and disadvantages of the different variants of gradient descent.
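As a point of reference for the update rules the paper surveys, below is a minimal sketch of the basic (batch) gradient descent update, theta <- theta - lr * grad(theta). The quadratic cost function, learning rate, and starting point are illustrative assumptions, not taken from the paper.

import numpy as np

def gradient_descent(grad, theta0, lr=0.1, n_iters=100):
    """Basic (batch) gradient descent: theta <- theta - lr * grad(theta)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        theta = theta - lr * grad(theta)  # step against the gradient direction
    return theta

# Illustrative convex cost J(theta) = theta_0^2 + 2*theta_1^2,
# whose gradient is [2*theta_0, 4*theta_1]; the minimum is at the origin.
grad_J = lambda t: np.array([2.0 * t[0], 4.0 * t[1]])
print(gradient_descent(grad_J, theta0=[3.0, -2.0]))  # converges toward [0, 0]

Each iteration moves the parameters a step of size lr against the gradient of the cost; the variants discussed in the paper (e.g. stochastic and mini-batch gradient descent) modify how this gradient is estimated and how the step is applied.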