Research Article

Recent Improvements of Gradient Descent Method for Optimization

by Shweta Agrawal, Ravishek Kumar Singh
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 183 - Number 50
Year of Publication: 2022
DOI: 10.5120/ijca2022921908

Shweta Agrawal, Ravishek Kumar Singh. Recent Improvements of Gradient Descent Method for Optimization. International Journal of Computer Applications 183, 50 (Feb 2022), 50-53. DOI=10.5120/ijca2022921908

@article{10.5120/ijca2022921908,
author = {Shweta Agrawal and Ravishek Kumar Singh},
title = {Recent Improvements of Gradient Descent Method for Optimization},
journal = {International Journal of Computer Applications},
issue_date = {Feb 2022},
volume = {183},
number = {50},
month = {Feb},
year = {2022},
issn = {0975-8887},
pages = {50-53},
numpages = {4},
url = {https://ijcaonline.org/archives/volume183/number50/32268-2022921908/},
doi = {10.5120/ijca2022921908},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Shweta Agrawal
%A Ravishek Kumar Singh
%T Recent Improvements of Gradient Descent Method for Optimization
%J International Journal of Computer Applications
%@ 0975-8887
%V 183
%N 50
%P 50-53
%D 2022
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Gradient descent is the most common method used for optimization. It is one of the optimization techniques applied when a machine learning model or algorithm is trained. The technique assumes a convex cost function and iteratively updates the parameters of that function to reduce the cost and find a local minimum. The gradient operates on a function of more than one input variable, and gradient descent measures the change in the weights with respect to the change in error. The purpose of gradient descent is to adjust a set of parameters until it reaches the set that yields the minimum possible value of the loss function. In this paper we introduce common optimization techniques, discuss their challenges, and show how these lead to their update rules. We also present the advantages and disadvantages of different variants of gradient descent.
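As a minimal sketch (not taken from the paper) of the update rule the abstract describes, theta <- theta - eta * grad J(theta), the following Python example applies gradient descent to an illustrative convex quadratic cost. The matrix A, vector b, learning rate eta, and stopping tolerance are made-up values chosen only for demonstration.

import numpy as np

# Illustrative convex cost J(theta) = ||A @ theta - b||^2 and its gradient.
# A, b, eta, and the tolerance are assumed values, not from the paper.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def cost(theta):
    r = A @ theta - b
    return float(r @ r)

def grad(theta):
    # Gradient of ||A @ theta - b||^2 is 2 * A.T @ (A @ theta - b).
    return 2.0 * A.T @ (A @ theta - b)

theta = np.zeros(2)   # initial parameters
eta = 0.05            # learning rate (step size)
for step in range(1000):
    theta = theta - eta * grad(theta)        # theta <- theta - eta * grad J(theta)
    if np.linalg.norm(grad(theta)) < 1e-8:   # stop near a stationary point
        break

print(step, theta, cost(theta))

For this quadratic cost the iterates converge to the unique minimizer of J; in general, as the abstract notes, gradient descent only guarantees a local minimum, and the step size must be small enough for the updates to converge.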

Index Terms

Computer Science
Information Sciences

Keywords

Gradient Descent, Machine Learning, Optimization, Cost Function, Iterative