Research Article

A Novel Methodology to Implement Optimization Algorithms in Machine Learning

by Venkata Karthik Gullapalli, Rahul Brungi
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 112 - Number 4
Year of Publication: 2015
DOI: 10.5120/19657-1296

Venkata Karthik Gullapalli, Rahul Brungi. A Novel Methodology to Implement Optimization Algorithms in Machine Learning. International Journal of Computer Applications. 112, 4 (February 2015), 33-36. DOI=10.5120/19657-1296

@article{ 10.5120/19657-1296,
author = { Venkata Karthik Gullapalli, Rahul Brungi },
title = { A Novel Methodology to Implement Optimization Algorithms in Machine Learning },
journal = { International Journal of Computer Applications },
issue_date = { February 2015 },
volume = { 112 },
number = { 4 },
month = { February },
year = { 2015 },
issn = { 0975-8887 },
pages = { 33-36 },
numpages = {4},
url = { https://ijcaonline.org/archives/volume112/number4/19657-1296/ },
doi = { 10.5120/19657-1296 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Venkata Karthik Gullapalli
%A Rahul Brungi
%T A Novel Methodology to Implement Optimization Algorithms in Machine Learning
%J International Journal of Computer Applications
%@ 0975-8887
%V 112
%N 4
%P 33-36
%D 2015
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Optimization is one of the pillars of statistical learning and plays a major role in the design and development of intelligent systems such as search engines, recommender systems, and speech and image recognition software. Machine learning is the study of giving computers the ability to learn without being explicitly programmed: a computer is said to learn from experience with respect to a specified task if its performance on that task improves with that experience. Machine learning algorithms are applied to problems to reduce manual effort; they manipulate data and predict outputs for new data with high precision and low uncertainty. Optimization algorithms, in turn, are used to make rational decisions in environments of uncertainty and imprecision. In this paper, a methodology is presented for using an efficient optimization algorithm as an alternative to gradient descent, the optimization algorithm commonly used in machine learning.
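To make the setting of the abstract concrete, the following sketch (an illustrative example, not code from the paper; all names and the data set are assumptions) minimizes a least-squares cost with plain fixed-step gradient descent and, for comparison, with a single exact Newton step. Because the cost is quadratic, the Newton update lands on the minimizer in one iteration, illustrating why curvature-aware methods such as BFGS can be attractive alternatives to gradient descent.

```python
# Illustrative sketch (assumed example): fit y = theta0 + theta1 * x by
# minimizing the least-squares cost
#   J(theta) = (1/2m) * sum_i (theta0 + theta1*x_i - y_i)^2
# (a) with fixed-step gradient descent, (b) with one exact Newton step.

def gradient(theta, xs, ys):
    """Gradient of the least-squares cost J at theta."""
    m = len(xs)
    r = [theta[0] + theta[1] * x - y for x, y in zip(xs, ys)]
    return [sum(r) / m, sum(ri * x for ri, x in zip(r, xs)) / m]

def gradient_descent(xs, ys, alpha=0.1, iters=500):
    """Plain gradient descent with a fixed learning rate alpha."""
    theta = [0.0, 0.0]
    for _ in range(iters):
        g = gradient(theta, xs, ys)
        theta = [theta[0] - alpha * g[0], theta[1] - alpha * g[1]]
    return theta

def newton_step(xs, ys):
    """One Newton update from theta = (0, 0); the Hessian of J is constant."""
    m = len(xs)
    h01 = sum(xs) / m                     # H = [[1, mean(x)], [mean(x), mean(x^2)]]
    h11 = sum(x * x for x in xs) / m
    g = gradient([0.0, 0.0], xs, ys)
    det = h11 - h01 * h01                 # 2x2 determinant (h00 = 1)
    d0 = (h11 * g[0] - h01 * g[1]) / det  # d = H^{-1} g by explicit 2x2 inverse
    d1 = (g[1] - h01 * g[0]) / det
    return [-d0, -d1]                     # theta = 0 - H^{-1} g

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]  # exact fit: theta = (1, 2)
theta_gd = gradient_descent(xs, ys)                  # many small steps
theta_nt = newton_step(xs, ys)                       # one curvature-aware step
```

Gradient descent needs hundreds of iterations (and a well-chosen learning rate) to approach the minimizer here, while the Newton step reaches it exactly; quasi-Newton methods such as BFGS approximate the same curvature information without forming the Hessian explicitly.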

References
  1. Malouf, Robert (2002). "A comparison of algorithms for maximum entropy parameter estimation". Proc. Sixth Conf. on Natural Language Learning (CoNLL). pp. 49–55.
  2. Andrew, Galen; Gao, Jianfeng (2007). "Scalable training of L1-regularized log-linear models". Proceedings of the 24th International Conference on Machine Learning.
  3. Zhu, Ciyou; Byrd, Richard H.; Lu, Peihuang; Nocedal, Jorge (1997). "Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization". ACM Transactions on Mathematical Software 23 (4): 550–560.
  4. Fletcher, Roger (1987), Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8.
  5. Venkata Karthik Gullapalli and Aishwarya Asesh, Data Trawling and Security Strategies, ISSN 2278-8727, IOSR Journal of Computer Engineering, Volume 16, Issue 6, Ver. 1, Nov–Dec 2014.
  6. Danilo P. Mandic, A Generalized Normalized Gradient Descent Algorithm, IEEE Signal Processing Letters, Vol. 11, No. 2, February 2004.
  7. Freund, Y., Iyer, R., Schapire, R., & Singer, Y. (2003). An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4, 933–969.
  8. Herbrich, R., Graepel, T., & Obermayer, K. (2000). Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, MIT Press (pp. 115–132).
  9. Martin F. Møller, A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning, ISSN 0105-8517, DAIMI PB-339, November 1990.
  10. D. E. Goldberg and J. H. Holland, Genetic Algorithms and Machine Learning, Guest Editorial, Machine Learning 3: 95–99, 1988, Kluwer Academic Publishers, The Netherlands.
  11. R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, Adv. Neural Inf. Process. Syst., 26 (2013), 315–323.
  12. Mokbnache L., Boubakeur A. (2002) "Comparison of Different Back-Propagation Algorithms used in The Diagnosis of Transformer Oil", IEEE Annual Report Conference on Electrical Insulation and Dielectric Phenomena, 244–247.
  13. Charalambous C. (1992) Conjugate Gradient Algorithm for Efficient Training of Artificial Neural Networks, IEEE Proceedings, 139 (3), 301–310.
  14. Jin Yu, S. V. N. Vishwanathan, Simon Günter, Nicol N. Schraudolph, A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning, Journal of Machine Learning Research, March 2010.
Index Terms

Computer Science
Information Sciences

Keywords

Gradient Descent, BFGS, Cost Function, Data Analysis