Research Article

Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm

by Nagesh Ramprasad
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 150 - Number 3
Year of Publication: 2016
Authors: Nagesh Ramprasad
DOI: 10.5120/ijca2016911481

Nagesh Ramprasad. Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm. International Journal of Computer Applications 150, 3 (Sep 2016), 32-38. DOI=10.5120/ijca2016911481

@article{10.5120/ijca2016911481,
  author     = {Nagesh Ramprasad},
  title      = {Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm},
  journal    = {International Journal of Computer Applications},
  issue_date = {Sep 2016},
  volume     = {150},
  number     = {3},
  month      = {Sep},
  year       = {2016},
  issn       = {0975-8887},
  pages      = {32-38},
  numpages   = {9},
  url        = {https://ijcaonline.org/archives/volume150/number3/26077-2016911481/},
  doi        = {10.5120/ijca2016911481},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Nagesh Ramprasad
%T Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm
%J International Journal of Computer Applications
%@ 0975-8887
%V 150
%N 3
%P 32-38
%D 2016
%I Foundation of Computer Science (FCS), NY, USA
Abstract

A neural network is a computing system made up of a number of simple, highly interconnected processing elements. Every neural network is organized as a set of layers, and the interconnected nodes within those layers apply the activation function. This research focuses on the Microsoft Neural Network Algorithm, a straightforward implementation of the adaptable and popular neural networks used in machine learning. The algorithm operates by testing every possible state of the input attribute against every possible state of the predictable attribute and calculating probabilities for each combination from the training data. Neural networks can, for instance, be used to write a computer program that recognizes handwritten digits. Later in the study, artificial neural networks are discussed. An artificial neural network has no central processor; most networks instead rely on learning rules that modify the weights of the connections according to the input patterns presented to them repeatedly. From the study, it is clear that the Microsoft Neural Network Viewer can be used to view and work with data models that correlate the inputs with the outputs, and through this a custom viewer can quickly explore the structure of a Microsoft neural network model. Consider handwritten digit recognition: given a sequence written as 505762, most people will readily recognize the digits as 505762. Such ease is deceptive. In each hemisphere of the human brain lies the primary visual cortex, also known as V1, which contains 140 million neurons with 10 billion connections between them. Human vision also relies on a series of visual cortices, V2, V3, V4 and V5, that carry out progressively more difficult image processing. In effect, our heads work as supercomputers, and recognizing handwritten digits is not an easy task. Neural networks were therefore developed to recognize handwritten digits and to make sense of what the brain does unconsciously; in this role, they sample the data, distribute it, and reduce it to simple values.
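The probability calculation described above can be illustrated outside of SQL Server. The sketch below is not the Microsoft implementation; it is a minimal Python illustration, using a made-up attribute pair (an "Income" input attribute and a "BikeBuyer" predictable attribute), of counting every combination of input state and predictable state in a small training set and turning the counts into conditional probabilities.

from collections import Counter

# Hypothetical training rows: each pairs the state of one input attribute
# ("Income") with the state of the predictable attribute ("BikeBuyer").
training_rows = [
    ("High", "Yes"), ("High", "Yes"), ("High", "No"),
    ("Low", "No"), ("Low", "No"), ("Low", "Yes"),
]

pair_counts = Counter(training_rows)                      # (input state, predictable state) counts
input_counts = Counter(inp for inp, _ in training_rows)   # input state counts

# Conditional probability of each predictable state given each input state.
for (inp, out), count in sorted(pair_counts.items()):
    print(f"P({out} | Income={inp}) = {count / input_counts[inp]:.2f}")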
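The handwritten-digit example rests on the idea of layers of interconnected nodes joined by an activation function. The following is a minimal sketch, assuming NumPy is available, of one forward pass through a small fully connected network of the kind that could be trained to map a flattened digit image to ten class scores; the layer sizes and the random weights are illustrative placeholders, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: a 28x28 digit image flattened to 784 inputs,
# one hidden layer of 30 nodes, and 10 output nodes, one per digit class.
w1, b1 = rng.normal(size=(30, 784)), rng.normal(size=30)
w2, b2 = rng.normal(size=(10, 30)), rng.normal(size=10)

def sigmoid(z):
    # Activation function applied at every interconnected node.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Propagate one flattened image through both layers of the network.
    hidden = sigmoid(w1 @ x + b1)
    return sigmoid(w2 @ hidden + b2)

x = rng.random(784)            # stand-in for a flattened handwritten digit
print(forward(x).argmax())     # index of the highest-scoring digit class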
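The abstract also notes that artificial neural networks learn through rules that modify connection weights as input patterns are presented to them repeatedly. The short sketch below illustrates that idea with a classic perceptron-style update on a made-up set of patterns; it is shown purely as an illustration and is not the training procedure of the Microsoft algorithm.

# A learning rule that adjusts connection weights after each presented pattern.
patterns = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # present the patterns repeatedly
    for inputs, target in patterns:
        output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
        error = target - output
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print(weights, bias)                      # weights after repeated presentation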

References
  1. Bishop, Christopher M. Pattern recognition and machine learning. Springer, 2006.
  2. Funahashi, Ken-Ichi. "On the approximate realization of continuous mappings by neural networks." Neural networks 2.3 (1989): 183-192.
  3. Microsoft Neural Network Algorithm, https://msdn.microsoft.com/en-us/library/ms174941.aspx
  4. Microsoft Neural Network Algorithm Technical Reference, https://msdn.microsoft.com/en-us/library/cc645901.aspx
  5. Graves, Alex, Abdel-Rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.
  6. Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012).
  7. Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. "Multilayer feedforward networks are universal approximators." Neural networks 2.5 (1989): 359-366.
  8. Haykin, Simon. Neural networks: A comprehensive foundation. 2nd ed. Pearson Education, 2004.
  9. Maass, Wolfgang. "Networks of spiking neurons: the third generation of neural network models." Neural networks 10.9 (1997): 1659-1671.
  10. Oja, Erkki. "Principal components, minor components, and linear neural networks." Neural networks 5.6 (1992): 927-935.
  11. Specht, Donald F. "Probabilistic neural networks." Neural networks 3.1 (1990): 109-118.
  12. Wolpert, David H. "Stacked generalization." Neural networks 5.2 (1992): 241-259.
Index Terms

Computer Science
Information Sciences

Keywords

Neural Networks, Microsoft Neural Networks, handwritten digits, data models, Artificial Neural Networks