Research Article

Taxonomy and a Theoretical Model for Feedforward Neural Networks

by Benuwa Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Richard Amankwah, Dickson Keddy Wornyo, Ernest Ansah
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 163 - Number 4
Year of Publication: 2017
DOI: 10.5120/ijca2017913513

Benuwa Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Richard Amankwah, Dickson Keddy Wornyo, Ernest Ansah. Taxonomy and a Theoretical Model for Feedforward Neural Networks. International Journal of Computer Applications. 163, 4 (Apr 2017), 39-49. DOI=10.5120/ijca2017913513

@article{10.5120/ijca2017913513,
  author = {Benuwa Ben-Bright and Yongzhao Zhan and Benjamin Ghansah and Richard Amankwah and Dickson Keddy Wornyo and Ernest Ansah},
  title = {Taxonomy and a Theoretical Model for Feedforward Neural Networks},
  journal = {International Journal of Computer Applications},
  issue_date = {Apr 2017},
  volume = {163},
  number = {4},
  month = {Apr},
  year = {2017},
  issn = {0975-8887},
  pages = {39-49},
  numpages = {9},
  url = {https://ijcaonline.org/archives/volume163/number4/27386-2017913513/},
  doi = {10.5120/ijca2017913513},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address = {New York, USA}
}

%0 Journal Article
%A Benuwa Ben-Bright
%A Yongzhao Zhan
%A Benjamin Ghansah
%A Richard Amankwah
%A Dickson Keddy Wornyo
%A Ernest Ansah
%T Taxonomy and a Theoretical Model for Feedforward Neural Networks
%J International Journal of Computer Applications
%@ 0975-8887
%V 163
%N 4
%P 39-49
%D 2017
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Feedforward Neural Network (FFNN) is a class of Artificial Neural Network (ANN) in which the connections between units do not form a directed cycle. ANNs, modeled on the vast network of neurons in the brain (the human central nervous system), are usually presented as systems of interconnected "neurons" that exchange messages with one another. These connections carry numeric weights that can be adjusted on the basis of experience, making neural networks adaptive to inputs and capable of learning. This paper presents a comprehensive review of FFNNs, with emphasis on implementation issues that have been addressed by previous approaches. We also propose a theoretical model that exhibits potentially superior performance in convergence speed, computational efficiency and effectiveness, and generality compared with state-of-the-art models.
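
To make the two defining properties in the abstract concrete (connections that form no directed cycle, and numeric weights adjusted from experience), the following is a minimal NumPy sketch of a single-hidden-layer FFNN trained by gradient descent on one toy input. The layer sizes, sigmoid activation, learning rate, and training pair are illustrative assumptions, not the model proposed in the paper.

import numpy as np

# Minimal single-hidden-layer feedforward network: a sketch, not the
# paper's proposed model. Layer sizes and hyperparameters are assumptions.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights connect one layer to the next only; nothing feeds back,
# so the computation graph contains no directed cycle.
W1 = rng.normal(scale=0.1, size=(4, 8))   # input (4) -> hidden (8)
W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden (8) -> output (1)

def forward(x):
    h = sigmoid(x @ W1)   # hidden activations
    y = sigmoid(h @ W2)   # network output
    return h, y

def train_step(x, target, lr=0.5):
    """One gradient step on squared error: the numeric weights are
    adjusted from experience, which makes the network adaptive."""
    global W1, W2
    h, y = forward(x)
    err = y - target                           # shape (1,)
    grad_y = err * y * (1.0 - y)               # backprop through output sigmoid
    grad_h = (grad_y @ W2.T) * h * (1.0 - h)   # backprop to the hidden layer
    W2 -= lr * np.outer(h, grad_y)
    W1 -= lr * np.outer(x, grad_h)
    return 0.5 * float(err[0]) ** 2

x = np.array([0.1, 0.9, 0.3, 0.5])  # toy input (illustrative values)
for _ in range(200):
    loss = train_step(x, target=1.0)
print("squared error after training:", loss)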

References
  1. A. Kumar and F. Shaik, "Image Processing Methods Utilized," in Image Processing in Diabetic Related Causes, ed: Springer, 2016, pp. 9-18.
  2. D. J. Larimer, "Processes and Systems for Automated Collective Intelligence," ed: Google Patents, 2007.
  3. R. Cutler and A. Kapoor, "System and method for audio/video speaker detection," ed: Google Patents, 2008.
  4. M. Caudill, "Neural nets primer, part VI," AI Expert, vol. 4, pp. 61-67, 1989.
  5. R. Hecht-Nielsen, "Neural network primer: part i," AI Expert, pp. 4-51, 1989.
  6. W. Wu, G. Feng, and X. Li, "Training multilayer perceptrons via minimization of sum of ridge functions," Advances in Computational Mathematics, vol. 17, pp. 331-347, 2002.
  7. W. Wu, G. Feng, Z. Li, and Y. Xu, "Deterministic convergence of an online gradient method for BP neural networks," IEEE Transactions on Neural Networks, vol. 16, pp. 533-540, 2005.
  8. W. Wu, N. Zhang, Z. Li, L. Li, and Y. Liu, "Convergence of gradient method with momentum for back-propagation neural networks," Journal of Computational Mathematics, vol. 26, p. 613, 2008.
  9. W. Sun and Y.-X. Yuan, Optimization theory and methods: nonlinear programming vol. 1: Springer Science & Business Media, 2006.
  10. Z. Li, W. Wu, and Y. Tian, "Convergence of an online gradient method for feedforward neural networks with stochastic inputs," Journal of Computational and Applied Mathematics, vol. 163, pp. 165-176, 2004.
  11. R. C. O'Reilly and Y. Munakata, Computational explorations in cognitive neuroscience: Understanding the mind by simulating the brain: MIT press, 2000.
  12. D. Fagan, "JMu teaches its last course," Chance, vol. 63, p. 41.
  13. O. Sporns, Networks of the Brain: MIT press, 2010.
  14. S. Samarasinghe, Neural networks for applied sciences and engineering: from fundamentals to complex pattern recognition: CRC Press, 2016.
  15. R. A. R. Ashfaq, X.-Z. Wang, J. Z. Huang, H. Abbas, and Y.-L. He, "Fuzziness based semi-supervised learning approach for intrusion detection system," Information Sciences, vol. 378, pp. 484-497, 2017.
  16. S. Goyal and G. K. Goyal, "Heuristic machine learning feedforward algorithm for predicting shelf life of processed cheese," International Journal of Basic and Applied Sciences, vol. 1, pp. 458-467, 2012.
  17. S. Goyal and G. K. Goyal, "Soft computing single hidden layer models for shelf life prediction of burfi," Russian Journal of Agricultural and Socio-Economic Sciences, vol. 5, 2012.
  18. G.-z. Quan, Z.-y. Zhan, T. Wang, and Y.-f. Xia, "Modeling the Hot Tensile Flow Behaviors at Ultra-High-Strength Steel and Construction of Three-Dimensional Continuous Interaction Space for Forming Parameters," High Temperature Materials and Processes, vol. 36, pp. 29-43, 2017.
  19. C. MacLeod, "The synthesis of artificial neural networks using single string evolutionary techniques," 1999.
  20. J. Nagi and M. S. K. Ahmed, "Pattern recognition of simple shapes in a MATLAB/Simulink environment: design and development of an efficient high-speed face recognition system," thesis, Electrical and Electronics Engineering, Universiti Tenaga Nasional, 2007.
  21. P. Auer, H. Burgsteiner, and W. Maass, "A learning rule for very simple universal approximators consisting of a single layer of perceptrons," Neural Networks, vol. 21, pp. 786-795, 2008.
  22. Y.-C. Hu, "Tolerance rough sets for pattern classification using multiple grey single-layer perceptrons," Neurocomputing, vol. 179, pp. 144-151, 2016.
  23. Q. V. Le, "A Tutorial on Deep Learning Part 1: Nonlinear Classifiers and The Backpropagation Algorithm," ed, 2015.
  24. B. Choubin, S. Khalighi-Sigaroodi, A. Malekian, and Ö. Kişi, "Multiple linear regression, multi-layer perceptron network and adaptive neuro-fuzzy inference system for forecasting precipitation based on large-scale climate signals," Hydrological Sciences Journal, vol. 61, pp. 1001-1009, 2016.
  25. S. P. Fard and Z. Zainuddin, "The universal approximation capabilities of double 2π-periodic approximate identity neural networks," Soft Computing, vol. 19, pp. 2883-2890, 2015.
  26. H. H. Bhadeshia, "Neural networks in materials science," ISIJ international, vol. 39, pp. 966-979, 1999.
  27. H. Schütze, D. A. Hull, and J. O. Pedersen, "A comparison of classifiers and document representations for the routing problem," in Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval, 1995, pp. 229-237.
  28. M. M. Saggaf, M. N. Toksöz, and H. M. Mustafa, "Estimation of reservoir properties from seismic data by smooth neural networks," Geophysics, vol. 68, pp. 1969-1983, 2003.
  29. P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," Information Theory, IEEE Transactions on, vol. 44, pp. 525-536, 1998.
  30. H. Xiao and X. Zhu, "Margin-Based Feed-Forward Neural Network Classifiers," arXiv preprint arXiv:1506.03626, 2015.
  31. B. B. Benuwa, Y. Z. Zhan, B. Ghansah, D. K. Wornyo, and F. Banaseka Kataka, "A Review of Deep Machine Learning," in International Journal of Engineering Research in Africa, 2016, pp. 124-136.
  32. P. Vishwanath and V. Viswanatha, "Face classification using Widrow-Hoff learning parallel linear collaborative discriminant regression (WH-PLCDRC)," Journal of Theoretical and Applied Information Technology, vol. 89, p. 362, 2016.
  33. J. Wang, W. Wu, Z. Li, and L. Li, "Convergence of gradient method for double parallel feedforward neural network," Int J Numer Anal Model, vol. 8, pp. 484-495, 2011.
  34. L. S. D. G. Z. Shisheng, "Aeroengine Lubricating Oil Metal Elements Concentration Prediction Based on Double Parallel Process Neural Network," Lubrication Engineering, vol. 5, p. 010, 2006.
  35. X. Meng, G.-B. Ding, and L. Tang, "Calculation for the Exhaust Enthalpy of a Steam Turbine Based on Parallel Connection Feed-forward Network," Turbine Technology, vol. 1, p. 004, 2006.
  36. G. Huang and R. He, "Analyzing water diversion demand for irrigation areas at lower reach of yellow river with BP neural network techniques," J. Irriga. Drain, vol. 19, pp. 20-23, 2000.
  37. M. He, "Double Parallel Feedforward Neural Networks with Application to Simulation Study of Flight Fault Inspection," Acta Aeronautica et Astronautica Sinica, vol. 15, pp. 877-881, 1994.
  38. S. Haykin and R. Lippmann, "Neural Networks, A Comprehensive Foundation," International Journal of Neural Systems, vol. 5, pp. 363-364, 1994.
  39. C. G. Looney, Pattern recognition using neural networks: theory and algorithms for engineers and scientists: Oxford University Press, Inc., 1997.
  40. M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function," Neural networks, vol. 6, pp. 861-867, 1993.
  41. Y. Liang, D. Feng, H. P. Lee, S. P. Lim, and K. Lee, "Successive approximation training algorithm for feedforward neural networks," Neurocomputing, vol. 42, pp. 311-322, 2002.
  42. Y. Liang, W. Lin, H. Lee, S. Lim, K. Lee, and H. Sun, "Proper orthogonal decomposition and its applications–part II: Model reduction for MEMS dynamical analysis," Journal of Sound and Vibration, vol. 256, pp. 515-532, 2002.
  43. K. Hornik, "Approximation capabilities of multilayer feedforward networks," Neural networks, vol. 4, pp. 251-257, 1991.
  44. S.-s. Zhong and G. Ding, "Research on double parallel feedforward process neural networks and its application," Control and Decision, vol. 20, p. 764, 2005.
  45. D. Wei, "Alternate Iterative Algorithm of Double Parallel Artificial Neural Network and Its Application," Mini-Micro Systems, vol. 17, pp. 65-68, 1996.
  46. M. He, "Error Analysis of Double Parallel Feedforward Neural Networks," Journal of Northwestern Polytechnical University, vol. 15, pp. 125-130, 1997.
  47. J. Moody, S. Hanson, A. Krogh, and J. A. Hertz, "A simple weight decay can improve generalization," Advances in neural information processing systems, vol. 4, pp. 950-957, 1995.
  48. G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation," Neural Networks, IEEE Transactions on, vol. 16, pp. 57-67, 2005.
  49. Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, pp. 1-127, 2009.
  50. R. Collobert and J. Weston, "A unified architecture for natural language processing: Deep neural networks with multitask learning," in Proceedings of the 25th international conference on Machine learning, 2008, pp. 160-167.
  51. D. Ciresan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 2012, pp. 3642-3649.
  52. D. Ciresan, U. Meier, and J. Schmidhuber, "Multi-column Deep Neural Networks for Image Classification Supplementary Online Material."
  53. G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural computation, vol. 18, pp. 1527-1554, 2006.
  54. Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, "Greedy layer-wise training of deep networks," Advances in neural information processing systems, vol. 19, p. 153, 2007.
  55. X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International conference on artificial intelligence and statistics, 2010, pp. 249-256.
  56. H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin, "Exploring strategies for training deep neural networks," The Journal of Machine Learning Research, vol. 10, pp. 1-40, 2009.
  57. J. Weston, R. Collobert, F. Sinz, L. Bottou, and V. Vapnik, "Inference with the universum," in Proceedings of the 23rd international conference on Machine learning, 2006, pp. 1009-1016.
  58. B. Taskar, C. Guestrin, and D. Koller, "Max-margin Markov networks," Advances in neural information processing systems, vol. 16, p. 25, 2004.
  59. G. Chechik, G. Heitz, G. Elidan, P. Abbeel, and D. Koller, "Max-margin classification of data with absent features," The Journal of Machine Learning Research, vol. 9, pp. 1-21, 2008.
  60. R. Gilad-Bachrach, A. Navot, and N. Tishby, "Margin based feature selection-theory and algorithms," in Proceedings of the twenty-first international conference on Machine learning, 2004, p. 43.
  61. B. Li, M. Chi, J. Fan, and X. Xue, "Support cluster machine," in Proceedings of the 24th international conference on Machine learning, 2007, pp. 505-512.
  62. B. Ghansah, S. Wu, and N. Ghansah, "Rankboost-Based Result Merging," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, 2015, pp. 907-914.
  63. T. N. Huynh and R. J. Mooney, "Online Max-Margin Weight Learning for Markov Logic Networks," in SDM, 2011, pp. 642-651.
  64. M. Hoai and F. De la Torre, "Max-margin early event detectors," International Journal of Computer Vision, vol. 107, pp. 191-202, 2014.
  65. C. Li, Q. Liu, W. Dong, F. Wei, X. Zhang, and L. Yang, "Max-Margin-Based Discriminative Feature Learning," IEEE transactions on neural networks and learning systems, vol. 27, pp. 2768-2775, 2016.
  66. S. A. Ali, M. Andleeb, and R. Asif, "Performance Evaluation of Loss Functions for Margin Based Robust Speech Recognition," Performance Evaluation, vol. 7, 2016.
  67. A. Laudani, G. M. Lozito, F. R. Fulginei, and A. Salvini, "On training efficiency and computational costs of a feed forward neural network: a review," Computational intelligence and neuroscience, vol. 2015, p. 83, 2015.
  68. G. Cybenko, "Approximation by superpositions of a sigmoidal function," Mathematics of control, signals and systems, vol. 2, pp. 303-314, 1989.
  69. K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural networks, vol. 2, pp. 359-366, 1989.
  70. K.-I. Funahashi, "On the approximate realization of continuous mappings by neural networks," Neural networks, vol. 2, pp. 183-192, 1989.
  71. J. L. Castro, C. J. Mantas, and J. Benítez, "Neural networks with a continuous squashing function in the output are universal approximators," Neural Networks, vol. 13, pp. 561-563, 2000.
  72. H. Jaeger, Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the "echo state network" approach: GMD-Forschungszentrum Informationstechnik, 2002.
  73. H. Jaeger, "Echo state network," Scholarpedia, vol. 2, no. 9, p. 2330, 2007.
  74. T. Lin, B. G. Horne, P. Tiňo, and C. L. Giles, "Learning long-term dependencies in NARX recurrent neural networks," Neural Networks, IEEE Transactions on, vol. 7, pp. 1329-1338, 1996.
  75. A. Rodan and P. Tiňo, "Minimum complexity echo state network," Neural Networks, IEEE Transactions on, vol. 22, pp. 131-144, 2011.
  76. D. Li, M. Han, and J. Wang, "Chaotic time series prediction based on a novel robust echo state network," Neural Networks and Learning Systems, IEEE Transactions on, vol. 23, pp. 787-799, 2012.
  77. K. Hornik, M. Stinchcombe, and H. White, "Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks," Neural networks, vol. 3, pp. 551-560, 1990.
  78. E. Soria-Olivas, J. D. Martín-Guerrero, G. Camps-Valls, A. J. Serrano-López, J. Calpe-Maravilla, and L. Gómez-Chova, "A low-complexity fuzzy activation function for artificial neural networks," IEEE Transactions on Neural Networks, vol. 14, pp. 1576-1579, 2003.
  79. A. L. Braga, C. H. Llanos, D. Göhringer, J. Obie, J. Becker, and M. Hübner, "Performance, accuracy, power consumption and resource utilization analysis for hardware/software realized Artificial Neural Networks," in Bio-Inspired Computing: Theories and Applications (BIC-TA), 2010 IEEE Fifth International Conference on, 2010, pp. 1629-1636.
  80. G. Bebis and M. Georgiopoulos, "Feed-forward neural networks," Potentials, IEEE, vol. 13, pp. 27-31, 1994.
Index Terms

Computer Science
Information Sciences

Keywords

Feedforward neural networks, Margin-based principle, Multi-layer perceptron, Single-layer perceptron, Double parallel feedforward neural networks, Neural networks