Research Article

Deep Columnar Convolutional Neural Network

by Somshubra Majumdar, Ishaan Jain
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 145 - Number 12
Year of Publication: 2016
DOI: 10.5120/ijca2016910772

Somshubra Majumdar, Ishaan Jain. Deep Columnar Convolutional Neural Network. International Journal of Computer Applications 145, 12 (Jul 2016), 25-32. DOI=10.5120/ijca2016910772

@article{10.5120/ijca2016910772,
  author     = {Somshubra Majumdar and Ishaan Jain},
  title      = {Deep Columnar Convolutional Neural Network},
  journal    = {International Journal of Computer Applications},
  issue_date = {Jul 2016},
  volume     = {145},
  number     = {12},
  month      = {Jul},
  year       = {2016},
  issn       = {0975-8887},
  pages      = {25-32},
  numpages   = {8},
  url        = {https://ijcaonline.org/archives/volume145/number12/25331-2016910772/},
  doi        = {10.5120/ijca2016910772},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Somshubra Majumdar
%A Ishaan Jain
%T Deep Columnar Convolutional Neural Network
%J International Journal of Computer Applications
%@ 0975-8887
%V 145
%N 12
%P 25-32
%D 2016
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Recent developments in deep learning have shown that convolutional networks with several layers can approach human-level accuracy on tasks such as handwritten digit classification and object recognition. State-of-the-art performance is typically obtained from model ensembles, in which several models are trained on the same data and their prediction probabilities are averaged or voted on. Here, the proposed model is a single deep and wide neural network architecture that offers near state-of-the-art performance on various image classification challenges, such as the MNIST, CIFAR-10, and CIFAR-100 datasets. On the competitive MNIST handwritten digit classification challenge, the proposed model approaches the accuracy of the state-of-the-art 35-model ensemble. On the CIFAR datasets, the proposed model approaches the performance of the top two ensemble models. The architecture is also analyzed on the SVHN dataset.
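To make the single deep-and-wide idea concrete, the sketch below builds several parallel convolutional columns whose features are averaged inside one jointly trained network, mimicking an ensemble average without training separate models. This is a minimal illustration in PyTorch; the layer sizes, column count, and merge-by-averaging placement are assumptions made for the example, not the paper's exact configuration.

# Minimal sketch of a multi-column ("columnar") CNN in PyTorch.
# Layer sizes and the number of columns are illustrative assumptions,
# not the paper's exact architecture.
import torch
import torch.nn as nn

class ColumnarCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10, num_columns=3):
        super().__init__()
        # Each column is an independent convolutional feature extractor.
        self.columns = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # -> (N, 64, 1, 1)
                nn.Flatten(),              # -> (N, 64)
            )
            for _ in range(num_columns)
        ])
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # Average the column features inside the network, so the whole
        # model trains end-to-end as one wide architecture rather than
        # as separately trained ensemble members.
        feats = torch.stack([col(x) for col in self.columns], dim=0)
        return self.classifier(feats.mean(dim=0))

# Example: a batch of MNIST-shaped inputs (1 x 28 x 28).
model = ColumnarCNN()
logits = model(torch.randn(8, 1, 28, 28))  # -> shape (8, 10)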

Index Terms

Computer Science
Information Sciences

Keywords

Neural Networks, Convolutional Neural Network, Computer Vision