Research Article

Going More Deeper with Convolutions for Network in Network

by Neji Kouka, Jawaher Ben Khalfa, Jalel Eddine Hajlaoui
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 56
Year of Publication: 2024
10.5120/ijca2024924286

Neji Kouka, Jawaher Ben Khalfa, Jalel Eddine Hajlaoui. Going More Deeper with Convolutions for Network in Network. International Journal of Computer Applications. 186, 56 (Dec 2024), 35-38. DOI=10.5120/ijca2024924286

@article{ 10.5120/ijca2024924286,
author = { Neji Kouka, Jawaher Ben Khalfa, Jalel Eddine Hajlaoui },
title = { Going More Deeper with Convolutions for Network in Network },
journal = { International Journal of Computer Applications },
issue_date = { Dec 2024 },
volume = { 186 },
number = { 56 },
month = { Dec },
year = { 2024 },
issn = { 0975-8887 },
pages = { 35-38 },
numpages = { 4 },
url = { https://ijcaonline.org/archives/volume186/number56/going-more-deeper-with-convolutions-for-network-in-network/ },
doi = { 10.5120/ijca2024924286 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Neji Kouka
%A Jawaher Ben Khalfa
%A Jalel Eddine Hajlaoui
%T Going More Deeper with Convolutions for Network in Network
%J International Journal of Computer Applications
%@ 0975-8887
%V 186
%N 56
%P 35-38
%D 2024
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Network in Network (NiN) is an important extension of the deep convolutional neural network that replaces the linear convolution filter with a shallow multilayer perceptron (MLP), a nonlinear function. In this article, we propose replacing convolution layers with convolution modules. The main feature of this architecture is the improved utilization of computing resources inside the network, achieved through a carefully crafted design that increases network depth and width while keeping the computational budget constant. Experimental results on the CIFAR-10 dataset demonstrate the effectiveness of the proposed method.
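The core NiN mechanism the abstract describes can be illustrated concretely: the shallow MLP that replaces the linear filter is applied at every spatial position with shared weights, which is exactly what a 1x1 convolution computes. The NumPy sketch below (not the authors' code; all array names and sizes are illustrative) checks this equivalence by computing a 1x1 convolution two ways.

```python
import numpy as np

# NiN replaces a linear convolution filter with a small MLP applied at every
# spatial position with shared weights. One MLP layer applied per pixel is
# exactly a 1x1 convolution, so an "mlpconv" layer can be built from an
# ordinary convolution followed by stacked 1x1 convolutions.

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))   # H x W x C_in feature map
w = rng.standard_normal((16, 32))     # shared MLP weights: C_in -> C_out

# 1x1 convolution: contract over the channel axis at every pixel at once.
conv1x1 = np.einsum("hwc,cd->hwd", x, w)

# Same result, computed as an explicit per-pixel MLP (matrix multiply).
mlp = np.array([[x[i, j] @ w for j in range(8)] for i in range(8)])

assert np.allclose(conv1x1, mlp)
print(conv1x1.shape)  # (8, 8, 32)
```

Stacking several such 1x1 layers with nonlinearities in between, after a regular convolution, yields the nonlinear filtering that NiN-style modules provide.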

References
  1. M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014.
  2. K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition", CoRR, vol. abs/1409.1556, 2014, [online] Available: http://arxiv.org/abs/1409.1556
  3. C. Szegedy et al., "Going deeper with convolutions", CoRR, vol. abs/1409.4842, 2014, [online] Available: http://arxiv.org/abs/1409.4842
  4. M. Lin, Q. Chen, and S. Yan. Network in network. International Conference on Learning Representations, abs/1312.4400, 2014.
  5. G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
  6. D. Ciresan, U. Meier, J. Schmidhuber, "Multi-column deep neural networks for image classification", CoRR, vol. abs/1202.2745, 2012, [online] Available: http://arxiv.org/abs/1202.2745
  7. K. Gregor, Y. LeCun, "Emergence of complex-like cells in a temporal product network with local receptive fields", CoRR, vol. abs/1006.0448, 2010, [online] Available: http://arxiv.org/abs/1006.0448.
  8. K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, Y. LeCun, "What is the best multi-stage architecture for object recognition?", Proc. IEEE Int. Conf. Comput. Vis., pp. 2146-2153, Sep. 2009.
  9. Y. LeCun, K. Kavukcuoglu, C. Farabet, "Convolutional networks and applications in vision", Proc. IEEE Int. Symp. Circuits Syst., pp. 253-256, Jun. 2010
  10. M. D. Zeiler, R. Fergus, "Stochastic pooling for regularization of deep convolutional neural networks", CoRR, vol. abs/1301.3557, 2013, [online] Available: http://arxiv.org/abs/1301.3557
  11. K. He, X. Zhang, S. Ren, J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition", Proc. Eur. Conf. Comput. Vis., pp. 346-361, 2014.
  12. T. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, Y. Ma, "PCANet: A simple deep learning baseline for image classification?", CoRR, vol. abs/1404.3606, 2014, [online] Available: http://arxiv.org/abs/1404.3606.
  13. C. Lee, P. Gallagher, Z. Tu, "Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree", CoRR, vol. abs/1509.08985, 2015, [online] Available: https://arxiv.org/abs/1509.08985
  14. J. T. Springenberg, A. Dosovitskiy, T. Brox, M. Riedmiller, "Striving for simplicity: The all convolutional net", CoRR, vol. abs/1412.6806, 2014, [online] Available: http://arxiv.org/abs/1412.6806.
  15. K. Gregor, Y. LeCun, "Emergence of complex-like cells in a temporal product network with local receptive fields", CoRR, vol. abs/1006.0448, 2010, [online] Available: http://arxiv.org/abs/1006.0448.
  16. D. Yoo, S. Park, J. Lee, I. Kweon, "Multi-scale pyramid pooling for deep convolutional representation", Proc. IEEE Workshop Comput. Vis. Pattern Recognit., pp. 1-5, Sep. 2015.
  17. B. Graham, "Fractional max-pooling", CoRR, vol. abs/1412.6071, 2014, [online] Available: https://arxiv.org/abs/1412.6071
  18. N. Murray, F. Perronnin, "Generalized max pooling", Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 2473-2480, Sep. 2014.
  19. I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), volume 28 of JMLR Proceedings, pages 1319– 1327. JMLR.org, 2013.
  20. C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply supervised nets. In Proceedings of AISTATS 2015, 2015.
  21. K. He, X. Zhang, S. Ren, J. Sun, "Deep residual learning for image recognition", CoRR, vol. abs/1512.03385, 2015, [online] Available: http://arxiv.org/abs/1512.03385.
  22. S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016.
  23. S. Xie, R. Girshick, P. Dollár, Z. Tu, K. He, "Aggregated residual transformations for deep neural networks", CoRR, vol. abs/1611.05431, 2016, [online] Available: http://arxiv.org/abs/1611.05431.
  24. Z. Liao and G. Carneiro. On the importance of normalisation layers in deep learning with piecewise linear activation units. arXiv preprint arXiv:1508.00330, 2015.
Index Terms

Computer Science
Information Sciences

Keywords

Convolutional Neural Networks (CNNs), Image recognition, Network in Network (NiN)