Research Article

Densely Connected Network in Network

by Neji Kouka, Jawaher Ben Khalfa, Jalel Eddine Hajlaoui
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 75
Year of Publication: 2025
DOI: 10.5120/ijca2025924633

Neji Kouka, Jawaher Ben Khalfa, Jalel Eddine Hajlaoui. Densely Connected Network in Network. International Journal of Computer Applications 186, 75 (Mar 2025), 22-25. DOI=10.5120/ijca2025924633

@article{10.5120/ijca2025924633,
  author = {Neji Kouka and Jawaher Ben Khalfa and Jalel Eddine Hajlaoui},
  title = {Densely Connected Network in Network},
  journal = {International Journal of Computer Applications},
  issue_date = {Mar 2025},
  volume = {186},
  number = {75},
  month = {Mar},
  year = {2025},
  issn = {0975-8887},
  pages = {22-25},
  numpages = {9},
  url = {https://ijcaonline.org/archives/volume186/number75/densely-connected-network-in-network/},
  doi = {10.5120/ijca2025924633},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address = {New York, USA}
}
%0 Journal Article
%A Neji Kouka
%A Jawaher Ben Khalfa
%A Jalel Eddine Hajlaoui
%T Densely Connected Network in Network
%J International Journal of Computer Applications
%@ 0975-8887
%V 186
%N 75
%P 22-25
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Recent work has shown that convolutional neural networks can be deeper, more accurate, and more efficient to train if they contain shorter connections between layers close to the input and layers close to the output. In this paper, we build on this observation and propose a new deep network structure called the “densely connected Network in Network” (DcNiN), which connects each MLPconv layer to all subsequent layers in the same structure by passing along its own feature maps: the feature maps of each layer are used as inputs to all following layers. DcNiN offers several appealing advantages: it strengthens feature propagation, alleviates the vanishing-gradient problem, reduces the number of parameters, and encourages feature reuse. We evaluate the proposed architecture on a widely used and highly competitive benchmark (CIFAR-10), where DcNiN achieves 99.9611% accuracy on the test set.
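The connectivity pattern the abstract describes can be illustrated with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes PyTorch, and the module names (MLPConv, DenselyConnectedNiNBlock), the growth rate, and the unit count are hypothetical. It simply wires NiN-style MLPconv units so that each unit receives the channel-wise concatenation of the block input and all earlier units' outputs, and the block emits the concatenation of everything it has produced.

# Minimal sketch (assumption: PyTorch, not the authors' code) of NiN-style MLPconv
# units with DenseNet-style connectivity, as described in the abstract.
import torch
import torch.nn as nn

class MLPConv(nn.Module):
    """One NiN 'MLPconv' unit: a kxk convolution followed by two 1x1 convolutions."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class DenselyConnectedNiNBlock(nn.Module):
    """Stack of MLPconv units with dense connectivity: unit i receives the
    channel-wise concatenation of the block input and all previous outputs."""
    def __init__(self, in_channels, growth_rate=32, num_units=4):
        super().__init__()
        self.units = nn.ModuleList([
            MLPConv(in_channels + i * growth_rate, growth_rate)
            for i in range(num_units)
        ])

    def forward(self, x):
        features = [x]
        for unit in self.units:
            out = unit(torch.cat(features, dim=1))  # reuse all earlier feature maps
            features.append(out)
        return torch.cat(features, dim=1)

# Usage example on a CIFAR-10 sized input (3x32x32).
if __name__ == "__main__":
    block = DenselyConnectedNiNBlock(in_channels=3, growth_rate=32, num_units=4)
    y = block(torch.randn(1, 3, 32, 32))
    print(y.shape)  # torch.Size([1, 131, 32, 32]): 3 + 4 * 32 channels

Because every unit sees the features of all earlier units, later layers add only a small number of new feature maps (the growth rate) while still having access to everything computed before them, which is the source of the parameter savings and feature reuse mentioned above.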

References
  1. G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
  2. G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
  3. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  4. R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In NIPS, 2015.
  5. Gao S, Miao Z, Zhang Q, Li Q (2019) DCRN: densely connected refinement network for object detection. J Phys: Conf Series, 1229, Article ID 012034.
  6. Zeiler MD, Fergus R (2013) Visualizing and understanding convolutional networks.
  7. Zagoruyko S, Komodakis N (2017) Wide residual networks. arXiv:1605.07146, pp 87.1–87.12. https://doi.org/10.5244/C.30.87
  8. Iandola F, Han S, Moskewicz M, Ashraf K, Dally W, Keutzer K (2017) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.
  9. Alaeddine H, Jihene M (2021) Deep network in network. Neural Comput & Applic 33:1453–1465. https://doi.org/10.1007/s00521-020-05008-0
  10. Hmidi A, Malek J (2021) Deep residual network in network. Computational Intelligence and Neuroscience, Hindawi, Article ID 6659083. https://doi.org/10.1155/2021/6659083
  11. C. Szegedy et al., "Going deeper with convolutions", CoRR, vol. abs/1409.4842, 2014. [Online]. Available: http://arxiv.org/abs/1409.4842
  12. Gong Y, Wang L, Guo R, Lazebnik S (2014) Multi-scale orderless pooling of deep convolutional activation features. http://arxiv.org/abs/1403.1840
  13. Graham B (2014) Fractional max-pooling. https://arxiv.org/abs/1412.6071
  14. He K, Zhang X, Ren S, Sun J (2014) Spatial pyramid pooling in deep convolutional networks for visual recognition. http://arxiv.org/abs/1406.4729
  15. Lee C, Gallagher P, Tu Z (2015) Generalizing pooling functions in convolutional neural networks: mixed, gated, and tree. https://arxiv.org/abs/1509.08985
  16. Lin M, Chen Q, Yan S (2013) Network in network. http://arxiv.org/abs/1312.4400
  17. Murray N, Perronnin F (2015) Generalized max pooling. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2473–2480, Boston, MA, USA
  18. Nair V, Hinton GE (2010) Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pp 807–814
  19. Raiko T, Valpola H, LeCun Y (2012) Deep learning made easier by linear transformations in perceptrons. In: Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2012), N. D. Lawrence and M. A. Girolami, Eds., vol. 22, pp 924–932, La Palma, Canary Islands, Spain
  20. Romero A, Ballas N, Kahou SE, Chassang A, Gatta C, Bengio Y (2014) FitNets: hints for thin deep nets
  21. Schmidhuber J (1992) Learning complex, extended sequences using the principle of history compression. Neural Comput 4(2):234–242
  22. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition
  23. Springenberg J, Dosovitskiy A, Brox T, Riedmiller M (2014) Striving for simplicity: the all convolutional net. http://arxiv.org/abs/1412.6806
  24. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929–1958
  25. Chang J-R, Chen Y-S (2015) Batch-normalized maxout network in network. http://arxiv.org/abs/1511.02583
  26. Liao Z, Carneiro G (2016) On the importance of normalisation layers in deep learning with piecewise linear activation units.
  27. Alaeddine H, Jihene M (2023) Wide deep residual networks in networks. Multimed Tools Appl 82:7889–7899. https://doi.org/10.1007/s11042-022-13696-0
  28. S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift", CoRR, vol. abs/1502.03167, 2015.
  29. I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), volume 28 of JMLR Proceedings, pages 1319–1327. JMLR.org, 2013.
  30. C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply supervised nets. In Proceedings of AISTATS 2015, 2015.
Index Terms

Computer Science
Information Sciences

Keywords

Convolutional Neural Networks (CNNs)
Image recognition
Network in Network (NiN)