Research Article

Gesture Recognition for Interpretation of Bengali Sign Language using Hyper Parameter Tuning Convolution Neural Network

by Tahmina Akter, Muhammad Anwarul Azim, Mohammad Khairul Islam
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 185 - Number 13
Year of Publication: 2023
Authors: Tahmina Akter, Muhammad Anwarul Azim, Mohammad Khairul Islam
DOI: 10.5120/ijca2023922672

Tahmina Akter, Muhammad Anwarul Azim, Mohammad Khairul Islam. Gesture Recognition for Interpretation of Bengali Sign Language using Hyper Parameter Tuning Convolution Neural Network. International Journal of Computer Applications 185, 13 (Jun 2023), 1-7. DOI=10.5120/ijca2023922672

@article{ 10.5120/ijca2023922672,
author = { Tahmina Akter, Muhammad Anwarul Azim, Mohammad Khairul Islam },
title = { Gesture Recognition for Interpretation of Bengali Sign Language using Hyper Parameter Tuning Convolution Neural Network },
journal = { International Journal of Computer Applications },
issue_date = { Jun 2023 },
volume = { 185 },
number = { 13 },
month = { Jun },
year = { 2023 },
issn = { 0975-8887 },
pages = { 1-7 },
numpages = { 7 },
url = { https://ijcaonline.org/archives/volume185/number13/32753-2023922672/ },
doi = { 10.5120/ijca2023922672 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
Abstract

In recent years, the number of deaf and mute persons has increased dramatically across the world, and they find it challenging to communicate with hearing people. Their communication takes place through gestures of the face, hands, and body. The goal of this research is to develop a hand gesture recognition method that can interpret sign language for deaf and hard-of-hearing persons. We collected our hand sign data from a well-known source called Ishara-Lipi. Because of the small number of gesture images, we augmented this dataset from 1009 images to 9360 images for recognition purposes. The augmented dataset is divided into training and testing sets. We develop a Convolutional Neural Network (CNN) model that stacks multiple convolutional layers, each followed by pooling and dropout layers, with several hidden (dense) layers afterward. We experiment with our CNN model by tuning its hyperparameters in all possible combinations. Our model achieves 98.78 percent training accuracy and 97.45 percent validation accuracy. We evaluate the trained CNN model on the test dataset, which yields 98 percent for both accuracy and F1 score.
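The abstract describes trying the hyperparameters "in all possible combinations", i.e. an exhaustive grid search over the Cartesian product of candidate values. A minimal sketch of that search loop is below; the grid values and the scoring callback are illustrative assumptions, not the settings or training code used in the paper.

```python
from itertools import product

# Illustrative hyperparameter grid; these names and values are
# assumptions for the sketch, not the paper's reported settings.
GRID = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout_rate": [0.25, 0.5],
    "batch_size": [32, 64],
    "dense_units": [128, 256],
}

def all_combinations(grid):
    """Enumerate every hyperparameter configuration: the Cartesian
    product of all candidate values (exhaustive grid search)."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def best_config(grid, score_fn):
    """Evaluate each configuration with score_fn (a placeholder for a
    full CNN training-and-validation run) and keep the best scorer."""
    return max(all_combinations(grid), key=score_fn)

# 3 * 2 * 2 * 2 = 24 configurations would be trained and compared.
n_configs = sum(1 for _ in all_combinations(GRID))
```

In practice `score_fn` would build the CNN with the given configuration, train it, and return validation accuracy; the sketch only shows the enumeration that drives the search.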

References
  1. Ahmed, Tauhid S, Akhand M.: Bangladeshi sign language recognition using fingertip position. In: 2016 International Conference on Medical Engineering, Health Informatics and Technology (MediTec). IEEE, pp. 1–5 (2016).
  2. Chen, Lingchen et al.: A survey on hand gesture recognition. In: 2013 International Conference on Computer Sciences and Applications. IEEE, pp. 313–316 (2013).
  3. Haque, Promila, Das B, Kaspy MN.: Two-Handed Bangla Sign Language Recognition Using Principal Component Analysis (PCA) and KNN Algorithm. In: 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE). IEEE, pp. 1–4 (2019).
  4. Hoque, Tazimul.: Automated Bangla sign language translation system: Prospects, limitations and applications. In: 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV). IEEE, pp. 856–862 (2016).
  5. Wang, Chen, Xi Y.: Convolutional neural network for image classification. In: Johns Hopkins University Baltimore, MD 21218 (1997).
  6. Paulo T, Ribeiro F, Reis LP.: Vision-based Portuguese sign language recognition system. In: New Perspectives in Information Systems and Technologies, Volume 1. Springer, pp. 605–617 (2014).
  7. Itkarkar, RR and Anil V Nandi.: Hand gesture to speech conversion using Matlab. In: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT). IEEE, pp. 1–4 (2013).
  8. Khan, Asifullah.: A survey of the recent architectures of deep convolutional neural networks. In: Artificial Intelligence Review 53.8, pp. 5455–5516 (2020).
  9. Huong, Thi T N, Huu T V, Xuan TL.: Static hand gesture recognition for Vietnamese sign language (VSL) using principle components analysis. In: 2015 International Conference on Communications, Management and Telecommunications (ComManTel). IEEE, pp. 138–141 (2015).
  10. Shinde, Sonajirao S, Rm Aute.: Real Time Hand Gesture Recognition and Voice Conversion System for Deaf and Dumb Person Based on Image Processing. In: Journal NX 2.9, pp. 39–43.
  11. Dabre, Kanchan and Dholay S.: Machine learning model for sign language interpretation using webcam images. In: 2014 International Conference on Circuits, Systems, Communication and Information Technology Applications (CSCITA). IEEE, pp. 317–321 (2014).
  12. Saha, Nath H.: A machine learning based approach for hand gesture recognition using distinctive feature extraction. In: 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, pp. 91–98 (2018).
  13. Kishore, PVV.: Optical flow hand tracking and active contour hand shape features for continuous sign language recognition with artificial neural networks. In: 2016 IEEE 6th International Conference on Advanced Computing (IACC). IEEE, pp. 346–351 (2016).
  14. Prakash, Meena R.: Gesture recognition and finger tip detection for human computer interaction. In: 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS). IEEE, pp. 1–4 (2017).
  15. Cao, Ming C L, and Yin Z.: American sign language alphabet recognition using Microsoft Kinect. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 44–52 (2015).
  16. Perez-Sanz, Fernando, Pedro J N, and Marcos.: Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms. In: GigaScience 6.11, gix092 (2017).
  17. Hussain, Zeshan.: Differential data augmentation techniques for medical imaging classification tasks. In: AMIA Annual Symposium Proceedings. Vol. 2017. American Medical Informatics Association, p. 979 (2017).
  18. Vogelsang, David C and Erickson, Bradley J.: Magician's corner: 6. TensorFlow and TensorBoard (2020).
  19. Albawi, Saad, Mohammed TA, and Al-Zawi S.: Understanding of a convolutional neural network. In: 2017 International Conference on Engineering and Technology (ICET). IEEE, pp. 1–6 (2017).
  20. Xu, Xiaowei.: A deep learning system to screen novel coronavirus disease 2019 pneumonia. In: Engineering 6.10, pp. 1122–1129 (2020).
  21. Alani, Ali A.: Hand gesture recognition using an adapted convolutional neural network with data augmentation. In: 2018 4th International Conference on Information Management (ICIM). IEEE, pp. 5–12 (2018).
  22. Hussain, Soeb.: Hand gesture recognition using deep learning. In: 2017 International SoC Design Conference (ISOCC). IEEE, pp. 48–49 (2017).
  23. Zhang, Shanwen.: Cucumber leaf disease identification with global pooling dilated convolutional neural network. In: Computers and Electronics in Agriculture 162, pp. 422–430 (2019).
  24. Hahn, Sangchul, Choi H.: Understanding dropout as an optimization trick. In: Neurocomputing 398, pp. 64–70 (2020).
  25. Xu, Qi.:Overfitting remedy by sparsifying regularization on fully connected layers of CNNs. In: Neurocomputing 328, pp. 69–74 (2019).
  26. Wang, Yulong, Zhang H, and Zhang G.: PSO-CNN: An efficient PSO-based algorithm for fine-tuning hyper-parameters of convolutional neural networks. In: Swarm and Evolutionary Computation 49, pp. 114–123 (2019).
  27. Bibaeva, V., 2018, September. Using metaheuristics for hyper-parameter optimization of convolutional neural networks. In 2018 IEEE 28Th international workshop on machine learning for signal processing (MLSP) (pp. 1-6). IEEE.
  28. Kligvasser, I., Shaham, T.R. and Michaeli, T., 2018. xunit: Learning a spatial activation function for efficient image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2433-2442).
  29. Adithya, V. and Rajesh, R., 2020. A deep convolutional neural network approach for static hand gesture recognition. Procedia computer science, 171, pp.2353-2361.
  30. Ozcan, T. and Basturk, A., 2019. Transfer learning-based convolutional neural networks with heuristic optimization for hand gesture recognition. Neural Computing and Applications, 31, pp. 8955–8970.
Index Terms

Computer Science
Information Sciences

Keywords

Gesture recognition, sign language, CNN, hyperparameter tuning