Research Article

Traffic Image Analysis using Deep Learning for Safe Vehicle Navigation in Roads Controlled by Police

by Samitha P. Randeniya, Ruwan D. Nawarathna
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 35
Year of Publication: 2024
Authors: Samitha P. Randeniya, Ruwan D. Nawarathna
10.5120/ijca2024923915

Samitha P. Randeniya, Ruwan D. Nawarathna. Traffic Image Analysis using Deep Learning for Safe Vehicle Navigation in Roads Controlled by Police. International Journal of Computer Applications. 186, 35 (Aug 2024), 8-18. DOI=10.5120/ijca2024923915

@article{ 10.5120/ijca2024923915,
author = { Samitha P. Randeniya, Ruwan D. Nawarathna },
title = { Traffic Image Analysis using Deep Learning for Safe Vehicle Navigation in Roads Controlled by Police },
journal = { International Journal of Computer Applications },
issue_date = { Aug 2024 },
volume = { 186 },
number = { 35 },
month = { Aug },
year = { 2024 },
issn = { 0975-8887 },
pages = { 8-18 },
numpages = {9},
url = { https://ijcaonline.org/archives/volume186/number35/traffic-image-analysis-using-deep-learning-for-safe-vehicle-navigation-in-roads-controlled-by-police/ },
doi = { 10.5120/ijca2024923915 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Samitha P. Randeniya
%A Ruwan D. Nawarathna
%T Traffic Image Analysis using Deep Learning for Safe Vehicle Navigation in Roads Controlled by Police
%J International Journal of Computer Applications
%@ 0975-8887
%V 186
%N 35
%P 8-18
%D 2024
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Driving in urban environments where traffic is controlled by police presents significant challenges for both human drivers and autonomous vehicles (AVs). Interacting with pedestrians and traffic police officers in such settings requires sophisticated communication and an understanding of their intentions. These interactions are critical because pedestrians are the most vulnerable road users. Traffic conditions, driving scenarios, police signals, and pedestrian behaviours can vary widely between countries. Understanding these behaviours and signals is not straightforward and depends on numerous factors such as pedestrian demographics, traffic dynamics, and environmental conditions. In many countries, pedestrians may use hand signals to stop traffic when crossing the road, and traffic police officers may control vehicles during traffic jams, traffic light malfunctions, and at zebra crossings. The most common signals are STOP and GO. Convolutional Neural Networks (CNNs), a deep learning technique, are widely applied in areas such as computer vision and object recognition. This study explores how an AV can identify the STOP signal from a pedestrian or traffic police officer amidst other pedestrians and officers on the road. A model is proposed using a custom dataset and a CNN-based multi-class object detection framework. Additionally, the model can identify pedestrians crossing at zebra crossings. To test the proposed model in real time, a compact autonomous vehicle was built around a Raspberry Pi, a single-board computer popular for small-scale projects. This prototype AV detects five classes of objects and responds by moving forward or stopping based on the relevant signals. The study focused on traffic conditions in Sri Lanka, where the case study was conducted.
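The page does not include any code, but the pipeline the abstract outlines (a single-stage CNN detector feeding a stop/go decision on a Raspberry Pi) can be sketched roughly as follows. This is a minimal illustration only: the model file name, the class IDs assigned to the STOP-related classes, the detector's output tensor layout, the score threshold, and the GPIO pin driving the motors are all assumptions for illustration, not details taken from the paper.

# Hypothetical sketch of the detect-then-drive loop described in the abstract.
# Model path, class IDs, score threshold, and motor GPIO pin are assumptions.
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "traffic_detector.tflite"   # assumed single-stage detector exported to TFLite (uint8 input)
STOP_CLASSES = {0, 1}                     # assumed IDs: police STOP signal, pedestrian STOP signal
SCORE_THRESHOLD = 0.5                     # assumed detection confidence cut-off
MOTOR_PIN = 18                            # assumed GPIO pin enabling the motor driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]

def detect(frame):
    """Run one forward pass and return (class_id, score) pairs above the threshold."""
    rgb = cv2.cvtColor(cv2.resize(frame, (in_w, in_h)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(input_details[0]["index"], np.expand_dims(rgb, 0).astype(np.uint8))
    interpreter.invoke()
    # Typical TFLite SSD output order is [boxes, classes, scores, count];
    # adjust the indices below to match the actual exported model.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return [(int(c), float(s)) for c, s in zip(classes, scores) if s >= SCORE_THRESHOLD]

cap = cv2.VideoCapture(0)                 # Pi camera or USB camera on the prototype vehicle
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        detections = detect(frame)
        if any(cls in STOP_CLASSES for cls, _ in detections):
            GPIO.output(MOTOR_PIN, GPIO.LOW)    # stop: a STOP hand signal was detected
        else:
            GPIO.output(MOTOR_PIN, GPIO.HIGH)   # otherwise keep moving forward
finally:
    cap.release()
    GPIO.cleanup()

In a real deployment the stop/go decision would also need temporal smoothing (for example, requiring a STOP detection across several consecutive frames) so that a single noisy frame does not toggle the motors.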

References
  1. T. Winkle, “Safety benefits of automated vehicles: Extended findings from accident research for development, validation and testing,” in Autonomous Driving: Technical, Legal and Social Aspects, 2016, pp. 335–364. doi: 10.1007/978-3-662-48847-8_17.
  2. T. Litman, “Autonomous Vehicle Implementation Predictions: Implications for Transport Planning,” Transportation Research Board Annual Meeting, vol. 28, 2014.
  3. A. Rasouli, I. Kotseruba, and J. K. Tsotsos, “Understanding Pedestrian Behavior in Complex Traffic Scenes,” IEEE Transactions on Intelligent Vehicles, vol. 3, no. 1, pp. 61–70, 2018, doi: 10.1109/tiv.2017.2788193.
  4. S. Deb, L. Strawderman, D. W. Carruth, J. DuBien, B. Smith, and T. M. Garrison, “Development and validation of a questionnaire to assess pedestrian receptivity toward fully autonomous vehicles,” Transportation Research Part C: Emerging Technologies, vol. 84, pp. 178–195, 2017, doi: 10.1016/j.trc.2017.08.029.
  5. R. Sun, X. Zhuang, C. Wu, G. Zhao, and K. Zhang, “The estimation of vehicle speed and stopping distance by pedestrians crossing streets in a naturalistic traffic environment,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 30, pp. 97–106, 2015, doi: 10.1016/j.trf.2015.02.002.
  6. A. Hussein, F. García, J. M. Armingol, and C. Olaverri-Monreal, “P2V and V2P communication for pedestrian warning on the basis of autonomous vehicles,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2016, pp. 2034–2039. doi: 10.1109/ITSC.2016.7795885.
  7. A. Rasouli, I. Kotseruba, and J. K. Tsotsos, “Agreeing to cross: How drivers and pedestrians communicate,” in IEEE Intelligent Vehicles Symposium, Proceedings, 2017, pp. 264–269. doi: 10.1109/IVS.2017.7995730.
  8. E. Y. Du et al., “Pedestrian behavior analysis using 110-Car Naturalistic Driving data in USA,” 2013. [Online]. Available: https://www.esv.nhtsa.dot.gov/Proceedings/23/isv7/main.htm
  9. Z. Ren, X. Jiang, and W. Wang, “Analysis of the Influence of Pedestrians’ eye Contact on Drivers’ Comfort Boundary during the Crossing Conflict,” in Procedia Engineering, 2016, vol. 137, pp. 399–406. doi: 10.1016/j.proeng.2016.01.274.
  10. A. Tom and M. A. Granié, “Gender differences in pedestrian rule compliance and visual search at signalized and unsignalized crossroads,” Accident Analysis and Prevention, vol. 43, no. 5, pp. 1794–1801, 2011, doi: 10.1016/j.aap.2011.04.012.
  11. M. Sucha, D. Dostal, and R. Risser, “Pedestrian-driver communication and decision strategies at marked crossings,” Accident Analysis and Prevention, vol. 102, pp. 41–50, 2017, doi: 10.1016/j.aap.2017.02.018.
  12. D. Dey and J. Terken, “Pedestrian interaction with vehicles: Roles of explicit and implicit communication,” in AutomotiveUI 2017 - 9th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, Proceedings, 2017, pp. 109–113. doi: 10.1145/3122986.3123009.
  13. R. Sathya and M. Kalaiselvi Geetha, “Vision based Traffic Police Hand Signal Recognition in Surveillance Video - A Survey,” International Journal of Computer Applications, vol. 81, no. 9, pp. 1–10, 2013, doi: 10.5120/14037-2192.
  14. G. Wang and X. Ma, “Traffic Police Gesture Recognition using RGB-D and Faster R-CNN,” in 2018 International Conference on Intelligent Informatics and Biomedical Sciences, ICIIBMS 2018, 2018, pp. 78–81. doi: 10.1109/ICIIBMS.2018.8549975.
  15. C. Wang, C. Zhao, and H. Wang, “Self-similarity based zebra-crossing detection for intelligent vehicle,” Open Automation and Control Systems Journal, vol. 7, no. 1, pp. 974–986, 2015, doi: 10.2174/1874444301507010974.
  16. X. Liu, Y. Zhang, and Q. Li, “Automatic pedestrian crossing detection and impairment analysis based on mobile mapping system,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017, vol. 4, no. 2W4, pp. 251–258. doi: 10.5194/isprs-annals-IV-2-W4-251-2017.
  17. K. Behrendt, L. Novak, and R. Botros, “A deep learning approach to traffic lights: Detection, tracking, and classification,” in Proceedings - IEEE International Conference on Robotics and Automation, 2017, pp. 1370–1377. doi: 10.1109/ICRA.2017.7989163.
  18. R. W. Wolcott and R. M. Eustice, “Visual localization within LIDAR maps for automated urban driving,” in IEEE International Conference on Intelligent Robots and Systems, 2014, pp. 176–183. doi: 10.1109/IROS.2014.6942558.
  19. Q. Rao and J. Frtunikj, “Deep learning for self-driving cars: Chances and challenges: Extended Abstract,” in Proceedings - International Conference on Software Engineering, 2018, pp. 35–38. doi: 10.1145/3194085.3194087.
  20. J. Kocić, N. Jovičić, and V. Drndarević, “An end-to-end deep neural network for autonomous driving designed for embedded automotive platforms,” Sensors (Switzerland), vol. 19, no. 9, 2019, doi: 10.3390/s19092064.
  21. S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu, “A survey of deep learning techniques for autonomous driving,” Journal of Field Robotics, 2019, doi: 10.1002/rob.21918.
  22. S. Tomar, “Converting video formats with FFmpeg,” Linux Journal, 2006.
  23. Z. Fang and A. M. López, “Is the Pedestrian going to Cross? Answering by 2D Pose Estimation,” in IEEE Intelligent Vehicles Symposium, Proceedings, 2018, vol. 2018-June, pp. 1271–1276. doi: 10.1109/IVS.2018.8500413.
  24. Tzutalin, “LabelImg,” GitHub repository, 2015.
  25. A. Lindgren, F. Chen, P. W. Jordan, and H. Zhang, “Requirements for the design of advanced driver assistance systems - The differences between Swedish and Chinese drivers,” International Journal of Design, vol. 2, no. 2, pp. 41–54, 2008.
  26. G. M. Björklund and L. Åberg, “Driver behaviour in intersections: Formal and informal traffic rules,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 8, no. 3, pp. 239–253, 2005, doi: 10.1016/j.trf.2005.04.006.
Index Terms

Computer Science
Information Sciences
Artificial Intelligence
Self Driving Cars
Computer Vision

Keywords

Deep Learning, Traffic Image Analysis, Object Detection, Safe Vehicle Navigation, Single-stage model