Research Article

Knowledge Based Reinforcement Learning Robot in Maze Environment

by Dr. D. Venkata Vara Prasad, Chitra Devi. J, Karpagam. P, Manju Priyadharsini. D
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 14 - Number 7
Year of Publication: 2011
DOI: 10.5120/1895-2525

Dr. D. Venkata Vara Prasad, Chitra Devi. J, Karpagam. P, Manju Priyadharsini. D . Knowledge Based Reinforcement Learning Robot in Maze Environment. International Journal of Computer Applications. 14, 7 ( February 2011), 22-30. DOI=10.5120/1895-2525

@article{ 10.5120/1895-2525,
author = { Dr. D. Venkata Vara Prasad, Chitra Devi. J, Karpagam. P, Manju Priyadharsini. D },
title = { Knowledge Based Reinforcement Learning Robot in Maze Environment },
journal = { International Journal of Computer Applications },
issue_date = { February 2011 },
volume = { 14 },
number = { 7 },
month = { February },
year = { 2011 },
issn = { 0975-8887 },
pages = { 22-30 },
numpages = {9},
url = { https://ijcaonline.org/archives/volume14/number7/1895-2525/ },
doi = { 10.5120/1895-2525 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Dr. D. Venkata Vara Prasad
%A Chitra Devi. J
%A Karpagam. P
%A Manju Priyadharsini. D
%T Knowledge Based Reinforcement Learning Robot in Maze Environment
%J International Journal of Computer Applications
%@ 0975-8887
%V 14
%N 7
%P 22-30
%D 2011
%I Foundation of Computer Science (FCS), NY, USA
Abstract

A simple knowledge-based approach to maze solving is presented for a mobile robot. The robot uses reinforcement learning, an artificial intelligence technique, to learn a new environment. The robot travels through the environment and identifies the target by following a set of rules. After reaching the target, it returns along the optimum path by avoiding dead ends. To achieve this, the robot uses a line-maze-solving algorithm that applies a set of replacement rules to substitute the correct paths for the wrong paths travelled. The algorithm is qualitative in nature, requiring no map of the environment, no image Jacobian, no homography, no fundamental matrix, and no prior assumptions. The environment is accessible, deterministic, and static. The work comprises line-path following, mobile robot navigation, knowledge-based navigation, and reinforcement learning.
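The replacement-rule idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a left-hand-rule explorer that records its turns as a string of moves ('L' left, 'R' right, 'S' straight, 'B' a U-turn at a dead end), and the rule table shown is the one commonly used for line mazes; the authors' actual rule set may differ.

```python
def simplify(path):
    """Collapse wrong turns out of a recorded path.

    Each rule replaces a triple of moves containing a U-turn ('B',
    taken at a dead end) with the single move that goes directly the
    same way, e.g. turning left, backtracking, then turning left again
    ('LBL') is equivalent to going straight ('S'). Rules are applied
    repeatedly until no U-turn remains, leaving the optimum path.
    """
    rules = {
        "LBR": "B", "LBS": "R", "LBL": "S",
        "RBL": "B", "SBL": "R", "SBS": "B",
    }
    moves = list(path)
    changed = True
    while changed and "B" in moves:
        changed = False
        for i in range(len(moves) - 2):
            triple = "".join(moves[i:i + 3])
            if triple in rules:
                moves[i:i + 3] = rules[triple]  # splice in the shortcut
                changed = True
                break
    return "".join(moves)
```

For example, a run recorded as "LLBS" (left, left into a dead end, back, straight) simplifies to "LR": the detour "LBS" collapses to a single right turn.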

References
  1. Claude F. Touzet. 2004. "Distributed Lazy Q-learning for Cooperative Mobile Robots", International Journal of Advanced Robotic Systems, Vol. 1, No. 1, pp. 5-13, ISSN 1729-8806.
  2. Gerasimos G. Rigatos. 2008. "Multi-Robot Motion Planning Using Swarm Intelligence", International Journal of Advanced Robotic Systems, Vol. 5, No. 2.
  3. Heero, K., Aabloo, A. and Kruusmaa, M. 2005. "Learning Innovative Routes for Mobile Robots in Dynamic Partially Unknown Environments", International Journal of Advanced Robotic Systems, Vol. 2, No. 3, pp. 209-222, ISSN 1729-8806.
  4. Javier Minguez and Luis Montano. 2009. "Extending Collision Avoidance Methods to Consider the Vehicle Shape, Kinematics, and Dynamics of a Mobile Robot", IEEE Transactions on Robotics, Vol. 25, No. 2.
  5. Jianghao Li, Zhenbo Li and Jiapin Chen. 2008. "Wheels Optimization and Vision Control of Omni-directional Mobile Microrobot", International Journal of Advanced Robotic Systems, Vol. 5, No. 2.
  6. Kao-Shing Hwang, Yu-Jen Chen and Tzung-Feng Lin. 2008. "Q-learning in Multi-Agent Cooperation", IEEE International Conference on Advanced Robotics and its Social Impacts, Taipei, Taiwan, Aug. 23-25.
  7. Maurizio Piaggio and Renato Zaccaria. 1997. "Learning Navigation Situations Using RoadMaps", D.I.S.T., University of Genoa, Via Opera Pia 13, I-16145 Genova, Italy.
  8. Moteaal Asadi Shirzi, M. R. Hairi Yazdi and Caro Lucas. 2007. "Combined Intelligent Control (CIC): An Intelligent Decision Making Algorithm", International Journal of Advanced Robotic Systems, Vol. 4, No. 1.
  9. Murphy, R., Kravitz, J., Stover, S. and Shoureshi, R. 2009. "Mobile Robots in Mine Rescue and Recovery", IEEE Robotics & Automation Magazine, Vol. 16, No. 2, pp. 91-103.
  10. Noureddine Ouadah, Lamine Ourak and Farès Boudjema. 2008. "Car-Like Mobile Robot Oriented Positioning by Fuzzy Controllers", International Journal of Advanced Robotic Systems, Vol. 5, No. 3.
  11. Shaker, M. R., Shigang Yue and Duckett, T. 2009. "Vision-Based Reinforcement Learning Using Approximate Policy Iteration", International Conference on Advanced Robotics (ICAR 2009).
  12. Szoke, I., Lazea, G., Tamas, L., Popa, M. and Majdik, A. 2009. "Path Planning and Dynamic Objects Detection", International Conference on Advanced Robotics (ICAR 2009).
  13. Zhichao Chen and Stanley T. Birchfield. 2009. "Qualitative Vision-Based Path Following", IEEE Transactions on Robotics, Vol. 25, No. 3.
  14. Vinod G. Shelake, Rajanish K. Kamat, Jivan S. Parab and Gourish M. Naik. "Exploring C for Microcontrollers - A Hands on Approach".
Index Terms

Computer Science
Information Sciences

Keywords

Reinforcement learning, Robot, Maze environment