Research Article

An Improved Q-learning Algorithm for Path-Planning of a Mobile Robot

by Pradipta K Das, S. C. Mandhata, H. S. Behera, S. N. Patro
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 51 - Number 9
Year of Publication: 2012
Authors: Pradipta K Das, S. C. Mandhata, H. S. Behera, S. N. Patro
DOI: 10.5120/8073-1468

Pradipta K Das, S. C. Mandhata, H. S. Behera, S. N. Patro. An Improved Q-learning Algorithm for Path-Planning of a Mobile Robot. International Journal of Computer Applications 51, 9 (August 2012), 40-46. DOI=10.5120/8073-1468

@article{10.5120/8073-1468,
  author     = {Pradipta K Das and S. C. Mandhata and H. S. Behera and S. N. Patro},
  title      = {An Improved Q-learning Algorithm for Path-Planning of a Mobile Robot},
  journal    = {International Journal of Computer Applications},
  issue_date = {August 2012},
  volume     = {51},
  number     = {9},
  month      = {August},
  year       = {2012},
  issn       = {0975-8887},
  pages      = {40-46},
  numpages   = {7},
  url        = {https://ijcaonline.org/archives/volume51/number9/8073-1468/},
  doi        = {10.5120/8073-1468},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Pradipta K Das
%A S. C. Mandhata
%A H. S. Behera
%A S. N. Patro
%T An Improved Q-learning Algorithm for Path-Planning of a Mobile Robot
%J International Journal of Computer Applications
%@ 0975-8887
%V 51
%N 9
%P 40-46
%D 2012
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Classical Q-learning requires heavy computation to attain convergence and large storage to save the Q-values of all possible actions in a given state. This paper proposes an alternative to classical Q-learning that reduces the convergence time without sacrificing the optimality of the path from a random starting state to a final goal state, when the Q-table is used for path planning of a mobile robot. Further, the proposed algorithm stores the Q-value of only the best action at each state, and thus saves significant storage. Experiments reveal that the Q-table acquired by the proposed algorithm helps reduce the robot's turning angles in the planning stage; a reduction in turning angles is economical from the point of view of the robot's energy consumption. The proposed algorithm thus has several merits with respect to classical Q-learning. The algorithm is constructed on four fundamental properties derived here, and it is validated on a Khepera-II robot.
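The storage claim can be made concrete with a small sketch. The following Python toy (the 10x10 grid, the reward scheme, and the greedy backup rule below are illustrative assumptions, not the authors' algorithm or its four properties) contrasts a classical Q-table, which keeps one value per state-action pair, with a reduced table that keeps only the best action's Q-value per state:

import random

# Sketch only: classical tabular Q-learning versus a one-value-per-state
# table, in the spirit of the abstract. Grid size, rewards, and learning
# parameters are assumptions for illustration.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four-neighbour moves
ALPHA, GAMMA = 0.5, 0.9
N = 10                                        # assumed 10x10 grid world
GOAL = (N - 1, N - 1)

def step(state, action):
    # Deterministic move clipped to the grid; reward 100 at the goal, -1 per move.
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), N - 1), min(max(y + dy, 0), N - 1))
    return nxt, (100.0 if nxt == GOAL else -1.0)

# Classical table: one entry per (state, action) pair -> N*N*4 values.
Q = {((x, y), a): 0.0 for x in range(N) for y in range(N) for a in ACTIONS}

# Reduced table: one entry per state -> N*N values.
V = {(x, y): 0.0 for x in range(N) for y in range(N)}

def classical_update(s, a):
    # Standard Q-learning backup over all actions of the successor state.
    nxt, r = step(s, a)
    best_next = max(Q[(nxt, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def reduced_update(s, a):
    # Keep only the best backed-up value seen at s; in a deterministic
    # grid this single scalar suffices for greedy path extraction.
    nxt, r = step(s, a)
    V[s] = max(V[s], r + GAMMA * V[nxt])
    return nxt

s = (0, 0)
for _ in range(20000):
    a = random.choice(ACTIONS)
    classical_update(s, a)
    s = reduced_update(s, a)
    if s == GOAL:
        s = (0, 0)  # restart an episode at the start state

On this grid the classical table holds 400 entries against 100 for the reduced one, and a greedy walk over V (always stepping to the neighbour with the largest value) recovers a path to the goal.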

Index Terms

Computer Science
Information Sciences

Keywords

Q-learning, Reinforcement learning, Motion planning, Mobile robot, Energy