Research Article

Playing Doom with Deep Reinforcement Learning

Published in August 2019 by Manan Kalra, J. C. Patni
International Conference on Recent Trends in Science, Technology, Management and Social Development
Foundation of Computer Science USA
ICRTSTMSD2018 - Number 1
August 2019
Authors: Manan Kalra, J. C. Patni

Manan Kalra, J. C. Patni. Playing Doom with Deep Reinforcement Learning. International Conference on Recent Trends in Science, Technology, Management and Social Development. ICRTSTMSD2018, 1 (August 2019), 14-20.

@article{
author = { Manan Kalra, J. C. Patni },
title = { Playing Doom with Deep Reinforcement Learning },
journal = { International Conference on Recent Trends in Science, Technology, Management and Social Development },
issue_date = { August 2019 },
volume = { ICRTSTMSD2018 },
number = { 1 },
month = { August },
year = { 2019 },
issn = { 0975-8887 },
pages = { 14-20 },
numpages = 7,
url = { /proceedings/icrtstmsd2018/number1/30844-1804/ },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Proceeding Article
%1 International Conference on Recent Trends in Science, Technology, Management and Social Development
%A Manan Kalra
%A J. C. Patni
%T Playing Doom with Deep Reinforcement Learning
%J International Conference on Recent Trends in Science, Technology, Management and Social Development
%@ 0975-8887
%V ICRTSTMSD2018
%N 1
%P 14-20
%D 2019
%I International Journal of Computer Applications
Abstract

In this work, we present a deep reinforcement learning model that drives an AI agent. The agent successfully learns policies to control itself in a virtual game environment directly from high-dimensional sensory inputs. The model is a convolutional neural network, trained with a variant of the Q-learning algorithm, whose input is raw pixels and whose output is a Q-value associated with the best possible future action. We apply our method to the first-person shooter Doom and find that it outperforms all previous approaches and also surpasses a human expert.
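
As a rough illustration of the kind of model the abstract describes, the sketch below shows a small convolutional network that maps a stack of raw game frames to one Q-value per action, together with a one-step Q-learning loss. It is a minimal sketch assuming PyTorch; the frame size, number of actions, layer dimensions, and discount factor are illustrative assumptions, not values taken from the paper.

    # Minimal sketch of a DQN-style network and Q-learning update, assuming PyTorch.
    # Frame size (84x84 grayscale), number of actions, and layer sizes are
    # illustrative assumptions, not values reported in the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DoomDQN(nn.Module):
        """CNN mapping a stack of raw game frames to one Q-value per action."""
        def __init__(self, in_frames: int = 4, n_actions: int = 3):
            super().__init__()
            self.conv1 = nn.Conv2d(in_frames, 32, kernel_size=8, stride=4)
            self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
            self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1)
            self.fc1 = nn.Linear(64 * 7 * 7, 512)   # 84x84 input -> 7x7 feature map
            self.fc2 = nn.Linear(512, n_actions)

        def forward(self, x):
            # x: (batch, in_frames, 84, 84) pixel intensities in [0, 1]
            x = F.relu(self.conv1(x))
            x = F.relu(self.conv2(x))
            x = F.relu(self.conv3(x))
            x = F.relu(self.fc1(x.flatten(start_dim=1)))
            return self.fc2(x)                       # Q(s, a) for every action a

    def q_learning_loss(net, target_net, batch, gamma=0.99):
        """One-step Q-learning target: r + gamma * max_a' Q_target(s', a')."""
        states, actions, rewards, next_states, dones = batch
        q_sa = net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            next_q = target_net(next_states).max(dim=1).values
            target = rewards + gamma * next_q * (1.0 - dones)
        return F.smooth_l1_loss(q_sa, target)

In such a setup the agent would act greedily (with occasional random exploration) on the network's Q-values, while transitions collected from the game are replayed in batches to compute the loss above.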

References
  1. Sutton, R. S. & Barto, A. G. (1998). Introduction to Reinforcement Learning. Cambridge, MA: MIT Press.
  2. Bellman, R. E. (1957). Dynamic Programming. Princeton, New Jersey: Princeton University Press.
  3. Bellemare, M. G., Naddaf, Y., Veness, J. & Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279.
  4. Bellemare, M. G., Veness, J. & Bowling, M. (2012). Investigating contingency awareness using Atari 2600 games. In AAAI.
  5. Silver, D. (2016). Deep Reinforcement Learning. DeepMind Technologies.
  6. Krizhevsky, A., Sutskever, I. & Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114.
  7. White, D. J. (1993, November). A Survey of Applications of Markov Decision Processes. The Journal of the Operational Research Society, Vol. 44, No. 11, pp. 1073–1096.
Index Terms

Computer Science
Information Sciences

Keywords

Machine Learning, Reinforcement Learning, Q-learning, DQN, CNN