Research Article

A Learning Automata based Solution for Optimizing Dialogue Strategy in Spoken Dialogue System

by G. Kumaravelan, R. Sivakumar
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 58 - Number 9
Year of Publication: 2012
Authors: G. Kumaravelan, R. Sivakumar
DOI: 10.5120/9310-3541

G. Kumaravelan, R. Sivakumar. A Learning Automata based Solution for Optimizing Dialogue Strategy in Spoken Dialogue System. International Journal of Computer Applications 58, 9 (November 2012), 20-27. DOI=10.5120/9310-3541

@article{10.5120/9310-3541,
  author     = {G. Kumaravelan and R. Sivakumar},
  title      = {A Learning Automata based Solution for Optimizing Dialogue Strategy in Spoken Dialogue System},
  journal    = {International Journal of Computer Applications},
  issue_date = {November 2012},
  volume     = {58},
  number     = {9},
  month      = {November},
  year       = {2012},
  issn       = {0975-8887},
  pages      = {20-27},
  numpages   = {9},
  url        = {https://ijcaonline.org/archives/volume58/number9/9310-3541/},
  doi        = {10.5120/9310-3541},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A G. Kumaravelan
%A R. Sivakumar
%T A Learning Automata based Solution for Optimizing Dialogue Strategy in Spoken Dialogue System
%J International Journal of Computer Applications
%@ 0975-8887
%V 58
%N 9
%P 20-27
%D 2012
%I Foundation of Computer Science (FCS), NY, USA
Abstract

The application of reinforcement learning methods to the development of dialogue strategies that support robust and efficient human–computer interaction through spoken language is a growing research area. In spoken dialogue systems, Markov Decision Processes (MDPs) provide a formal framework for planning dialogue management decisions. This framework enables the system to learn the value of initiating an action from each possible state, which in turn allows it to maximize the total reward. However, MDP formulations with large state-action spaces become intractable to solve exactly. The goal of this paper is therefore to present a novel sampling-based approximation method, built on learning automata, for computing an optimal dialogue control strategy. Compared with baseline reinforcement learning methods, the proposed approach shows better performance in terms of learning speed, a good exploration/exploitation balance in its updates, and robustness to uncertainty in the observed states.
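
For readers unfamiliar with learning automata, the sketch below is a minimal illustration of the idea referred to in the abstract: an automaton maintains a probability vector over candidate dialogue actions for a given state and adjusts it from reward feedback. It uses the standard linear reward-inaction (L_R-I) update rather than the paper's sampling-based algorithm; the dialogue action names, simulated reward model, and learning rate are assumptions made for illustration only.

# Illustrative sketch (not the authors' exact algorithm): a linear reward-inaction
# (L_R-I) learning automaton that keeps an action-probability vector for one
# dialogue state and reinforces the chosen dialogue action on positive feedback.
# Action names, the simulated reward model, and the learning rate are assumptions.
import random

class LearningAutomaton:
    def __init__(self, actions, learning_rate=0.1):
        self.actions = list(actions)
        self.lr = learning_rate
        # Start from a uniform action-probability vector.
        self.probs = [1.0 / len(self.actions)] * len(self.actions)

    def choose(self):
        # Sample an action index according to the current probability vector.
        return random.choices(range(len(self.actions)), weights=self.probs)[0]

    def update(self, chosen, reward):
        # L_R-I rule: on success (reward == 1) move probability mass toward the
        # chosen action; on failure leave the vector unchanged.
        if reward == 1:
            for i in range(len(self.probs)):
                if i == chosen:
                    self.probs[i] += self.lr * (1.0 - self.probs[i])
                else:
                    self.probs[i] *= (1.0 - self.lr)

# Hypothetical usage: an automaton attached to a single slot-confirmation state,
# trained against a stand-in for a simulated user (implicit confirmation is made
# to succeed most often, so its probability should grow over the episodes).
automaton = LearningAutomaton(["explicit_confirm", "implicit_confirm", "ask_next_slot"])
for _ in range(1000):
    a = automaton.choose()
    reward = 1 if random.random() < (0.8 if a == 1 else 0.5) else 0
    automaton.update(a, reward)
print(dict(zip(automaton.actions, [round(p, 3) for p in automaton.probs])))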

References
  1. McTear, M. 2004. Spoken Dialog Technology: Toward the Conversational User Interface, Springer-Verlag.
  2. Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA.
  3. Singh, S., Litman, D., and Walker, M. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, vol. 16, 105–133.
  4. Paek, T., and Pieraccini, R. 2008. Automating spoken dialogue management design using machine learning: an industry perspective. Speech Communication, vol. 50, no. 8-9, 716–729.
  5. Chang, H. S., Fu, M., Hu, J., and Marcus, S. I. 2007. Recursive learning automata approach to Markov decision processes. IEEE Trans. Automat. Contr., vol. 52, no. 7, 1349–1355.
  6. Young, S., Gasic, M., Keizer, S., Mairesse, F., Schatzmann, J., Thomson, B., and Yu, K. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, vol. 25, 150–174.
  7. McTear, M. 1998. Modelling spoken dialogues with state transition diagrams: experience with the CSLU toolkit. In Proc. ICSLP, 1223–1226.
  8. Goddeau, D., Meng, H., Polifroni, J., Seneff, S., and Busayapongchai, S. 1996. A form-based dialogue manager for spoken language applications. In Proc. ICSLP, 701–704.
  9. Rich, C., and Sidner, C. 1998. Collagen: A collaboration manager for software interface agents. User Modeling and User-Adapted Interaction, vol. 8, no. 3/4, 315–350.
  10. Pietquin, O., and Dutoit, T. 2006. A probabilistic framework for dialog simulation and optimal strategy learning. IEEE Trans. Audio Speech Lang. Process., vol. 14, no. 2, 589–599.
  11. Frampton, M., and Lemon, O. 2009. Recent research advances in Reinforcement Learning in Spoken Dialogue Systems. The Knowledge Engineering Review, vol. 24, no. 4, 375–408.
  12. Henderson, J., Lemon, O., and Georgila, K. 2008. Hybrid Reinforcement/Supervised Learning of Dialogue Policies from Fixed Data Sets. Computational Linguistics, vol. 34, no. 4, 487–512.
  13. Cuayáhuitl, H., Renals, S., Lemon, O., and Shimodaira, H. 2010. Evaluation of a hierarchical reinforcement learning spoken dialogue system. Computer Speech & Language, vol. 24, no. 2, 395–429.
  14. Toney, D., Moore, J., and Lemon, O. 2006. Evolving optimal inspectable strategies for spoken dialogue systems. In Proc. HLT, 173–176.
  15. Rieser, V. 2008. Bootstrapping Reinforcement Learning-based Dialogue Strategies from Wizard-of-Oz data. PhD dissertation, Saarbruecken Dissertations in Computational Linguistics and Language Technology, vol. 28.
  16. Thathachar, M. A. L., and Sastry, P. S. 2004. Networks of Learning Automata: Techniques for Online Stochastic Optimization, Kluwer.
  17. Oommen, B. J., and Misra, S. 2009. Cybernetics and learning automata. Springer Handbook of Automation, 221–235.
  18. Chang, H. S., Fu, M., Hu, J., and Marcus, S. I. 2007. An adaptive sampling algorithm for solving Markov decision processes. Operations Research, vol. 53, no. 1, 126–139.
  19. Schatzmann, J., Weilhammer, K., Stuttle, M. N., and Young, S. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, vol. 21, no. 2, 97–126.
  20. Walker, M., Litman, D., Kamm, C., and Abella, A. 1998. PARADISE: a framework for evaluating spoken dialogue agents. In Proc. Assoc. Comput. Linguist. (ACL), 271–280.
  21. Young, S. ATK: an application toolkit for HTK. Available: http://mi.eng.cam.ac.uk/research/dialogue/atk_home
  22. Yamagishi, J., Zen, H., Toda, T., and Tokuda, K. 2007. Speaker-independent HMM-based speech synthesis system – HTS-2007 system for the Blizzard Challenge 2007. In Proc. The Blizzard Challenge, http://www.cstr.ed.ac.uk/projects/festival/
  23. Dethlefs, N., Cuayahuitl, H., Richter, K., Andonova, E., and Bateman, J. Evaluating task success in a dialogue system for indoor navigation. In Proc. 14th Workshop on the Semantics and Pragmatics of Dialogue, 143–146.
Index Terms

Computer Science
Information Sciences

Keywords

Learning Automata, Reinforcement Learning, Markov Decision Process, Spoken Dialogue System