Research Article

AutoCAP: An Automatic Caption Generation System based on the Text Knowledge Power Series Representation Model

Published in February 2015 by Krishnapriya P S and Usha K
Advanced Computing and Communication Techniques for High Performance Applications
Foundation of Computer Science USA
ICACCTHPA2014 - Number 4
February 2015
Authors: Krishnapriya P S, Usha K

Krishnapriya P S, Usha K. AutoCAP: An Automatic Caption Generation System based on the Text Knowledge Power Series Representation Model. Advanced Computing and Communication Techniques for High Performance Applications. ICACCTHPA2014, 4 (February 2015), 26-28.

@article{krishnapriya2015autocap,
author = { Krishnapriya P S and Usha K },
title = { AutoCAP: An Automatic Caption Generation System based on the Text Knowledge Power Series Representation Model },
journal = { Advanced Computing and Communication Techniques for High Performance Applications },
issue_date = { February 2015 },
volume = { ICACCTHPA2014 },
number = { 4 },
month = { February },
year = { 2015 },
issn = { 0975-8887 },
pages = { 26-28 },
numpages = { 3 },
url = { /proceedings/icaccthpa2014/number4/19456-6045/ },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Proceeding Article
%1 Advanced Computing and Communication Techniques for High Performance Applications
%A Krishnapriya P S
%A Usha K
%T AutoCAP: An Automatic Caption Generation System based on the Text Knowledge Power Series Representation Model
%J Advanced Computing and Communication Techniques for High Performance Applications
%@ 0975-8887
%V ICACCTHPA2014
%N 4
%P 26-28
%D 2015
%I International Journal of Computer Applications
Abstract

This paper describes AutoCAP, an experimental intelligent system that automatically generates captions for news articles based on the text knowledge power series representation model. Captions or titles are useful for users who only need information on the main topics of an article. Current extractive summarization techniques cannot generate a coherent document summary shorter than a single sentence, nor produce a summary that conforms to particular linguistic constraints. The power series representation (PSR) model has low computational complexity in the text knowledge construction process, and it provides rich knowledge and supports automatic construction. Our system learns to generate captions from a database of news articles, the images associated with them, and their captions, and it works in two stages. Text assertions are the keywords/terms that stand for common knowledge, while text association rules are the relations between these terms that are causal in nature and reflect the semantic relationships within a text. Captions can then be generated for a news article by mapping the PSR onto the image associated with the article.
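To make the two-stage idea concrete, the following is a minimal, illustrative Python sketch: it extracts "text assertions" as frequent content terms and "text association rules" as sentence-level term co-occurrence weights, then greedily chains the strongest associations into a caption. The function names, scoring, and chaining strategy are assumptions for illustration only, not the authors' implementation, and the image-mapping stage is omitted.

# Illustrative PSR-style caption sketch; names and scoring are assumptions,
# not the paper's actual method.
import re
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "are",
             "for", "by", "with", "from", "will", "said", "next"}

def sentences(text):
    # Naive sentence splitter; assumes ., !, ? end sentences.
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def terms(sentence):
    # Lowercased content words; stands in for real keyword extraction.
    return [w for w in re.findall(r"[a-z]+", sentence.lower())
            if w not in STOPWORDS]

def text_assertions(text, k=8):
    # Stage 1: the k most frequent terms act as "text assertions".
    counts = Counter(t for s in sentences(text) for t in terms(s))
    return [t for t, _ in counts.most_common(k)]

def association_rules(text):
    # Stage 2: "association rules" weighted by sentence co-occurrence.
    pairs = Counter()
    for s in sentences(text):
        for a, b in combinations(sorted(set(terms(s))), 2):
            pairs[(a, b)] += 1
    return pairs

def caption(text, length=4):
    # Greedily chain assertions linked by the strongest association rules.
    keys = text_assertions(text)
    rules = association_rules(text)
    chain = [keys[0]]
    while len(chain) < length:
        last = chain[-1]
        weight, best = max(((rules[tuple(sorted((last, t)))], t)
                            for t in keys if t not in chain),
                           default=(0, ""))
        if weight == 0:
            break
        chain.append(best)
    return " ".join(chain).title()

article = ("The city council approved the new flood barrier on Monday. "
           "The barrier protects the harbour district from storm surges. "
           "Council members said the flood barrier will open next year.")
print(caption(article))  # e.g. "Barrier Flood Council ..."

A real system would additionally map the constructed text knowledge against the image associated with the article, which this sketch does not attempt.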

References
  1. A. Kojima, M. Takaya, S. Aoki, T. Miyamoto, and K. Fukunaga, "Recognition and Textual Description of Human Activities by Mobile Robot," Proc. Third Int'l Conf.
  2. P. Hède, P. A. Moëllic, J. Bourgeoys, M. Joint, and C. Thomas, "Automatic Generation of Natural Language Descriptions for Images," Proc. Recherche d'Information Assistée par Ordinateur, 2004.
  3. B. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu, "I2T: Image Parsing to Text Description," Proc. IEEE, vol. 98, no. 8, pp. 1485-1508, 2009.
  4. A. Kojima, T. Tamura, and K. Fukunaga, "Natural Language Description of Human Activities from Video Images Based on Concept Hierarchy of Actions," Int'l J. Computer Vision, vol. 50, no. 2, pp. 171-184, 2002.
  5. Y. Wang and G. Mori, "A Discriminative Latent Model of Image Region and Object Tag Correspondence," Proc. Neural Information Processing Systems, 2010.
  6. M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid, "Automatic Face Naming with Caption-Based Supervision," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.
  7. T. Cour, B. Sapp, C. Jordan, and B. Taskar, "Learning from Ambiguously Labeled Images," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2009.
  8. T. Berg, A. Berg, J. Edwards, and D. Forsyth, "Who's in the Picture," Proc. Neural Information Processing Systems, 2004.
  9. K. Barnard, P. Duygulu, N. de Freitas, D. Forsyth, D. Blei, and M. Jordan, "Matching Words and Pictures," J. Machine Learning Research, vol. 3, pp. 1107-1135, 2003.
  10. P. Duygulu, K. Barnard, N. de Freitas, and D. Forsyth, "Object Recognition as Machine Translation," Proc. European Conf. Computer Vision, 2002.
  11. T. L. Berg, A. C. Berg, J. Edwards, M. Maire, R. White, E. Learned-Miller, Y.-W. Teh, and D. A. Forsyth, "Names and Faces," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2004.
  12. K. Saenko and T. Darrell, "Unsupervised Learning of Visual Sense Models for Polysemous Words," Proc. Neural Information Processing Systems, 2008.
Index Terms

Computer Science
Information Sciences

Keywords

Natural Language Generation, Computer Vision, Human Concept Learning, Knowledge Representation