Research Article

Utilizing Transformer for Sentence Similarity Modeling and Answer Generation in the Machine Reading Comprehension Task

by Arafat Habib Quraishi, Alak Kanti Sarma
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 184 - Number 24
Year of Publication: 2022
Authors: Arafat Habib Quraishi, Alak Kanti Sarma
10.5120/ijca2022922196

Arafat Habib Quraishi, Alak Kanti Sarma. Utilizing Transformer for Sentence Similarity Modeling and Answer Generation in the Machine Reading Comprehension Task. International Journal of Computer Applications. 184, 24 (Aug 2022), 1-4. DOI=10.5120/ijca2022922196

@article{10.5120/ijca2022922196,
author = {Arafat Habib Quraishi and Alak Kanti Sarma},
title = {Utilizing Transformer for Sentence Similarity Modeling and Answer Generation in the Machine Reading Comprehension Task},
journal = {International Journal of Computer Applications},
issue_date = {Aug 2022},
volume = {184},
number = {24},
month = {Aug},
year = {2022},
issn = {0975-8887},
pages = {1-4},
numpages = {4},
url = {https://ijcaonline.org/archives/volume184/number24/32457-2022922196/},
doi = {10.5120/ijca2022922196},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Arafat Habib Quraishi
%A Alak Kanti Sarma
%T Utilizing Transformer for Sentence Similarity Modeling and Answer Generation in the Machine Reading Comprehension Task
%J International Journal of Computer Applications
%@ 0975-8887
%V 184
%N 24
%P 1-4
%D 2022
%I Foundation of Computer Science (FCS), NY, USA
Abstract

In the Machine Reading Comprehension (MRC) task, the objective is to understand a given text in order to answer questions about it. Selecting relevant passages or sentences is an important step in this task. In this paper, we utilize a pre-trained transformer encoder for sentence similarity modeling to select the relevant passage(s) or sentence(s) for the MRC problem, and then feed the selected passage(s) or sentence(s) to a question answering model for answer generation. Experimental results on the Microsoft MAchine Reading COmprehension (MS-MARCO) dataset for the QA task show that our proposed approach is effective at generating abstractive answers in the MRC task.
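
The sketch below illustrates one possible reading of the two-stage pipeline described in the abstract: a transformer encoder ranks passage sentences by similarity to the question, and the top-ranked sentences are passed to a sequence-to-sequence model for abstractive answer generation. The specific models (all-MiniLM-L6-v2, facebook/bart-large-cnn), the top-k value, and the question/context separator are illustrative assumptions, not the configuration reported by the authors.

# A minimal sketch of the two-stage pipeline described in the abstract.
# Assumed components (not from the paper): a SentenceTransformer encoder for
# sentence similarity scoring and facebook/bart-large-cnn for abstractive
# answer generation.
from sentence_transformers import SentenceTransformer, util
from transformers import BartTokenizer, BartForConditionalGeneration

def select_relevant_sentences(question, sentences, encoder, top_k=2):
    # Stage 1: rank candidate sentences by cosine similarity to the question.
    q_emb = encoder.encode(question, convert_to_tensor=True)
    s_emb = encoder.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, s_emb)[0]
    ranked = scores.argsort(descending=True)[:top_k]
    return [sentences[int(i)] for i in ranked]

def generate_answer(question, context, tokenizer, model):
    # Stage 2: generate an abstractive answer from the selected context.
    inputs = tokenizer(question + " </s> " + context,
                       return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    encoder = SentenceTransformer("all-MiniLM-L6-v2")        # assumed encoder
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

    question = "what is machine reading comprehension"
    passage = [
        "Machine reading comprehension asks a system to answer questions about a given text.",
        "MS MARCO contains real user queries sampled from a search engine log.",
        "Answers in MS MARCO can be abstractive rather than extracted spans.",
    ]
    context = " ".join(select_relevant_sentences(question, passage, encoder))
    print(generate_answer(question, context, tokenizer, model))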

References
  1. P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, et al. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
  2. T. Baumel, M. Eyal, and M. Elhadad. Query focused abstractive summarization: Incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704, 2018.
  3. J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186, 2019.
  4. S. Garg, T. Vu, and A. Moschitti. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. arXiv preprint arXiv:1911.04118, 2019.
  5. T. Lai, Q. H. Tran, T. Bui, and D. Kihara. A gated self-attention memory network for answer selection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5955–5961, 2019.
  6. M. T. R. Laskar, E. Hoque, and J. Huang. Utilizing bidirectional encoder representations from transformers for answer selection task. The V AMMCS International Conference: Extended Abstract, 2019.
  7. M. T. R. Laskar, E. Hoque, and J. Huang. Query focused abstractive summarization via incorporating query relevance and transfer learning with transformer models. In Canadian Conference on Artificial Intelligence, pages 342–348. Springer, 2020.
  8. M. T. R. Laskar, E. Hoque, and X. Huang. WSL-DS: Weakly supervised learning with distant supervision for query focused multi-document abstractive summarization. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5647–5654, 2020.
  9. M. T. R. Laskar, E. Hoque, and J. Xiangji Huang. Utilizing bidirectional encoder representations from transformers for answer selection. In International Conference on Applied Mathematics, Modeling and Computational Science, pages 693–703. Springer, 2019.
  10. M. T. R. Laskar, E. Hoque, and J. Xiangji Huang. Domain adaptation with pre-trained transformers for query focused abstractive text summarization. arXiv e-prints, pages arXiv–2112, 2021.
  11. M. T. R. Laskar, J. X. Huang, V. Smetana, C. Stewart, K. Pouw, A. An, S. Chan, and L. Liu. Extending isolation forest for anomaly detection in big data via k-means. ACM Transactions on Cyber-Physical Systems (TCPS), 5(4):1–26, 2021.
  12. M. T. R. Laskar, X. Huang, and E. Hoque. Contextualized embeddings based transformer encoder for sentence similarity modeling in answer selection task. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5505–5514, 2020.
  13. M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, 2020.
  14. Y. Liu and M. Lapata. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721–3731, 2019.
  15. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
  16. R. Nallapati, B. Zhou, C. dos Santos, Ç. Gülçehre, and B. Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Association for Computational Linguistics, 2016.
  17. P. Nema, M. M. Khapra, A. Laha, and B. Ravindran. Diversity driven attention model for query-based abstractive summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1063–1072. Association for Computational Linguistics, July 2017.
  18. K. Nishida, I. Saito, K. Nishida, K. Shinoda, A. Otsuka, H. Asano, and J. Tomita. Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2273–2284, 2019.
  19. P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
  20. A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Association for Computational Linguistics, 2015.
  21. A. See, P. J. Liu, and C. D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083. Association for Computational Linguistics, 2017.
  22. M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603, 2016.
  23. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
  24. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771, 2019.
  25. M. Yan, J. Xia, C. Wu, B. Bi, Z. Zhao, J. Zhang, L. Si, R. Wang, W. Wang, and H. Chen. A deep cascade model for multi-document reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7354–7361, 2019.
  26. M. T. R. Laskar, C. Chen, A. Martsinovich, J. Johnston, X.-Y. Fu, S. B. TN, and S. Corston-Oliver. BLINK with Elasticsearch for efficient entity linking in business conversations. arXiv preprint arXiv:2205.04438, 2022.
  27. X.-Y. Fu, C. Chen, M. T. R. Laskar, S. Bhushan, and S. Corston-Oliver. Improving punctuation restoration for speech transcripts via external data. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 168–174, 2021.
Index Terms

Computer Science
Information Sciences

Keywords

Machine Reading Comprehension