Research Article

Transformer based Neural Joke Generator

by Taaha Kazi, Sameer Joshi, Steeve Kaitharath, Imran Ali Mirza
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 183 - Number 34
Year of Publication: 2021
DOI: 10.5120/ijca2021921724

Taaha Kazi, Sameer Joshi, Steeve Kaitharath, Imran Ali Mirza. Transformer based Neural Joke Generator. International Journal of Computer Applications. 183, 34 (Oct 2021), 1-4. DOI=10.5120/ijca2021921724

@article{10.5120/ijca2021921724,
  author = {Taaha Kazi and Sameer Joshi and Steeve Kaitharath and Imran Ali Mirza},
  title = {Transformer based Neural Joke Generator},
  journal = {International Journal of Computer Applications},
  issue_date = {Oct 2021},
  volume = {183},
  number = {34},
  month = {Oct},
  year = {2021},
  issn = {0975-8887},
  pages = {1-4},
  numpages = {4},
  url = {https://ijcaonline.org/archives/volume183/number34/32150-2021921724/},
  doi = {10.5120/ijca2021921724},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address = {New York, USA}
}
%0 Journal Article
%A Taaha Kazi
%A Sameer Joshi
%A Steeve Kaitharath
%A Imran Ali Mirza
%T Transformer based Neural Joke Generator
%J International Journal of Computer Applications
%@ 0975-8887
%V 183
%N 34
%P 1-4
%D 2021
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Humor is a complex and intrinsic part of human conversation, and appreciating it requires a deep understanding of grammatical structure and knowledge of the world. Building computational models that can identify and generate humor remains challenging. This work presents a neural joke generator that employs a transformer-based architecture. To improve the generator's performance, the model was further trained with Proximal Policy Optimization (PPO), a reinforcement learning algorithm. The model's output was evaluated through human ratings and qualitative analysis.
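
The PPO fine-tuning step mentioned in the abstract can be made concrete. The following is a minimal sketch, in PyTorch, of the clipped surrogate objective that PPO optimizes, applied to per-token log-probabilities of a generated joke. The authors' code is not published on this page, so every name, shape, and the clip coefficient below is an illustrative assumption, not the paper's implementation.

import torch

# Hypothetical sketch of the PPO clipped surrogate loss:
#   L_CLIP = -E[ min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t) ]
# where r_t = pi_new(token) / pi_old(token) and A_t is an advantage
# estimate (e.g. derived from a humor-reward signal such as human ratings).
def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)  # r_t(theta), shape [batch, seq]
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # negate: optimizers minimize

# Toy usage: two 5-token responses with made-up advantages.
logp_old = torch.randn(2, 5)
logp_new = logp_old + 0.1 * torch.randn(2, 5)
advantages = torch.randn(2, 5)
print(float(ppo_clip_loss(logp_new, logp_old, advantages)))

In practice this objective is usually taken from a library rather than hand-written; for GPT-2-style generators, Hugging Face's trl package implements PPO fine-tuning along these lines.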

Index Terms

Computer Science
Information Sciences

Keywords

Natural Language Generation
Humor