Research Article

A Comprehensive Study of Resume Summarization using Large Language Models

by Akshata Upadhye
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 6
Year of Publication: 2024
Authors: Akshata Upadhye
DOI: 10.5120/ijca2024923401

Akshata Upadhye. A Comprehensive Study of Resume Summarization using Large Language Models. International Journal of Computer Applications 186, 6 (Jan 2024), 33-37. DOI=10.5120/ijca2024923401

@article{ 10.5120/ijca2024923401,
author = { Akshata Upadhye },
title = { A Comprehensive Study of Resume Summarization using Large Language Models },
journal = { International Journal of Computer Applications },
issue_date = { Jan 2024 },
volume = { 186 },
number = { 6 },
month = { Jan },
year = { 2024 },
issn = { 0975-8887 },
pages = { 33-37 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume186/number6/33077-2024923401/ },
doi = { 10.5120/ijca2024923401 },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
Abstract

Because recruiters and hiring teams receive a large number of applications for each job posting, they can afford very little time to review each resume. Given this constraint, automatically summarizing the key information in a resume to provide a quick overview of a candidate's skills and experience could be very helpful for initial screening. This research therefore explores resume summarization using a range of language models. It examines how effectively models such as BERT, T5, and BART perform extractive and abstractive summarization across diverse resumes, and investigates the potential of LLMs to capture important information, skills, and experiences, with the aim of making the hiring process more efficient. By leveraging these language models, this research seeks to advance resume summarization techniques and offer recruiters and hiring teams a more context-aware approach.
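As a concrete illustration of the two summarization styles the study compares, the sketch below runs a resume through an extractive and an abstractive summarizer using publicly available checkpoints. It is a minimal sketch, not the paper's actual pipeline: the sample resume text, the facebook/bart-large-cnn checkpoint, and the bert-extractive-summarizer package are assumptions standing in for the models and data the study does not specify.

# Minimal sketch: extractive vs. abstractive resume summarization.
# Assumptions (not from the paper): the sample resume text, the
# facebook/bart-large-cnn checkpoint, and the bert-extractive-summarizer
# package (pip install transformers bert-extractive-summarizer).
from transformers import pipeline
from summarizer import Summarizer

resume = (
    "Software engineer with six years of experience building distributed "
    "systems in Python. Led a four-person team that migrated a monolith to "
    "microservices, cutting deployment time by 70 percent. Holds an M.S. in "
    "Computer Science and AWS and Kubernetes certifications."
)

# Extractive: a BERT-based model selects salient sentences verbatim,
# the approach surveyed in reference [3].
extractive = Summarizer()
print(extractive(resume, num_sentences=2))

# Abstractive: a sequence-to-sequence model (here BART) generates new
# sentences; a T5 checkpoint can be swapped in via model="t5-base".
abstractive = pipeline("summarization", model="facebook/bart-large-cnn")
print(abstractive(resume, max_length=60, min_length=15, do_sample=False)[0]["summary_text"])

Outputs like these are typically scored against human-written reference summaries using the n-gram co-occurrence (ROUGE-style) statistics of reference 11.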

References
  1. Moratanch, N., and S. Chitrakala. "A survey on extractive text summarization." In 2017 International Conference on Computer, Communication and Signal Processing (ICCCSP), pp. 1-6. IEEE, 2017.
  2. Rahimi, Shohreh Rad, Ali Toofanzadeh Mozhdehi, and Mohamad Abdolahi. "An overview on extractive text summarization." In 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), pp. 0054-0062. IEEE, 2017.
  3. Miller, Derek. "Leveraging BERT for extractive text summarization on lectures." arXiv preprint arXiv:1906.04165 (2019).
  4. Yeasmin, Sabina, Priyanka Basak Tumpa, Adiba Mahjabin Nitu, Md Palash Uddin, Emran Ali, and Masud Ibn Afjal. "Study of abstractive text summarization techniques." American Journal of Engineering Research 6, no. 8 (2017): 253-260.
  5. Moratanch, N., and S. Chitrakala. "A survey on abstractive text summarization." In 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), pp. 1-7. IEEE, 2016.
  6. Alomari, Ayham, Norisma Idris, Aznul Qalid Md Sabri, and Izzat Alsmadi. "Deep reinforcement and transfer learning for abstractive text summarization: A review." Computer Speech and Language 71 (2022): 101276.
  7. Teo, Louis. "The Secret Guide to Human-like Text Summarization." Medium, July 28, 2022. https://towardsdatascience.com/the-secret-guide-to-human-like-text-summarization-fcea0bfbe801.
  8. Kale, Mihir, and Abhinav Rastogi. "Text-to-text pre-training for data-to-text tasks." arXiv preprint arXiv:2005.10433 (2020).
  9. Stevhliu. "stevhliu/my-awesome-billsum-model · Hugging Face." Accessed December 20, 2023. https://huggingface.co/stevhliu/my-awesome-billsum-model.
  10. Lewis, Mike, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension." arXiv preprint arXiv:1910.13461 (2019).
  11. Lin, Chin-Yew, and Eduard Hovy. "Automatic evaluation of summaries using n-gram co-occurrence statistics." In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 150-157. 2003.
Index Terms

Computer Science
Information Sciences

Keywords

Natural Language Processing, Large Language Models, Extractive Summarization, Abstractive Summarization, BERT, T5