International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Number 86
Year of Publication: 2026
Authors: Sheezan Farooq, Rumaan Bashir
DOI: 10.5120/ijca2026926485
Sheezan Farooq, Rumaan Bashir. Hybrid Transformer–Recurrent Modelling for Sentiment Analysis in Low Resource Language. International Journal of Computer Applications 187(86):14-18, March 2026. DOI=10.5120/ijca2026926485
Sentiment analysis in low-resource languages such as Kashmiri remains underexplored owing to the scarcity of annotated datasets and computational tools. This research proposes a hybrid deep learning model that combines the strengths of XLM-RoBERTa and BiLSTM networks for sentiment analysis of Kashmiri text. Kashmiri, being low-resource and morphologically rich, poses significant challenges for natural language understanding tasks. A major contribution of this work is a manually annotated sentiment dataset for Kashmiri, covering positive, negative, and neutral categories; it serves as a foundational resource for training and evaluating sentiment classification models in this underrepresented language. The hybrid model feeds contextual embeddings from the XLM-RoBERTa transformer into a BiLSTM, whose sequence modelling complements the transformer's representations in low-resource settings. Experimental results show that the hybrid model achieves state-of-the-art performance, with a validation accuracy of 94.7% and an F1-score of 0.94 across all sentiment classes, and ROC analysis confirms high discriminative ability with an AUC of 0.99 for each class. These findings demonstrate that integrating pre-trained transformers with recurrent models substantially improves sentiment recognition in Kashmiri.
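The transformer-into-BiLSTM pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' exact configuration: the hidden size, the three-way linear classification head, and the stand-in embedding layer (used in place of a real XLM-RoBERTa encoder so the sketch runs without the `transformers` package) are all assumptions.

```python
import torch
import torch.nn as nn

class HybridSentimentSketch(nn.Module):
    """Sketch of a transformer -> BiLSTM -> classifier pipeline.

    In the paper's setup, per-token contextual embeddings would come from
    XLM-RoBERTa; here a plain Embedding layer stands in for that encoder
    (assumption, kept small so the example is self-contained).
    """

    def __init__(self, vocab_size=1000, embed_dim=768,
                 hidden_dim=256, num_classes=3):
        super().__init__()
        # Stand-in for XLM-RoBERTa contextual embeddings (assumption).
        self.encoder = nn.Embedding(vocab_size, embed_dim)
        # BiLSTM over the sequence of per-token vectors.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Linear head over the concatenated final forward/backward states,
        # mapping to the three sentiment classes (positive/negative/neutral).
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, input_ids):
        x = self.encoder(input_ids)             # (batch, seq, embed_dim)
        _, (h_n, _) = self.bilstm(x)            # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1)  # (batch, 2 * hidden_dim)
        return self.classifier(h)               # (batch, num_classes) logits

# Usage: a batch of 4 dummy token-id sequences of length 16.
model = HybridSentimentSketch()
logits = model(torch.randint(0, 1000, (4, 16)))
print(logits.shape)  # torch.Size([4, 3])
```

In a real training run the embedding stub would be replaced by the pooled token outputs of a pre-trained XLM-RoBERTa model, optionally fine-tuned end to end with the BiLSTM and classifier.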