Research Article

Assessing LLMs as Cognitive Interpreters of Student Prompts: A Typological Framework

by Tadeu da Ponte, Matevz Vremec, Matej Mertik
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 27
Year of Publication: 2025
Authors: Tadeu da Ponte, Matevz Vremec, Matej Mertik
DOI: 10.5120/ijca2025925477

Tadeu da Ponte, Matevz Vremec, Matej Mertik. Assessing LLMs as Cognitive Interpreters of Student Prompts: A Typological Framework. International Journal of Computer Applications. 187, 27 (Aug 2025), 1-11. DOI=10.5120/ijca2025925477

@article{10.5120/ijca2025925477,
  author     = {Tadeu da Ponte and Matevz Vremec and Matej Mertik},
  title      = {Assessing LLMs as Cognitive Interpreters of Student Prompts: A Typological Framework},
  journal    = {International Journal of Computer Applications},
  issue_date = {Aug 2025},
  volume     = {187},
  number     = {27},
  month      = {Aug},
  year       = {2025},
  issn       = {0975-8887},
  pages      = {1-11},
  numpages   = {11},
  url        = {https://ijcaonline.org/archives/volume187/number27/assessing-llms-as-cognitive-interpreters-of-student-prompts-a-typological-framework/},
  doi        = {10.5120/ijca2025925477},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Tadeu da Ponte
%A Matevz Vremec
%A Matej Mertik
%T Assessing LLMs as Cognitive Interpreters of Student Prompts: A Typological Framework
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 27
%P 1-11
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper introduces a typology of student cognitive actions in interactions with large language model (LLM)-based tutors. Drawing on the CoMTA dataset of 188 anonymized math tutoring dialogues from Khan Academy, student-generated questions were analyzed as evidence of reasoning processes. The methodology combines a natural language processing (NLP) pipeline for semantic clustering with a dual-stage human classification of communicative intent and cognitive action. The resulting typology is synthesized into a partially ordered taxonomy that captures the complexity and multidimensionality of student thinking in AI-mediated learning contexts. Two research questions guide this investigation: (1) Can a typology be derived directly from unsupervised NLP clustering methods? and (2) To what extent can LLMs replicate expert-driven classification schemes? Findings from RQ1 reveal that semantic clustering via PCA and KMeans offers only limited alignment with pedagogically meaningful distinctions. In contrast, results from RQ2 show that several LLMs (particularly DeepSeek, Grok, and Gemini) can reliably extend the typology to unseen data, demonstrating high classification accuracy. These results suggest that scalable, cognitively informed AI tutoring may be supported by combining expert frameworks with strategically configured LLM architectures.
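
For illustration, a minimal Python sketch of the kind of RQ1 clustering pipeline described above (sentence embeddings reduced with PCA and grouped with KMeans) is shown below. This is not the authors' code: the embedding model (all-MiniLM-L6-v2), the example prompts, the number of principal components, and the number of clusters are placeholder assumptions.

# Illustrative sketch of the RQ1-style pipeline: semantic embeddings -> PCA -> KMeans.
# Model name, example prompts, component count, and cluster count are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Invented example student prompts (not drawn from the CoMTA dataset).
questions = [
    "Can you explain why we flip the inequality sign here?",
    "Is my answer of 42 correct?",
    "What should I try next on this problem?",
    "Why does dividing by a fraction mean multiplying by its reciprocal?",
]

# Embed each prompt into a dense semantic vector (cf. Reimers & Gurevych, 2019).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(questions)

# Reduce dimensionality before clustering (cf. Jolliffe, 2002).
reduced = PCA(n_components=2).fit_transform(embeddings)

# Group semantically similar prompts (cf. MacQueen, 1967); k would be tuned in practice.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print(list(zip(questions, labels)))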

References
  1. Perplexity AI. Perplexity assistant semantic classifier, 2025. Zero-/few-shot classification based on semantic alignment with labeled examples.
  2. Anthropic. Claude 3.7 Sonnet classification process, 2025. Few-shot reasoning using semantic patterns from labeled student questions.
  3. Simon P. Bates, Ross K. Galloway, Jonathan Riise, and Danny Homer. Assessing the quality of a student-generated question repository. Physical Review Special Topics - Physics Education Research, 10(2):020105, 2014.
  4. Benjamin S. Bloom, Max D. Engelhart, Edward J. Furst, Walker H. Hill, and David R. Krathwohl. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. David McKay Company, New York, 1956.
  5. Ng D., Luo Wanying, Chan H., and Chu S. Using digital story writing as a pedagogy to develop AI literacy among primary students. Computers and Education: Artificial Intelligence, 2022.
  6. Ng D., Wu Wenjie, Leung J., Chiu T., and Chu S. K. Design and validation of the AI literacy questionnaire: The affective, behavioural, cognitive and ethical approach. British Journal of Educational Technology, 2023.
  7. Jennifer Daries and colleagues. AI literacy framework: Review draft. Unpublished draft, 2024.
  8. Google DeepMind. Gemini 2.5 Pro question classification process, 2025. Few-shot classification based on phrasing and inferred student intent.
  9. Michael Hansen, Claire Scoular, and Patrick Griffin. Evidence for assessment of 21st century competencies: A review of 19 instruments. Assessment in Education: Principles, Policy & Practice, 30(1):43–68, 2023.
  10. Yim I. H. Y. and Su Jiahong. Artificial intelligence (AI) learning tools in K-12 education: A scoping review. Journal of Computers in Education, 2024.
  11. Ian T. Jolliffe. Principal Component Analysis. Springer, 2nd edition, 2002.
  12. Stacy M. Kula and Ting Chang. Student-generated questions: Encouraging academic engagement, critical thinking, and student voice. College Teaching, 65(3):126–135, 2017.
  13. Xin Li and Dan Roth. Learning question classifiers. In Proceedings of the 19th International Conference on Computational Linguistics, pages 1–7, 2002.
  14. Duri Long and Brian Magerko. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), pages 1–16, New York, NY, USA, 2020. ACM.
  15. Casal-Otero Lorena, Catalá Alejandro, Fernández-Morante Carmen, Taboada M., Cebreiro Beatriz, and Barro S. AI literacy in K-12: A systematic literature review. International Journal of STEM Education, 2023.
  16. J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, pages 281–297. University of California Press, 1967.
  17. Pepper Miller and Kristen DiCerbo. LLM-based math tutoring: Challenges and dataset. Technical report, Khan Academy, 2024. Available at: https://github.com/Khan/tutoring-accuracy-dataset/blob/main/LLM_Based_Math_Tutoring.pdf.
  18. Davy Tsz Kit Ng, Luo Wanru, Chan H., and Chu S. An examination on primary students’ development in AI literacy through digital story writing. Computers and Education: Artificial Intelligence, 2022.
  19. OECD. Introducing the OECD AI Capability Indicators. OECD Publishing, Paris, 2025.
  20. OpenAI. ChatGPT-4o NLP classification meta-process, 2025. Performed classification of student questions using a TF-IDF + kNN pipeline in Python.
  21. OpenAI. ChatGPT-o3 classification meta-process, 2025. Executed TF-IDF + logistic regression on labeled student questions using scikit-learn.
  22. OpenAI. ChatGPT-o4-mini classification meta-process, 2025. TF-IDF + logistic regression classification from 42 labeled examples.
  23. OpenAI. ChatGPT-o4-mini-high classification meta-process, 2025. Same approach as o4-mini, applied to the same task using TF-IDF + logistic regression.
  24. Stacey Pylman. Student-generated questions as formative assessment: A review of the literature. Educational Practice and Theory, 42(2):5–25, 2020.
  25. Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
  26. DeepSeek Team. GPT-4-based semantic classification, 2025. Used semantic matching to assign question categories without retraining.
  27. Manus AI Team. Manus ensemble classification engine, 2025. Used Naive Bayes, SVM, and Random Forest in soft-voting mode with TF-IDF inputs.
  28. xAI. Grok 3 student query classification, 2025. Manual category assignment with semantic and intent-based rule application.
  29. Zhou Xiaofei, Van Brummelen Jessica, and Lin Phoebe. Designing AI Learning Experiences for K-12: Emerging Works, Future Opportunities and a Design Framework. arXiv.org, 2020.
  30. Fu-Yun Yu. Student question-generation: The learning processes involved and their relationships with students’ perceptions of a learning environment. Instructional Science, 39(3):325–346, 2011.
Index Terms

Computer Science
Information Sciences

Keywords

LLM-based tutoring, cognitive typology, prompt classification, Khanmigo, educational interaction