International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Number 69
Year of Publication: 2025
Authors: Khalid A.H. Alazawi, Nantha Kumar Subramaniam
DOI: 10.5120/ijca2025926152
Khalid A.H. Alazawi, Nantha Kumar Subramaniam. Assessing Public Ability to Distinguish AI-Generated from Real News: Accuracy, Confidence, and Influencing Factors. International Journal of Computer Applications 187, 69 (Dec 2025), 30-34. DOI=10.5120/ijca2025926152
The rapid advancement of generative artificial intelligence (AI) has intensified concerns about the spread of highly convincing synthetic news. This study examines the public’s ability to distinguish between real and AI-generated news, investigates the misalignment between confidence and actual performance, and identifies the demographic, behavioural, and technological factors that influence detection accuracy. A total of 382 participants completed an online survey containing one real and one AI-generated news article; after rigorous preprocessing to remove bot-generated, inattentive, and uniform (“lazy”) responses, 210 valid cases were analysed. Results reveal a significant detection challenge: only 8% of respondents accurately identified both articles, while 32% failed to correctly classify either one. Despite these low accuracy levels, confidence was disproportionately high, with approximately 62% of respondents reporting that they were “mostly confident” or “fully confident” in their judgments. This confidence–accuracy mismatch highlights a critical cognitive vulnerability that may amplify susceptibility to misinformation. Regression analyses further show that commonly assumed protective factors, such as education level, age, and news-checking frequency, do not reliably predict the ability to detect AI-generated content. Only technological proficiency displayed a meaningful positive correlation with performance, although the effect was modest. These findings challenge traditional assumptions about digital literacy and indicate that demographic attributes alone cannot safeguard users against sophisticated AI-driven deception. Instrument reliability was strong (Cronbach’s α = .89; Composite Reliability = .80), affirming the stability of the measures used to assess credibility judgments. The implications of this study underscore the urgent need for redefined digital literacy frameworks that emphasize critical reading, linguistic awareness, and metacognitive regulation. Technological interventions, such as AI-based detection tools and transparency mechanisms for synthetic content, are also necessary to complement user education. The study concludes that in the age of generative AI, human judgment alone is insufficient to ensure news authenticity, and coordinated efforts across education, platform design, and policy are essential to preserve information integrity.
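The abstract does not detail the screening rules that reduced 382 responses to 210 valid cases. A minimal sketch of common survey-cleaning heuristics (straight-lining, speeders, failed attention checks) might look like the following; all column names and thresholds are illustrative assumptions, not the authors' actual rules:

```python
import pandas as pd

def drop_low_quality(df: pd.DataFrame, likert_cols: list[str]) -> pd.DataFrame:
    """Drop bot-like, inattentive, and uniform ('lazy') survey responses.

    Column names and thresholds are assumptions for illustration only.
    """
    straight_lining = df[likert_cols].nunique(axis=1) == 1  # same answer to every item ("lazy")
    too_fast = df["completion_seconds"] < 60                # implausibly quick, bot-like
    failed_check = ~df["attention_check_passed"]            # missed an embedded attention item
    return df[~(straight_lining | too_fast | failed_check)]
```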
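The abstract reports regression analyses without naming the model; one plausible reading is a logistic regression on a binary "both articles correct" outcome. A self-contained sketch under that assumption, using synthetic stand-in data and assumed variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 210  # matches the study's valid sample size

# Synthetic stand-in data; variable names and codings are assumptions.
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "education": rng.integers(1, 5, n),         # ordinal: 1=secondary .. 4=postgraduate
    "news_check_freq": rng.integers(1, 6, n),   # Likert 1-5
    "tech_proficiency": rng.integers(1, 6, n),  # Likert 1-5
})
df["both_correct"] = (rng.random(n) < 0.08).astype(int)  # ~8% base rate, as reported

X = sm.add_constant(df[["age", "education", "news_check_freq", "tech_proficiency"]])
result = sm.Logit(df["both_correct"], X).fit(disp=0)
print(result.summary2())  # inspect per-predictor coefficients and p-values
```

With the study's data in place of the synthetic frame, non-significant coefficients for age, education, and news-checking frequency would correspond to the reported null findings, while a small positive coefficient for technological proficiency would match its modest effect.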
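For reference, the standard definitions of the two reliability indices reported (the abstract itself does not state the formulas) are:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),
\qquad
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^2}{\left(\sum_{i=1}^{k}\lambda_i\right)^2 + \sum_{i=1}^{k}\theta_i},$$

where $k$ is the number of scale items, $\sigma_i^2$ the variance of item $i$, $\sigma_X^2$ the variance of the total score, $\lambda_i$ the standardized factor loadings, and $\theta_i$ the item error variances.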