Research Article

The Authenticity Spectrum Framework: Classifying Deepfake and Generative AI Risks in Synthetic Media

by Francis Martinson
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 88
Year of Publication: 2026
Authors: Francis Martinson
10.5120/ijca2026926538

Francis Martinson. The Authenticity Spectrum Framework: Classifying Deepfake and Generative AI Risks in Synthetic Media. International Journal of Computer Applications. 187, 88 (Mar 2026), 34-38. DOI=10.5120/ijca2026926538

@article{ 10.5120/ijca2026926538,
author = { Francis Martinson },
title = { The Authenticity Spectrum Framework: Classifying Deepfake and Generative AI Risks in Synthetic Media },
journal = { International Journal of Computer Applications },
issue_date = { Mar 2026 },
volume = { 187 },
number = { 88 },
month = { Mar },
year = { 2026 },
issn = { 0975-8887 },
pages = { 34-38 },
numpages = {5},
url = { https://ijcaonline.org/archives/volume187/number88/the-authenticity-spectrum-framework-classifying-deepfake-and-generative-ai-risks-in-synthetic-media/ },
doi = { 10.5120/ijca2026926538 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%1 2026-03-20T22:55:20+05:30
%A Francis Martinson
%T The Authenticity Spectrum Framework: Classifying Deepfake and Generative AI Risks in Synthetic Media
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 88
%P 34-38
%D 2026
%I Foundation of Computer Science (FCS), NY, USA
Abstract

The rapid advancement of generative artificial intelligence technologies—including large language models, diffusion models, and deepfake systems—has created unprecedented capabilities for synthetic media generation while simultaneously enabling novel vectors for fraud, disinformation, and exploitation. However, existing governance frameworks fail to differentiate between beneficial applications such as AI-generated marketing content, accessibility tools, and creative expression, and harmful uses including identity fraud, non-consensual intimate imagery, and political disinformation. This paper introduces the Authenticity Spectrum Framework (ASF), a novel five-level classification system for AI-generated content based on three dimensions: disclosure transparency, creator intent, and harm potential. Building on prior research examining exploitation architectures in gaming systems [1] and smartphone vulnerabilities [2], the ASF extends dual-use technology analysis to synthetic media governance. Through systematic analysis of current synthetic media platforms including AI avatar generators, video generation models, and voice cloning services, we demonstrate the framework's practical application to real-world governance challenges. The framework provides regulators, platform operators, and AI developers with a standardized taxonomy for risk assessment aligned with emerging requirements under the EU AI Act, NIST AI Risk Management Framework, and FTC guidelines.
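The three-dimensional scoring described above (disclosure transparency, creator intent, harm potential) mapped onto a five-level scale can be sketched in code. Note that this is a minimal illustrative sketch, not the paper's actual method: the level names, the 0-2 scoring scale per dimension, and the mapping from dimension scores to a level are all assumptions for illustration, since the abstract does not specify them.

```python
from dataclasses import dataclass
from enum import IntEnum


class ASFLevel(IntEnum):
    """Hypothetical five-level scale; the paper's own level names are not reproduced here."""
    LEVEL_1 = 1  # e.g., fully disclosed, benign intent, negligible harm
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5  # e.g., concealed origin, malicious intent, severe harm potential


@dataclass
class ContentAssessment:
    """Each dimension scored 0 (best case) to 2 (worst case) -- an assumed scale."""
    disclosure_transparency: int  # 0 = clearly labeled AI content, 2 = actively concealed
    creator_intent: int           # 0 = benign (accessibility, creative), 2 = malicious (fraud)
    harm_potential: int           # 0 = negligible, 2 = severe (NCII, political disinformation)


def classify(a: ContentAssessment) -> ASFLevel:
    """Map the three dimension scores onto the five-level scale (hypothetical mapping)."""
    total = (a.disclosure_transparency + a.creator_intent + a.harm_potential)  # range 0..6
    # Linearly rescale the 0..6 total onto levels 1..5.
    return ASFLevel(1 + round(total * 4 / 6))


# Example: a labeled marketing avatar vs. an undisclosed voice-cloning scam.
marketing = ContentAssessment(disclosure_transparency=0, creator_intent=0, harm_potential=0)
voice_scam = ContentAssessment(disclosure_transparency=2, creator_intent=2, harm_potential=2)
print(classify(marketing))  # ASFLevel.LEVEL_1
print(classify(voice_scam))  # ASFLevel.LEVEL_5
```

The point of the sketch is that risk classification becomes a deterministic function of auditable inputs, which is what alignment with checklist-style regimes such as the NIST AI RMF requires.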

References
  1. Martinson, F., & Rangel, D. (2023). A Comprehensive Analysis of Game Hacking through Injectors. International Journal of Computer Applications, 185(33), 56-63.
  2. Abukari, A. M., Amini, M., & Martinson, F. (2023). A Revealed Architecture of Camera-based Attacks for Smartphones. International Journal of Computer Applications, 185(27), 45-49.
  3. Goodfellow, I., et al. (2014). Generative adversarial nets. NeurIPS, 27.
  4. Rombach, R., et al. (2022). High-resolution image synthesis with latent diffusion models. CVPR, 10684-10695.
  5. Rossler, A., et al. (2019). FaceForensics++: Learning to detect manipulated facial images. IEEE/CVF ICCV.
  6. Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge. California Law Review, 107, 1753-1820.
  7. Partnership on AI. (2023). Framework for Responsible Practices in Synthetic Media.
  8. Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation. Social Media + Society, 6(1).
  9. NIST. (2023). AI Risk Management Framework. NIST AI 100-1.
  10. European Union. (2024). AI Act. Official Journal of the EU.
  11. FTC. (2024). Guidance on AI in Advertising.
  12. Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes. ACM Computing Surveys, 54(1), 1-41.
  13. Kietzmann, J., et al. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135-146.
  14. Hancock, J. T., & Bailenson, J. N. (2021). The social impact of deepfakes. Cyberpsychology, Behavior, and Social Networking, 24(3).
  15. Westerlund, M. (2019). The emergence of deepfake technology. Technology Innovation Management Review, 9(11).
Index Terms

Computer Science
Information Sciences

Keywords

Deepfakes, Generative AI, Synthetic Media, AI Governance, Large Language Models, Risk Classification, EU AI Act