International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Number 52
Year of Publication: 2025
Authors: Rishika Singh, Swati Joshi
DOI: 10.5120/ijca2025925888
Rishika Singh, Swati Joshi. Explainable Federated Learning: Taxonomy, Evaluation Frameworks, and Emerging Challenges. International Journal of Computer Applications 187, 52 (Nov 2025), 52-58. DOI=10.5120/ijca2025925888
The rapid integration of AI into sensitive domains such as cybersecurity, healthcare, and finance demands solutions that guarantee both data privacy and model transparency. Federated Learning (FL) is a promising paradigm that enables collaborative model training across decentralized datasets while preserving privacy, since raw data is never shared. Simultaneously, Explainable AI (XAI) makes otherwise opaque models interpretable, fostering stakeholder trust and supporting regulatory compliance. Using techniques such as SHAP, LIME, Grad-CAM, fuzzy logic, and rule-based systems, recent research has investigated the nexus of FL and XAI in tasks such as intrusion detection, fraud detection, and medical diagnosis. Despite the strong performance of these efforts, open problems remain in scalability, non-IID data, privacy–interpretability trade-offs, standardized evaluation metrics, and resilience to adversarial manipulation. This review compiles the present state of research, identifies important gaps, highlights methodological trends, and suggests future directions. Addressing these issues through the integration of FL and XAI could yield reliable, private, and interpretable AI systems for high-stakes settings where both security and explainability are crucial.
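To make the FL/XAI combination summarized above concrete, the following is a minimal sketch of the two ingredients in sequence: federated averaging (FedAvg-style weight averaging over clients that never share raw data), followed by a post-hoc attribution of the resulting global model. It assumes a linear (logistic) model and synthetic data; the names make_client_data and local_update, the client count, and all hyperparameters are illustrative and not taken from the paper. For a linear model the SHAP values have the closed form w_j(x_j − E[x_j]), which is used here in place of a full SHAP or LIME library call.

```python
import numpy as np

# Hypothetical setup: three clients, each holding a private local dataset.
# FedAvg-style loop: broadcast global weights, run local gradient descent,
# average the returned weights. Raw data never leaves a client -- only
# weight vectors are exchanged.

rng = np.random.default_rng(0)
N_CLIENTS, N_FEATURES, N_ROUNDS, LOCAL_STEPS, LR = 3, 5, 20, 10, 0.5

def make_client_data(n=200):
    """Synthetic, mildly non-IID binary classification data for one client."""
    X = rng.normal(size=(n, N_FEATURES)) + rng.normal(scale=0.5, size=N_FEATURES)
    true_w = np.array([1.5, -2.0, 0.0, 0.5, 1.0])
    y = (1 / (1 + np.exp(-X @ true_w)) > 0.5).astype(float)
    return X, y

clients = [make_client_data() for _ in range(N_CLIENTS)]

def local_update(w, X, y):
    """A few steps of gradient descent on the local logistic loss."""
    for _ in range(LOCAL_STEPS):
        p = 1 / (1 + np.exp(-X @ w))          # predicted probabilities
        w = w - LR * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

# Federated training: each round averages the clients' locally updated weights.
w_global = np.zeros(N_FEATURES)
for _ in range(N_ROUNDS):
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

# Post-hoc explanation of the *global* model on one client's instance.
# For a linear model, SHAP attributions reduce to w_j * (x_j - E[x_j]);
# a deployed system would instead call a library such as shap or lime.
X0, _ = clients[0]
x = X0[0]
attributions = w_global * (x - X0.mean(axis=0))
for j, a in enumerate(attributions):
    print(f"feature {j}: contribution {a:+.3f}")
```

In this sketch the explanation step runs client-side against the shared global weights, so interpretability is obtained without exporting local data; the privacy–interpretability trade-offs discussed in the review arise when attributions themselves are aggregated or published.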