International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 185 - Number 45
Year of Publication: 2023
Authors: Abujar S. Shaikh, Rahul M. Samant, Kshitij S. Patil, Nilesh R. Patil, Aarya R. Mirkale
DOI: 10.5120/ijca2023923263
Abujar S. Shaikh, Rahul M. Samant, Kshitij S. Patil, Nilesh R. Patil, Aarya R. Mirkale. Review on Explainable AI by using LIME and SHAP Models for Healthcare Domain. International Journal of Computer Applications. 185, 45 (Nov 2023), 18-23. DOI=10.5120/ijca2023923263
In the dynamic realm of healthcare research and the growing use of artificial intelligence (AI), multiple research studies converge on both the immense potential of AI systems and the persistent hurdles they face. At its core, artificial intelligence seeks to emulate human intelligence, performing tasks, recognizing patterns, and predicting outcomes by assimilating data from diverse sources. Its far-reaching applications encompass autonomous driving, e-commerce recommendation, fintech, natural language comprehension, and healthcare, the last of which is undergoing significant transformation.

Historically, healthcare leaned heavily on rule-based methodologies rooted in curated medical knowledge. The landscape has since evolved considerably: machine learning algorithms such as deep learning can capture intricate interactions within medical data and have demonstrated exceptional performance in healthcare applications. Yet a critical impediment lingers: explainability. Despite their predictive power, many AI algorithms struggle to gain full acceptance in practical clinical environments because their decisions are not interpretable.

In response to this challenge, Explainable Artificial Intelligence (XAI) has risen as a pivotal solution. XAI elucidates the inner workings of AI algorithms, shedding light on their decision-making processes, behaviors, and actions. This transparency fosters trust among healthcare professionals, enabling them to apply predictive models judiciously in real-world healthcare scenarios rather than passively deferring to algorithmic predictions. Nonetheless, the journey toward rendering XAI genuinely effective in clinical settings remains ongoing, a testament to the intricate nature of medical knowledge and the multifaceted challenges it presents.
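The post-hoc explanation approach the paper reviews (LIME, and in kernel form SHAP) can be illustrated with a minimal from-scratch sketch of the LIME idea: perturb an instance, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients serve as the local explanation. The dataset, model choice, and the `explain_locally` helper below are illustrative assumptions for this sketch, not the paper's own code (the paper uses the `lime` and `shap` packages themselves).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Illustrative setup: a black-box classifier on a standard clinical dataset.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=500, scale=0.5, rng=None):
    """Approximate the model around instance x with a weighted linear surrogate
    (a LIME-style local explanation; coefficients = per-feature influence)."""
    if rng is None:
        rng = np.random.default_rng(0)
    std = X.std(axis=0)
    # 1. Perturb the instance with feature-scaled Gaussian noise.
    noise = rng.normal(0.0, scale * std, size=(n_samples, x.size))
    Z = x + noise
    # 2. Query the black box on the perturbed samples.
    preds = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (RBF kernel on normalized distance).
    dists = np.linalg.norm(noise / std, axis=1)
    weights = np.exp(-(dists ** 2) / 2.0)
    # 4. Fit an interpretable linear surrogate on the weighted neighborhood.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

coefs = explain_locally(black_box, X[0])
top = np.argsort(np.abs(coefs))[::-1][:3]
print("most influential features for this patient:", top)
```

The surrogate's coefficients give clinicians a patient-specific ranking of feature influence, which is the kind of transparency the abstract argues is needed before predictive models can be trusted in practice.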
In summary, this research paper underscores the importance of XAI in the healthcare domain. It emphasizes that transparency and interpretability are necessary to fully harness the potential of AI systems while navigating the intricate landscape of medical practice, heralding a transformative era in healthcare research and delivery.