
Interpretable AI in Medical Imaging: Enhancing Diagnostic Accuracy through Human-Computer Interaction

Ritunsa Mishra1,*, Rabinarayan Satpathy2, Bibudhendu Pati3

Affiliation(s):

1 Faculty of Emerging Technologies, Sri Sri University, Cuttack, Odisha (India)

Email: [email protected] 

2 Faculty of Emerging Technologies, Sri Sri University, Cuttack, Odisha (India)

Email: [email protected]

3 Department of Computer Science, Rama Devi Women’s University, Bhubaneswar, Odisha (India)

Email: [email protected]

*Corresponding Author: Ritunsa Mishra, Email: [email protected]

Abstract:

This study examines Machine Learning (ML) transparency, aiming to demystify complex model operations in terms of interpretability and explainability. Under a human-centered design approach, transparency is treated as a relational property between algorithms and users rather than an inherent trait of the ML model, so prototyping and user evaluations become pivotal to arriving at effective transparency solutions. In specialized fields such as medical image analysis, applying human-centered design principles faces challenges due to limited user access and a knowledge gap between users and ML designers. A systematic review spanning 2017 to 2023 screened 2307 records and identified 78 articles that met the inclusion criteria. The findings show that current transparent ML techniques emphasize computational feasibility, often at the expense of end users, including clinical stakeholders. Notably, formative user research rarely guides the design and development of transparent ML models. In response to these gaps, we put forward the INTRPRT guideline, a design directive for transparent ML in medical image analysis. Anchored in human-centered design, the guideline underscores the importance of formative user research for understanding user needs and domain requirements. The ultimate aim is to increase the likelihood that ML algorithms offer genuine transparency, enabling stakeholders to harness its benefits effectively.
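To make the contrast above concrete: the transparent ML techniques the review finds dominant are typified by post-hoc saliency methods, which are computationally cheap to generate but are usually designed without asking whether clinicians can interpret or act on them. The short Python sketch below (illustrative only, not from the paper; it assumes PyTorch and a recent torchvision, and uses an untrained ResNet-18 as a stand-in for a medical image classifier) shows a vanilla gradient saliency map, the simplest member of this family.

    import torch
    import torchvision.models as models

    # Stand-in classifier; a deployed system would load weights trained on
    # medical scans rather than an untrained network.
    model = models.resnet18(weights=None)
    model.eval()

    # Dummy 224x224 RGB input standing in for a preprocessed scan.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    logits = model(image)
    top_class = logits.argmax(dim=1).item()

    # Gradient of the top-class score w.r.t. the input pixels: large
    # magnitudes mark the pixels the prediction is most sensitive to.
    logits[0, top_class].backward()
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # 224x224 heat map
    print(saliency.shape)  # torch.Size([224, 224])

Note that nothing in this pipeline involves a clinician: the heat map is produced entirely from model internals, which is precisely the computation-first pattern the INTRPRT guideline seeks to correct through formative user research.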

Keywords:

Explainable AI, Human-Centered Design, Biomedical Imaging, Diagnostic Accuracy

Cite This Paper:

Ritunsa Mishra, Rabinarayan Satpathy, Bibudhendu Pati (2024). Interpretable AI in Medical Imaging: Enhancing Diagnostic Accuracy through Human-Computer Interaction. Journal of Artificial Intelligence and Systems, 6, 96–111. https://doi.org/10.33969/AIS.2024060107
