Evaluating Usability, Trust, and Explainability in AI-Powered Clinical Decision Support: A Mixed-Methods Study of a Diabetes Risk Prediction System


International Journal of Computer Science Engineering Techniques

ISSN: 2455-135X
Volume 10, Issue 2  |  IJCSE-V10I2P11

Abstract

The successful integration of artificial intelligence (AI) into clinical workflows depends critically on user acceptance, trust, and perceived usability among healthcare professionals. This study presents a mixed-methods evaluation of an AI-powered diabetes risk prediction system designed for NHS clinical environments, examining usability, trust, explainability, and workflow integration. The system combines machine learning (a Gradient Boosting classifier) with rule-based logic aligned to NICE guidelines, providing personalised lifestyle and dietary recommendations alongside risk predictions. Evaluation employed the System Usability Scale (SUS), Likert-scale trust assessments, task efficiency measurements, and qualitative feedback analysis. Results demonstrated excellent usability (SUS score: 82.5/100), with learnability (84.7) and usability (81.2) sub-scores indicating intuitive design. Trust and explainability ratings were consistently high (median 4–5/5) across dimensions including perceived accuracy, transparency, safety, and willingness to use. Task efficiency was strong, with median completion times of 42 seconds for data entry, 18 seconds for result interpretation, and 23 seconds for report export, achieving 92–100% success rates. Qualitative analysis identified key facilitating factors: minimalistic interface design, automated calculations (e.g., BMI), clear risk visualisations, feature importance explanations, and visible security indicators (HTTPS, JWT tokens). The findings provide evidence that thoughtfully designed AI clinical decision support systems can achieve high user acceptance while maintaining transparency and alignment with clinical governance requirements, offering practical guidelines for healthcare AI implementation.
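For readers unfamiliar with the scale, the SUS figures above follow the standard scoring: each odd-numbered (positively worded) item contributes (rating − 1), each even-numbered (negatively worded) item contributes (5 − rating), and the summed contributions are rescaled to 0–100. The learnability/usability split reported here follows Lewis and Sauro's two-factor decomposition (items 4 and 10 versus the remaining eight). The Python sketch below illustrates the computation; the response vector is illustrative, not the study's raw data.

def sus_subscales(responses):
    """Return (overall, usability, learnability) SUS scores, each on 0-100."""
    # Odd-numbered items (index 0, 2, ...) are positively worded: rating - 1.
    # Even-numbered items are negatively worded: 5 - rating.
    contrib = [(r - 1) if i % 2 == 0 else (5 - r)
               for i, r in enumerate(responses)]
    overall = sum(contrib) * 2.5                      # 10 items, max 40 -> 100
    learnability = (contrib[3] + contrib[9]) * 12.5   # items 4 and 10, max 8
    usability = (sum(contrib) - contrib[3] - contrib[9]) * 3.125  # 8 items, max 32
    return overall, usability, learnability

# Illustrative ten-item response vector (1-5 Likert ratings):
print(sus_subscales([5, 1, 5, 2, 4, 1, 5, 2, 4, 1]))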

Keywords

clinical decision support, usability evaluation, healthcare AI, trust in AI, explainable AI, System Usability Scale, user experience, NHS, human-computer interaction

Conclusion

This study demonstrates that AI-powered clinical decision support systems can achieve excellent usability (SUS: 82.5), high trust ratings (median 4–5/5), and efficient task performance (18–42 seconds per task) when designed with attention to simplicity, explainability, and clinical workflow integration. The combination of machine learning predictions with rule-based guideline logic, transparent feature explanations, and visible governance indicators effectively supports user confidence while avoiding over-reliance. These findings provide evidence-based guidance for healthcare AI developers seeking to bridge the gap between technical capability and clinical adoption. Future work should focus on longitudinal evaluation in live NHS settings, comparative studies across different CDSS designs, and investigation of adoption patterns among diverse clinical specialties.
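To make the hybrid design concrete, the sketch below pairs a scikit-learn GradientBoostingClassifier with a rule-based overlay and a feature-importance explanation of the kind described above. The feature names, thresholds, synthetic training data, and example rule are assumptions for exposition only, not the study's actual model or NICE rule set.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["age", "bmi", "hba1c", "family_history"]  # hypothetical feature set

def bmi(weight_kg, height_m):
    """Automated BMI calculation, as surfaced in the interface."""
    return weight_kg / height_m ** 2

# Synthetic stand-in training data (X: feature matrix, y: diabetes outcome).
rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))
y = (X[:, 1] + X[:, 2] > 1.0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

def predict_with_guidance(patient):
    """Combine the ML risk probability with a guideline-style rule overlay."""
    x = np.array([[patient[f] for f in FEATURES]])
    risk = float(model.predict_proba(x)[0, 1])
    advice = []
    # Illustrative rule in the spirit of NICE guidance: flag lifestyle
    # advice when BMI crosses an (assumed) referral threshold.
    if patient["bmi"] >= 30:
        advice.append("Offer referral to a structured weight-management programme.")
    # Global feature importances support the transparency aims noted above.
    importances = dict(zip(FEATURES, model.feature_importances_))
    return {"risk": risk, "advice": advice, "importances": importances}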

© 2025 International Journal of Computer Science Engineering Techniques (IJCSE).