An Interpretable Model With Probabilistic Integrated Scoring for Mental Health Treatment Prediction: Design Study

Anthony Kelly*, Esben Kjems Jensen, Eoin Martino Grua, Kim Mathiasen, Pepijn Van de Ven

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Background: Machine learning (ML) systems in health care have the potential to enhance decision-making but often fail to address critical issues such as prediction explainability, confidence, and robustness in a context-based and easily interpretable manner.

Objective: This study aimed to design and evaluate an ML model for a future decision support system for clinical psychopathological treatment assessments. The novel ML model is inherently interpretable and transparent. It aims to enhance clinical explainability and trust through a transparent, hierarchical model structure that progresses from questions to scores to classification predictions. Model confidence and robustness were addressed by applying Monte Carlo dropout, a probabilistic method that reveals model uncertainty and confidence.

Methods: A model for clinical psychopathological treatment assessments was developed, incorporating a novel ML model structure. The model aimed to enhance the graphical interpretation of model outputs and to address issues of prediction explainability, confidence, and robustness. The proposed ML model was trained and validated using patient questionnaire answers and demographics from a web-based treatment service in Denmark (N=1088).

Results: The balanced accuracy score on the test set was 0.79. The precision was ≥0.71 for all 4 prediction classes (depression, panic, social phobia, and specific phobia). The areas under the curve for the 4 classes were 0.93, 0.92, 0.91, and 0.98, respectively.

Conclusions: We demonstrated a mental health treatment ML model that supports a graphical interpretation of prediction class probability distributions. Their spread and overlap can inform clinicians of competing treatment possibilities for patients and of uncertainty in treatment predictions. With the ML model achieving 79% balanced accuracy, we expect the model to be clinically useful both in screening new patients and in informing clinical interviews.
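The Monte Carlo dropout idea referred to in the abstract can be illustrated with a minimal sketch: dropout is left active at prediction time and the network is queried repeatedly, so each of the 4 treatment classes receives a distribution of probabilities rather than a single point estimate. The architecture, layer sizes, input dimension, and data below are hypothetical placeholders, not the authors' hierarchical question-to-score-to-class model.

```python
# Minimal sketch of Monte Carlo dropout for class-probability distributions.
# Assumes PyTorch; all names, sizes, and inputs are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

CLASSES = ["depression", "panic", "social phobia", "specific phobia"]

class QuestionnaireClassifier(nn.Module):
    """Toy feed-forward classifier standing in for the paper's model."""
    def __init__(self, n_inputs=30, n_hidden=64, n_classes=4, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),   # dropout layer kept active at prediction time
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=100):
    """Repeated stochastic forward passes with dropout enabled; returns an
    (n_samples x n_classes) tensor of class-probability samples."""
    model.train()  # keeps nn.Dropout stochastic during inference
    with torch.no_grad():
        samples = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return samples.squeeze(1)

# Usage on one placeholder questionnaire vector: the per-class mean acts as
# the predicted probability, the standard deviation expresses uncertainty.
model = QuestionnaireClassifier()
x = torch.rand(1, 30)
probs = mc_dropout_predict(model, x)
mean, std = probs.mean(dim=0).tolist(), probs.std(dim=0).tolist()
for name, m, s in zip(CLASSES, mean, std):
    print(f"{name:15s} p = {m:.2f} +/- {s:.2f}")
```

The spread and overlap of these per-class samples is what the abstract refers to when it describes informing clinicians about competing treatment possibilities and uncertainty in treatment predictions.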

Original language: English
Article number: e64617
Journal: JMIR Medical Informatics
Volume: 13
Number of pages: 24
ISSN: 2291-9694
DOIs
Publication status: Published - 2025

Keywords

  • AI
  • artificial intelligence
  • explainability
  • explainable AI
  • machine learning
  • mental health
  • Monte Carlo dropout
  • XAI
