Artificial intelligence (AI) and machine learning (ML) based decision support systems in mental health: An integrative review

Bridging the Gap: Why Clinician Trust is Key to Implementing AI in Mental Health


By Oliver Higgins

Artificial intelligence (AI) and machine learning (ML) are rapidly emerging fields promising innovative solutions across healthcare. Our recent integrative review, published in the International Journal of Mental Health Nursing, investigated the current evidence base for incorporating AI/ML-based Decision Support Systems (DSS) into mental health care settings.

While AI/ML DSS offer significant potential to alleviate systemic problems such as clinician burnout, resource burden, and the resulting phenomenon of unfinished or missed care, particularly in acute care settings, our findings indicate that successful integration of these technologies hinges on a single, crucial factor: clinician trust and confidence.

The Urgent Need for Support

The escalating demand on health care resources, evidenced by rising emergency presentations and longer patient stays in Australian public hospitals in the years leading up to 2019–20, has contributed to clinician burnout, frustration, and reduced productivity. This resource burden necessitates prioritisation strategies, often resulting in patients’ educational, emotional, and psychological needs going unmet—a form of missed care.

AI/ML-powered DSS represent a possible partial solution, supporting clinicians with tools that accelerate decision-making and identify risks such as potential suicidal behaviours from existing clinical notes. However, research in this area remains limited: our review identified only four studies meeting the inclusion criteria between 2016 and 2021.
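To make the idea of a notes-screening DSS component concrete, here is a minimal, purely illustrative sketch in Python: a baseline text classifier that flags clinical notes containing possible risk language. The file name, column names, and model choice (TF-IDF with logistic regression) are assumptions for illustration only and are not drawn from the studies in the review.

```python
# Illustrative baseline only: flag clinical notes for possible risk language.
# "notes.csv" and its columns ("text", "risk_label") are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

notes = pd.read_csv("notes.csv")  # hypothetical de-identified dataset
X_train, X_test, y_train, y_test = train_test_split(
    notes["text"], notes["risk_label"], test_size=0.2, random_state=42
)

# TF-IDF keeps the features as human-readable words and phrases, which makes
# it easier to show clinicians which terms contributed to a flag.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

A deliberately simple baseline like this is not what a deployed DSS would use, but it illustrates the point the review keeps returning to: the features that drive a flag should be ones a clinician can inspect and question.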

The Barrier of the ‘Black Box’

The primary theme identified as a significant barrier to the adoption of AI/ML DSS is the uncertainty surrounding clinicians’ trust and confidence, end-user acceptance, and system transparency.

Many advanced AI implementations are inherently complex, employing learning algorithms that obscure the rationale behind a recommendation. The result is what is known as ‘black-box’ AI, which provokes reluctance and anxiety among clinicians who cannot see the logic or process the system used to arrive at a recommendation. Clinicians require an understanding of the data and features used to make predictions, mirroring the standards of clinical documentation.

Several studies highlighted that clinicians need systems that offer transparent and interpretable results. Meaningful explanations about recommendations are required to enhance levels of trust and confidence in AI/ML DSS.

  • Lutz et al. (2021) found that presenting recommendations via boxplots, rather than as clear binary outputs with supporting evidence, made conclusions difficult for clinicians to interpret and communicate.
  • The inability of a system to communicate its underlying mechanism or process contributes directly to clinician mistrust.

To overcome the black-box challenge, systems should adopt glass-box design methodologies, such as InterpretML, which produce interpretable models by revealing what the machine has ‘learnt’ from the data. Any AI/ML system intended for clinical use must be intelligible, interpretable, transparent, and clinically validated, providing a clear explanation of how clinical features contributed to the recommendation.
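As an illustration of what a glass-box approach can look like, the sketch below uses InterpretML’s Explainable Boosting Machine on a hypothetical, de-identified table of clinical features. The file and column names are invented for the example; the reviewed studies did not necessarily use this model.

```python
# A minimal glass-box sketch with InterpretML; dataset and labels are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

df = pd.read_csv("clinical_features.csv")      # hypothetical feature table
X = df.drop(columns=["missed_care_flag"])      # hypothetical binary outcome label
y = df["missed_care_flag"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Explainable Boosting Machines keep each feature's contribution additive,
# so the model can report how individual features shaped a prediction.
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Global view: which features the model has 'learnt' matter overall.
global_explanation = ebm.explain_global()

# Local view: why this particular case received this recommendation —
# the per-case rationale clinicians in the reviewed studies asked for.
local_explanation = ebm.explain_local(X_test.iloc[:1], y_test.iloc[:1])
```

The design point is that the explanation is not bolted on afterwards: the model family itself is constrained so that its reasoning can be read back to the clinician in terms of the clinical features that were supplied.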

Collaborative Development and Clinician Autonomy

A crucial finding is the necessity of involving clinicians in all stages of research, development, and implementation of artificial intelligence in care delivery.

When clinicians are involved in the iterative design cycle, the resulting tool is more likely to foster trust and confidence. For instance, Benrimoh et al. (2020) reported positive findings linked to clinician involvement in the DSS design, noting that the system preserved clinician autonomy by allowing clinicians to select treatments or actions beyond the DSS recommendations when clinically appropriate. This retention of clinical autonomy is essential: the final clinical decision must always remain in the hands of the clinician.

Earning the trust and confidence of clinical staff must be a foremost consideration in the implementation of any AI-based decision support system. Clinicians should be motivated to actively embrace the opportunity to contribute to the development of new digital tools that assist in identifying missed care.

Looking Ahead: End-User Acceptance and Ethical Implications

While limited, the literature does suggest positive benefits for patients. Patients reported feeling more engaged in shared decision-making when clinicians shared the DSS interface with them. The introduction of the tool was also reported to improve the patient-clinician relationship significantly.

However, the attitudes and acceptance of AI/ML among service users (those with lived experience of mental illness) are still poorly understood, and their involvement in research design is severely limited. Future research must focus on understanding how the use of AI/ML affects these populations and address potential risks such as techno-solutionism, overmedicalisation, and discrimination.

In conclusion, AI/ML decision support systems in mental health settings show the most promise when the trust and confidence of clinicians are earned. Successful integration requires systems that are transparent, interpretable, and developed through strong collaboration with both clinicians and end-users, ensuring that an algorithm’s performance translates into improved patient outcomes and safe, ethical practice.


Higgins, O., Short, B. L., Chalup, S. K. and Wilson, R. L. (2023). Artificial intelligence (AI) and machine learning (ML) based decision support systems in mental health: An integrative review. International Journal of Mental Health Nursing.

Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis

Our paper, “Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis,” has been published in Perspectives in Psychiatric Care. This project, co-authored with Brooke L. Short, Stephan K. Chalup, and Rhonda L. Wilson, was a fascinating journey into the complex relationship between technological advancement and mental health, specifically psychosis.

The Spark of Curiosity

The idea for this paper began in an unexpected way. While my research team and I were developing protocols for a new study on artificial intelligence (AI) and machine learning (ML) in mental health, we received unsolicited emails from individuals who appeared to be experiencing psychosis. Their messages contained delusional content directly related to AI, including a belief in a controlling “neural type network”. This experience prompted us to pause and reflect. It highlighted the profound responsibility we have as researchers and innovators in this space. It made us ask: how have people experiencing psychosis incorporated technology into their reality over time?

The ‘Influencing Machine’ in the 21st Century

Our investigation took us back over 100 years, starting with Viennese psychoanalyst Victor Tausk’s 1919 essay on the “influencing machine”. Tausk described how his patients used the technology of their day—levers, wires, and batteries—to explain delusions of being controlled by external forces. His work provided early evidence that when people face inexplicable psychological phenomena, they often turn to contemporary technology for answers.

Our research mapped this concept onto a 100-year timeline, charting major mental health milestones against significant technological innovations and reports of technology-related delusions. We found that Tausk’s concept has remained remarkably consistent. As technology has evolved from radio waves to the internet and now to AI, so too has the content of delusions for some individuals. People experiencing psychosis might report beliefs of being monitored by “electronic devices of consumers” or receiving messages through AI platforms.

Fact, Fiction, and the Fear of AI

We also explored how science fiction shapes public perception of AI. Films like Stanley Kubrick’s 2001: A Space Odyssey gave us HAL 9000, an anthropomorphised, superintelligent computer that ultimately prioritised its mission over human life. Such portrayals, while fictional, have established a cultural understanding of AI that often leans towards fear and mistrust.

This fear isn’t just theoretical. One study we cited found that one in four people reported a fear of autonomous robots and artificial intelligence (FARAI), often unable to differentiate between the two. This is compounded by the trend of anthropomorphism—designing AI to have human-like characteristics. While intended to make technology more approachable, this can be unnerving, especially for individuals with cognitive impairments that affect their ability to interpret the mental states of others, a concept known as Theory of Mind. If a person already struggles to understand human relationships, an AI designed to be “too real” could logically feel like a controlling entity.

Why This Matters for Clinical Practice and Future Research

As our world becomes increasingly saturated with technology—from voice assistants like Alexa and Siri that are always listening, to social media algorithms that curate our reality—the line between the digital and physical worlds blurs. For a person experiencing psychosis, the leap from accepting that a device monitors your heart rate to believing it can control your thoughts is not as large as it once might have been.

This has critical implications:

  • For Clinicians: It is essential for mental health clinicians to have a general understanding of the current technological landscape. This knowledge fosters a better understanding of a client’s world and helps frame delusional content within their cultural and environmental context.
  • For Researchers and Developers: We have an ethical obligation to ensure that new technologies, especially in healthcare, are safe, transparent, and designed with the end-user’s wellbeing as the primary focus. The development of AI must include safeguards that protect human autonomy and adhere to the same high ethical standards expected of clinicians. We need to move away from “black box” models and toward transparent, “glass-box” systems that are explainable to everyone involved.

Ultimately, our paper calls for greater collaboration between clinical experts, people with lived experience, and technology developers. By working together from the earliest stages of design, we can ensure that the technological innovations of the future support, rather than compromise, the mental health of all individuals.

Higgins, O., Short, B. L., Chalup, S. K. and Wilson, R. L. (2023). Interpretations of innovation: The role of technology in explanation seeking related to psychosis. Perspectives in Psychiatric Care, 2023, 4464934. https://doi.org/10.1155/2023/4464934