Using Machine Learning to Assist Decision Making in the Assessment of Mental Health Consumers Presenting to Emergency Departments
Higgins, Oliver (2024). Using Machine Learning to Assist Decision Making in the Assessment of Mental Health Consumers Presenting to Emergency Departments. RMIT University. Thesis. https://doi.org/10.25439/rmt.28440278
Abstract
Artificial intelligence (AI) and machine learning (ML) have the potential to unlock a deeper understanding of the human experience in health settings. This research investigated the integration of AI and ML into mental health care, focusing specifically on their application within emergency departments (EDs). EDs serve as critical access points for mental health services, yet they face significant challenges, including overcrowding, long wait times, and a scarcity of specialised resources. The study explored how AI and ML could be harnessed to address these challenges, improve clinical decision making, and enhance patient outcomes.

The thesis begins by framing the problem within the broader context of mental health care, highlighting the significance of the study. It underscores the importance of the Sustainable Development Goals, ethical AI use, and the inclusion of lived experience in shaping mental healthcare strategies. This foundational approach emphasises the need for an integrated model of care that balances technological innovation with ethical considerations and human-centred care.

The research explored the role of technology in mental health, particularly its impact on individuals experiencing conditions such as suicidality. The dual nature of technology was examined: its capacity to drive innovation and empowerment, and its potential to induce anxiety, fear, or paranoia among vulnerable populations. This exploration reinforced the need for a careful and ethical approach to integrating AI into mental health care, considering the broader implications for patients and clinicians alike.

A comprehensive literature review examined the concept of trust in AI, particularly in the context of health care. The review challenged the assumption that AI systems are inherently trustworthy and explored the anthropomorphisation of AI, questioning whether these systems should be deemed deserving of trust.
The study also highlighted the rapid advancements in AI and ML, particularly the development of large language models. These models have significantly expanded the capabilities of natural language processing and offer considerable potential for healthcare applications. However, the research identified ethical challenges associated with these technologies, including bias, privacy concerns, and the need for transparency in AI deployment. The thesis advocates for the responsible development of AI systems that enhance, rather than replace, human clinical judgement.

The methodological framework was designed to ensure a thorough understanding of the needs and concerns of stakeholders, including clinicians, patients, and interdisciplinary experts. The study used design thinking coupled with a retrospective cohort analysis of the sociodemographic factors and presentation features of individuals seeking mental health care in EDs. Ethical considerations were prioritised throughout, with particular attention to cultural sensitivity and the inclusion of diverse perspectives.

The findings provide a detailed analysis of the complexities of mental health care delivery in EDs. The research underscores the essential role of EDs in providing frontline mental health services while identifying significant challenges, such as the need for more specialised resources and the ethical integration of AI into clinical practice. It demonstrates the potential of ML to improve service provision, particularly through predictive modelling and clinical decision support systems, while also highlighting limitations related to data quality, trust, and ethics, emphasising the importance of a cautious and responsible approach to AI integration.

In conclusion, the thesis discusses the broader implications of the findings for clinical practice and policy. It offers recommendations for future research, including the evaluation of safety planning interventions, the exploration of collaborative care models, and the use of ML for enhanced predictive modelling in mental health settings. The research advocates for a comprehensive and integrated approach to mental health care that leverages the potential of AI within a strong ethical framework.

Overall, the research contributes to the growing body of knowledge on the role of AI and ML in mental health care. It highlights the need for trust, transparency, and ethical considerations in the development of AI-based tools, ensuring that these technologies complement rather than replace human clinical judgement. The findings suggest that, with careful and ethical deployment, AI can significantly improve mental healthcare delivery, leading to more effective, equitable, and patient-centred services.