Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis

Our paper, “Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis,” has been published in Perspectives in Psychiatric Care. This project, co-authored with Brooke L. Short, Stephan K. Chalup, and Rhonda L. Wilson, was a fascinating journey into the complex relationship between technological advancement and mental health, specifically psychosis.

The Spark of Curiosity

The idea for this paper began in an unexpected way. While my research team and I were developing protocols for a new study on artificial intelligence (AI) and machine learning (ML) in mental health, we received unsolicited emails from individuals who appeared to be experiencing psychosis. Their messages contained delusional content directly related to AI, including a belief in a controlling “neural type network”. This experience prompted us to pause and reflect. It highlighted the profound responsibility we have as researchers and innovators in this space. It made us ask: how have people experiencing psychosis incorporated technology into their reality over time?

The ‘Influencing Machine’ in the 21st Century

Our investigation took us back over 100 years, starting with Viennese psychoanalyst Victor Tausk’s 1919 essay on the “influencing machine”. Tausk described how his patients used the technology of their day—levers, wires, and batteries—to explain delusions of being controlled by external forces. His work provided early evidence that when people face inexplicable psychological phenomena, they often turn to contemporary technology for answers.

Our research mapped this concept onto a 100-year timeline, charting major mental health milestones against significant technological innovations and reports of technology-related delusions. We found that Tausk’s concept has remained remarkably consistent. As technology has evolved from radio waves to the internet and now to AI, so too has the content of delusions for some individuals. People experiencing psychosis might report beliefs that they are being monitored by “electronic devices of consumers” or that they are receiving messages through AI platforms.

Fact, Fiction, and the Fear of AI

We also explored how science fiction shapes public perception of AI. Films like Stanley Kubrick’s 2001: A Space Odyssey gave us HAL 9000, an anthropomorphised, superintelligent computer that ultimately prioritised its mission over human life. Such portrayals, while fictional, have established a cultural understanding of AI that often leans towards fear and mistrust.

This fear isn’t just theoretical. One study we cited found that one in four people reported a fear of autonomous robots and artificial intelligence (FARAI), often unable to differentiate between the two. This is compounded by the trend of anthropomorphism: designing AI to have human-like characteristics. While intended to make technology more approachable, this can be unnerving, especially for individuals with cognitive impairments that affect their ability to interpret the mental states of others, a capacity known as Theory of Mind. If a person already struggles to make sense of human relationships, an AI designed to be “too real” could understandably come to feel like a controlling entity.

Why This Matters for Clinical Practice and Future Research

As our world becomes increasingly saturated with technology—from voice assistants like Alexa and Siri that are always listening, to social media algorithms that curate our reality—the line between the digital and physical worlds blurs. For a person experiencing psychosis, the leap from accepting that a device monitors your heart rate to believing it can control your thoughts is not as large as it once might have been.

This has critical implications:

  • For Clinicians: Mental health clinicians need a general understanding of the current technological landscape. This knowledge helps them enter a client’s world and frame delusional content within its cultural and environmental context.
  • For Researchers and Developers: We have an ethical obligation to ensure that new technologies, especially in healthcare, are safe, transparent, and designed with the end-user’s wellbeing as the primary focus. The development of AI must include safeguards that protect human autonomy and adhere to the same high ethical standards expected of clinicians. We need to move away from opaque “black-box” models and toward transparent “glass-box” systems that are explainable to everyone involved (see the sketch after this list).
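
To make the black-box/glass-box distinction concrete, here is a minimal sketch of our own (it is not from the paper) using scikit-learn. The screening features, data, and labels are entirely hypothetical; the point is only that a shallow decision tree’s reasoning can be printed as plain if-then rules a clinician can audit, in a way that a deep network’s weights cannot.

```python
# A minimal "glass-box" sketch using scikit-learn's decision trees.
# The features and records below are hypothetical illustrations,
# not taken from our paper or from any clinical dataset.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["device_hours", "sleep_hours", "distress_score"]

# Hypothetical screening records: daily device use (hours),
# sleep (hours), and self-reported distress (0-10).
X = [
    [2, 8, 1],
    [10, 4, 8],
    [6, 7, 3],
    [12, 3, 9],
]
y = [0, 1, 0, 1]  # 0 = low concern, 1 = recommend follow-up

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every decision path is inspectable as a readable rule set,
# so a reviewer can see exactly why a case was flagged.
print(export_text(model, feature_names=feature_names))
```

With only four toy records the fitted tree itself is meaningless; what matters is the pattern. Every prediction can be traced to explicit, human-readable thresholds, which is the kind of transparency we argue healthcare AI should retain.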

Ultimately, our paper calls for greater collaboration between clinical experts, people with lived experience, and technology developers. By working together from the earliest stages of design, we can ensure that the technological innovations of the future support, rather than compromise, the mental health of all individuals.

Higgins, O., Short, B. L., Chalup, S. K., & Wilson, R. L. (2023). Interpretations of Innovation: The Role of Technology in Explanation Seeking Related to Psychosis. Perspectives in Psychiatric Care, 2023, Article ID 4464934, 16 pages. https://doi.org/10.1155/2023/4464934