AI Could Help Predict Suicides, but Rushing the Technology Could Lead to Big Mistakes
A quarter of people who die by suicide in the UK spoke to a healthcare worker in the week before their death, and most spoke to someone in the previous month. Even so, assessing a patient's suicide risk remains extremely difficult.
In 2021, 5,219 suicides were recorded in the UK. Although suicide rates in England and Wales have fallen by around 31% since 1981, most of that decline occurred before 2000. Suicide is three times more common in men than in women, and this gap has widened over time.
An October 2022 study by the Black Dog Institute at the University of New South Wales found that artificial intelligence (AI) models outperformed clinical risk assessments. It reviewed 56 studies published between 2002 and 2021 and found that AI correctly predicted 66% of people who would go on to die by suicide and 87% of people who would not. By comparison, traditional assessment methods carried out by medical professionals perform only marginally better than chance.
AI is the subject of much research in other areas of medicine, such as cancer. Yet despite their promise, AI models for mental health have not been widely adopted in clinical settings.
Why is suicide so difficult to predict?
A 2019 study by Sweden's Karolinska Institute found that four traditional scales used to predict suicide risk after a recent episode of self-harm performed poorly. Part of the challenge of predicting suicide is that a patient's intentions can change rapidly.
The self-harm guidelines used by healthcare professionals in the UK explicitly state that suicide risk assessment tools and scales should not be relied upon. Instead, professionals should use clinical interviews. Doctors do carry out structured risk assessments, but these are used to get the most out of the interview, not to provide a scale for deciding who receives treatment.
The risks of AI
The Black Dog Institute study shows promising results, but if 50 years of traditional (non-AI) prediction research has produced methods that are only marginally better than chance, we need to ask whether we should trust AI. When a new development gives us something we want (in this case, better suicide risk assessments), it can be tempting to stop asking questions. But we cannot afford to rush this technology. The consequences of getting it wrong are literally life and death.
AI models always have limitations, including in how their performance is evaluated. For example, using accuracy as a metric can be misleading if the dataset is unbalanced: a model can achieve 99% accuracy simply by always predicting there is no suicide risk if only 1% of the patients in the dataset are at high risk.
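As a minimal illustration (the numbers below are made up, not taken from the study), a model that never flags anyone as at-risk scores 99% accuracy on such a dataset while detecting none of the at-risk patients, which a sensitivity (recall) check makes obvious:

    # Illustrative only: 1,000 hypothetical patients, 1% of whom are at high risk.
    from sklearn.metrics import accuracy_score, recall_score

    y_true = [1] * 10 + [0] * 990   # 1 = at risk, 0 = not at risk
    y_pred = [0] * 1000             # a "model" that always predicts no risk

    print(accuracy_score(y_true, y_pred))  # 0.99 -- looks impressive
    print(recall_score(y_true, y_pred))    # 0.0  -- misses every at-risk patient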
It is also important to evaluate AI models on data other than the data they were trained on. This guards against overfitting, where a model learns to predict the training data almost perfectly but struggles with new data. Such a model may have appeared to work perfectly during development, only to misdiagnose real patients.
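A minimal sketch of this evaluation step, assuming synthetic stand-in data rather than real patient records: the model is scored on a held-out test set it never saw during training, so a gap between the training score and the test score reveals overfitting.

    # Sketch with synthetic data: compare performance on training vs held-out data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for patient records, with a rare (1%) positive class
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.99], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # The training score is usually flattering; the held-out score is the honest one.
    print("train AUC:", roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]))
    print("test AUC: ", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))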
For example, AI models used to detect melanoma (a type of skin cancer) have been found to pick up on markings on patients' skin. Doctors use blue pens to highlight suspicious lesions, and the AI learned to associate these markings with a higher likelihood of cancer. In practice, this leads to misdiagnosis when the blue highlighting is not present.
It can also be difficult to understand what an AI model has learned, for example why it predicts a particular level of risk for a given patient. This is a central problem with AI systems in general and has led to an entire field of research known as explainable AI.
The Black Dog Institute study found that 42 of the 56 studies analysed had a high risk of bias. In this context, bias means the model over- or under-predicts the average suicide rate; for example, the data may have a 1% suicide rate while the model predicts a 5% rate. High bias leads to misdiagnosis, both by missing high-risk patients and by attributing risk to patients who are actually at low risk.
This bias stems from factors such as how participants were selected. For example, some studies had high case-to-control ratios, meaning the suicide rate in the study data was higher than it is in the real population, so an AI model trained on that data is likely to overestimate a patient's risk.
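To make the base-rate problem concrete, here is a hedged sketch (the helper function and all numbers are hypothetical, not something the reviewed studies describe): a standard prior correction rescales the model's predicted odds by the ratio of the real-world base rate to the study's base rate.

    # Hypothetical example: correcting a prediction from a model trained on data
    # where the outcome was over-represented (5%) relative to reality (1%).
    def adjust_for_base_rate(p_model, rate_in_study, rate_in_population):
        """Rescale a predicted probability from the study's base rate to the real one."""
        odds = p_model / (1 - p_model)
        correction = (rate_in_population / rate_in_study) * \
                     ((1 - rate_in_study) / (1 - rate_in_population))
        adjusted_odds = odds * correction
        return adjusted_odds / (1 + adjusted_odds)

    # A 20% predicted risk under the study's 5% base rate shrinks to roughly 4.6%
    # once the real-world 1% base rate is taken into account.
    print(adjust_for_base_rate(0.20, rate_in_study=0.05, rate_in_population=0.01))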
Promising prospect
The models mainly use electronic health record data, although some also include data from interviews, self-assessment surveys and clinical notes. The advantage of AI is that it can learn from large amounts of data faster and more efficiently than humans, and can detect patterns that overworked doctors might miss.
Although progress is being made, AI approaches to suicide prediction are not yet ready for use in practice. Researchers are already working to resolve many of the problems with AI-based suicide prediction models, such as the difficulty of explaining why an algorithm makes the predictions it does.
However, suicide prediction isn't the only way to reduce suicide rates and save lives. Accurate predictions are useless if they don't lead to effective interventions.
AI-based suicide prediction will not prevent every death. But it could give mental health professionals another tool for caring for their patients, and it could be as life-changing as timely heart surgery if it raises the alarm for patients who would otherwise be overlooked.
If you are struggling with suicidal thoughts, the following services can help: in the UK and Ireland, call Samaritans on 116 123; in the US, call 1-800-784-2433; in Australia, call Lifeline on 13 11 14. In other countries, visit IASP or Suicide.org to find a helpline near you.
This article was reprinted from The Conversation under a Creative Commons license. Read the original article.
Citation: AI could help predict suicides, but rushing the technology could lead to big mistakes (2022, October 21). Retrieved October 22, 2022, from https://medicalxpress.com/news/2022-10-ai-suicidesbut-technology-big.html
This document is subject to copyright. Except for fair use for personal study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.