This study describes a retrospective analysis of patients’ electronic health records (approximately 6 million records from 17,122 patients) between September 2012 and November 2018, used to train a machine learning model that predicts the risk of mental health crises. After the retrospective phase, a prospective cohort study evaluated the prediction algorithm in clinical practice from November 2018 to May 2019, before the pandemic. In the initial follow-up of the prospective phase, the algorithm’s predictions were rated as valuable for managing crisis risk in 64% of cases. The researchers tested different machine learning techniques, including decision trees, probabilistic classifiers, ensembles, and deep learning-based classifiers.
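For a sense of what comparing those classifier families can look like, here is a minimal sketch using scikit-learn on synthetic data. It is not the authors’ actual pipeline: the features, models, and parameters are stand-ins chosen purely for illustration.

```python
# Illustrative sketch only: not the paper's pipeline. Compares the four
# classifier families mentioned above (decision tree, probabilistic,
# ensemble, neural network) on synthetic, imbalanced data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in for EHR-derived features (diagnoses, contacts, prior crises, ...)
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "probabilistic (naive Bayes)": GaussianNB(),
    "ensemble (random forest)": RandomForestClassifier(n_estimators=200,
                                                       random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=500, random_state=0),
}

for name, model in models.items():
    # AUROC instead of accuracy: crisis events are rare, so a model that
    # always predicts "no crisis" would look deceptively accurate.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUROC = {auc:.3f}")
```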
How many predictions could we make by running machine learning protocols on millions of subjects’ records in health and education institutions?
References
Garriga, R., Mas, J., Abraha, S., Nolan, J., Harrison, O., Tadros, G., & Matic, A. (2022). Machine learning model to predict mental health crises from electronic health records. Nature Medicine, 28(6), 1240–1248. https://doi.org/10.1038/s41591-022-01811-5
Hi Alexei, Thank you so much for sharing this! I find this especially exciting because I am a firm believer that good mental health is essential for learning to take place.
I think this is an example of how different ‘industries’ can converge to produce impactful innovations. Here, mental health, education, data collection, and machine learning are converging to confront old challenges. I think this is a key component of good innovation – creating synergies across different fields. We, as educators (and students of the MET program), are increasingly asked to explore new areas and reflect on their implications for the field of education.
Thanks again, Alexei. I can’t wait to see what kinds of projections machine learning can produce in the upcoming years. It’s such a powerful tool already.
Hi Sage,
One of the barriers to machine learning on patient records is the lack of consistent patterns in diagnostic and evaluation expressions. We often find many descriptors that indicate similar meanings but are challenging to organize and classify. A possible analogy is multiple-choice versus open-ended questions: in general, open-ended questions demand much more analysis and interpretation, complicating learning for both the human and machine components.
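As a toy illustration of that free-text problem, the sketch below (all descriptors and labels are invented, not drawn from any real record) shows how even a naive keyword approach must anticipate many wordings for one underlying meaning, and still misses plenty; real clinical notes are far messier, which is exactly what makes the “open-ended” side of the analogy hard for machines:

```python
# Hedged illustration: example notes and labels are invented.
import re

# Many differently worded descriptors can point to a similar meaning.
notes = [
    "pt reports low mood and anhedonia",
    "Patient appears depressed; flat affect",
    "feeling down for several weeks",
    "tearful at interview, hopeless about the future",  # no keyword match
]

# A naive keyword map; real systems need clinical NLP and shared
# ontologies (mapping free text to standard codes), which is much harder.
canonical = {
    r"low mood|depress|feeling down": "depressive symptoms",
}

for note in notes:
    labels = {label for pattern, label in canonical.items()
              if re.search(pattern, note, flags=re.IGNORECASE)}
    print(note, "->", labels or {"unclassified"})
```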
So glad Alexei shared this – this is something that really hits close to home (in the sense of my work/interests).
Professionally, I work at a mental health hospital, which, understandably, is being hit with massive intake volumes, years-long waitlists, and misdiagnoses. To address this, we are collaborating with a company in the US that intends to use AI and machine learning tools to assist with proper diagnosis from the beginning (intake/referral), ensuring patients are sorted into the right clinics and waitlist times decrease.
I think what is so fascinating about this, beyond the benefits it will bring to the hospital, is that fields of work are becoming less divided: healthcare and technology are “collaborating” to change the way the world works. AI and machine learning originally seemed to be reserved for the engineering world, but seeing them applied in healthcare has the potential to change individuals’ lives.
I think this is a really fascinating and potentially very impactful application of technology.
My knowledge of mental health issues is not extensive, but from theory and experience I’d say that one of the great difficulties is that issues can go unreported or undiagnosed. Having a background system checking everyone for possible flags would hopefully lead to positive interventions for many.
In education, I could see this being adapted for many other support or enrichment issues. Dyslexia, anxiety, aptitude in particular areas – scanning a large amount of student work and records could present new opportunities. Maybe it could even help with more prosaic tasks like forming groups or seating plans that place complementary personalities or similar interests together.
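Purely as speculation in that direction, here is a sketch (student names and interest scores are invented) of how similar-interest groups might be formed automatically from simple survey data:

```python
# Speculative sketch: groups students with similar interest profiles.
import numpy as np
from sklearn.cluster import KMeans

students = ["Ana", "Ben", "Chloe", "Dev", "Emi", "Finn"]
# Columns: interest scores for (science, arts, sports), e.g. from a survey.
interests = np.array([
    [0.9, 0.1, 0.3],
    [0.8, 0.2, 0.4],
    [0.1, 0.9, 0.2],
    [0.2, 0.8, 0.3],
    [0.3, 0.2, 0.9],
    [0.2, 0.3, 0.8],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(interests)
for group in range(3):
    members = [s for s, g in zip(students, kmeans.labels_) if g == group]
    print(f"Group {group + 1}: {members}")
```

Grouping by complementary personalities, rather than similar ones, would need a different objective (maximizing within-group diversity instead of similarity), but the same kind of feature vector could feed it.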
Understandably, there are legitimate concerns about privacy, but I think that, properly done, this would be helpful. A shortcoming of many AI systems is an inability to explain why a decision was reached, and that opacity could actually be helpful here: a recommendation is made to watch for particular flags of concern in an individual, but without exposing the inputs or logic that brought that suggestion forward.
Devon