Artificial intelligence is increasingly taking the spotlight in medical diagnosis, as advances in computer vision and natural language processing allow algorithms to find meaningful patterns in large datasets. AI is already used to review X-ray images and CT scans for calcifications and lesions, and health professionals envision a much broader and brighter future.

AI algorithms work by reviewing datasets, finding patterns, and classifying images, text, and speech based on those patterns. This allows AI to flag medical problems such as calcifications or tumors in images, to highlight meaningful differences in text, and to surface patterns in nearly any type of dataset. As machine learning and deep learning progress, this capability and accuracy will likely continue to improve.
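To make that workflow concrete, here is a minimal sketch of pattern-based classification. The built-in dataset and the logistic-regression model are placeholder assumptions chosen only to show the learn-then-classify loop; they are not drawn from any of the projects discussed below.

```python
# Minimal sketch of pattern-based classification with scikit-learn.
# The built-in breast-cancer dataset and logistic-regression model are
# placeholder choices to illustrate the learn-then-classify workflow,
# not a clinical tool.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # labeled examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Learn patterns from the labeled training data...
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# ...then classify new, unseen examples and see how often it is right.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The key point is the division of labor: the algorithm is handed labeled examples, learns whatever patterns separate the classes, and then applies those patterns to cases it has never seen.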
While AI is not yet used in mental health diagnosis beyond experimental settings, it shows clear potential, and today's advances point to a promising future for the field.
Some Forays Into AI Mental Health Diagnosis
Machine and deep learning projects are everywhere, and some of them relate directly to mental illness. Projects of note include the Human Neuroimaging Laboratory, where researchers are building algorithms that review complex neurobiological data, from blood samples to components of the decision-making process, to identify mental illness. The World Well-Being Project created an algorithm that could predict a medical history of depression from Facebook posts and updates; it uses natural language processing to identify depression-associated language markers with a significant degree of accuracy.
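As a rough illustration of how language markers can drive such a prediction (a hypothetical sketch, not the World Well-Being Project's actual model; the posts and labels below are invented), a simple text classifier can learn which word patterns correlate with a depression-related label:

```python
# Hypothetical sketch: flagging depression-associated language in short posts.
# The example posts and labels are invented for illustration; real studies
# train on large, consented datasets with far richer features and models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a great day out with friends",
    "feeling so tired and empty lately",
    "excited about the new project at work",
    "can't sleep and everything feels hopeless",
]
labels = [0, 1, 0, 1]  # 1 = contains depression-associated language markers

# TF-IDF turns each post into word-frequency features; the classifier
# learns which features correlate with the label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Classify a new, unseen post.
print(model.predict(["so tired, everything feels hopeless"]))
```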
Others show similar promise. A machine-learning algorithm from Vanderbilt University predicts suicide risk from hospital admissions data with 85% accuracy over a one-week window, trained on data from 5,000 patients admitted after self-harm. Another study used AI to monitor individual smartphone usage, exploring how typing speed, voice tone, app usage, and even location can signal risk of depression or suicide.
Services like Quartet and Ginger use AI to flag mental health risks online, directing individuals toward assistance, counseling, and therapy. Even Facebook uses AI to recognize signs of depression, sending messages to individuals who post potentially high-risk content and encouraging them to reach out to suicide prevention hotlines or therapy. While these systems are not “AI” in the sense of the deep-learning or self-learning algorithms used in research labs, they rely on the same underlying technology: predicting patterns of behavior from massive datasets.
Barriers to AI in Mental Health Diagnosis
AI’s greatest strength is also its greatest weakness: it relies on massive quantities of data to be effective at all. Deep-learning algorithms improve with data input, becoming more efficient and more accurate over time. But mental health records are highly protected, meaning patient data is not readily available, providers are unlikely to start sharing it, and large datasets are unlikely to become available for study. As a result, progress in automated mental health diagnosis will be slow.
This matters because unavailable data and poor communication between institutions are linked to as many as 10% of all preventable patient deaths. Limited access to patient data hampers treatment even in traditional, human-driven diagnosis, and it will remain a significant barrier to AI ever becoming a reliable diagnostic tool.
Other barriers, including research, time investment, and patient acceptance, will likely be easier to overcome. As restrictions on personal health data are reduced or more individuals opt into experimental programs, the pace of this research will likely increase.
AI Uses Datasets to Analyze Patterns
Current algorithms increasingly rely on public data, such as social media posts, survey responses, and public hospital records, to make predictions. Others rely on data from therapy sessions, survey responses, and private hospital records, which typically requires seeking approval from thousands of individual patients.
AI can also review patterns relating to physical manifestations of mental disorders, such as genetics, blood samples, saliva samples, and brain scans. Experts like Pearl Chiu of the Human Neuroimaging Laboratory at the Fralin Biomedical Research Institute at Virginia Tech Carilion are actively working to train AI to recognize both physical and mental symptoms of mental illness, using algorithms to find patterns physicians can’t see on their own. Her research integrates data from survey responses, MRIs, behavioral observations, speech, and psychological assessments, and the resulting algorithms recognize patterns in neurobiological data that would take clinicians months to find. Even so, some assessments take weeks, despite AI’s ability to process thousands of items per second.
This makes massive datasets available for diagnosis, whether those datasets consist of speech patterns, changes in typing or speaking speed, shifts in facial expression, or subtle changes in brain activity. A machine can detect tiny patterns a human would struggle to notice, correlate changes across the body that would otherwise go unseen, and build new models of what mental illness looks like from a holistic perspective.
Is AI a Feasible Way to Diagnose Mental Health?
Artificial intelligence is smarter, more capable, and more accurate than ever before. AI can effectively recognize medical problems, sometimes with accuracy comparable to human evaluation. However, any AI is only as effective as the data it is fed. Machine diagnosis can produce false positives and false negatives, often for different reasons than human misdiagnosis; consider the famous case in which an fMRI analysis appeared to show brain activity in a dead salmon. If an AI is given improper data or parameters, it will be wrong.
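To make the false-positive/false-negative distinction concrete, here is a small sketch, with invented labels and predictions, showing how those errors are counted when evaluating a screening model:

```python
# Illustrative only: counting false positives and false negatives.
# The labels and predictions below are invented for demonstration.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = condition present, 0 = absent
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]  # model output for the same cases

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives: {fp}")  # healthy cases the model flagged
print(f"false negatives: {fn}")  # at-risk cases the model missed
```

Both error types matter in mental health: a false positive can alarm a healthy patient, while a false negative can leave someone at risk without follow-up.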
However, AI is increasingly seen as a way to help clinicians diagnose problems faster. AI can review cases, sort through more data, and highlight cases for individual review. It can also help therapists back up their professional opinion with a data-driven one, and it can help catch people who slip through the cracks by prompting clinicians to take a second look at a patient assessed as healthy.
AI won’t be used to diagnose mental illness anytime soon, and it likely won’t ever be a standalone solution. However, it could become a valuable tool in mental healthcare, helping physicians use the data they have to make better, stronger diagnoses.
Artificial intelligence has a lot of potential. Today, that potential is still largely being explored, and we are unlikely to see practical applications of AI in therapy or psychology anytime soon. However, AI is increasingly showing that it can add value by analyzing data, reviewing additional datasets, and flagging cases that human observation might miss. New research shows promise in identifying depression from factors like admissions data or facial expression, helping doctors connect with and reach out to patients. Algorithms excel at pattern recognition, and many can process in minutes the same data a doctor would take weeks to review. Implementing AI in future diagnostic efforts could greatly improve the speed and accuracy with which people get help.
Today, AI isn’t ready to be a diagnostic tool for mental health in anything but an experimental setting. However, clinicians are still learning, still refining how and when individuals are diagnosed, and still improving the process. AI will likely add to this over time, further improving our ability to recognize and treat mental illness.