Facial recognition software can assess facial weakness, aid diagnosis
Trained deep learning model could assist in improving clinical care
Patterns of facial weakness can be quantified with facial recognition software and computer models and used to diagnose and monitor myasthenia gravis (MG), a study finds.
“This study delivers a ‘proof of concept’ for a [deep learning] model that can distinguish MG from [healthy controls] and classifies disease severity,” the researchers wrote in “Assessing facial weakness in myasthenia gravis with facial recognition software and deep learning,” which was published in Annals of Clinical and Translational Neurology.
MG is marked by muscle weakness and fatigue, which occur when the immune system attacks certain proteins involved in nerve-muscle communication. Although it can affect several muscles, those controlling eye and eyelid movements and those involved in speaking, swallowing, and chewing are the most commonly affected.
Since facial features in people with MG are distinct from those of healthy people, researchers in the Netherlands investigated whether facial weakness could be quantified automatically using facial expression recognition software and if it could help diagnose and monitor the disease. They analyzed data from 70 patients and 69 healthy people, all recruited at the neurology outpatient clinic of the Leiden University Medical Center between May 2019 and September 2020.
Measuring facial weakness with six recognized expressions
Facial weakness was analyzed using video recordings that were assessed with software that recognizes six facial expressions — anger, fear, happiness, surprise, disgust, and sadness. The software classifies expressions by generating a value between 0 and 1, with a higher value denoting a more pronounced expression.
In patients, the mean scores were significantly lower when expressing emotions, including anger (0.33 vs. 0.45), fear (0.12 vs. 0.24), and happiness (0.59 vs. 0.80), compared with healthy controls.
The area under the curve was calculated for each emotion. This statistical analysis can measure the accuracy of a given diagnostic test in values from 0 to 1, with higher values reflecting greater accuracy at distinguishing people with a disease from those without it.
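As a rough illustration of the statistic described above (not the study's own code, and using made-up scores), the area under the curve can be computed directly from per-subject expression scores: it equals the probability that a randomly chosen healthy control scores higher than a randomly chosen patient, which is why 0.5 corresponds to chance and values near 1 to near-perfect separation.

```python
# Hypothetical sketch: computing the area under the ROC curve (AUC)
# from expression scores for two groups. AUC is the probability that
# a randomly chosen control outscores a randomly chosen patient
# (ties count as half).

def auc(control_scores, patient_scores):
    """Return a value between 0 and 1; 0.5 means the score does not
    separate the groups better than chance."""
    wins = 0.0
    for c in control_scores:
        for p in patient_scores:
            if c > p:
                wins += 1.0
            elif c == p:
                wins += 0.5
    return wins / (len(control_scores) * len(patient_scores))

# Invented "happiness" scores, for illustration only
controls = [0.85, 0.78, 0.80, 0.74, 0.90]
patients = [0.60, 0.55, 0.72, 0.40, 0.81]
print(round(auc(controls, patients), 2))  # prints 0.88
```

In these invented numbers, most controls outscore most patients, so the AUC lands near 1; if the two score lists overlapped completely, it would fall toward 0.5.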
For anger, the area under the curve was 0.6, with that emotion showing an accuracy of 58% at identifying MG. For fear, the area under the curve was 0.64, with an accuracy of 54%. For happiness, the area under the curve was 0.71 and the accuracy reached 70%.
Diagnosing, classifying disease severity with deep learning
Subsequently, a deep learning computer model was trained to diagnose and classify MG severity, using videos of 50 patients and 50 controls. Deep learning is a machine learning technique that teaches computers to learn by example, similar to what humans do naturally.
For diagnosis, the area under the curve was 0.75, with the computer model showing an accuracy of 76% at identifying the disease. For disease severity, the area under the curve was also 0.75, with the computer model showing an accuracy of 80% at correctly assessing it.
The results were validated using unseen videos of 20 patients and 19 healthy people. For diagnosis, the area under the curve was 0.82 with an accuracy of 87%. For disease severity, the area under the curve was 0.88, with an accuracy of 94%.
For diagnosing and classifying disease severity, the deep learning model performed better than four neurologists specializing in neuromuscular diseases who judged solely from a video of facial expressions.
“This differs significantly from clinical practice, in which a detailed history, physical examination, and ancillary tests are essential elements for establishing the diagnosis,” the researchers wrote, adding “this is both a strength and weakness of our study.”
The study shows that assessing expressions alone might be sufficient for diagnosing MG or estimating its severity. Including additional clinical information would likely improve the results, however.
For diagnosing MG, the deep learning model is considered a proof of concept by the researchers, since the study featured a small number of patients and didn’t compare their features with those of other diseases that also cause facial weakness.
The researchers said that, for monitoring disease severity, the model could complement the standard of care and improve patients' clinical care. Specifically, "it could reduce the need to travel long distances for control visits, assist patients in adapting the dose of their maintenance medication according to prespecified personalized rules and potentially avoid hospitalization by immediately starting emergency treatment in case of an alarming deterioration."