Deep-Learning Model Improves ADHD Detection in MRI

A novel, multichannel AI model outperformed 3 single-channel models in a randomized cross-validation study.

Lili He, PhD

ADHD was detected more accurately on MRI with a multichannel, deep-learning model than with single-scale artificial intelligence (AI) models, according to a new study comparing the two approaches.

A team of Cincinnati investigators has shared new findings indicating that their multichannel deep neural network model for multiscale brain functional connectome data can detect ADHD more accurately than comparable models.

The connectome data, increasingly regarded as key to understanding brain disorders, came from a dataset of 973 patients from the NeuroBureau ADHD-200 collection. The dataset provided personal characteristic data including age, sex, handedness, and intelligence quotients. Eligible participants had no history of psychiatric, neurologic, or medical disorders other than ADHD.

Senior study author Lili He, PhD, of the University of Cincinnati College of Medicine and Cincinnati Children’s Hospital Medical Center, and colleagues sought to compare their multichannel AI with single-channel models in a randomized, cross-validation study.

Patients were scanned with different scanners and acquisition parameters across 8 sites, which the investigators used to develop more robust diagnostic models. Participants were randomly placed into 5 subsets for a five-fold cross-validation of the AI model.

In each iteration, one subset (20% of participants) was retained as a validation set to test the model, and the classification model was built on the other 4 subsets (80% of participants, the training set). The five-fold cross-validation was repeated for 30 rounds to reduce variability.
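
The repeated cross-validation protocol is straightforward to outline in code. The sketch below is a minimal illustration, assuming scikit-learn, synthetic features, and a placeholder logistic-regression classifier rather than the authors’ network or data; it runs 30 rounds of stratified five-fold cross-validation scored by AUC.

```python
# Minimal sketch of the evaluation protocol described above, assuming
# scikit-learn: 30 rounds of stratified five-fold cross-validation scored by
# AUC. The synthetic features and logistic-regression classifier are
# placeholders, not the authors' model or data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Placeholder features standing in for connectome + personal characteristic data.
X, y = make_classification(n_samples=973, n_features=100, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=30, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       scoring="roc_auc", cv=cv)

# Summarize performance across the 150 folds (5 folds x 30 rounds).
print(f"mean AUC = {aucs.mean():.2f}, "
      f"2.5th-97.5th percentile = {np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f}")
```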

The Cincinnati team’s AI model, which used multiscale brain connectome data and personal characteristic data, achieved the best performance in detecting ADHD, with an area under the receiver operating characteristic curve (AUC) of 0.82 (95% CI, 0.80–0.83).

Single-channel AI models, each independently using either personal characteristic data or features of the brain connectome at an individual scale, correctly identified patients with ADHD with AUCs of 0.67 (95% CI, 0.66–0.68), 0.69 (95% CI, 0.67–0.70), and 0.77 (95% CI, 0.76–0.78).

The multichannel, multiscale model performed significantly (P <.001) and clinically meaningfully better than the single-channel models in detecting ADHD.
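
The article does not specify which statistical test produced that P value. Purely as an illustration, one common way to compare two models evaluated over the same cross-validation rounds is a paired test on their per-round AUCs, sketched below with made-up scores.

```python
# Illustration only: the study's exact statistical test is not specified in
# this article. The per-round AUC scores below are made up.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
auc_multichannel = rng.normal(loc=0.82, scale=0.01, size=30)    # hypothetical per-round AUCs
auc_single_channel = rng.normal(loc=0.77, scale=0.01, size=30)

stat, p_value = wilcoxon(auc_multichannel, auc_single_channel)  # paired signed-rank test
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p_value:.2g}")
```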

Dr. He and colleagues validated their hypotheses: that the predictive power of using personal characteristic data is comparable with that of using functional connectivity features; the predictive power of using brain connectome data at an individual scale is comparable among individual scales; and the fusion of multiscale brain connectome data improves prediction performance.
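
To make the multichannel idea concrete, the following is a hypothetical sketch rather than the published architecture: one encoder branch per connectome scale plus a separate branch for personal characteristic data, with the branch outputs fused before a final classifier. The layer sizes, the three connectome scales, and the four personal characteristics are illustrative assumptions.

```python
# Hypothetical sketch of a multichannel fusion network, not the published
# architecture: one encoder branch per connectome scale plus one branch for
# personal characteristic data, concatenated before a final classifier.
import torch
import torch.nn as nn

class MultichannelADHDNet(nn.Module):
    def __init__(self, connectome_dims=(2000, 8000, 32000), personal_dim=4, hidden=64):
        super().__init__()
        # One "channel" (encoder) per connectome scale.
        self.connectome_branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in connectome_dims]
        )
        # Separate channel for personal characteristics (e.g., age, sex, handedness, IQ).
        self.personal_branch = nn.Sequential(nn.Linear(personal_dim, hidden), nn.ReLU())
        # Fuse all channels and output a single ADHD logit.
        self.classifier = nn.Linear(hidden * (len(connectome_dims) + 1), 1)

    def forward(self, connectomes, personal):
        feats = [branch(x) for branch, x in zip(self.connectome_branches, connectomes)]
        feats.append(self.personal_branch(personal))
        return self.classifier(torch.cat(feats, dim=1))

# Example forward pass with random tensors for a batch of 8 subjects.
model = MultichannelADHDNet()
connectomes = [torch.randn(8, d) for d in (2000, 8000, 32000)]
personal = torch.randn(8, 4)
logits = model(connectomes, personal)  # shape: (8, 1)
```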

MRI has a potential role in diagnosing ADHD because the neurological condition results from a breakdown or disruption in the connectome, which is constructed from spatially distributed brain regions identified on MRI.
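
As a rough illustration of how such a functional connectome is built, the snippet below correlates regional time series pairwise; the 200-region parcellation and random data stand in for a real fMRI preprocessing pipeline.

```python
# Rough illustration of how a functional connectome is built from fMRI data:
# regional time series are correlated pairwise, and the resulting matrix (or
# its upper triangle) serves as the model's input features. The 200-region
# parcellation and random time series are stand-ins for a real pipeline.
import numpy as np

n_regions, n_timepoints = 200, 150
timeseries = np.random.randn(n_timepoints, n_regions)  # rows: time points, cols: brain regions

connectome = np.corrcoef(timeseries.T)                 # (200, 200) region-by-region correlations
edges = connectome[np.triu_indices(n_regions, k=1)]    # flatten upper triangle into 19,900 features

print(connectome.shape, edges.shape)
```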

The results of the study emphasize the predictive power of the brain connectome, Dr. He said in a statement.

“The constructed brain functional connectome that spans multiple scales provides supplementary information for the depicting of networks across the entire brain,” Dr. He said.

Because the model improved diagnostic accuracy, AI-aided, MRI-based diagnosis could also lead to earlier interventions for patients with ADHD. The model can also be generalized to other neurological deficits, Dr. He added. The investigators had previously used the AI in preterm infants to predict cognitive deficits and neurodevelopmental outcomes at 2 years of age.

More research is needed for the AI to reach an accuracy of at least 90%, which the investigators consider necessary for it to be useful in clinical practice. They also want to better understand which specific disruptions in the connectome identified by the AI are associated with ADHD.

The study, “A Multichannel Deep Neural Network Model Analyzing Multiscale Functional Brain Connectome Data for Attention Deficit Hyperactivity Disorder Detection,” was published online in Radiology: Artificial Intelligence.
