Artificial Intelligence Effectively Assesses Cell Therapy Functionality

A fully automated artificial intelligence-based system could effectively classify function and potency of cell therapy.

Nathan Hotaling from the National Institutes of Health

A fully automated artificial intelligence (AI)-based multispectral absorbance imaging system effectively classified the function and potency of induced pluripotent stem cell-derived retinal pigment epithelial (iPSC-RPE) cells from patients with age-related macular degeneration (AMD), according to research presented at the 2018 ARVO annual meeting.

The software, which uses convolutional neural network (CNN) deep learning algorithms, effectively evaluated the release criterion for the iPSC-RPE cell-based therapy in a standard, reproducible, and cost-effective fashion. The AI-based analysis was as specific and sensitive as traditional molecular and physiological assays, without the need for human intervention.

"Cells can be classified with high accuracy using nothing but absorbance images," wrote lead investigator Nathan Hotaling and colleagues from the National Institutes of Health in their poster. "Using multispectral absorbance images provides a non-invasive, automated, fast, and robust assay to assess induced PSC maturity, function, and identity."

Artificial intelligence has rapidly gained acceptance in the medical community, with several clear applications. In early April 2018, the FDA cleared the first AI-powered medical device for the detection of diabetic retinopathy. The diagnostic tool, known as IDx-DR, uses AI to analyze images taken with a Topcon NW400 retinal camera. The system provides a diagnosis within minutes, demonstrating the potential power of AI-based approaches.

For the study presented at the ARVO annual meeting, human iPSC-derived RPE cells were cultured on biodegradable poly(lactic-co-glycolic acid) (PLGA) nanofiber scaffolds. The cells were assessed with weekly absorbance imaging, electrophysiology, and supernatant collection for ELISA-based measurement of VEGF secretion. Daily treatments with aphidicolin and Hedgehog Pathway Inhibitor 4 (HPI4) were administered to modulate RPE function.

To appropriately classify cells, RPE samples from 8 clones derived from 3 donors were imaged and analyzed using machine learning. There were 2 to 5 replicates per clone, and more than 300 intensity and texture features associated with cell pigmentation and morphology were assessed across approximately 50,000 cells per replicate.
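As a rough illustration of the kind of per-cell intensity and texture features described above, the sketch below computes basic intensity statistics and gray-level co-occurrence (GLCM) texture descriptors for a single cell patch using scikit-image. The specific features, parameters, and function names are illustrative assumptions, not the study's actual feature pipeline.

```python
# Minimal sketch (not the authors' pipeline): per-cell intensity and
# texture features of the kind described in the study, using scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cell_features(patch: np.ndarray) -> dict:
    """Basic intensity and GLCM texture features for one uint8 cell patch."""
    feats = {
        "mean_intensity": float(patch.mean()),
        "std_intensity": float(patch.std()),
        "min_intensity": float(patch.min()),
        "max_intensity": float(patch.max()),
    }
    # Gray-level co-occurrence matrix texture descriptors (illustrative parameters)
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats[f"glcm_{prop}"] = float(graycoprops(glcm, prop).mean())
    return feats

# Example with a synthetic 64x64 patch:
# cell_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```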

The first step in the study was to validate the multispectral images of cell pigmentation. For this, the researchers compared bright field imaging with multispectral absorbance imaging. Bright field imaging showed dramatic exposure differences at each magnification, whereas multispectral absorbance imaging differed by only approximately 0.015 AU between 20x and 40x magnification.

The utility of the multispectral absorbance approach for quantifying healthy RPE pigmentation was also assessed alongside immunohistochemistry and scanning electron microscopy. Overall, these tests showed that the approach was viable and could effectively monitor RPE pigmentation.

"Multispectral absorbance imaging can be used to assess the pigmentation development of health iRPE," Hotaling et al wrote. "Absorbance images also contain enough information to allow CNNs to accurately segment RPE borders live."

To assess clustering, CNN segmentation (deep neural network segmentation [DNN-S]) was compared with hand-corrected segmentation of the absorbance images. Histogram errors were seen in 7.94% (±4.42%) of cases. Pixel agreement between the manual and DNN-S approaches was 66% to 71%.
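For reference, pixel agreement between a manual mask and a DNN-S mask reduces to the fraction of pixels on which the two segmentations match; the short sketch below is an assumed formulation of that comparison, not the authors' code.

```python
import numpy as np

def pixel_agreement(manual_mask: np.ndarray, dnn_mask: np.ndarray) -> float:
    """Fraction of pixels on which the manual and DNN-S masks agree."""
    if manual_mask.shape != dnn_mask.shape:
        raise ValueError("Masks must have the same shape")
    return float(np.mean(manual_mask.astype(bool) == dnn_mask.astype(bool)))

# A value of 0.66 to 0.71 corresponds to the 66% to 71% agreement reported above.
```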

CNN was then used to predict transepithelial electrical resistance (TER) from the absorbance images to assess the release criterion. Used as a classifier, the CNN applied to absorbance images demonstrated 97% accuracy for a release criterion of 400 Ω·cm². Sensitivity for this criterion was 94% and specificity was 100%.
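For context, treating a predicted TER value as a pass/fail call against the 400 Ω·cm² criterion is a simple thresholding exercise; the sketch below, with hypothetical input values, shows how accuracy, sensitivity, and specificity would be computed for such a classifier.

```python
import numpy as np

TER_RELEASE_CRITERION = 400.0  # ohm*cm^2, the release criterion reported in the study

def release_metrics(measured_ter: np.ndarray, predicted_ter: np.ndarray):
    """Accuracy, sensitivity, and specificity when predicted TER is
    thresholded into pass/fail calls and compared with measured TER."""
    y_true = measured_ter >= TER_RELEASE_CRITERION
    y_pred = predicted_ter >= TER_RELEASE_CRITERION
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return accuracy, sensitivity, specificity

# Hypothetical example: measured vs CNN-predicted TER for four samples
acc, sens, spec = release_metrics(np.array([450.0, 390.0, 600.0, 200.0]),
                                  np.array([430.0, 410.0, 580.0, 180.0]))
```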

CNN was then combined with machine learning to identify further characteristics within the absorbance images for assessing clinical-grade iRPE, based on training with the RPE clones. Linear support vector machine (L-SVM) classification was compared with the DNN approach for classifying donor source.

Overall, L-SVM was 76.4% accurate compared with an accuracy of 85.4% for the DNN. Sensitivity was 64.6% versus 80.9% and specificity was 82.3% versus 86.8%, for L-SVM and DNN, respectively.
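A linear SVM of the kind compared against the DNN can be set up in a few lines with scikit-learn. The sketch below uses placeholder feature and donor-label arrays and illustrative hyperparameters, since the study's actual feature matrix and settings are not described in the article.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for per-cell image features and donor labels
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 300))   # 600 cells x 300 intensity/texture features
y = rng.integers(0, 3, size=600)  # donor label (3 donors)

# Linear support vector machine (L-SVM) classification of donor source
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10_000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Donor classification accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```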

"CNNs can analyze these images and accurately predict cell function and potency," the authors wrote. "Visual features can be isolated from these images and identification of clonal outliers is also possible."

As with other AI-based applications, the more training data the system is exposed to, the more precise its detection ability will become. The initial proof of concept for detecting potency and function of iPSC-RPE cells will still need to withstand validation. Prior to reaching the market, the IDx-DR system was trained using 495,000 retinal images across a variety of backgrounds, suggesting the next steps for other AI applications.
