New Screening Model for Early Detection of AMD: Accurate, Portable, Cost-Effective


An international, multidisciplinary team led by Jen Hong Tan, PhD, of the Department of Electronics and Computer Engineering at Ngee Ann Polytechnic in Singapore, has developed an accurate, low-cost, portable, 14-layer deep convolutional neural network (CNN) model to automatically detect signs of age-related macular degeneration (AMD) during rapid, comprehensive eye exams.

The model was tested using 402 normal and 708 AMD color fundus images, and according to Tan and colleagues, it achieved a 95.45% average accuracy with a 10-fold cross validation strategy and a 91.17% average accuracy with blindfold validation.

The research team, which included ophthalmologists, biomedical researchers, laboratory scientists, and medical engineers from the United States, India, the UK, Malaysia, and Singapore, was inspired by what they called an “unmet need to develop simple, cheap, and portable diagnostic and analytical tools to allow early diagnosis and prompt referral for treatment.” According to Tan, fundus photography is the preferred tool for the early detection and classification of AMD, but “the visual interpretation of fundus images can be subjective and is prone to intraobserver variabilities.”

Tan stated that despite rising interest in and research on Computer-Aided Diagnosis (CAD) systems to automatically identify AMD fundus images, “no attempt of implementing deep learning solutions has been proposed to support AMD detection” prior to the researchers’ development of the 14-layer CNN.

The main purpose of the model is to “predict the input fundus image into normal or AMD classes” using a CNN made up of many stacked convolution and pooling layers. Each convolution extracts distinctive features from the fundus images and generates feature maps, and as training proceeds the network refines its sensitivity to those features and their connection to AMD.
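
To illustrate that layered structure, below is a minimal PyTorch sketch of a stacked convolution-and-pooling binary classifier of the kind described. The layer counts, filter sizes, and 224 x 224 input resolution are illustrative assumptions, not the authors' published 14-layer architecture.

```python
import torch
import torch.nn as nn

class AMDNet(nn.Module):
    """Illustrative conv/pool stack for normal-vs-AMD classification.

    The depth and channel widths here are assumptions for demonstration;
    the paper's exact 14-layer configuration is not reproduced.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # feature maps
            nn.MaxPool2d(2),                                         # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # two classes: normal, AMD

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A color fundus image resized to 224x224 yields a two-class score vector.
logits = AMDNet()(torch.randn(1, 3, 224, 224))
```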

Once the CNN architecture was created, Tan and colleagues trained it with the “Adam” algorithm, an optimizer that adapts the learning rate during training, and the 402 normal and 708 AMD fundus images were divided into 10 segments for training and testing. In each round, 9 of the segments were used for training, with the tenth used for validation testing. Tan reported that each training image was also presented in 4 configurations: the original fundus image was used as scanned and then again after being flipped to the left, flipped downwards, and flipped both downwards and to the left.
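
A minimal sketch of that split-and-augment scheme, assuming NumPy arrays and scikit-learn's KFold (the paper's exact pipeline is not detailed in the article):

```python
import numpy as np
from sklearn.model_selection import KFold

def flip_variants(image: np.ndarray) -> list:
    """The four orientations described: original, left flip, downward flip, both."""
    return [image,
            np.fliplr(image),              # flipped to the left
            np.flipud(image),              # flipped downwards
            np.flipud(np.fliplr(image))]   # flipped downwards and to the left

# Placeholder stack standing in for the 402 normal + 708 AMD fundus images.
images = np.zeros((1110, 224, 224, 3))

# Ten segments: nine train, one validates, rotating across the ten folds.
for train_idx, test_idx in KFold(n_splits=10, shuffle=True).split(images):
    train_set = [v for i in train_idx for v in flip_variants(images[i])]  # 4x data
    # ...train the CNN on train_set with the Adam optimizer, test on images[test_idx]
```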

The CNN model was trained for accuracy, sensitivity, and specificity in relation to correct identification of AMD features in fundus images. A blindfold test of the CNN model’s performance produced 328 true positive and 178 true negative diagnoses of AMD, with only 26 false negatives and 23 false positives. The blindfold strategy obtained a 91.17% accuracy, 92.66% sensitivity, and 88.56% specificity, all of which show that “the proposed system is robust and capable of identifying an unknown fundus image,” according to the research team. A 10-fold cross-validation test of accuracy, sensitivity, and specificity demonstrated even higher numbers; using multiple passes to validate the data, it revealed an average accuracy of 95.45%, an average sensitivity of 96.43%, and an average specificity of 93.75%.
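
Those blindfold figures follow directly from the reported confusion counts, as a quick check confirms:

```python
# Blindfold confusion counts reported by Tan and colleagues.
tp, tn, fn, fp = 328, 178, 26, 23

accuracy    = (tp + tn) / (tp + tn + fn + fp)  # 506 / 555
sensitivity = tp / (tp + fn)                   # 328 / 354
specificity = tn / (tn + fp)                   # 178 / 201

# Prints: accuracy=91.17% sensitivity=92.66% specificity=88.56%
print(f"accuracy={accuracy:.2%} sensitivity={sensitivity:.2%} specificity={specificity:.2%}")
```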

Tan and colleagues stated that the collected data show their proposed CNN model achieved high performance partly because, in contrast with previous research, its training dataset used the largest number of fundus images to date. The images, pulled from various databases, “were combined to provide a more diverse range of fundus images for training and testing,” Tan wrote.

The group argued that when paired with the system architecture, this diverse range of images has allowed their CNN model to obtain high performance in multiple folds, which indicates the “superiority of the developed system.” Tan and colleagues also noted that the already high performance, as a result of the CNN’s deep learning capabilities, will improve with use and exposure to additional fundus images.

In addition to high performance, Tan and colleagues argued that their proposed CNN model has other advantages: first, it is fully automatic and does not require “meticulous engineering to design a feature extraction” by manual means, which could save time and reduce frustration. The proposed model also utilizes cloud system technology, which allows images to be “sent to the cloud for classification through [a] web browser.”
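
The article does not detail that cloud interface, but a browser-facing classification service could be as simple as the hypothetical Flask sketch below; the endpoint name, upload field, and run_model stub are all assumptions for illustration.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_model(image_bytes: bytes) -> str:
    """Stub standing in for the trained CNN; a real service runs inference here."""
    return "normal"  # would return "normal" or "AMD"

@app.route("/classify", methods=["POST"])
def classify():
    # The browser uploads a fundus photo; the server returns the predicted class.
    image_bytes = request.files["fundus"].read()
    return jsonify({"diagnosis": run_model(image_bytes)})

if __name__ == "__main__":
    app.run()
```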

Currently, AMD is diagnosed primarily through Optical Coherence Tomography (OCT), but the OCT machine is, according to Tan, “heavy, immobile, and expensive,” especially in comparison to the proposed CNN, which can be “installed as a CAD eye screening tool” in polyclinics to diagnose AMD and refer patients with suspected AMD to specialists, where an OCT scan can be performed.

Tan also wrote that the potential capabilities of their 14-layer CNN CAD model for AMD could be applied to other CAD applications in various medical fields.

Tan and colleagues believe that their CNN model can “be effortlessly applied to clinics” to assist with computer-aided diagnosis of AMD, as “a second opinion tool to assist ophthalmologists in their diagnosis,” and due to the low-cost and portability, “can also be installed in third world countries or in rural areas where ophthalmology care is limited.”

The study, “Age-related Macular Degeneration detection using deep convolutional neural network,” was published in Future Generation Computer Systems.
