Classification of Diabetic Retinopathy from Fundus Images Using Hybrid Deep Learning Feature Combination and Divide-and-Conquer Approach



Bibliographic details
Main author: MUHAMMAD MOHSIN, BUTT
Format: Thesis
Language: English
Published: UNIMAS 2025
Online access: http://ir.unimas.my/id/eprint/47753/
Description
Abstract: Diabetic Retinopathy (DR) is the leading cause of blindness in diabetic patients. Different ophthalmologists capture high-resolution fundus images of the eye with varying sizes and quality. The International Clinical Diabetic Retinopathy (ICDR) severity scale divides the progression of DR into five stages. Multiclass classification of DR is a challenging task that relies on precisely identifying subtle anomalies. In this work, a multiclass classification system for DR from fundus images of the eye is proposed. The system accounts for the varying size and quality of the images and introduces a new pre-processing framework to deal with degraded fundus images. Two new frameworks are also proposed for multiclass DR classification. The processed images are used to extract high-quality features, which are passed to different Machine Learning (ML) classifiers for training and testing. The first framework utilises the inherent feature extraction capabilities of Convolutional Neural Networks (CNNs) to extract hybrid features that are used for classification. High-resolution fundus images can reveal nuanced vessel textures and other critical features that can further improve multiclass DR classification, but they demand high computational and memory resources. The second framework therefore introduces a memory-efficient divide-and-conquer approach for extracting features from high-resolution fundus images. The hybrid framework achieves a maximum accuracy of 92.69%, 88.02%, and 83.62% for binary, three-class, and five-class DR classification, respectively, on the Diabetic Retinopathy (DDR) dataset. The divide-and-conquer approach further improves the accuracy of the system, reaching 97.60%, 89.46%, and 85.64% for binary, three-class, and five-class DR classification, respectively, on the DDR dataset. The results represent a significant performance improvement over recent studies in the literature.
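The abstract's divide-and-conquer idea — splitting a high-resolution fundus image into patches, extracting features per patch, and concatenating them into one vector for an ML classifier — can be sketched as below. This is a minimal illustration, not the thesis's implementation: the thesis uses CNN-derived hybrid features, whereas here a simple per-patch intensity histogram stands in for the feature extractor, and the grid size and bin count are arbitrary assumptions.

```python
import numpy as np

def extract_patch_features(patch, n_bins=8):
    # Stand-in for a CNN feature extractor: a normalised intensity
    # histogram per patch (illustrative only; the thesis extracts
    # hybrid CNN features).
    hist, _ = np.histogram(patch, bins=n_bins, range=(0.0, 1.0))
    return hist / patch.size

def divide_and_conquer_features(image, grid=(2, 2)):
    """Split an image into a grid of patches, extract features per
    patch, and concatenate them into one feature vector.

    Processing one patch at a time keeps peak memory proportional to
    a single patch rather than the full high-resolution image, which
    is the motivation for the divide-and-conquer framework."""
    features = []
    for row in np.array_split(image, grid[0], axis=0):
        for patch in np.array_split(row, grid[1], axis=1):
            features.append(extract_patch_features(patch))
    return np.concatenate(features)

# Hypothetical grayscale image normalised to [0, 1).
img = np.random.default_rng(0).random((512, 512))
vec = divide_and_conquer_features(img, grid=(2, 2))
print(vec.shape)  # (32,): 4 patches x 8 histogram bins
```

The resulting fixed-length vector could then be fed to any standard ML classifier (e.g. an SVM) for the binary, three-class, or five-class DR task.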