Title: Automating glaucoma surveillance: a biomarker-driven deep learning framework for glaucoma detection and progression assessment using retinal fundus images
Abstract:
Background: Glaucoma, the leading cause of irreversible blindness, demands early diagnosis and consistent monitoring, which remain difficult due to the subjectivity of traditional assessments. Retinal fundus imaging combined with deep learning offers a promising avenue for the objective and automated detection and evaluation of key glaucoma biomarkers. We present a deep learning framework for precise glaucoma detection, segmentation, and quantitative biomarker extraction from retinal fundus images, enabling reliable disease monitoring and supporting early diagnosis and personalized management of patients with glaucoma.
Method: The proposed methodology uses a deep learning framework for the automated detection and segmentation of the optic disc and cup in preprocessed retinal fundus images. Initial preprocessing of the retinal fundus images comprised CLAHE for contrast enhancement, gamma correction, bilateral filtering, and image sharpening. For glaucoma detection, we developed a comprehensive deep learning pipeline built on the EfficientNetB3 architecture. A diverse labeled dataset was created by combining five public sources (ORIGA, LAG, G1020, DRISHTI-GS, and AIROGS Dev RG) to support robust glaucoma detection performance.
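As a rough illustration of this preprocessing stage, the sketch below applies CLAHE, gamma correction, bilateral filtering, and unsharp-mask sharpening with OpenCV. The function name and parameter values (clip limit, gamma, filter sizes) are illustrative assumptions, not the exact settings used in the study.

```python
# Minimal preprocessing sketch (parameter values are assumptions, not the paper's settings).
import cv2
import numpy as np

def preprocess_fundus(bgr_image, gamma=1.2):
    """Apply CLAHE, gamma correction, bilateral filtering, and sharpening."""
    # CLAHE on the luminance channel for contrast enhancement
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Gamma correction via a lookup table
    table = np.array([(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)]).astype("uint8")
    corrected = cv2.LUT(enhanced, table)

    # Edge-preserving bilateral filter, then unsharp-mask sharpening
    smoothed = cv2.bilateralFilter(corrected, d=9, sigmaColor=75, sigmaSpace=75)
    blurred = cv2.GaussianBlur(smoothed, (0, 0), sigmaX=3)
    return cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)
```

The preprocessed images would then be fed to the EfficientNetB3-based classifier for glaucoma detection.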
The progression assessment approach employs a U-Net convolutional neural network augmented with a ResNet34 encoder to enhance feature extraction. We combined three datasets (REFUGE, PAPILA, and ORIGA) that comprise high-resolution retinal fundus images with expert-annotated segmentations of the optic disc and cup. Multi-label masks were generated from the preprocessed images by combining the disc and cup masks during training. Biomarkers of optic disc and cup morphology, such as the vertical cup-to-disc ratio, neuroretinal rim thickness, and focal notches, were derived from the predicted masks, and the ISNT rule was subsequently evaluated using these extracted features. Segmentation performance was assessed with the Dice coefficient and intersection-over-union (IoU). Using the extracted features, glaucoma severity was categorized as normal, suspected, mild, moderate, or severe, and a comprehensive clinical report was generated from these features using a large language model (LLM).
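The biomarker step can be illustrated with the sketch below, which derives the vertical cup-to-disc ratio and ISNT rim widths from binary disc and cup masks. The helper names, the centroid-based axis measurements, and the right-eye laterality assumption are ours for illustration, not the paper's implementation.

```python
# Minimal biomarker-extraction sketch from predicted disc/cup masks
# (helper names and axis conventions are assumptions, not the paper's code).
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary masks (1 = structure, 0 = background)."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_height = disc_rows.max() - disc_rows.min() + 1
    cup_height = cup_rows.max() - cup_rows.min() + 1 if cup_rows.size else 0
    return cup_height / disc_height

def isnt_rim_widths(disc_mask, cup_mask):
    """Rim width (disc minus cup) along the inferior, superior, nasal, and temporal
    axes through the disc centroid; the ISNT rule expects I >= S >= N >= T."""
    ys, xs = np.nonzero(disc_mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    rim = disc_mask.astype(bool) & ~cup_mask.astype(bool)
    # Vertical axis: superior above the centroid, inferior below (image rows grow downward).
    superior = rim[:cy, cx].sum()
    inferior = rim[cy:, cx].sum()
    # Horizontal axis: nasal/temporal assignment depends on eye laterality; here we
    # assume a right eye with the nasal side on the right of the image.
    temporal = rim[cy, :cx].sum()
    nasal = rim[cy, cx:].sum()
    return {"I": inferior, "S": superior, "N": nasal, "T": temporal}

def isnt_rule_satisfied(widths):
    return widths["I"] >= widths["S"] >= widths["N"] >= widths["T"]
```

In practice, such measurements would be computed on the U-Net/ResNet34 predictions and then passed to the severity-grading and report-generation stages described above.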
Results: The detection system achieved an accuracy of 99%. The proposed progression system demonstrated superior performance in optic disc and cup segmentation, achieving Dice coefficients of 0.97 and 0.95, respectively, surpassing traditional segmentation methods. Glaucoma detection and severity classification based on the extracted features and rule validation provided reliable predictions, while LLM-based conversion of these features into standardized clinical reports supports automated, reproducible analysis and advanced progression assessment.
Conclusion: This deep learning framework enables reliable glaucoma biomarker detection, precise measurements, and severity classification, addressing critical gaps in clinical monitoring. It supports deployment across clinical settings to improve the consistency and timeliness of glaucoma detection and progression assessment. Future extensions could incorporate longitudinal fundus imaging to predict disease progression and enhance early, personalized glaucoma management.