An-Najah University Journal for Research - A (Natural Sciences)

Scopus

Scopus profile and journal metrics

This journal is indexed in Scopus. Use these metrics for a quick publishing snapshot, then open the Scopus page for the authoritative profile.

An-Najah University Journal for Research - A (Natural Sciences)

  • Indexed in Scopus since: 2019
  • CiteScore: 0.8
  • First decision: 5 days
  • Submission to acceptance: 160 days
  • Acceptance to publication: 20 days
  • Acceptance rate: 14%

SCImago

SCImago Journal Rank preview

Use SCImago when you want a quick visual view of the journal ranking profile and external discoverability signals.

An-Najah University Journal for Research - A (Natural Sciences) SCImago Journal & Country Rank

DOAJ

Directory of Open Access Journals listing

The DOAJ record is useful for readers, librarians, and authors who want a direct open-access directory entry for the journal.

An-Najah University Journal for Research - A (Natural Sciences) Open directory record
Original full research article

Explainable Hybrid Deep Learning Framework with Multimodal Inputs for Diabetic Retinopathy Detection

Published
2025-10-10
Pages
319 - 332
Full text

Keywords

  • Diabetic Retinopathy
  • EyePACS
  • Explainability
  • SHAP
  • Grad-CAM
  • LIME
  • Fundus Image

Abstract

Diabetic Retinopathy (DR) is a leading cause of vision loss, making accurate and interpretable detection critical. This study proposes a hybrid interpretable machine–deep learning framework that integrates multimodal data for enhanced DR severity classification. The model combines unstructured fundus images from EyePACS, Messidor, and APTOS with structured clinical and lifestyle variables such as age, sex, HbA1c, BMI, blood pressure, and diabetes duration. Fundus images undergo preprocessing through resizing, normalization, augmentation, and noise reduction, while clinical data are imputed, normalized, and one-hot encoded. For feature extraction, EfficientNetV2, ResNet50, and Swin Transformer are applied to images, and XGBoost, LightGBM, and TabNet to clinical data. Features are fused via concatenation and attention, followed by classification using Logistic Regression, Random Forest, and MLP. Explainability is provided by Grad-CAM for imaging data and SHAP/LIME for clinical data, supporting clinical interpretability. The proposed model outperformed unimodal baselines, achieving 99.34% accuracy, 98.5% precision, 98.0% recall, 99.0% specificity, 98.2% F1-score, and 0.99 AUC-ROC, with a 10% gain over ResNet50 alone. Performance improvements included a 9% increase in recall and 8% in F1-score, alongside excellent calibration. Confusion matrix analysis confirmed balanced severity detection, and clinicians validated the interpretability outputs. This framework demonstrates robust accuracy, generalization, and clinical applicability for DR screening.
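The fusion step described in the abstract (concatenating image-derived and clinical feature vectors, then re-weighting the result with attention before classification) can be sketched as follows. This is a minimal illustration only: the dimensions, the softmax form of the attention, and the feature values are assumptions for demonstration, not the authors' learned implementation.

```python
import numpy as np

def attention_fuse(img_feat, clin_feat):
    """Concatenate two modality feature vectors, then re-weight the
    fused vector with simple softmax attention over its entries.
    (Sketch: the paper's attention layer is learned, not fixed.)"""
    fused = np.concatenate([img_feat, clin_feat])      # concatenation fusion
    scores = fused - fused.max()                       # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax attention weights
    return fused * weights                             # attended fused features

# Toy example: a 4-dim image embedding and a 3-dim clinical embedding
# (hypothetical values standing in for EfficientNetV2 / TabNet outputs).
img_feat = np.array([0.2, 1.5, -0.3, 0.8])
clin_feat = np.array([0.9, -0.1, 0.4])
fused = attention_fuse(img_feat, clin_feat)
print(fused.shape)  # (7,)
```

In practice the attended vector would feed the downstream classifier (Logistic Regression, Random Forest, or MLP in the abstract's setup).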

Article history

Received
2025-08-26
Accepted
2025-09-22
Available online
2025-10-10