Illumination-Robust Conjunctival Image Preprocessing for Accurate Segmentation and Anemia Detection Using Deep Learning
DOI: https://doi.org/10.3991/ijoe.v21i07.54439

Keywords: anemia, deep learning, non-invasive diagnostics, luminance correction processing, illumination variability

Abstract
Anemia, defined by reduced hemoglobin or red blood cell levels, remains a critical public health issue, particularly in resource-limited settings where traditional diagnostics are inaccessible. Non-invasive detection via ocular conjunctiva imaging offers a viable alternative but is challenged by illumination variability in outdoor environments. This study introduces a novel preprocessing pipeline to standardize conjunctival images, employing grayscale histogram normalization, LAB color space-based glare inpainting, and adaptive contrast enhancement to counter uneven lighting and reflections. Segmentation performance was assessed using U-Net, BiSeNet, and ConjunctiveNet; U-Net outperformed the others, achieving a precision of 84.22% with preprocessing versus 80.08% without. For anemia classification, an artificial neural network (ANN), CNN-ResNet, and SLIC-GAT models were tested on the CP-AnemiC (Ghana) and Eyes-defy-anemia (India) datasets. Preprocessing significantly boosted ANN accuracy from 81.54% to 85.51% (Ghana) and from 85.94% to 88.28% (India), with precision increasing by up to 6.33%. For CNN-ResNet, F1-scores improved from 81.91% to 89.15% (Ghana), while for the ANN on the India dataset, F1-scores increased from 85.73% to 87.35%. These results highlight the pipeline's ability to enhance segmentation accuracy and classification reliability, reducing false positives and enabling robust anemia detection under variable lighting, thus advancing non-invasive diagnostics for field applications.
License
Copyright (c) 2025 Jose Humberto Fuentes-Beingolea, Facundo Palomino-Quispe, Julio Cesar Herrera-Levano, Willy Vargas-Mateos, Ruben Florez, Ana Beatriz Alvarez

This work is licensed under a Creative Commons Attribution 4.0 International License.

