Making STEM Subjects Graphs Accessible for Blind and Visually Impaired Students Using Document Understanding Transformer (DONUT) Model

Authors

  • Syed Muhammad Hassan Zaidi, Sindh Madressatul Islam University, Karachi, Pakistan
  • Abdul Hafeez Khan, Sindh Madressatul Islam University, Karachi, Pakistan
  • Sarmad Shaikh, Sindh Madressatul Islam University, Karachi, Pakistan
  • Imtiaz Hussain, Sindh Madressatul Islam University, Karachi, Pakistan

DOI:

https://doi.org/10.3991/ijoe.v21i14.58675

Keywords:

Education, HCI, AI, STEM Subjects, Visually Impaired, Graphs, Donut, Transformer, Deep Learning

Abstract


Most STEM (Science, Technology, Engineering, and Mathematics) subjects rely heavily on graphs and charts, which remain largely inaccessible to students who are blind or visually impaired. While text can often be made accessible through screen readers, complex visual structures such as charts are much harder to interpret non-visually. This study presents a proof-of-concept system that applies the DONUT (Document Understanding Transformer) model to STEM charts. The model was trained and evaluated on the Benetech STEM dataset and tested on multiple images, demonstrating promising results in extracting key information such as chart type, chart ID, and x–y coordinate values. Although no user-centered trials or formal educational studies have yet been conducted, this work establishes an initial technical foundation for converting chart data into accessible formats. By enabling interpretation of chart types and data trends, the proposed system has the potential to improve accessibility in STEM education for blind and visually impaired learners, pending further validation and integration with assistive technologies.
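The abstract describes extracting structured fields such as chart type, chart ID, and x–y values. DONUT-style decoders typically emit an XML-like token sequence that is then parsed into structured output. As a minimal sketch of that final parsing step (the field names `chart-type`, `chart-id`, `x`, and `y` are illustrative assumptions, not the paper's actual output schema), such a sequence can be converted to a dictionary like this:

```python
import re

def parse_donut_sequence(seq: str) -> dict:
    """Parse a DONUT-style tagged output sequence into a flat dict.

    DONUT decoders commonly emit paired field tags such as
    <s_chart-type>line</s_chart-type>; the regex below captures each
    tag name and its enclosed value. Field names are hypothetical.
    """
    fields = {}
    # \1 backreference ensures the closing tag matches the opening one.
    for tag, value in re.findall(r"<s_([\w-]+)>(.*?)</s_\1>", seq):
        fields[tag] = value.strip()
    return fields

# Hypothetical decoder output for a line chart with three points.
example = ("<s_chart-type>line</s_chart-type>"
           "<s_chart-id>abc123</s_chart-id>"
           "<s_x>0;1;2</s_x><s_y>3.1;4.0;5.2</s_y>")
parsed = parse_donut_sequence(example)
```

A structured result like this could then be rendered for a screen reader as a sentence per data point, which is the accessibility step the paper motivates.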

References

[1]. P. Argüeso, “Human ocular mucins: The endowed guardians of sight,” Advanced Drug Delivery Reviews, vol. 180, p. 114074, Jan. 2022, doi: 10.1016/j.addr.2021.114074.

[2]. P. Boyce, “Light, lighting and human health,” Lighting Research & Technology, vol. 54, no. 2, pp. 101–144, Apr. 2021, doi: 10.1177/14771535211010267.

[3]. A. Gollbo, “Graph Attention Networks for Link Prediction in Semantic Word Grouping,” 2023.

[4]. A. Ayub Khan, A. A. Laghari, A. A. Shaikh, S. Bourouis, A. M. Mamlouk, and H. Alshazly, “Educational blockchain: A secure degree attestation and verification traceability architecture for higher education commission,” Applied Sciences, vol. 11, no. 22, p. 10917, 2021.

[5]. F. K. Astuti, E. Ellianawati, M. Masturi, W. Wiyanto, and W. Sumarni, “Central Java Teachers’ Perspective on Science, Technology, Engineering and Mathematics (STEM) Learning,” Journal of Innovative Science Education, vol. 12, no. 1, pp. 74–81, Apr. 2023, doi: 10.15294/jise.v12i1.53846.

[6]. Z. A. Shaikh, A. A. Khan, L. Baitenova, G. Zambinova, N. Yegina, N. Ivolgina, et al., “Blockchain hyperledger with non-linear machine learning: A novel and secure educational accreditation registration and distributed ledger preservation architecture,” Applied Sciences, vol. 12, no. 5, p. 2534, 2022.

[7]. K. Marriott et al., “Inclusive data visualization for people with disabilities,” Interactions, vol. 28, no. 3, pp. 47–51, Apr. 2021, doi: 10.1145/3457875.

[8]. S. Zhu, K. Ota, and M. Dong, “Energy-Efficient Artificial Intelligence of Things With Intelligent Edge,” IEEE Internet of Things Journal, vol. 9, no. 10, pp. 7525–7532, May 2022, doi: 10.1109/jiot.2022.3143722.

[9]. A. Budrionis, D. Plikynas, P. Daniušis, and A. Indrulionis, “Smartphone-based computer vision travelling aids for blind and visually impaired individuals: A systematic review,” Assistive Technology, vol. 34, no. 2, pp. 178–194, Apr. 2020, doi: 10.1080/10400435.2020.1743381.

[10]. B. Kuriakose, R. Shrestha, and F. E. Sandnes, “Tools and Technologies for Blind and Visually Impaired Navigation Support: A Review,” IETE Technical Review, vol. 39, no. 1, pp. 3–18, Sep. 2020, doi: 10.1080/02564602.2020.1819893.

[11]. G. Alexiou, “How AI Is Being Used To Help Blind Students ‘Visualize’ Graphs And Charts,” Forbes, Jul. 2022. https://www.forbes.com/sites/gusalexiou/2022/07/27/how-ai-is-being-used-to-help-blind-students-visualize-graphs-and-charts/?sh=457af8f02c6d

[12]. C. Jung, S. Mehta, A. Kulkarni, Y. Zhao, and Y.-S. Kim, “Communicating Visualizations without Visuals: Investigation of Visualization Alternative Text for People with Visual Impairments,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 1, pp. 1095–1105, Jan. 2022, doi: 10.1109/tvcg.2021.3114846.

[13]. J. L. Joyner and S. T. Parks, “Scaffolding STEM Literacy Assignments To Build Greater Competence in Microbiology Courses,” Journal of Microbiology & Biology Education, vol. 24, no. 1, Apr. 2023, doi: 10.1128/jmbe.00218-22.

[14]. R. Welsh, “Foundations of Orientation and Mobility,” Technical Report, American Printing House for the Blind, Louisville, KY, USA, 1981.

[15]. B. W. Stone, D. Kay, and A. Reynolds, “Teaching Visually Impaired College Students in Introductory Statistics,” Journal of Statistics Education, vol. 27, no. 3, pp. 225–237, Sep. 2019, doi: 10.1080/10691898.2019.1677199.

[16]. B. Whitburn, “‘A really good teaching strategy’: Secondary students with vision impairment voice their experiences of inclusive teacher pedagogy,” British Journal of Visual Impairment, vol. 32, no. 2, pp. 148–156, Apr. 2014, doi: 10.1177/0264619614523279.

[17]. Y. Yang, K. Marriott, M. Butler, C. Goncu, and L. Holloway, “Tactile Presentation of Network Data: Text, Matrix or Diagram?,” Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Apr. 2020, doi: 10.1145/3313831.3376367.

[18]. A. Sharif, O. H. Wang, A. T. Muongchan, K. Reinecke, and J. O. Wobbrock, “VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plugin,” CHI Conference on Human Factors in Computing Systems, Apr. 2022, doi: 10.1145/3491102.3517431.

[19]. C. Engel, E. F. Müller, and G. Weber, “SVGPlott,” Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Jun. 2019, doi: 10.1145/3316782.3316793.

[20]. S. Bi et al., “A Survey on Artificial Intelligence Aided Internet-of-Things Technologies in Emerging Smart Libraries,” Sensors, vol. 22, no. 8, p. 2991, Apr. 2022, doi: 10.3390/s22082991.

[21]. B. H. Lee and Y. J. Lee, “Evaluation of medication use and pharmacy services for visually impaired persons: Perspectives from both visually impaired and community pharmacists,” Disability and Health Journal, vol. 12, no. 1, pp. 79–86, Jan. 2019, doi: 10.1016/j.dhjo.2018.07.012.

[22]. C. Clark and S. Divvala, “PDFFigures 2.0,” Proceedings of the 16th ACM/IEEE-CS Joint Conference on Digital Libraries, Jun. 2016, doi: 10.1145/2910896.2910904.

[23]. K. Swathi, B. Vamsi, and N. T. Rao, “A Deep Learning-Based Object Detection System for Blind People,” Lecture Notes in Networks and Systems, pp. 223–231, 2021, doi: 10.1007/978-981-16-1773-7_18.

[24]. L. D. Lopez et al., “A framework for biomedical figure segmentation towards image-based document retrieval,” BMC Systems Biology, vol. 7, no. Suppl 4, p. S8, 2013, doi: 10.1186/1752-0509-7-s4-s8.

[25]. B. Davis, B. Morse, B. Price, C. Tensmeyer, C. Wigington, and V. Morariu, “End-to-End Document Recognition and Understanding with Dessurt,” Computer Vision – ECCV 2022 Workshops, pp. 280–296, 2023, doi: 10.1007/978-3-031-25069-9_19.

[26]. National Federation of the Blind, “Blindness Statistics,” Nfb.org, Jan. 2019. https://nfb.org/resources/blindness-statistics

[27]. W. A. Erickson, S. VanLooy, S. von Schrader, and S. M. Bruyère, “Disability, Income, and Rural Poverty,” Disability and Vocational Rehabilitation in Rural Settings, pp. 17–41, Nov. 2017, doi: 10.1007/978-3-319-64786-9_2.

[28]. A. F. Siu et al., “COVID-19 highlights the issues facing blind and visually impaired people in accessing data on the web,” Proceedings of the 18th International Web for All Conference, Apr. 2021, doi: 10.1145/3430263.3452432.

[29]. J. Choi, S. Jung, D. G. Park, J. Choo, and N. Elmqvist, “Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization,” Computer Graphics Forum, vol. 38, no. 3, pp. 249–260, Jun. 2019, doi: 10.1111/cgf.13686.

[30]. J. Hwang, K. H. Kim, J. G. Hwang, S. Jun, J. Yu, and C. Lee, “Technological Opportunity Analysis: Assistive Technology for Blind and Visually Impaired People,” Sustainability, vol. 12, no. 20, p. 8689, Oct. 2020, doi: 10.3390/su12208689.

[31]. J. Wang, S. Wang, and Y. Zhang, “Artificial intelligence for visually impaired,” Displays, vol. 77, p. 102391, Apr. 2023, doi: 10.1016/j.displa.2023.102391.

[32]. A. Shelton and T. Ogunfunmi, “Developing a Deep Learning-enabled Guide for the Visually Impaired,” 2020 IEEE Global Humanitarian Technology Conference (GHTC), Oct. 2020, doi: 10.1109/ghtc46280.2020.9342873.

[33]. J. S. Kallimani, K. G. Srinivasa, and R. B. Eswara, “Extraction and interpretation of charts in technical documents,” 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Aug. 2013, doi: 10.1109/icacci.2013.6637202.

[34]. A. K. Triantafyllidis and A. Tsanas, “Applications of Machine Learning in Real-Life Digital Health Interventions: Review of the Literature,” Journal of Medical Internet Research, vol. 21, no. 4, p. e12286, Apr. 2019, doi: 10.2196/12286.

[35]. K. Manjari, M. Verma, and G. Singal, “A survey on Assistive Technology for visually impaired,” Internet of Things, vol. 11, p. 100188, Sep. 2020, doi: 10.1016/j.iot.2020.100188.

[36]. C. Park, C. C. Took, and J.-K. Seong, “Machine learning in biomedical engineering,” Biomedical Engineering Letters, vol. 8, no. 1, pp. 1–3, Feb. 2018, doi: 10.1007/s13534-018-0058-3.

[37]. B. K. Swenor, P. Y. Ramulu, J. R. Willis, D. Friedman, and F. R. Lin, “The Prevalence of Concurrent Hearing and Vision Impairment in the United States,” JAMA Internal Medicine, vol. 173, no. 4, p. 312, Feb. 2013, doi: 10.1001/jamainternmed.2013.1880.

[38]. S. C. Daggubati and J. Sreevalsan-Nair, “ACCirO: A System for Analyzing and Digitizing Images of Charts with Circular Objects,” International Conference on Computational Science, pp. 605–612, 2022, doi: 10.1007/978-3-031-08757-8_50.

[39]. J. Ganesan, A. T. Azar, S. Alsenan, N. A. Kamal, B. Qureshi, and A. E. Hassanien, “Deep Learning Reader for Visually Impaired,” Electronics, vol. 11, no. 20, p. 3335, Oct. 2022, doi: 10.3390/electronics11203335.

[40]. R. Tasnim, S. T. Pritha, A. Das, and A. Dey, “Bangladeshi Banknote Recognition in Real time using Convolutional Neural Network for Visually Impaired People,” 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Jan. 2021, doi: 10.1109/icrest51555.2021.9331182.

[41]. M. Mukhiddinov and J. Cho, “Smart Glass System Using Deep Learning for the Blind and Visually Impaired,” Electronics, vol. 10, no. 22, p. 2756, Nov. 2021, doi: 10.3390/electronics10222756.

[42]. P. Mishra, S. Kumar, M. K. Chaube, and U. Shrawankar, “ChartVi: Charts summarizer for visually impaired,” Journal of Computer Languages, vol. 69, p. 101107, Apr. 2022, doi: 10.1016/j.cola.2022.101107.

[43]. R. S. Bankar and S. R. Lihitkar, “JAWS (Job Access With Speech),” Advances in Educational Technologies and Instructional Design, pp. 19–40, Apr. 2022, doi: 10.4018/978-1-7998-4736-6.ch002.

[44]. NV Access, https://www.nvaccess.org (2019).

[45]. D. Jung et al., “ChartSense,” Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, May 2017, doi: 10.1145/3025453.3025957.


Published

2025-12-12

How to Cite

Zaidi, S. M. H., Khan, A. H., Shaikh, S., & Hussain, I. (2025). Making STEM Subjects Graphs Accessible for Blind and Visually Impaired Students Using Document Understanding Transformer (DONUT) Model. International Journal of Online and Biomedical Engineering (iJOE), 21(14), pp. 4–19. https://doi.org/10.3991/ijoe.v21i14.58675

Issue

Section

Papers