Gesture-Based Smartwatch Text Entry: Design, Evaluation, and Future Applications
DOI: https://doi.org/10.3991/ijim.v20i05.58365

Keywords: Smartwatch, Text Entry, Continuous Gesture Recognition, Gestural Interaction

Abstract
This work presents an approach for text entry on smartwatches based on continuous gesture recognition of geometric shapes. The method lets users input characters with simple, easily reproducible gestures, such as straight lines and curves, which are recognized in real time as they are drawn. A Naïve Bayes classifier maps gestures to letters based on conditional probability, and a trie data structure stores words together with their usage probabilities to generate word suggestions during input. A ranking mechanism balances word length against usage probability, favoring completions that are both short and frequent. A user evaluation of perceived usability with the System Usability Scale (SUS) yielded an average score of 92.5, reflecting strong ease of use, low complexity, and rapid learnability. A complementary quantitative analysis measured an average entry speed of 16.0 words per minute (WPM). Together, these results indicate that the method is a feasible interaction approach for devices with limited input space and a solid foundation for future studies of its applicability in wearable computing, accessibility, and educational contexts.
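To make the classification step concrete, the following is a minimal sketch, not the authors' implementation, of a Naïve Bayes classifier that maps a gesture to its most probable letter. The feature names (total turning angle, straightness) and the Gaussian likelihood are illustrative assumptions; the paper states only that letters are chosen by conditional probability.

```python
import math
from collections import defaultdict

class GestureNaiveBayes:
    """Gaussian Naive Bayes over per-gesture feature vectors (sketch)."""

    def fit(self, X, y):
        # Group feature vectors by letter and estimate per-class priors,
        # means, and variances under the conditional-independence assumption.
        groups = defaultdict(list)
        for features, letter in zip(X, y):
            groups[letter].append(features)
        n = len(X)
        self.stats = {}
        for letter, rows in groups.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            variances = [
                max(sum((v - m) ** 2 for v in c) / len(c), 1e-6)  # variance floor
                for c, m in zip(cols, means)
            ]
            self.stats[letter] = (math.log(len(rows) / n), means, variances)
        return self

    def predict(self, features):
        # argmax over letters of log P(letter) + sum_i log P(feature_i | letter)
        def log_posterior(letter):
            prior, means, variances = self.stats[letter]
            return prior + sum(
                -0.5 * math.log(2 * math.pi * var) - (x - m) ** 2 / (2 * var)
                for x, m, var in zip(features, means, variances)
            )
        return max(self.stats, key=log_posterior)

# Toy usage with two letters and hypothetical 2-D features
# (total turning angle, straightness):
clf = GestureNaiveBayes().fit(
    [(0.1, 0.95), (0.2, 0.90), (3.0, 0.40), (2.8, 0.35)],
    ["l", "l", "o", "o"],
)
print(clf.predict((0.15, 0.92)))  # -> "l"
```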
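The word-suggestion component can be sketched along the same lines. Below, each word-final trie node carries a usage probability, and `suggest` ranks completions of the current prefix by a score that trades probability off against length. The specific score (log probability minus a length penalty) and the `length_penalty` parameter are assumptions for illustration; the paper states only that the mechanism balances word length and usage probability.

```python
import math

class TrieNode:
    __slots__ = ("children", "probability")
    def __init__(self):
        self.children = {}
        self.probability = None  # set only at word-final nodes

class SuggestionTrie:
    def __init__(self, length_penalty=0.05):
        self.root = TrieNode()
        self.length_penalty = length_penalty  # hypothetical weighting

    def insert(self, word, probability):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.probability = probability

    def suggest(self, prefix, k=3):
        # Walk to the prefix node, then collect all completions and rank
        # them by a score favoring short, frequent words.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        completions = []
        stack = [(node, prefix)]
        while stack:
            cur, word = stack.pop()
            if cur.probability is not None:
                score = math.log(cur.probability) - self.length_penalty * len(word)
                completions.append((score, word))
            for ch, child in cur.children.items():
                stack.append((child, word + ch))
        completions.sort(reverse=True)
        return [w for _, w in completions[:k]]

# Toy usage: frequent short words outrank rarer, longer completions.
trie = SuggestionTrie()
for word, p in [("the", 0.05), ("there", 0.004), ("theory", 0.001)]:
    trie.insert(word, p)
print(trie.suggest("th"))  # -> ['the', 'there', 'theory']
```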
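Finally, the two reported measures follow standard conventions in text-entry and usability research and can be reproduced as below. The helper names are illustrative, but the formulas (the five-characters-per-word WPM convention and the SUS scoring rule) are the established ones.

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    # Standard text-entry convention: one "word" is five characters, and
    # timing starts at the first character, hence the len - 1 numerator.
    return ((len(transcribed) - 1) / seconds) * (60.0 / 5.0)

def sus_score(responses: list[int]) -> float:
    # SUS: ten items rated 1-5; odd-numbered items contribute (r - 1),
    # even-numbered items (5 - r); the sum is scaled by 2.5 to 0-100.
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, transcribing a 19-character phrase in 13.5 seconds gives (18 / 13.5) × 12 = 16.0 WPM, matching the average reported above.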
License
Copyright (c) 2026 Thamer Horbylon Nascimento, Afonso U. Fonseca, Juliana Felix, Fabrizzio Soares

This work is licensed under a Creative Commons Attribution 4.0 International License.

