Design of an Immersive Task-Driven Mobile Interactive Platform for EFL Writing and Analysis of Language Output Complexity
DOI:
https://doi.org/10.3991/ijim.v20i06.60863

Keywords:
mobile learning, EFL writing, immersive task-driven learning, AR, context awareness, language complexity analysis

Abstract
The widespread adoption of mobile learning has accelerated the development of English writing support tools; however, existing applications often suffer from limited immersion, weak adaptivity, and an overreliance on post-hoc outcome-based analyses of language complexity, with insufficient capture of the writing process itself. To address these limitations, this study designs and implements an immersive task-driven mobile interactive platform for English as a foreign language (EFL) writing, integrating a multi-technology immersive writing environment with an automated language complexity analysis framework. The platform introduces three core innovations: (1) a context-aware dynamic task engine driven by situational perception, (2) a multimodal augmented reality (AR) interactive interface to enhance immersion, and (3) a full-process data pipeline specifically designed for fine-grained language complexity analysis. A mobile–cloud collaborative architecture is adopted, in which task generation is dynamically optimized through a hybrid algorithm combining rule-based logic and reinforcement learning. Results from controlled experiments indicate that, compared with the control group, learners in the experimental group achieved significantly greater improvements in both syntactic complexity—average sentence length (+22.4%), mean clause length (+19.8%), and subordinate clause density (LD) (+17.6%)—and lexical complexity, including lexical diversity (+17.7%) and academic vocabulary usage (+58.5%). All between-group differences reached a highly significant level (p < 0.001). Further analysis reveals that task immersion, feedback uptake rate, and scenario–task alignment are key predictors of language complexity gains, jointly explaining 68.3% of the variance. In addition, the platform demonstrates robust performance across heterogeneous mobile devices, achieving an AR scene recognition accuracy of at least 81.2% and a feedback generation latency of no more than 268 ms. 
By deeply integrating multiple technologies, this study establishes a closed-loop intervention framework encompassing “context–task–feedback–analysis,” addressing a critical gap in existing research on multi-technology-enabled writing instruction. The findings provide a novel paradigm and empirical evidence for innovation in mobile technology–driven language education.
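To make the reported indices concrete, the sketch below computes simple proxies for three of the metrics named in the abstract: average sentence length, mean clause length, and lexical diversity (as a type–token ratio). These definitions are illustrative assumptions for exposition only; the paper's automated analysis pipeline presumably uses full syntactic parsing rather than punctuation-based splitting.

```python
import re

def complexity_metrics(text: str) -> dict:
    """Illustrative proxies for language complexity indices.

    Assumptions (not the paper's exact operationalizations):
    - sentences are delimited by terminal punctuation (. ! ?);
    - clauses are approximated by additionally splitting on commas
      and semicolons (a real pipeline would use a syntactic parser);
    - lexical diversity is the type-token ratio over lowercase tokens.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    clauses = [c for c in re.split(r"[.!?,;]+", text) if c.strip()]
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    n_sent, n_cl, n_tok = len(sentences), len(clauses), len(tokens)
    return {
        # words per sentence
        "avg_sentence_length": n_tok / n_sent if n_sent else 0.0,
        # words per (approximate) clause
        "mean_clause_length": n_tok / n_cl if n_cl else 0.0,
        # type-token ratio
        "lexical_diversity": len(set(tokens)) / n_tok if n_tok else 0.0,
    }

# Example: 2 sentences, 3 approximate clauses, 8 tokens (6 unique)
metrics = complexity_metrics("I went home. When I arrived, I slept.")
```

On this example, `avg_sentence_length` is 4.0 words and `lexical_diversity` is 0.75; between-group percentage gains such as the +22.4% reported in the abstract would then be computed over such per-text scores.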
License
Copyright (c) 2026 Yaling Zhao, Junxia Zhao

This work is licensed under a Creative Commons Attribution 4.0 International License.

