Exploring the Feedback Quality of an Automated Writing Evaluation System Pigai
DOI: https://doi.org/10.3991/ijet.v16i11.19657
Keywords: feedback quality, Automated Writing Evaluation system, Pigai
Abstract
This study explored the feedback quality of Pigai, an Automated Writing Evaluation (AWE) system widely used in English teaching and learning in China. It examined not only the diagnostic precision of the feedback but also students' perceptions of its use in their daily writing practice. Taking 104 university students' final exam essays as the research materials, a paired sample t-test was conducted to compare the mean number of errors identified by Pigai with the mean number identified by professional teachers. Pigai feedback did not diagnose the essays as well as the human feedback given by experienced teachers; however, it was quite competent at identifying lexical errors. The analysis of students' perceptions indicated that most students considered Pigai feedback multi-functional, but inadequate in identifying collocation errors and in offering suggestions on syntactic use. The implications and limitations of the study are discussed at the end of the paper.
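The comparison the abstract describes rests on a paired sample t-test over matched per-essay error counts. As a minimal sketch of that statistic (using made-up counts, not the study's data), the test reduces to the mean and standard deviation of the per-essay differences:

```python
import math

def paired_t_statistic(a, b):
    """Paired-sample t statistic for two matched lists of scores.

    t = mean(d) / (sd(d) / sqrt(n)), where d are the pairwise
    differences and sd uses the sample (n - 1) denominator.
    """
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)  # standard error of the mean difference
    return mean_d / se

# Hypothetical error counts for eight essays (illustrative only):
pigai_counts   = [5, 3, 8, 6, 4, 7, 5, 6]   # errors flagged by Pigai
teacher_counts = [7, 4, 9, 8, 6, 9, 6, 8]   # errors flagged by teachers

t_stat = paired_t_statistic(pigai_counts, teacher_counts)
print(t_stat)  # negative: Pigai flags fewer errors in this toy sample
```

In practice a library routine such as `scipy.stats.ttest_rel` would be used, since it also returns the p-value; the hand-rolled version above only shows what the statistic measures.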
Published: 2021-06-04
How to Cite
Gao, J. (2021). Exploring the Feedback Quality of an Automated Writing Evaluation System Pigai. International Journal of Emerging Technologies in Learning (iJET), 16(11), pp. 322–330. https://doi.org/10.3991/ijet.v16i11.19657
Section: Short Papers