Measuring Learner Satisfaction with Formative e-Assessment Strategies

The student experience with different aspects of online instructional settings has been the focus of many studies by educational practitioners and researchers. However, concerning technology-enabled formative assessment, little is known about student satisfaction with the different formative e-assessment strategies students are involved in. Using a 5-point Likert scale questionnaire, a web-based survey was developed to examine students’ satisfaction with the formative e-assessment strategies within an enriched virtual blended course. The results show that, in general, the students were satisfied with the quality of their engagement and the quality of feedback across all the formative e-assessment learning activities. The results also show that student satisfaction varied between and within the formative e-assessment strategies. However, the gap between the student satisfaction mean ratings across all formative e-assessment strategies was marginal and could not help the researchers decide which formative e-assessment strategy stood out as the most preferred one. Learner satisfaction ratings on the different formative e-assessment strategies were positively correlated with each other at various levels, but no relationship was found between students’ scores and learner satisfaction with formative e-assessment strategies. In the end, the study recommends a sustained and integrated use of all three formative e-assessment strategies (online knowledge survey, online student-generated questions and peer-responses, and electronic reflective journals) in the context of blended courses. The study also suggests further studies that would widen and diversify both the scope and the research instruments used to investigate learner satisfaction with formative e-assessment strategies.


Introduction
Recent developments in technology have changed in many ways how people work and live. In teaching and learning, technology is changing pedagogical practices, and with the advent of e-learning solutions, the Internet is revolutionizing instructional delivery methods. Higher education institutions seem to be under pressure [1] to partially or wholly move teaching, learning and assessment activities online. [2] assert that one of the important pedagogical factors to consider when designing online courses in higher education is to create a learning environment where the content and assessment are embedded and integrated into the learning experience and knowledge building.
Despite the fact that both formative assessment (assessment to support learning) and summative assessment (assessment for accreditation and validation) are important in online courses [3], there has been a tension between them [4]. Summative assessment has been dominating instructional processes in online higher education at the expense of formative assessment [5], [6], [3], [7]. For this reason, some authors (for example [8]) advocate for a shift from focusing heavily on summative assessment practices in order to develop instructional assessment tasks that not only assess the end-product or the performance but also provide ongoing feedback.
Some studies have demonstrated that the effective use of technology can improve and support formative assessment practices [9], [10]. Technology allows students to monitor their understanding whenever and wherever they want [11]. Technology can also support immediate feedback and allow a rapid change of students' misconceptions [5]. Technology helps in speeding up tracking, tracing, storing, processing and visualising students' results as well as actions [11]. In addition, technology can be a "resource-efficient way" to give timely feedback to students [12].
It is important to note that, amid the progressive increase in the use of new technologies to support Formative Assessment (FA), the consideration of students' perceptions is of paramount importance. Students' acceptance of and attitude towards these technologies seem to be among the determining factors [13]. Research studies on students' attitude towards online FA [14], [15], [16], [17], [18] have mainly focused on students' views and attitudes towards online FA, with little emphasis on students' satisfaction. Therefore, the present study aims at exploring student satisfaction with formative e-assessment strategies. This is a retrospective study that looks back at three previous studies about the implementation of formative e-assessment strategies in real classroom settings. We examined how the students perceived the formative e-assessment strategies (online knowledge survey; online student-generated questions and peer responses; and structured electronic reflective journals) they were involved in.

Literature Review
Studies on student perceptions of formative e-assessment have been contextualised within a growing need to respond to universities' concerns about the effectiveness and quality of online courses. Research on student satisfaction with online courses in higher education has involved both graduate and undergraduate students and across diverse populations of students [19]. In the following paragraphs, some of the theoretical approaches and models that have been used to define, understand, and assess the student satisfaction with online courses are reviewed.
The study by [20], which investigated the relationship between the constructs of a web-based learning environment and student satisfaction, identified five key constructs that predicted students' perception of online courses: learner relevance, active learning, authentic learning, learner autonomy, and computer technology competence. According to [21], many studies have established that both the quantity and quality of student interactions are highly correlated with student satisfaction in most learning environments. Student interaction plays an important role and constitutes one of the major factors that determine student satisfaction in online courses [22].
In their study that examined the satisfaction of students and instructors with online learning tools and resources, [23] used the Expectancy Confirmation Theory (ECT) and the Technology Acceptance Model (TAM). Their study's findings indicated that student expectation was a very important factor that helped teachers design and develop effective technology-based instructional activities that enhance student learning. By extending research on the community of inquiry framework [24] to understand online learning, [25] examined the effects of technology on the community of inquiry (social, content and teaching presence issues) and satisfaction with online courses. They specifically examined how the Learning Management System (LMS) provided people with the ability to take actions in an online course, and one of the major findings was that satisfaction with the LMS predicted course satisfaction.
Previous research studies also focused on some formative e-assessment-related areas such as student perceptions or views, effect, student satisfaction, evaluation, and student attitudes. A university student survey by [26] indicated students' positive perceptions of anonymous online peer feedback in formative e-assessment. Students' positive perceptions were also observed in the studies by [27] and [17], which respectively found that the students valued the utilization of formative feedback in an online learning environment and perceived the use of online homework for formative assessment as useful.
Students' perceptions of the effect of formative e-assessment on their learning have also been investigated. [28] conducted interviews and a student survey to study students' perceptions of a "novel formative assessment" that involved students solving circuit problems online individually. Compared to a traditional online discussion, the majority of students reported more engagement, more learning, and more interaction with the instructor. In addition, [29] and [30] found that the students thought their learning was improved as a result of taking part in online formative assessment instructional activities.
The few research studies that have focused on student satisfaction with formative e-assessment practices reported students' high satisfaction with e-assessment [31] and with a web-based formative assessment [32], and found that the positive and collaborative learning resulting from online peer assessment led students to report strong satisfaction [33]. Research in this area has also focused on students' evaluation of the effectiveness of formative e-assessment [34], [35] and students' attitudes towards different aspects and strategies of online formative assessment [36].
A close look at the research studies highlighted above leads to two main observations. Firstly, in most cases, these previous studies were not based on the principles of good formative assessment and feedback practice proposed, for example, by [37], which may result in increased learning benefits when applied using technology in the teaching and learning process. The second observation that can be drawn from the reviewed research studies is that the focus was put on formative e-assessment strategies other than the ones the present study is concerned with, which are: online knowledge survey, online student-generated questions and peer-responses, and online reflective electronic journals. These strategies are briefly described in the following paragraphs.
Online knowledge surveys consist of sets of questions that cover the entire content of an [online or blended] course [38]. The students are expected to address these questions, not by providing actual and correct answers, but instead by responding to a rating scale of one's own confidence to respond with competence to each question [39]. Knowledge surveys are used as instructional tools students and teachers can use to analyse the student understanding of the course contents, and organise and review the curricula [40]. Knowledge survey practices can serve formative assessment purposes by providing students with an opportunity to monitor their understanding of the learning material, to know where and when they have deficiencies [39] and provide them with a sense of control over their own learning by making the learning more visible [41].
The use of student-generated questions can promote student learning. Student-generated questions can be an effective approach to assessment in online courses [42]. The questioning process is fundamental to intelligent understanding [43], and "a hallmark of self-directed, reflective learners in their ability to ask questions that help direct their learning" ([44], p. 522). Students' questions can serve formative assessment purposes by providing instructors with "incidental" opportunities for gathering information about the students' understanding [4]. They can also help students with self-reflection and checking of their understanding throughout the teaching and learning process [45].

Reflective learning journals are the written records that are created by the students as they reflect on their learning, on the critical events or incidents that were involved, or on the student-teacher interactions over a given period of time [46]. According to [47], learning journals can take various forms: they can be highly structured or free, on paper or in electronic form. As far as formative assessment is concerned, reflective journals help understand the progress of students by providing good opportunities for teachers to gain better insights into how the students think and feel about the course, and into the learning progress of the students throughout the courses [48].
The present study aims at expanding and taking a step further the investigation of student experience with formative e-assessment practices. Specific to this study is the measurement of learner satisfaction with formative e-assessment strategies, which is driven by "the quality of student engagement" and "the quality of feedback" that seem to be important characteristics of a successful assessment that supports students' learning [49]. According to these authors, the analysis of the quality of student engagement in any successful assessment task should focus on a number of criteria. Those criteria include the sufficiency of assessment tasks, the variation and distribution of assessment tasks across all the course sections, whether assessment tasks are engaging enough (communicating clear and high standards and criteria), and whether assessment tasks engage students in meaningful learning activities (whether they are worth the time and effort the students spend on them). They also argue that the analysis of the quality of feedback in any successful assessment task should focus on the sufficiency of feedback, the details of feedback, the timeliness of feedback, the appropriateness of feedback to the purpose of the assessment task, and the clarity of feedback (whether the feedback clearly describes what the learner is supposed to do). The following research questions guided this study:

• To what extent are students satisfied with the quality of their engagement with formative e-assessment learning activities?
• To what extent are students satisfied with the quality of feedback received in formative e-assessment learning activities?
• Does the student satisfaction differ between and within formative e-assessment strategies?
• Does a relationship exist between the learner satisfaction ratings on formative e-assessment strategies and the students' scores?
• How are the learner satisfaction ratings on different formative e-assessment strategies related to each other?
The common denominator for all these research questions is the "learner satisfaction." However, each research question addresses a different aspect of the study as it is illustrated in Figure 1 below:

Context of the study
The present study aims at measuring learner satisfaction with formative e-assessment strategies. It is a retrospective account of three studies about the implementation of formative e-assessment strategies in real classroom settings at the University of Rwanda-College of Education. We examined how the students perceived three formative e-assessment strategies: online knowledge survey, online student-generated questions and peer responses, and structured electronic reflective journals.
Online knowledge survey: This strategy, used in the study by [50], served as a formative e-assessment strategy to help students monitor their understanding and progress throughout an enriched virtual course. Online knowledge surveys (KS) were developed based on three key elements: the learning objectives, the module content, and the revised Bloom's Taxonomy of learning objectives [51]. The KS question items were developed using the Moodle Feedback module and were sequenced along the four sections of the blended course.
Online student-generated questions and peer responses: These were used as a formative e-assessment strategy in the study by [52]. The student-generated questions and peer-responses were used in the context of student-based formative e-assessment through peer scaffolding. Students were invited to generate learning-material-related questions and to seek responses and support from peers. After each section of the blended course, the student-generated questions and answers were retrieved from the Moodle learning management system for analysis by means of an assessment rubric that was structured on three levels of thinking: basic, medium, and high.
Structured electronic reflective journals: These were used as a formative e-assessment strategy in the study by [53] in a blended course. At the end of each course section, the participants were invited to reflect on their learning experience by completing a reflective e-journal. The students' reflective e-journals were analysed by means of a reflection framework, and students were categorized into four groups: critical reflectors, reflectors, non-reflectors, and beginners.

Participants
The measurement of student satisfaction with formative e-assessment strategies covered three studies ([50], [52], and [53]) that had involved year-three student-teachers (n = 109). These students accessed and engaged with formative e-assessment learning tasks that were built into the blended course (EDC 301: Integration of ICT in Education), which was delivered through the University of Rwanda online learning platform (Moodle).

Instruments
This study used a self-completion questionnaire, which facilitates the collection of large amounts of information in a relatively short time from respondents who have a greater feeling of anonymity and are more comfortable expressing their real feelings [54].
A twenty-seven-item questionnaire was used to measure student satisfaction with formative e-assessment strategies. The respondents were invited to indicate their level of satisfaction with the question items' statements on a five-point Likert-type satisfaction scale. These items were constructed based on "the quality of student engagement" and "the quality of feedback," which [49] consider to be the two important characteristics of any successful assessment that supports students' learning.
As this study used a multiple item Likert-scale based questionnaire, we deemed it necessary to determine if the scale was reliable. To determine the level of internal consistency among the questionnaire items, Cronbach's alpha test was run in SPSS for 12 items that measured student satisfaction with e-assessment strategies in terms of the quality of student engagement and for 15 items that measured student satisfaction with e-assessment strategies in terms of the quality of feedback. The Cronbach's alpha was respectively 0.878 and 0.951 for the quality of student engagement items and the quality of feedback items. Since the commonly recommended acceptable level of internal consistency is ≥ 0.70 [55], the test results indicated a high level of internal consistency for the Likert scale that was used.
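As a hedged illustration only (the study itself used SPSS), the internal-consistency statistic reported above can be reproduced from an item-response matrix with a few lines of Python. The `demo` matrix below is invented for demonstration and does not reflect the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of scale items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 5 respondents x 3 items on a 1-5 satisfaction scale
demo = np.array([[5, 4, 5],
                 [4, 4, 4],
                 [3, 3, 4],
                 [5, 5, 5],
                 [2, 3, 2]])
print(round(cronbach_alpha(demo), 3))
```

A value at or above the commonly cited 0.70 threshold [55] would, as in the study, indicate acceptable internal consistency.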

Data collection procedure
The questionnaire that was used in this study was made of self-rating questions where respondents were asked to rate how satisfied they were with a statement. A 5-point Likert-type scale questionnaire was created using Google Forms and the link was sent to the respondents via email. The questionnaire was pre-tested by asking some of the potential respondents to complete it before it was sent out to the actual research respondents. The pre-test allowed the researchers to identify and address potential flaws within the questionnaire. In this study, 109 electronic questionnaires were sent out. Of these, 108 satisfaction questionnaires were returned, representing an overall response rate of 99%.

Data analysis
Through Google Forms, the respondents' answers were automatically saved to a computer file at the time of collection. These data were subsequently exported, using predefined codes, into Excel sheets that are compatible with the SPSS analysis software. Descriptive statistics were used first and included the measurement of means and standard deviations. In addition, a Cronbach's alpha reliability test was done to measure the level of internal consistency of the Likert scale that was used. A One-Way ANOVA test was run in SPSS to determine whether there was a significant difference in learner satisfaction between and within the formative e-assessment strategies. Finally, a Pearson's r analysis was conducted to determine whether there was any relationship between learner satisfaction and students' scores, and between the learner satisfaction ratings on different formative e-assessment strategies.
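The descriptive step can be sketched outside SPSS as well. The snippet below is a minimal illustration with hypothetical column names and made-up ratings; it computes the kind of per-strategy means and standard deviations the study reports:

```python
import pandas as pd

# Hypothetical export: one row per respondent-strategy pair, one column per
# Likert item; column names are invented for illustration.
data = pd.DataFrame({
    "strategy": ["knowledge_survey", "knowledge_survey",
                 "e_journal", "e_journal"],
    "engagement_item1": [5, 4, 4, 3],
    "feedback_item1":   [4, 5, 3, 4],
})

# Mean and sample standard deviation (ddof=1, as SPSS reports) per strategy
summary = data.groupby("strategy").agg(["mean", "std"])
print(summary)
```

In practice each strategy would contribute 12 engagement items and 15 feedback items, matching the questionnaire structure described above.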

4.1 To what extent are the students satisfied with the quality of their engagement with the formative e-assessment learning activities?
To answer this question, data collected from 12 items (item number one through item number twelve) of the questionnaire were used. The students were asked to report how they perceived the engagement level of the formative e-assessment activities within the EDC 301 course. The students were asked to indicate their level of satisfaction with the formative e-assessment tasks in terms of the sufficiency of formative e-assessment tasks, their variation and coverage, engaging standards and criteria, and whether completing the assessment tasks was worth the time and effort the students spent.
In general, the students were satisfied with the quality of every engagement criterion across all the formative e-assessment strategies (see Table 1). The highest level of the Likert satisfaction scale that was used was 5 (very satisfied) and the students' satisfaction mean rating (see Table 1) was ≥ 4.28 (SD = 0.70).

If taken separately, there are variations in students' satisfaction with formative e-assessment strategies regarding the quality of student engagement. The results show that the respondents were, in most cases, predominantly satisfied with the quality of student engagement in the formative e-assessment tasks they completed in the knowledge survey. In fact, within the knowledge survey, the students' satisfaction mean rating was 4.64 (SD = 0.60) for the variation of assessment tasks, 4.56 (SD = 0.63) for the completion of assessment tasks being worth the time and effort the students spent, 4.55 (SD = 0.54) for the sufficiency of assessment tasks, and 4.45 (SD = 0.69) for the assessment tasks being engaging enough.
The students' satisfaction mean rating was the same for some engagement criteria of the formative e-assessment strategies. The mean rating was 4.38 both for the completion of assessment tasks being worth the time and effort the students spent (SD = 0.67) and for the assessment tasks being engaging enough (SD = 0.69) in online student-generated questions. The mean rating was also the same (4.37) for both the variation (SD = 0.68) and the sufficiency (SD = 0.73) of assessment tasks in electronic reflective journals. This was also observed for the variation of assessment tasks (M = 4.31, SD = 0.69) in online student-generated questions and the assessment tasks being engaging enough (M = 4.31, SD = 0.72). The results show that, based on the extent to which the students were satisfied with the quality of their engagement, the knowledge survey was the e-assessment strategy the students were most satisfied with, followed by electronic reflective journals and online student-generated questions.
Two clusters emerged from the analysis of the student satisfaction mean ratings of the quality of student engagement with formative e-assessment tasks. Three formative e-assessment engagement criteria within the knowledge survey were included in the first cluster and had student satisfaction mean ratings greater than 4.50. The nine remaining engagement criteria were included in the second cluster, with student satisfaction mean ratings of 4.28 ≤ M ≤ 4.45.

4.2 To what extent are the students satisfied with the quality of feedback received in formative e-assessment learning activities?
To answer this question, the data collected from 15 items (item number 13 through item number 27) of the questionnaire were used. The students were asked to report how they perceived the quality of feedback within formative e-assessment activities they were involved in. Using a 5-point scale (very satisfied: 5, satisfied: 4, neither satisfied nor dissatisfied (neutral): 3, dissatisfied: 2, and very dissatisfied: 1), students were asked to indicate their level of satisfaction with the formative e-assessment tasks in terms of the sufficiency of feedback, the details of feedback, the timeliness of feedback, the appropriateness of feedback, and the clarity of feedback.
In general, the students were satisfied with the quality of every feedback criterion across all the formative e-assessment strategies. The highest level of the Likert satisfaction scale that was used was 5 (very satisfied) and the students' satisfaction mean rating (see Table 2) was ≥ 4.03 (SD = 0.93).
If taken separately, there are variations in students' satisfaction with formative e-assessment strategies regarding feedback. The results show that the respondents were, in most cases, predominantly satisfied with the quality of feedback in the formative e-assessment activities they completed in the knowledge survey. In fact, the knowledge survey takes the first three highest mean ratings for student satisfaction with the quality of feedback. The students' satisfaction mean rating was 4.41 (SD = 0.74) for the appropriateness of feedback, 4.25 (SD = 0.80) for the clarity of feedback, and 4.23 (SD = 0.73) for the timeliness of feedback. The students' satisfaction mean rating was the same (4.13) for two feedback criteria of the formative e-assessment strategies: the details of feedback (SD = 0.86) and the sufficiency of feedback (SD = 0.81) in electronic reflective journals. The results show that, based on the extent to which the students were satisfied with the quality of feedback, the knowledge survey was the e-assessment strategy the students were most satisfied with, followed by electronic reflective journals and online student-generated questions.

4.3 Does the student satisfaction differ between and within formative e-assessment strategies?
The results illustrated in Tables 1 and 2 show that there is variation in the extent to which students were satisfied with formative e-assessment strategies. However, to determine whether the differences were statistically significant, the analysis of the results was taken to another level. A One-Way ANOVA test (see Table 3) was run in SPSS assuming the equality of the means for learner total satisfaction scores of the three formative e-assessment strategies (H0: µKnowledge survey = µOnline student-generated questions = µElectronic reflective journals). The one-way between-subjects analysis of variance revealed a reliable effect of learner satisfaction with the individual formative e-assessment strategy on the overall learner satisfaction with the three formative e-assessment strategies, F(2, 321) = 3.61, p = 0.03, MSerror = 29.60, α = 0.05. Since the p value associated with the F ratio is less than the α level, we could reject the null hypothesis that the means for learner total satisfaction scores of the three formative e-assessment strategies are equal. Thus, student satisfaction was different between and within the formative e-assessment strategies the students were involved in.

Since the F ratio was statistically significant, we looked at the multiple comparisons output (see Table 4) to analyse the results of the Least Significant Difference (LSD) post-hoc tests. The results illustrated in Table 4 show that there was a significant difference in learner satisfaction total score when paired comparisons were conducted between the online knowledge survey and online student-generated questions (p = 0.02), and between the online knowledge survey and electronic reflective journals (p = 0.03). However, the paired comparison did not show a significant difference (p = 0.83) in learner satisfaction total score between online student-generated questions and electronic reflective journals.
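The omnibus test and pairwise comparisons can be sketched in Python with SciPy. The satisfaction totals below are randomly generated stand-ins (the real values came from the questionnaire), and plain independent t-tests are only an approximation of SPSS's LSD procedure, which uses the pooled error term:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative satisfaction totals, 108 respondents per strategy
# (matching F(2, 321), i.e. N = 3 x 108 = 324); values are invented.
ks  = rng.normal(115, 5, 108)   # knowledge survey
sgq = rng.normal(112, 5, 108)   # student-generated questions
erj = rng.normal(112, 5, 108)   # electronic reflective journals

# Omnibus one-way ANOVA: H0 is that all three group means are equal
f_stat, p_value = stats.f_oneway(ks, sgq, erj)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Unadjusted pairwise t-tests as a rough stand-in for the LSD post-hoc
pairs = {"KS vs SGQ": (ks, sgq), "KS vs ERJ": (ks, erj), "SGQ vs ERJ": (sgq, erj)}
for name, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```

As in the study, a significant omnibus F would justify examining the pairwise comparisons.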

4.4 Does a relationship exist between learner satisfaction ratings on formative e-assessment strategies and the students' scores?
To measure the relationship between the students' scores on the blended course and learner satisfaction with formative e-assessment strategies, a Pearson's r correlation coefficient (see Table 5) was computed in SPSS. The students' scores (M = 69.2, SD = 12.36) were correlated with the learner satisfaction ratings on the quality of student engagement and the quality of feedback within formative e-assessment learning activities. The Pearson's r analysis showed that there was no correlation between these variables: no relationship was found between the students' scores and the learner satisfaction ratings on the quality of student engagement and the quality of feedback across all formative e-assessment strategies.
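A minimal sketch of this correlation step is shown below. The two vectors are invented (scores are drawn using the study's reported mean and SD; the satisfaction totals are arbitrary), so this illustrates only the computation, not the study's result:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative vectors: one value per student (n = 108)
scores = rng.normal(69.2, 12.36, 108)                     # course scores
satisfaction = rng.integers(27, 136, 108).astype(float)   # 27-item scale totals

# Pearson's r with its two-sided p value
r, p = stats.pearsonr(scores, satisfaction)
print(f"r = {r:.2f}, p = {p:.3f}")
```

A non-significant p value here, as reported in the study, indicates no evidence of a linear relationship between scores and satisfaction.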

4.5 How are the student satisfaction ratings on different formative e-assessment strategies related to each other?
A Pearson's r data analysis (see Table 5) was run in SPSS to measure the relationship between different learner satisfaction ratings on the quality of student engagement and the quality of feedback within formative e-assessment strategies. In general, a Pearson's r data analysis revealed low, moderate, and high positive correlations.
Firstly, a high positive correlation (.54 ≤ r ≤ .59) was found where the students who reported high satisfaction ratings in one assessment strategy were highly likely to report higher satisfaction ratings in another formative e-assessment strategy. This was observed between the learner satisfaction with the quality of feedback in the online knowledge survey and in online student-generated questions, in online student-generated questions and electronic reflective journals, and in the online knowledge survey and electronic reflective journals.
Secondly, a moderate positive correlation (.30 ≤ r ≤ .43) was also found where the students who reported high satisfaction ratings in one assessment strategy were moderately likely to report higher satisfaction ratings in another formative e-assessment strategy. This was observed, for example, between the learner satisfaction with the quality of student engagement and the quality of feedback in electronic reflective journals, and between learner satisfaction with the quality of student engagement in the online knowledge survey and electronic reflective journals, and in online student-generated questions and electronic reflective journals.
Thirdly, there was a low positive correlation (.19 ≤ r ≤ .26) where the students who reported high satisfaction ratings in one assessment strategy were less likely to report higher satisfaction ratings in another formative e-assessment strategy. This low positive correlation was observed, for instance, between the learner satisfaction with the quality of student engagement in online student-generated questions and the learner satisfaction with the quality of feedback in electronic reflective journals. In addition, a low positive correlation was revealed between the learner satisfaction with the quality of feedback in the online knowledge survey and the learner satisfaction with the quality of student engagement in online student-generated questions.

Discussion and Conclusion
In this study, a satisfaction questionnaire was used to measure learner satisfaction with the formative e-assessment strategies the students were involved in. The construction of the learner satisfaction questionnaire was guided by 'the quality of student engagement' and 'the quality of feedback' as the two important characteristics of any successful assessment that supports students' learning [49]. The present study's aim was to measure the extent to which the students were satisfied with the quality of student engagement and the quality of feedback in formative e-assessment learning activities and to determine any differences in student satisfaction between and within formative e-assessment strategies. In addition, the study aimed to determine whether there was a relationship between the learner satisfaction ratings on formative e-assessment strategies and the students' scores, and to examine the relationship between the student satisfaction ratings on different formative e-assessment strategies.
In general, the students were satisfied with the quality of their engagement and the quality of feedback across all the formative e-assessment strategies. These findings concur with some previous studies which concluded that students reported positive perceptions of online formative assessment [27], [17] and were highly satisfied [31], [32] with different e-assessment criteria. The present study showed that the students were satisfied with the quality of their engagement with formative e-assessment tasks. These findings are in accordance with [28]'s study, where the students reported more engagement, more learning, and more interaction in online formative assessment. Concerning the quality of feedback, the present study indicated that the students were satisfied with the quality of every feedback criterion across all the formative e-assessment strategies. This extends [26]'s findings about the students' positive perceptions of online feedback in formative e-assessment.
A Pearson's r data analysis revealed low, moderate, and high positive correlations between student satisfaction ratings on different formative e-assessment strategies. In most cases, it was found that the students who reported high satisfaction ratings in one assessment strategy were moderately likely to report higher satisfaction ratings in another formative e-assessment strategy. However, unlike some previous research studies [56], [57] that established a link between learner satisfaction with various aspects of online or blended learning and performance, this study found no relationship between the students' scores and learner satisfaction (see also [58]) with formative e-assessment strategies.
A one-way between-subjects analysis of variance revealed that student satisfaction was different between and within the formative e-assessment strategies. In addition, for both the quality of student engagement and the quality of feedback, the results showed that the knowledge survey was the e-assessment strategy that the students were most satisfied with, followed by electronic reflective journals and online student-generated questions. In line with [59], who claimed that the use of a Likert-scale questionnaire does not allow the researcher to distinguish between spontaneous and constructed responses, the present study's results also showed that the gap between the student satisfaction mean ratings across all formative e-assessment tasks was marginal and could not help the researchers clearly discriminate between these formative e-assessment strategies in terms of learner satisfaction.
Thus, as a conclusion, the study recommends a sustained and integrated use of all three formative e-assessment strategies in the context of blended courses. Further studies are also recommended: there is a need to widen and diversify the scope of the study of learner satisfaction with formative e-assessment strategies by extending it to more than one course and one classroom, and by using more open-ended research instruments that would allow the respondents to freely express their views.