A Usability Testing of a Higher Education Mobile Application Among Postgraduate and Undergraduate Students

The advancement of technology has produced innovative applications for educational purposes, and numerous higher education applications have emerged in recent years. It is therefore essential to strive for highly usable applications. This study presents a usability evaluation of a higher education mobile application at Universiti Kuala Lumpur. The study was conducted using a usability testing methodology based on the ISO 9241-11 standard to measure three usability factors: effectiveness, efficiency, and satisfaction. Sixty-four participants from postgraduate and undergraduate programs were purposively selected. The results reveal that the application is effective and efficient and meets users' satisfaction, with a satisfaction rate of 82.15%. However, several issues were highlighted by the respondents during the usability test. Therefore, despite the positive results, this study proposes several recommendations for improving the usability of the application.
Keywords—Usability, usability testing, mobile usability, higher education app


Introduction
Owing to the strong adoption of technology by new generations, the educational environment is swiftly evolving. Mobile applications for teaching and learning are becoming more popular at various education levels [1,2,3]. Higher education institutions are responsible for providing convenient infrastructure for all students and should focus more on higher-level interactions that entail a significant change in an information- and communication-based society. The exponential advancement of mobile technology has been generally recognized at all educational levels [4] and has provided an excellent opportunity for outside-classroom engagement [5]. However, the factors that contribute to a mobile educational application's effectiveness depend on the usability of the application itself, its capability to attract users' interest [6], and its appropriateness [7]. Therefore, as the number of educational applications increases, so does the demand for a better user experience while using them. An application's usability must be evaluated with suitable usability evaluation methods [8].
Evaluating the functionality of mobile applications alone is not sufficient. We argue that examining an application's usability is crucial even after the development team has fully functionally tested it. The real effects of using mobile apps in higher education will depend on students' points of view and on whether the apps are usable, so it is essential to select the most suitable measurement methods [9]. A proper usability evaluation should assess an application's effectiveness, efficiency, and user satisfaction [10]. This study focuses on evaluating a higher education application, namely UniKL Link, which was developed specifically for Universiti Kuala Lumpur (UniKL) students for information retrieval, service assistance, and keeping them updated with the latest news and announcements. The usability testing was carried out to determine the application's usability level and to provide recommendations to improve the UniKL Link application for the university, with the aim of enhancing the user experience. Mobile learning applications can improve students' responsibility and self-direction, which contributes significantly to their involvement [11]. To understand an application's influence on students, it is essential to perform comprehensive assessments and evaluations with them. These findings make a considerable contribution to the improvement and promotion of the application, thereby motivating students to use the app service and substantially raising the service's standard. Providing a handy application that is accessible to students will also help the university community grow. This article aims to outline the usability evaluation of a mobile application used in higher education. The main focus is to identify its usability level according to the ISO standard: effectiveness, efficiency, and satisfaction.
The study's methodology is explained in Section 3, results and discussion are presented in Section 4, and conclusions and recommendations are proposed at the end of the article.

Usability
A software application's usability relies on a user-friendly interface and on how much convenience the user feels when using the application [12]. Evaluating usability involves examining the minimum accomplishment degree of various usability factors, which can also be characterized as the degree to which a product or service can guarantee maximum user satisfaction, efficiency, effectiveness [13,14,15,16], learnability, and ease of use [14] for various types of users. Usability includes the ease with which the customer can access a product or service with minimal complexity and optimum satisfaction while meeting the desired goals [16]. Usability is a widespread concern in human-computer interaction and in web technology in particular [17].
The International Organization for Standardization (ISO) 9241-11 standard describes usability as the degree of quality with which specified users can use a system with effectiveness, efficiency, and satisfaction [10]. The two most common metrics are effectiveness, which identifies how accurately the user can execute the tasks, and efficiency, which reflects how easily the user can reach the goal using the available services and tools [18]. There is a high correlation between efficiency and effectiveness: when the interface is simple, it is easier to communicate with, resulting in better job performance. Both efficiency and effectiveness in turn influence user satisfaction. Usability assessment also involves evaluating the core concepts of web accessibility features. Distinct models are used to calculate and test the various usability factors of a software application interface [19]. Usability usually requires an impartial view of consistency. The cornerstone of the usability testing approach is evaluation or measurement as participants engage with the product or service. Traditional usability evaluation concepts mainly concentrate on user activities and their efficiency in task completion, that is, the functional dimensions of the interaction between user and product [20]. The challenges of software interface design involve a broad range of usability aspects such as navigation, content usefulness, and user experience. A comprehensive usability evaluation is an approach that can be used to identify user interface design problems. Usability evaluation or research depends on measuring how people interact with a specific product or service. Therefore, it can be concluded that usability evaluation, which is also known as usability testing, is mainly used for assessing users' interaction with a particular application or product [21].

Usability characteristics for educational-based application
Usability evaluation can be conducted in numerous ways: pluralistic walkthrough, cognitive walkthrough, heuristic evaluation, and guideline reviews [12]. The most common measures of usability are effectiveness, efficiency, and satisfaction. Different measurement criteria should be used to assess different usability factors: effectiveness can be measured through the task completion rate, efficiency can be measured through the time users take to complete the tasks, and satisfaction is measured by attending to users' confidence in how well the application fulfills their expectations [12,14]. Usability evaluation requires user involvement to succeed.
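The three measurement criteria above map directly onto simple calculations over raw test records. The sketch below is illustrative only, with hypothetical data rather than the study's actual records, and assumes satisfaction is expressed as a percentage of the maximum Likert score:

```python
# Illustrative sketch (hypothetical data): computing the three usability
# measures described above from raw usability-test records.

def effectiveness(completions):
    """Task completion rate: share of attempted tasks completed successfully."""
    return sum(completions) / len(completions)

def efficiency(times_seconds):
    """Mean time on task, in seconds (lower means more efficient)."""
    return sum(times_seconds) / len(times_seconds)

def satisfaction(likert_scores, scale_max=5):
    """Mean questionnaire rating as a percentage of the scale maximum."""
    return 100 * sum(likert_scores) / (len(likert_scores) * scale_max)

# Hypothetical records for one participant
completed = [True, True, False, True]   # task outcomes
times = [8, 12, 65, 9]                  # seconds per task
ratings = [5, 4, 4, 3, 5]               # 5-point Likert answers

print(effectiveness(completed))   # 0.75
print(efficiency(times))          # 23.5
print(satisfaction(ratings))      # 84.0
```

In a real study each measure would be aggregated across all participants and tasks; the function names and data here are assumptions for illustration.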
Research on usability testing was conducted by [14]. That study performed a usability evaluation of an M-learning application to test effectiveness, efficiency, and user satisfaction using two usability evaluation methods: a formal experiment and a System Usability Scale (SUS) questionnaire with 100 participants. The findings indicated that the application was effective, efficient, and user-friendly. Another study was conducted by [22], who evaluated a Brazilian university's mobile social application intended to improve the interaction between students and the university. The study collected quantitative and qualitative data, adapted from the Computer System Usability Questionnaire (CSUQ) and the Questionnaire for User Interface Satisfaction (QUIS). The latter used the observation technique to identify the usage of the application among participants, together with semi-structured contextual interviews. In [23], a usability study of university websites in the Kyrgyz Republic was conducted, looking into the aspects of visibility, usability, and accessibility. The usability results showed that most of the websites had long loading times, more than half provided inaccessible links, and all of them had browser compatibility problems. The study suggested that universities in the Kyrgyz Republic need to improve their websites' visibility, usability, and accessibility. Another study conducted by [24] used questionnaires to evaluate the user interface's navigability and its design elements in the MyLA usability instrument. Students' feedback was collected via an online questionnaire on a five-point Likert scale and through eye-tracking. Another usability study assessed a mobile counseling application based on Gardner's multiple intelligence theory [25]. The study noted positive results on the application's usability and quality requirements.
Despite the many studies assessing educational apps' usability, most focus on the scope of teaching and learning, and very few evaluations focus on campus service-based applications.

Testing the usability
A usability test is one of the essential activities in software development, as it helps to produce better software with maximum user satisfaction. Developers can use the outcome of the usability test for software improvements. Conducting a usability test may involve task completion metrics, observing errors, measuring time spent, or preparing subjective satisfaction questionnaires. [26] characterized effectiveness as the consistency of users in completing their tasks without mistakes, efficiency as how easily users can reliably, quickly, and thoroughly complete their tasks, and satisfaction as the positive attitude of users towards using a system, together with the ease with which they execute their tasks on it. Therefore, the presence of users is critical for the accuracy of usability testing results.

Data collection method and analysis
As described in ISO 9241-11, usability testing should be conducted with specified users by assessing the application's degree of effectiveness, efficiency, and satisfaction [10]. Usability testing for the UniKL Link application was implemented with the following procedure:
• Step 1: Prepare a usability test plan and test script (usability test tasks), and recruit the test participants.
• Step 2: Prepare the test session and brief the participants before distributing the test scripts (tasks). The observation technique was used to record task completion rates and times, and the think-aloud protocol was used to gather feedback.
• Step 3: Analyze the findings. The participants' questionnaire outcomes were statistically analyzed and presented as mean calculations representing the satisfaction rate, while participants' issues and feedback were investigated, with justifications and recommendations provided accordingly.

Testing environment
The evaluation of the UniKL Link application was conducted with a total of 64 participants. Seven of them were from the Master of Computer Science program, while the remaining 57 were undergraduate students from various disciplines, aged 20 to 27. All participants performed the tasks using their own mobile devices, running on the Android or iOS platform. While laboratory research may be helpful, it is not practical for such tests to measure real user actions in a real environment; therefore, actual mobile devices are highly suggested for use during the assessment [27]. It is necessary to assess users' actual experiences and actions in the real world using actual tools and devices, as these experiences lead to more accurate results. With this approach, the results may also capture issues caused by different devices or platforms.
All participants were informed about the test in advance, and two appointed facilitators monitored the sessions. The tests were separated into several sessions due to participants' availability, and each test took about one hour. During each session, the facilitator described the test activities and ensured that every participant understood the procedure. Participants were asked to answer a brief background questionnaire, which included questions on their program, semester, and frequency of using mobile phones for educational purposes. The evaluation of the gathered data was carried out in the university's computer labs.

Task criteria for testing
Participants were provided with a list of tasks to be tested, grouped according to module. A total of 30 task criteria needed to be accomplished. The participants read the task criteria and worked on retrieving the required information in the application while being observed by the appointed facilitators.

Usability evaluation
After interacting with the app through the first task (Section 3.3), the participants were asked to proceed with the post-task: completing a questionnaire rating the application on a 5-point Likert scale, ranging from Strongly Disagree (1) to Strongly Agree (5). The post-task scenario subjective measures include:
1. Do you think this application is easy to use?
2. Do you feel comfortable using this application on your smartphone?
3. Do you think it is easy to find the information needed?
4. Is the information provided in this application clear and understandable?
5. Do you think this application has a pleasant interface?
6. Do you feel that learning to use this application is easy?
7. Do you find it easy to view academic-related information?
8. Is this application able to speed up your task completion?
9. Does this application display error messages that help you fix the problems faced?
10. Do you think the organization of information in this application is clear?
11. Do you find it easy to perform academic-related features?
12. Does this application provide all the functions you expected it to have?
13. Does this application provide useful information to help you complete the tasks?
14. Do you find that this application has no broken links/menus/pages?
15. Do you find the steps to accomplish tasks in this application simple?
16. Does the application provide clear and descriptive information/instructions?
17. Do you feel that navigating through the pages is easy?
18. Do you have the intention to use this application in the future?
19. Will you recommend this application to others?
20. Overall, are you satisfied using this application?

Task completion rate and time
There were thirty tasks used in the study. Table 2 shows the time taken to complete each task, as observed and recorded by the facilitators. The participants were allocated a particular time to finish each task. Based on the findings, all participants completed all tasks, for a 100% completion rate; the effectiveness of the application is therefore high. Most of the tasks were accomplished in less than 10 seconds by most participants. However, the averages are not consistent, because some respondents took longer (more than 60 seconds) to complete a task due to difficulties or confusion with some features. The longest times were spent on tasks 17, 19, and 25: view invoices and statements, view students' residential information, and search questions using keywords. For task 17, some students, especially those using iOS, were unable to download the invoice even after several tries. For task 19, those who were not staying at the hostel found it confusing when they could not find their residential information, while for task 25, the search function did not return any matches or feedback. Figure 2 shows the total time taken by five users in each module; the data of users 17, 18, 19, 20, and 21 were highlighted, with these five students chosen at random from the 64. The highest total time taken to accomplish a module's tasks was 217 seconds, in the service desk module, and the lowest was 9 seconds, in the login module by user 17, the attendance module by user 20, and the financial module by users 19 and 20. Thus, it can be concluded that the efficiency of this mobile application is at a moderate level.
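The per-task analysis above, i.e. averaging completion times and flagging tasks where any participant exceeded the 60-second difficulty threshold, can be sketched as follows. The timings and task labels below are made up for illustration, not the study's recorded data:

```python
# Sketch (hypothetical timings): per-task mean completion time, flagging
# tasks where any participant exceeded a 60-second difficulty threshold.
from statistics import mean

# task_id -> completion times (seconds) across participants (hypothetical)
times_by_task = {
    1: [6, 8, 7],
    17: [40, 72, 95],   # e.g. "view invoices and statements"
    25: [55, 80, 61],   # e.g. "search questions using keywords"
}

for task, times in sorted(times_by_task.items()):
    avg = mean(times)
    flagged = any(t > 60 for t in times)   # any participant over threshold?
    note = " (difficulty flagged)" if flagged else ""
    print(f"task {task}: mean {avg:.1f}s{note}")
# → task 1: mean 7.0s
# → task 17: mean 69.0s (difficulty flagged)
# → task 25: mean 65.3s (difficulty flagged)
```

Flagging on any single slow completion, rather than on the mean alone, mirrors the observation in the text that outliers over 60 seconds signaled confusion even when most participants finished quickly.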

Task ratings
After completing the first task, participants were required to complete a post-task rating of the application's usability, to identify satisfaction with the completed tasks, using a five-point Likert scale questionnaire ranging from 1 (strongly disagree) to 5 (strongly agree). The results of the post-task questionnaire are shown in Table 3. The majority of the participants, 60 of them (93.7%), agreed (agree or strongly agree) that they were satisfied using the UniKL Link application. This number corresponds to those who agreed to recommend the application to others and those who felt comfortable using it on their smartphones (92.2%), and is commensurate with the number of participants who thought the application easy to use (95.3%). A total of 85.9% have the intention to use this application in the future, and the same proportion found the steps to accomplish tasks in this application simple. Correspondingly, 87.5% agreed that navigating through the pages is easy, 89.1% inferred that the application provides clear and descriptive information/instructions, and 81.3% of the participants found it easy to perform academic-related features using this application.
In terms of information management, 75.0% of the participants thought that the information provided in this application was clear and understandable, 76.6% thought it was easy to find the information they needed, and 78.1% thought that the organization of information in this application was clear. 73.4% of the participants opined that this application could speed up their task completion, as they found it handier to use a mobile application than a web portal. Correspondingly, the same proportion agreed that the application provided sufficient information to help them complete the tasks.
On the other hand, only half of the participants agreed that this application provides all the functions they expected it to have. They suggested some other features to be made available in the app in the future, as highlighted in Table 5. Meanwhile, only 56.3% of the participants agreed that this application displayed error messages that helped in fixing the problems faced. Table 4 presents the satisfaction rate calculation. In conclusion, from the data gathered and the calculations, the satisfaction rate is 82.15%, which is satisfactorily acceptable. It can be concluded that participants liked most parts of the application, except for the extra features to be added and the handling of error messages.
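One plausible way to arrive at a satisfaction rate like the 82.15% reported above is to take the mean Likert rating across all twenty questions and all participants and scale it to a percentage of the maximum score. The sketch below works under that assumption; the ratings matrix is hypothetical, not the study's data from Table 4:

```python
# Sketch of a satisfaction-rate calculation, assuming (as the text suggests)
# the rate is the mean 5-point Likert rating across all questions and
# participants, expressed as a percentage of the maximum score.
# The ratings below are hypothetical.

def satisfaction_rate(ratings, scale_max=5):
    """ratings: one list of 1-5 Likert answers per participant."""
    all_scores = [score for participant in ratings for score in participant]
    return 100 * sum(all_scores) / (len(all_scores) * scale_max)

sample = [
    [5, 4, 4, 5, 3],   # participant 1
    [4, 4, 3, 5, 4],   # participant 2
]
print(round(satisfaction_rate(sample), 2))  # 82.0
```

With 64 participants answering 20 questions, the same formula would run over a 64 × 20 matrix of answers; the exact aggregation used in Table 4 is not specified in the text, so this is only one consistent reading.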

Issues
This evaluation received several positive comments from the participants, some of whom were quite happy with how the application supports their educational activities. However, the evaluation also gathered useful feedback from the participants concerning their issues, for improvement purposes. Investigations were made in line with the feedback received, and justifications and recommendations were provided accordingly. For example, participants noted that notifications should pop up on the phone screen rather than appear only when the app is opened, as the latter makes students less aware of the notifications they have received.

Future improvement 10: The invoice and the student statement have the same content but are placed in different sections under different names, which leads to confusion.
Future improvement 11: Any activity completed in the app should trigger a notification from the app itself.
Future improvement 12: Fix the crash that occurs when opening a finance invoice. Investigated: unsupported OS.
Future improvement 13: The welfare and activities section should be made available as soon as possible, because these are among the app's main components for students.

Some issues arose due to unsupported OS platforms and device condition or age. However, there were also issues related to participants' confusion, which led to dissatisfaction despite many positive comments. The enhancement should consider a more visible user notification, placed next to the app icon rather than inside the application; having the notification pop up on the phone screen draws more attention. Instead of showing an empty page after a search, the app should return an 'unable to return any match' message to prevent confusion. The application should also inform users about the minimum requirements for using the app. Adopting the think-aloud protocol enabled the team to identify the critical issues from the users' perspective and conduct an in-depth analysis.

Conclusion
This study performed a usability assessment of a higher education application with respect to effectiveness, efficiency, and satisfaction. The study was conducted with 64 UniKL students, a combination of postgraduate and undergraduate students, as participants. The results affirmed that 100% of participants could complete both task 1 and the post-task, which indicates that the app performs well in terms of effectiveness. Based on the time taken to complete each task, it can be concluded that the app is efficient to use. The satisfaction rate, measured using a post-questionnaire, is 82.15%, which is satisfactorily acceptable. Overall, the application promoted an excellent user experience for both postgraduate and undergraduate students.
Nevertheless, the participants highlighted some issues during the test sessions, which indicates that this application still requires improvement. A limitation of this study is that it did not explore the potential factors that may determine behavioral intention towards the adoption of a campus service application. Future studies are expected to provide comparative results after interface improvements are made based on this study's results, and to identify the determinant factors of behavioral intention to use the higher education mobile application [28, 29].

Authors
Nik Azlina Nik Ahmad is a lecturer at the Department of Software Engineering in Universiti Kuala Lumpur. She has experience working as a system analyst on a library system development and other information system projects. Her research interests lie in the areas of software testing, usability, user experience, requirements engineering, software maintenance, and mobile UX design. She is certified as a Certified Professional for Requirements Engineering (CPRE) and holds the Certified Tester Foundation Level (CTFL). She has worked on Fundamental Research Grant and Short-Term Research Grant projects in collaboration with other universities and has been appointed to the technical program committees of several international conferences. She has held the positions of Program Coordinator and Final Year Project Coordinator, served as External Examiner for Lincoln University College, sat on the Programme Advisory Committee for Malaysian Polytechnic, and was a panelist for a Technical and Vocational Education and Training (TVET) workshop.
Muhammad Hussaini is a postgraduate student at Universiti Kuala Lumpur. He successfully delivered his project on collaborative filtering and content-based recommendation engines. His research interests include machine learning, deep learning, software engineering, and agile software development.