Heuristic Evaluation of an Institutional E-learning System: A Nigerian Case

Many African academic institutions have adopted e-learning systems, since these systems enable students to learn at their own pace and time, without being restricted to the classroom. However, evidence of usability evaluation of e-learning systems in Africa is largely lacking in the literature. This paper reports an experimental heuristic evaluation of the e-learning system of a Nigerian university. The objective is to demonstrate the application of expert-based usability evaluation techniques, such as heuristic evaluation, for assessing the attributes of existing e-learning systems. The study revealed that while the e-learning system has strong credentials in terms of support for Web 2.0 activities, good learning content, and useful e-learning features, improvements are necessary in areas such as flexibility and efficiency of use, interactive learning, and assessment and feedback. The study adds to the body of extant knowledge in the area of usability evaluation of e-learning systems in African institutions.

Keywords: e-learning, heuristic evaluation, learning management systems, human-computer interaction


Introduction
E-learning connotes the body of activities that involve the use of electronic technology to aid the process of learning. Specific forms of e-learning include online courses, blended learning, tele-learning, distance learning, virtual learning, and the virtual campus [1]. In most organisational settings, e-learning is usually facilitated through a Learning Management System (LMS). An LMS is a software application that enables the administration, documentation, delivery, and attainment of e-learning objectives, and it can facilitate e-learning in virtual environments such as the Internet or an intranet. Specific examples of LMS include Moodle, DotLRN, Sakai, WebCT, Blackboard, OpenACS, ILIAS, and OpenUSS [2], [3]. In the context of a university, the provision of e-learning facilities is both primary and complementary to attaining teaching effectiveness and realising learning objectives. It is primary because e-learning allows a university to reach more students through academic programmes that run on the open and distance learning option; concrete initiatives such as online courses allow remotely based students to acquire education through virtual degree and distance-learning programmes. E-learning is complementary in a university because it fosters an atmosphere of blended learning, where regular on-campus students have access to courseware materials and other learning resources uploaded on the university's e-learning system, in addition to the face-to-face lectures they receive. This ensures that students can continue their learning beyond the four walls of the classroom, at their own pace and time. Thus, e-learning ultimately enhances the quality of students' learning experience and helps teachers become more effective in their use of pedagogy and in their general intellectual engagement with students.
However, the objective of e-learning, which is primarily to enhance the learning experiences of university students, cannot be achieved if the key usability issues that pertain to the e-learning system are not adequately catered for. The usability of a university's e-learning system depends largely on four metric dimensions: the quality of the e-learning tool, the quality of the courseware (learning content), the quality of the virtual environment, and the quality of the mode of engagement of the e-learning system. Therefore, a holistic evaluation of the usability of a university's e-learning system must consider these four metric dimensions in order to perform a credible assessment. According to [4], many of the previous studies on usability evaluation of e-learning in universities have not covered these dimensions adequately.
Generally, conducting usability evaluation of software platforms based on human-computer interaction principles is not cheap [5]; hence the advent of discount usability evaluation techniques such as heuristic evaluation, cognitive walkthrough, and user think-aloud [6]. Heuristic evaluation, which is used in this study, is particularly popular for evaluating web-based systems. It involves a few expert users assessing the usability of a system against a set of predefined heuristics. The objective is to quickly assess the usability attributes of a system in terms of its strengths and weaknesses, and to discover usability problems [5].
In this paper, a customised set of heuristics has been used to assess the usability of the Moodle-based e-learning system of Covenant University, a leading private university in Nigeria. The customised heuristics were carefully selected to cover the aforementioned four metric dimensions in order to credibly assess the usability of the University's e-learning system. The objective of this study is therefore to identify the strengths and weaknesses of the e-learning system of Covenant University, so that the management of the University can obtain useful feedback on areas that need improvement.
Although some studies on the evaluation of e-learning platforms in African institutions have been reported in the literature, most of them do not take the form of a heuristic evaluation as done in this study; rather, previous studies have focused largely on aspects of technology adoption and acceptance [7], [8], [9]. Our decision to consider the four metric dimensions that pertain to the usability of e-learning systems (viz. e-learning tool, content quality, virtual environment, and mode of engagement) in formulating the set of customised heuristics used for assessing the Covenant University e-learning system is a novelty as far as heuristic evaluation of institutional e-learning platforms is concerned.
The rest of this paper is organised as follows. Section 2 presents the background and related work. Section 3 describes the process used to define the heuristics for the evaluation, while Section 4 presents the study design, detailing the procedure used for the heuristic evaluation. Section 5 presents the results and a discussion. Section 6 concludes the paper with a brief note.

Background and Related Work
In this section, we discuss the key contextual themes that pertain to this work, such as e-Learning, learning management systems, expert usability evaluation, and related work.

E-learning
E-learning refers to the use of information and communication technology (ICT) to enhance and support teaching and learning processes. It can also be defined as a computer-based educational tool or system that enables a person to learn anywhere and at any time. E-learning makes use of electronic media and devices to facilitate access, promote evolution, and improve the quality of education and training [10]. It can be functionally implemented on a wide variety of ICT platforms, which include television and radio, compact discs (CDs) and digital versatile discs (DVDs), video conferencing, mobile technologies, web-based technologies, and electronic learning platforms [11]. E-learning goes beyond training and instruction to the delivery of information and tools that improve performance [12]. According to [13], e-learning can be synchronous or asynchronous. A synchronous learning environment involves the instructor and students being online simultaneously and communicating directly with each other, while an asynchronous learning environment entails that the instructor interacts with students only intermittently and not in real time, for example through discussion groups, email, and online courses.

Learning Management Systems
Learning Management Systems (LMS) are virtual learning environments that allow learners to make efficient use of the facilities provided for interactive online learning. An LMS is a software application that facilitates the administration, documentation, delivery, and attainment of e-learning objectives, and it supports e-learning in virtual environments such as the Internet or an intranet. LMS can be divided into two categories: corporate LMS and educational LMS. Corporate LMS harness key LMS features such as registration and management of classroom instruction, e-learning management and delivery, e-commerce capability, and corporate policy aspects such as regulatory compliance, competency, performance, human capital, and talent management for human resource development objectives. Most corporate LMS focus on the management of self-directed (asynchronous) online learning, because there is no guarantee that an instructor will always be present to teach personnel. Features such as course authoring and content management are not usually part of corporate LMS, except when included as a component of a learning content management system (LCMS).
On the other hand, educational LMS are mostly used in academic institutions. They usually include features for registration and management of classroom instruction. Educational LMS are primarily for online learning, and they usually provide course authoring and some content management features. They also include communication and collaboration features, which enable them to support group-based learning and peer interactions during learning sessions. Consequently, they are commonly called Course Management Systems (CMS) or Virtual Learning Environments (VLE). They are generally built on the assumption that an instructor will always be available to develop course content and to communicate with students.

Expert Usability Evaluation Techniques
Expert usability evaluation techniques afford a discount way to assess the usability of systems quickly, using a few experts. The aim is to allow expert evaluators to identify potential usability problems by inspecting a system against a set of heuristics or predefined questions. The goal is usually quick but high-quality feedback at minimal cost in terms of the time and resources spent on the evaluation. Examples of expert usability evaluation techniques include cognitive walkthrough, heuristic evaluation, guideline review, and consistency inspection [15], [16].
Cognitive Walkthrough: The cognitive walkthrough (CW) method is a usability evaluation technique that can be used in the early stages of system design to evaluate the vision of the system designers. It is a highly structured procedure in which an expert evaluates the user interface of a system by performing a set of predefined tasks in order to identify potential usability problems. The objective of CW is to assess the relationship between the intentions of a user and the features available in the system's interface, and to determine how easily a novice user can successfully use the system without any guidance [17], [18], [19].
Performing a CW includes a preparation phase in which the user's background and the nature of the sample tasks to be performed are defined. CW involves four steps, which a user must perform [17]: 1) set a goal to be accomplished; 2) inspect the available actions on the user screen (in the form of menu items, buttons, icons, dashboard widgets, etc.); 3) select one of the available actions that seems to make progress toward the current goal; and 4) perform the action and evaluate the system's feedback for evidence that progress is being made toward the current goal. In addition, during the walkthrough the evaluator must attempt to answer four questions for each task: 1) will the user try to achieve the correct effect; 2) will the user notice that the correct action is available; 3) will the user associate the correct action with the desired effect; and 4) if the user performs the right action, will the user notice that progress is being made toward the goal? If all four questions are answered positively, it can be concluded that the execution of the specified action is devoid of usability flaws; if any of the four questions is answered negatively, the specified action has usability problems.
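As an illustration, the four-question check described above reduces to a simple per-action routine. The sketch below is our own illustration, not part of any CW tooling; the function name and data layout are assumptions:

```python
# The four CW questions the evaluator answers for each action.
CW_QUESTIONS = (
    "Will the user try to achieve the correct effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the desired effect?",
    "If the right action is performed, will the user notice progress toward the goal?",
)

def failed_questions(answers):
    """Return the CW questions answered 'no' for one action.

    `answers` is a sequence of four booleans (True = positive answer).
    An empty result means the action is devoid of usability flaws under CW;
    any negative answer flags a potential usability problem.
    """
    return [q for q, ok in zip(CW_QUESTIONS, answers) if not ok]

# Example: the user finds and performs the action, but the system's
# feedback does not show progress toward the goal (question 4 fails).
problems = failed_questions([True, True, True, False])
```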
Compared with the heuristic evaluation technique, CW appears to be more effective in finding severe problems than less severe ones [18], [19]. Hence, CW is preferred in situations where, due to time or financial limitations, the emphasis is on detecting the more severe usability problems in a system's design.
Heuristic Evaluation: Heuristic evaluation (HE) has been a classic and very popular evaluation method in human-computer interaction since it was first introduced by Jakob Nielsen in 1990 [20]. It is a discount usability evaluation method in which an analyst finds usability problems by checking the user interface against a set of supplied heuristics or principles [20]. HE adopts an informal approach in which a small number of evaluators inspect the interface of a system to ascertain how well it satisfies some predefined usability heuristics or guidelines. HE is a cheaper alternative to full-scale usability assessment involving actual users: it does not require huge investment to implement, yet it is quite potent at finding usability problems in applications. Rather than relying on common sense, heuristic evaluations are currently based on ten recommended usability heuristics derived from Nielsen's method [21]: user control and freedom; flexibility and efficiency of use; error prevention; match between system and the real world; help users recognize, diagnose, and recover from errors; recognition rather than recall; help and documentation; consistency and standards; visibility of system status; and aesthetic and minimalist design [22].
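As a minimal sketch of how HE findings are typically recorded, the snippet below collates the problems reported by a small number of evaluators under the heuristic each one violates. The evaluator names and problem descriptions are illustrative only; the data layout is our own assumption, not a standard HE format:

```python
# Nielsen's ten usability heuristics listed above.
NIELSEN_HEURISTICS = [
    "User control and freedom",
    "Flexibility and efficiency of use",
    "Error prevention",
    "Match between system and the real world",
    "Help users recognize, diagnose, and recover from errors",
    "Recognition rather than recall",
    "Help and documentation",
    "Consistency and standards",
    "Visibility of system status",
    "Aesthetic and minimalist design",
]

def collate_findings(findings):
    """Group per-evaluator findings by the heuristic each one violates.

    `findings` maps an evaluator's name to a list of
    (heuristic, problem description) pairs.
    """
    by_heuristic = {}
    for evaluator, problems in findings.items():
        for heuristic, description in problems:
            by_heuristic.setdefault(heuristic, []).append((evaluator, description))
    return by_heuristic

# Illustrative findings from two hypothetical evaluators.
report = collate_findings({
    "E1": [("Error prevention", "No confirmation before deleting a quiz")],
    "E2": [("Error prevention", "Form discards input on validation error"),
           ("Visibility of system status", "No progress bar during upload")],
})
```

Grouping by heuristic is what lets a small evaluator pool surface overlapping problems: issues reported independently by several evaluators are strong candidates for severe usability flaws.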

Overview of Related Work
The usability evaluation of e-learning systems has received considerable attention in the literature, because high usability of e-learning materials is essential to ensuring maximum educational impact [23], [24], especially when the material to be learnt is complex [25], [26]. Pipan et al. [27] developed an evaluation cycle management aimed at usability testing for end users of e-learning platforms; the test revealed several deficiencies that could be corrected, including facilitation of navigation to e-testing, improvement of online help features, facilitation of terminological support texts in online documents, and better reconciliation of colours and fonts in the user interface. Ardito et al. [28] proposed adapting systematic usability evaluation (SUE) to the e-learning environment. The work observed that experts can only evaluate "syntactic" aspects of e-learning applications, that is, aspects related to interaction and navigation; hence, the authors advised that features concerning the didactic effectiveness of e-learning systems must be considered in order to realise a better evaluation. In [29], the ISO 9126 quality model was proposed as a standard framework for evaluating e-learning systems used in educational settings. The proposal was validated by applying the ISO 9126 quality model to a commonly available e-learning system, Blackboard version 6.1.
The result showed that the proposed model was capable of detecting many design flaws. In [30], the development of a questionnaire-based usability evaluation method for e-learning applications was presented. The approach considered a combination of cognitive and affective factors that may influence e-learning usability. The result of the empirical evaluation of the approach confirmed its reliability and validity, and the study recommended the approach for adoption by usability practitioners when evaluating the design of e-learning applications. The evaluation of open source e-learning platforms using a questionnaire-based approach, focusing on adaptation issues, was presented in [2]. The results revealed that Moodle performed better than all the other e-learning platforms and obtained the best rating in the adaptation category. In addition, [31] reviewed the advantages and disadvantages of Moodle 1.0 and Blackboard as e-learning platforms. The study concluded that Moodle is effective for e-learning development, but lacks adequate provision for handling some security concerns.
Some specific efforts in the area of heuristic evaluation of e-learning platforms have been reported in the literature. In [4], a meta-evaluation investigating the heuristic evaluation of a web-based learning (WBL) application was conducted. The study performed evaluations using a framework of criteria synthesised from different usability and learning paradigms that pertain to WBL environments. The synthesised framework was found to be effective in terms of the number and nature of problems identified in the target application by a complementary team of experienced experts. The empirical application and comparison of two heuristic sets was performed in [30]; the results revealed differences in the strengths of the two heuristic sets in terms of coverage, distribution, and redundancy. In addition, [32] compared two sets of heuristics, Nielsen's heuristics and the cognitive principles of Gerhardt-Powals, and discovered no significant differences between them in terms of effectiveness, efficiency, and inter-evaluator reliability. In [33], three sets of heuristics, namely Nielsen's original ten heuristics [16], e-learning heuristics, and child-learning heuristics, were used to evaluate two e-learning applications for children. The study concluded that Nielsen's heuristics are generic and do not include usability attributes specific to children's e-learning. Rather than simply applying methods from usability research to the study of e-learning environments, [34] argued that usability research should encourage a task-oriented perspective towards e-learning and broaden our traditional definitions of user, task, and context. In [35], it was proposed that e-learning can be made effortless through the application of an affordance design approach, following an evaluation of existing systems.
According to the authors, the affordance-based design process involves determining the affordances that the artifact should and should not have, conceiving the artifact's overall architecture, analysing and redefining the affordances of the components, selecting a preferred architecture that will enhance desired affordances and minimise undesired ones, and determining affordance structures and designing affordance components [36].
Finally, in 2015, [37] concluded that the demand for fully online courses in universities has increased, making it important to determine the overall usability of e-learning systems; according to the authors, this will help faculty and designers improve the courses. Their work also concluded that determining usability in e-learning environments should involve both technical and pedagogical criteria of usability.
In this work, three sets of heuristics have been integrated to derive a customised set of heuristics for evaluating the Moodle-based e-learning system of Covenant University, Nigeria. The three sets are Web 2.0 heuristics, learning heuristics, and tool heuristics, the last of which covers aspects of usability not addressed by Nielsen's ten heuristics. Together, they were designed to yield a holistic heuristic set that enables a thorough evaluation of the Moodle-based e-learning system of the University.

Defining the Heuristics
In the context of usability evaluations, heuristics are the human-computer interaction guidelines that are used as metrics for assessing the usability of a system. In this work, three sets of heuristics are used: 1) Web 2.0 heuristics, which combine the traditional Nielsen heuristics with considerations of the Web 2.0 paradigm such as user-generated content, online collaboration, and wikis; 2) learning heuristics, which are drawn from a set of educational and learning heuristics profiled in the literature; and 3) tool heuristics, which embrace other aspects of usability, apart from Nielsen's heuristics, that are considered essential for effective e-learning. The heuristics used in this study are the product of a thorough analysis of different sets of heuristics that have been proposed or used in the existing literature. The sets used to synthesise our customised heuristics include Nielsen's heuristics [16], heuristics on instructional design [38], the heuristics of Mehlenbacher et al. [34], educational design heuristics [39], principles for effective online learning [40], and a Web 2.0 extended framework for heuristic evaluation [41]. The description of our customised heuristics is presented next.
1. Web 2.0 Heuristics: The Web 2.0 heuristics are based on the original Nielsen heuristics, augmented with additional heuristics that are relevant to Web 2.0 platforms. An overview of the Web 2.0 heuristics (12 in total) is as follows:
• Visibility of system status: the degree to which the user is kept informed about ongoing processes and events taking place in the system.
• Match between the system and the real world: the level of familiarity between the language, objects, and annotations used in the e-learning system to describe actions, concepts, functions, and activities, and that of the user.
• User control and freedom: the freedom a user has to select and schedule tasks according to his or her preferences.
• Consistency and standards: the clarity and consistency with which different words, situations, or actions on the e-learning system are presented.
• Error prevention: the preventive mechanisms that exist to help the user avoid making errors while on the e-learning system.
• Recognition rather than recall: the extent to which the e-learning system makes instructions, objects, actions, and options visible to the user.
• Flexibility and efficiency of use: how simple and suitable the design of the e-learning system is for all categories of users, whether novice or expert.
• Interactive interface: how well the e-learning system supports active interaction with the user, including support for interactive Web 2.0 features such as instant messaging, online chats, and discussion forums.
• Help, documentation, and recovery from errors: how readily a user can receive help when executing a task on the e-learning system.
• Data ownership and control: the control provided on the e-learning system to ensure that only relevant and appropriate types of user-generated content are permitted to be uploaded.
• Multimedia technologies: the multimedia support, such as voice and video, provided on the e-learning system to enhance the quality of the user experience.
• Support for the Web 2.0 paradigm: the support available on the e-learning system for key Web 2.0 features such as user-driven and user-generated content, and wikis.
2. Learning Heuristics: The learning heuristics are synthesised from the educational design heuristics and the other learning heuristics that we identified. The learning heuristics (9 in all) are described as follows:
• Quality of learning content: the quality of the instructional content available on the e-learning system when compared to the curriculum.
• Assessment and feedback: the features for assessment and feedback provided by the e-learning system.
• Motivation to learn: the innovative features implemented by the e-learning system that stimulate learning and further inquiry. It determines whether there exists a mixture of fun and learning, and varied learning activities that increase the rate and quality of learning.
• Interactive learning: how well the e-learning system uses user-centred interactions such as games, simulations, role-playing, activities, and case studies to gain the attention and maintain the motivation of learners.
• Learning orientation: how well the e-learning system makes clear to the user what is to be accomplished and gained from its use. It measures whether there is clear communication of the goals, objectives, and expected outcomes of courses in the e-learning system.
• Support for group interaction and collaborative learning: the support provided for learners to interact with each other through discussion and other collaborative activities. It assesses whether tools are provided for learners to interact with peers and educators by means of asynchronous and synchronous communication.
• Personalised learning: how well individual learners can customise the e-learning system to suit their own learning strategy in order to learn at their own pace.
• Support for variant learning mechanisms: the degree to which the e-learning system supports different strategies for learning, such as team-based learning, problem-based learning, and rote learning.
• Support for problem-based learning, knowledge exploration, and self-learning: how well the e-learning system supports transfer of skills beyond the learning environment and facilitates self-improvement of the learner.
3. Tool Heuristics: The tool heuristics focus on essential attributes that the e-learning system should have in order to support effective online learning. The tool heuristics (3 in total) are described as follows:
• Accessibility: how readily users can access the resources and features of the e-learning system. This could be in terms of freedom from technical problems such as hyperlink errors and programming errors, and the ability to engage with the e-learning system through different types of devices.
• Support for instructor activity: how well the e-learning system recognises the role of the instructor in learning by enabling authorisation and authentication of access to some learning resources.
• Privacy and security: the degree to which the e-learning system ensures the privacy and security of the learning resources that are available.
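The three heuristic sets above can be summarised as a simple catalogue, and a quick count confirms the 12 + 9 + 3 = 24 customised heuristics used in the evaluation. The sketch below is purely illustrative; the variable names are our own:

```python
# Catalogue of the customised heuristic sets described above.
HEURISTIC_SETS = {
    "Web 2.0": [
        "Visibility of system status",
        "Match between the system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Interactive interface",
        "Help, documentation, and recovery from errors",
        "Data ownership and control",
        "Multimedia technologies",
        "Support for the Web 2.0 paradigm",
    ],
    "Learning": [
        "Quality of learning content",
        "Assessment and feedback",
        "Motivation to learn",
        "Interactive learning",
        "Learning orientation",
        "Support for group interaction and collaborative learning",
        "Personalised learning",
        "Support for variant learning mechanisms",
        "Support for problem-based learning, knowledge exploration, and self-learning",
    ],
    "Tool": [
        "Accessibility",
        "Support for instructor activity",
        "Privacy and security",
    ],
}

counts = {name: len(items) for name, items in HEURISTIC_SETS.items()}
total = sum(counts.values())  # 12 + 9 + 3 = 24
```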

Study Design
In this section, we shall give a brief overview of the Moodle-based Covenant University (CU) e-learning system and then discuss the procedure used in performing the heuristic evaluation of the CU e-learning system.

The Moodle-Based Covenant University E-learning
The Modular Object-Oriented Dynamic Learning Environment (Moodle) is a web-based Learning Content Management System (LCMS) that was created based on the social constructivist philosophy, while leveraging the collaborative possibilities of the Internet. Moodle affords a lot of flexibility and can be used to support outcome-oriented classroom environments. Key features such as discussion forums, content management (resources), quizzes, and several activity modules are integral parts of Moodle. Being an open source platform, Moodle has benefitted from contributions from different sources, including SCORM, WebQuest, and other document management systems [42]. The CU e-learning system is built on Moodle; it is essentially a customisation of many of the features of Moodle to suit the peculiar needs of the Covenant University context. Key activities supported by the CU e-learning system include enrolment in courses as a student or lecturer; courseware administration (uploading of lecture materials, scheduling of class activities, posting assignments, setting up quizzes, and the like); plagiarism testing, through the integration of the Turnitin software; online chats; discussion forums; evaluation reports; and many more. A view of the CU Moodle-based e-learning system is presented in Fig. 1.

Experimental Procedure of the Heuristic Evaluation
In order to perform the heuristic evaluation of the CU e-learning system, we enlisted the participation of six (6) subjects: 3 student users from the Computer Science Department and 3 staff users from different divisions of the IT support unit. All student participants had a good background in IT, while the staff users were also IT-savvy personnel who had not been directly involved in the development of the e-learning system. The summary of the participants' backgrounds with regard to the key areas that pertain to the heuristic evaluation, as obtained through the pre-experiment questionnaire filled in by the participants, is as follows (see Table 1):
• Over 83% of participants claimed to be expert or good in e-learning
• Over 83% of participants claimed to be expert or good in Web technology
• Over 66% of participants claimed to be expert or good in Web 2.0 technology
• All participants claimed to be expert or good with courseware technology
• Over 83% of participants claimed to be expert or good in Moodle
The same list of specific tasks was given to all participants to undertake on the e-learning system. The three student participants undertook the experiment at the same time in a laboratory, while the staff participants performed the experiment independently, at their own time, within a two-day window. Each participant was expected to complete the tasks in 30 minutes. The assigned tasks were as follows:
1. Upload one or more lecture notes for a course
2. Study the course content of randomly selected courses
3. Check whether the course materials hosted on the e-learning system contain information that gives an orientation of the objectives and learning outcomes of the courses
4. Seek out the multimedia features of the e-learning system and try to engage them
5. Try to add your own content to the e-learning system, and assess how easy or difficult it is to do so
6. Try to communicate through chat with colleagues online, and also participate in a discussion forum
7. Find out whether just anything, relevant or irrelevant, can be uploaded to the e-learning system
8. Assume the role of a lecturer and set up a short quiz for other students
After performing the experiment, each participant filled in a post-experiment questionnaire that was designed based on the 24 customised heuristics used for the assessment. Each participant answered a total of 36 questions on a 5-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree).
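The analysis reduces each question's six Likert responses to a median, computed separately for the student and staff groups. A minimal sketch of that reduction follows; the ratings shown are made up purely for illustration and are not the study's data:

```python
from statistics import median

def median_ratings(responses):
    """Median 1-5 rating per question for one group of participants.

    `responses` is a list of per-participant answer lists, one rating
    (1 = Strongly Disagree ... 5 = Strongly Agree) per question.
    """
    per_question = zip(*responses)  # transpose: group the answers by question
    return [median(answers) for answers in per_question]

# Illustrative ratings: 3 participants x 3 questions per group.
student_users = [[3, 4, 2], [4, 4, 3], [3, 5, 2]]
staff_users   = [[5, 4, 4], [4, 5, 4], [5, 5, 3]]

su_medians  = median_ratings(student_users)   # [3, 4, 2]
stu_medians = median_ratings(staff_users)     # [5, 5, 4]
```

With an odd number of raters per group, the median is simply the middle rating, which makes it robust to a single outlying answer in this small sample.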

Results and Discussion
The data collected from the post-experiment questionnaire were collated, coded, and analysed using MS Excel to compute the median statistics. The results obtained from the evaluation are presented in Table 2. Also, a comparison of the ratings of student users (SU) and staff users (STU) based on the median scores is presented in Fig. 2, which reflects the contrast between the students' and staff users' rating and perception of the e-learning system.

Discussion
Based on the results obtained from the experiment, as shown in Table 2 and Fig. 2, we compared the rating of the e-learning system by student and staff users. The following was evident:
• The e-learning system received very good ratings from staff users. This is not unexpected: the staff users see themselves more as system owners, being part of the unit that handled the design and implementation of the system, and they may have a somewhat better overall understanding of the system than the students;
• The e-learning system received generally lower ratings from student users than from staff users. This is also expected, because the students are less biased, or may be more critical, compared with the staff users;
• The e-learning system received particularly lower ratings on the learning heuristics from the student users, compared with the staff users. We assume that the rating of the students is likely to be more objective, since they are in a position to assess the extent to which the system supports their learning.

Fig. 2. Comparison of Ratings by Student Users and Staff Users
In addition, based on Table 2, we observed the following:
• The e-learning system received acceptable ratings in most aspects covered by the Web 2.0 heuristics, learning heuristics, and tool heuristics;
• The e-learning system received particularly good ratings in all aspects of the tool heuristics;
• Relative to other aspects, the e-learning system received lower ratings in flexibility and efficiency of use, interactive learning, and assessment and feedback.
The lower rating from student users compared to staff users suggests that the University needs to improve the features and capabilities of the system. The students are the real users, so their rating could be trusted more than that of the staff users. It is also necessary for the University authorities to do more to help students appreciate the features and capabilities of the e-learning system.

Conclusion
In this paper, we have presented the heuristic evaluation of the e-learning system of a Nigerian university, Covenant University. A set of 24 customised heuristics belonging to three classes (Web 2.0 heuristics, learning heuristics, and tool heuristics) was used to evaluate the e-learning system. The evaluation revealed that although the e-learning system earned good ratings in many of the aspects assessed by the 24 customised heuristics, key areas such as flexibility and efficiency of use, interactive learning, and assessment and feedback need improvement. In addition, the evaluators discovered a number of other usability problems that require attention, such as learnability, aesthetics, and user-friendliness. The student users rated the system low in terms of its capability to support learning, a rating that seems more objective than that of the staff users. This implies that the design of the e-learning system should be improved to ensure that it maximally supports students' learning. The evaluators also made specific observations in the free-comments section of the post-experiment questionnaire that will help to further improve the performance and usability of the e-learning system. As a contribution, this paper used a customised set of heuristics covering four dimensions, namely the capability of the learning tool, the quality of the learning content, the nature of the virtual environment, and the mode of the user's engagement, to conduct the heuristic evaluation of an institutional e-learning system in a comprehensive way. As of now, not many studies have considered these four dimensions as integral aspects when conducting a heuristic evaluation.
In addition, the choice of Nigeria as the context of the study adds to the existing body of empirical case studies on heuristic evaluation of institutional e-learning systems in Africa, because few such studies have been reported in the literature. In future work, we shall seek to employ a more robust set of heuristics that caters for other salient dimensions that may have been overlooked in this evaluation. Also, the subjects for the evaluation will be selected from a wider and more varied population of students (across different levels, departments, and colleges) and staff (spread across several units and departments of the university) in order to obtain a more credible assessment.