Interfaces with good usability help users complete more tasks in less time and with less effort, which gives them greater satisfaction. Given the vast array of options available to users today, usability is an important interface quality that may determine the commercial success or failure of a software system. Despite its importance, few educational tools are available to help usability teachers and students. Knowing how to measure interface usability is one of the basic concepts students should learn when studying the subject. This paper presents UsabilityZero, a web application that supports the teaching of usability concepts to undergraduate students. Using UsabilityZero, students first interact with a version of a system whose interface has deliberately reduced usability and later with the same system exhibiting an improved interface. In a study with 64 students, moving from the reduced-usability interface to the improved one produced: (i) a 61.5% decrease in the number of clicks; (ii) a 62.2% decrease in the time to perform tasks; (iii) a 92.9% increase in effectiveness; and (iv) a 277.3% increase in satisfaction. Through their experience with UsabilityZero, students learn how to measure the efficiency, effectiveness, and satisfaction of user interfaces. After using the application, Information Systems and Computer Science students who had never been exposed to the subject could identify key usability aspects: more than 80% of them recognized efficiency, effectiveness, and satisfaction as usability measures. They could also identify some usability criteria and understand how the measurements change when those criteria are present in the interface design. As a result, over 92% of these students said they recognized the importance of usability to the quality of a software product, and 79% declared that their experience with the application would contribute to their professional lives.
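The comparison above rests on simple percent-change arithmetic between the two interface versions. As an illustration only (the abstract reports percentages, not raw measurements, so the sample values below are hypothetical), a metric observed on the reduced-usability interface (`before`) and on the improved interface (`after`) yields such deltas via:

```python
def percent_change(before: float, after: float) -> float:
    """Relative change from `before` to `after`, in percent.

    Negative values indicate a decrease (e.g. fewer clicks,
    shorter task time); positive values indicate an increase
    (e.g. higher effectiveness or satisfaction).
    """
    return (after - before) / before * 100.0

# Hypothetical raw click counts (NOT taken from the paper):
clicks_before, clicks_after = 52, 20
print(round(percent_change(clicks_before, clicks_after), 1))  # -61.5
```

The same formula applies to each of the four reported measures; only the sign convention differs between "decrease" metrics (clicks, time) and "increase" metrics (effectiveness, satisfaction).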
This work presents a systematic review whose objective was to identify heuristics applicable to evaluating the usability of educational games. Heuristic evaluation is a usability engineering method that aims to detect problems in the use of a system during its development and/or when users interact with its interface. Applying heuristics is therefore an essential part of developing digital educational games. The search covered articles available in all databases of the CAPES/MEC (Brazil) journals portal, in all available languages. The descriptors adopted were "educational games", "heuristic", and "usability", combined with the Boolean AND operator over titles, abstracts, and keywords, for publications from 2014 onward. The inclusion criteria were: (a) articles with a clear description of the methodology used in the usability analysis; (b) studies presenting primary data; and (c) articles whose focus corresponds to the investigated question. Two examiners conducted the database searches, and a third performed the evaluation and general review of the data. Initially, 93 articles were identified, of which 19 were duplicates and 5 were literature reviews. Of the 69 that remained, 57 were deemed ineligible, leaving 12 for full-text study, of which 6 entered the final review. From this review we can conclude that the field of heuristics and usability for educational games is still little explored, with few specific evaluation instruments validated or in the process of validation, so the area requires greater investment. Through this review, we found at least one instrument suited to the usability evaluation of educational games: the Game User Experience Satisfaction Scale (GUESS).
Universities generally have large, complex websites comprising many sub-sites related to different parts of the university (e.g. the registration unit, faculties, departments). Managers of academic institutions and educational websites need to know the types of usability problems that can be found on their websites. This would shed light on possible weak aspects of their websites that need to be improved in order to reap the advantages of usable educational websites. There is a lack of research providing detailed information on the specific types of usability problems found on university websites in general, and on Jordanian university websites in particular. This research employed the heuristic evaluation method to comprehensively evaluate the usability of three large public university websites in Jordan (the Hashemite University, the University of Jordan, and Yarmouk University). The evaluation covered all pages related to the selected universities' faculties and their corresponding departments. A list of 34 specific types of usability problems that can be found on a Jordanian university website was identified. The results describe the common types of problems found on the three Jordanian university sites, together with their numbers and locations on each website.
This paper presents results from three interrelated studies focusing on introducing TRAKLA2 to students taking courses on data structures and algorithms at the University of Turku and Åbo Akademi University in 2004. With TRAKLA2, students became acquainted with a completely new system for solving exercises that provided automatic feedback and the possibility to resubmit their solutions. Besides comparing the students' learning results, a survey of 100 students was conducted on changes in their attitudes towards web-based learning environments. In addition, a usability evaluation was conducted in a human-computer interaction laboratory.
Our results show that TRAKLA2 considerably increased positive attitudes towards web-based learning. According to the students' self-evaluations, the best learning results are achieved by combining traditional exercises with web-based ones. In addition, the numerical course statistics were clearly better than in 2003, when only in-class pen-and-paper exercises were used. The results from the usability test were also very positive: no severe usability problems were revealed; in fact, the results indicate that the system is very easy to learn and user-friendly as a whole.
Research on the evaluation of websites has already begun; however, it is proceeding at a very slow rate. The main reasons for this are, in our opinion, the attempt to adapt existing methodologies to the particularities of the web, the individual structure of websites, and the difficulty of finding appropriate evaluators. This study addresses exactly these points and suggests a heuristic approach for the evaluation of websites.
In our study we first trained the evaluators in the particularities of heuristic evaluation, in its classic form as well as in its web-adapted form. In doing so, we try to answer the core question of whether evaluators' expertise can be augmented by training prior to conducting the evaluation itself. Next, we used web-adapted heuristics drawn from the relevant literature and clarified them for the evaluators as well. Finally, the evaluators carried out a real evaluation of five websites and recorded their comments on specially prepared questionnaires.
The results from this study first confirm two known conclusions: that the method is applicable to the Web, and that the evaluators' prior expertise is of great importance. Beyond these, we concluded that it is possible, under certain conditions, to augment this expertise through short training so that evaluators perform better during the evaluation. Our main conclusion, however, is that the heuristic list used performed inadequately; still, we noted that the evaluators tended to follow a somewhat similar mode of thinking, which points the way toward adapting these heuristics into a more holistic approach for the web.