Computer science students often evaluate the behavior of the code they write by running it on specific inputs and studying the outputs, and then extrapolate from that comprehension to a more general understanding of the code. While this is a good starting point early in a student's career, successful graduates must be able to reason analytically about the code they create or encounter. They must be able to reason about the behavior of the code on arbitrary inputs, without running the code. Abstraction is central to such reasoning.
In our quest to help students learn to reason abstractly and develop logically correct code, we have developed tools that rely on a verification engine. Code involves assignment, conditional, and loop statements, along with objects and operations. Reasoning activities involve symbolic reasoning with simple assertions and design-by-contract assertions such as pre- and post-conditions, as well as loop invariants with data abstractions. Students progress from tracing and reading code to the design and implementation of code, all relying on abstraction for verification. This paper reports some key results and findings from associated studies spanning several years.
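The design-by-contract reasoning described above can be illustrated with a minimal sketch. The function, its contract, and the use of executable `assert` statements here are hypothetical illustrations, not the authors' actual tools or notation, which rely on a dedicated verification engine rather than runtime checks:

```python
def int_sqrt(n: int) -> int:
    """Integer square root, annotated in a design-by-contract style.

    Precondition:  n >= 0
    Postcondition: r*r <= n < (r+1)*(r+1)
    """
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        # Loop invariant: r*r <= n holds on every iteration.
        assert r * r <= n, "loop invariant violated"
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

A verification engine would discharge these obligations symbolically for arbitrary inputs, which is precisely the abstract reasoning skill (as opposed to testing on specific inputs) that the abstract above targets.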
This work presents a systematic review whose objective was to identify heuristics applicable to the evaluation of the usability of educational games. Heuristics are usability engineering methods that aim to detect problems in the use of a system during its development and/or when its interface is in interaction with the user. Therefore, applying heuristics is an essential part of developing digital educational games. Search sources were articles available in all the databases present in the Capes/MEC/Brazil periodicals portal, in the available languages. The descriptors adopted were "educational games", "heuristic", and "usability", combined with the Boolean AND operator and searched in titles, abstracts, and keywords, for publications from 2014 onward. The inclusion criteria were: (a) articles with a clear description of the methodology used in the usability analysis; (b) studies presenting primary data; and (c) articles whose focus corresponds to the investigated question. Two examiners conducted the searches in the databases, and a third carried out the evaluation and general review of the data. Initially, 93 articles were identified, of which 19 were duplicates and 5 were literature reviews. Of the 69 that remained, 57 were deemed ineligible, leaving 12 selected for full-text study, of which 6 entered the final review. From this review we can conclude that the field of heuristics and usability for educational games is still little explored, with few specific evaluations validated or in the process of validation, requiring greater investment in the area. Through this review, we found at least one heuristic suited to the usability evaluation of educational software: the Game User Experience Satisfaction Scale (GUESS).
Diverse initiatives have emerged to popularize the teaching of computing in K-12, mainly through programming. This, however, may not cover other important core computing competencies, such as Software Engineering (SE). Thus, in order to obtain an overview of the state of the art and practice of teaching SE competencies in K-12, we carried out a systematic mapping study. We identified 17 instructional units, mostly adopting the waterfall model or agile methodologies and focusing on the main phases of the software process. However, there seems to be a lack of detail hindering large-scale adoption of these instructional units. Many articles also do not report how the units were developed and/or evaluated. Nevertheless, results demonstrating both the viability and the positive contribution of initiating SE education already in K-12 indicate a need for further research in order to improve computing education in schools, contributing to the popularization of SE competencies.
Generally, universities have complex and large websites, which comprise a collection of many sub-sites related to the different parts of the university (e.g., registration unit, faculties, departments). Managers of academic institutions and educational websites need to know the types of usability problems that could be found on their websites. This would shed light on possible weak aspects of their websites that need to be improved, in order to reap the advantages of usable educational websites. There is a lack of research providing detailed information on the specific types of usability problems found on university websites in general, and in Jordan specifically. This research employed the heuristic evaluation method to comprehensively evaluate the usability of three large public university websites in Jordan (Hashemite University, the University of Jordan, and Yarmouk University). The evaluation involved testing all pages related to the selected universities' faculties and their corresponding departments. A list of 34 specific types of usability problems that could be found on a Jordanian university website was identified. The results provide a description of the common types of problems found on the three Jordanian university sites, together with their numbers and locations on each website.
Research on the evaluation of websites has already begun; however, it is proceeding at a very slow rate. The main reasons for this are, in our opinion, the attempt to adapt existing methodologies to the particularities of the web, the individual structure of websites, and the issue of finding appropriate evaluators. This study addresses exactly these points and suggests a heuristic approach for the evaluation of websites.
In our study we primarily sought to train the evaluators in the particularities of heuristic evaluation, in its classic form as well as in its web-adapted form. By doing this we try to answer the core question of whether we can augment the evaluators' expertise with training prior to the conduct of the evaluation itself. Next, we used web-adapted heuristics found in the relevant literature and clarified them for the evaluators as well. Finally, the evaluators were involved in a real evaluation of five websites and recorded their comments on appropriately prepared questionnaires.
The results from this study first confirm two known conclusions: that the method is applicable to the web, and that the evaluators' prior expertise is of great importance. Yet, in addition to these, we concluded that it is possible, under certain conditions, to augment this expertise through short-term training so that evaluators perform better during the evaluation. Our main conclusion, however, is that the heuristic list used performed inadequately; still, we noted that the evaluators tended to follow a somewhat similar mode of thinking, which points the way to adapting these heuristics into a more holistic approach to the web.