The objective of this article is to present the development and evaluation of dETECT (Evaluating TEaching CompuTing), a model for evaluating the quality of instructional units for teaching computing in middle school based on students' perceptions collected through a measurement instrument. The dETECT model was systematically developed and evaluated based on data collected from 16 case studies in 13 different middle school institutions, with responses from 477 students. Our results indicate that the dETECT model is acceptable in terms of reliability (Cronbach's α = .787) and construct validity, demonstrating an acceptable degree of correlation between almost all items of the dETECT measurement instrument. These results allow researchers and instructors to rely on the dETECT model to evaluate instructional units and, thus, to contribute to their improvement and to guide an effective and efficient adoption of computing education in middle school.
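The reliability figure reported above follows the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₓ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₓ the variance of the total score. The Python sketch below shows how such a figure is typically computed from a respondents-by-items score matrix; the response data is hypothetical, not the dETECT dataset.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1)        # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Hypothetical example: 5 students answering 4 Likert-scale items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")
```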
Although digital technologies are increasingly used in learning and education, there is still a lack of professional evaluation tools capable of assessing the quality of digital teaching aids in a comprehensive and objective manner. The construction of the Comprehensive Evaluation of Electronic Learning Tools and Educational Software (CEELTES) tool was preceded by several surveys and by knowledge gained in creating digital learning and teaching aids and implementing them in the teaching process. The evaluation tool itself consists of sets (catalogues) of criteria divided into four separately assessed areas: technical, technological, and user attributes; criteria evaluating content, operation, and information structuring and processing; criteria evaluating information processing in terms of learning, recognition, and educational needs; and, finally, criteria evaluating the psychological and pedagogical aspects of a digital product. Each area is assessed independently by a specialist in the relevant science discipline. The final evaluation objectifies (quantifies) the overall appropriateness of including a particular digital teaching aid in the teaching process.
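The abstract does not specify how the four independent area assessments are combined, so the sketch below is only one plausible illustration: a weighted mean of per-area ratings normalized to a common scale. The area names, scales, and equal weights are assumptions for illustration, not CEELTES's actual catalogue.

```python
# Hypothetical aggregation of four independently assessed areas into one
# overall appropriateness score. Area names, scales, and equal weighting
# are illustrative assumptions, not the CEELTES specification.
AREAS = {
    "technical_user": 0.25,      # technical, technological, user attributes
    "content_structure": 0.25,   # content, operation, information structuring
    "learning_needs": 0.25,      # information processing for learning needs
    "psych_pedagogical": 0.25,   # psychological and pedagogical aspects
}

def overall_score(ratings: dict[str, float], max_points: float = 100.0) -> float:
    """Weighted mean of per-area ratings, each on a 0..max_points scale."""
    return sum(AREAS[a] * ratings[a] / max_points for a in AREAS) * 100

print(overall_score({
    "technical_user": 82,
    "content_structure": 74,
    "learning_needs": 90,
    "psych_pedagogical": 68,
}))  # -> 78.5
```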
The Lithuanian Informatics Olympiad (LitIO) is a problem-solving contest for high school students. The work of each contestant is evaluated against several criteria, each measured on its own scale (though the same scale is applied to every contestant), and several jury members are involved in the evaluation. This paper analyses how to calculate an aggregated score for a whole submission in this situation. The chosen methodology is Multiple Criteria Decision Analysis (MCDA), and the outcome of the paper is a score aggregation method, developed using MCDA approaches, proposed for application in LitIO.
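The paper's specific aggregation method is not reproduced here; as a hedged baseline, a common MCDA approach is to average the jury members' marks per criterion, min-max normalize each criterion onto [0, 1] using its own scale, and take a weighted sum. The criterion names, scales, and weights below are illustrative assumptions.

```python
# Sketch of a common MCDA baseline (weighted sum after min-max
# normalization), not the method actually chosen for LitIO.
CRITERIA = {              # name: (scale_min, scale_max, weight)
    "correctness":  (0, 100, 0.6),
    "efficiency":   (0, 10,  0.3),
    "code_quality": (1, 5,   0.1),
}

def aggregate(jury_marks: list[dict[str, float]]) -> float:
    """Aggregate several jury members' marks into one score in [0, 1]."""
    score = 0.0
    for name, (lo, hi, w) in CRITERIA.items():
        mean_mark = sum(m[name] for m in jury_marks) / len(jury_marks)
        score += w * (mean_mark - lo) / (hi - lo)   # normalize, then weight
    return score

marks = [
    {"correctness": 80, "efficiency": 7, "code_quality": 4},
    {"correctness": 90, "efficiency": 6, "code_quality": 5},
]
print(f"{aggregate(marks):.3f}")
```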
The International Olympiad in Informatics (IOI) aspires to be a science olympiad alongside the international olympiads in mathematics, physics, chemistry, and biology. Informatics as a discipline is well suited to a scientific approach and offers numerous possibilities for competitions of high scientific standing. We argue that, in its current form, the IOI fails to be scientific in the way it evaluates the work of the contestants.
In this paper, we describe the major ingredients of the IOI to guide further discussion. By presenting the results of an extensive analysis of two IOI competition tasks, we hope to create awareness of the urgency of addressing these shortcomings. We offer some suggestions to raise the scientific quality of the IOI.
Individuals vary across many dimensions due to the effects of gender-based, personality, and cultural differences. Consequently, programming contests with a limited and restrictive structure (e.g., scoring system, questioning style) are most favourable and attractive to a specific set of individuals whose characteristics best match this structure. We suggest that a more inclusive and flexible structure would allow contests to appeal to a wider range of participants by being less biased towards specific traits. Moreover, by making contests more broadly appealing, they become better post-secondary recruiting tools that can be used to attract under-represented populations to the discipline of computer science. In this paper, we focus on gender-based differences and the effect of a competition's structure on female participants.