The Lithuanian Informatics Olympiad (LitIO) is a problem-solving contest for high school students. The work of each contestant is evaluated in terms of several criteria, where each criterion is measured on its own scale (the same scale for every contestant). Several jury members are involved in the evaluation. This paper analyses the problem of how to calculate an aggregated score for the whole submission in this situation. The chosen methodology for solving this problem is Multiple Criteria Decision Analysis (MCDA). The outcome of this paper is a score aggregation method, developed using MCDA approaches, that is proposed for application in LitIO.
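To make the aggregation problem concrete, the following is a minimal sketch of one standard MCDA-style approach: normalising each criterion to a common [0, 1] scale and combining the results as a weighted sum. The criteria names, scale bounds, and weights below are illustrative assumptions, not the model actually proposed in the paper.

```python
# Sketch: weighted-sum aggregation of criteria measured on different scales.
# Criteria, scales, and weights are hypothetical, for illustration only.

def aggregate_score(scores, scales, weights):
    """Normalise each criterion score to [0, 1] and combine with weights.

    scores  -- dict mapping criterion name to the raw score awarded
    scales  -- dict mapping criterion name to (min, max) of its scale
    weights -- dict mapping criterion name to its weight (weights sum to 1)
    """
    total = 0.0
    for criterion, raw in scores.items():
        lo, hi = scales[criterion]
        normalised = (raw - lo) / (hi - lo)  # rescale to [0, 1]
        total += weights[criterion] * normalised
    return total

# Hypothetical example: three criteria on different scales.
scales = {"correctness": (0, 100), "style": (0, 10), "explanation": (0, 5)}
weights = {"correctness": 0.6, "style": 0.2, "explanation": 0.2}
scores = {"correctness": 80, "style": 7, "explanation": 4}

print(aggregate_score(scores, scales, weights))  # 0.78
```

With several jury members, the per-criterion scores could be averaged across jurors before (or after) such aggregation; which of the two is appropriate is exactly the kind of question the MCDA analysis addresses.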
The Lithuanian Informatics Olympiads (LitIO) is a problem-solving programming contest for students in secondary education. The work to be evaluated is an algorithm designed by the student and implemented as a working program. The current evaluation process involves both automated grading (for correctness and performance of programs on the given input data) and manual grading (for programming style and the written motivation of an algorithm). However, it is based on tradition and has not been scientifically discussed or motivated. To create an improved and well-motivated evaluation model, we put together a questionnaire and asked a group of foreign and Lithuanian experts with experience in various informatics contests to respond. We identified two basic directions in the suggested evaluation models and made a choice based on the goals of LitIO. While designing the model, we reflected on the suggestions and opinions of the experts as much as possible, even when they were not included in the proposed model. The paper presents the final outcome of this work: the proposed evaluation model for the Lithuanian Informatics Olympiads.
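The automated half of such grading is typically a black-box loop: run the contestant's program on each test input under a time limit and compare its output with the expected answer. The sketch below assumes this generic scheme; the file names, per-test scoring, and time limit are hypothetical and not taken from the paper.

```python
# Sketch of black-box automated grading: one point per test case that the
# program solves within the time limit. Details are illustrative assumptions.
import subprocess

def grade(program, tests, time_limit=1.0):
    """Return the number of test cases the program solves in time."""
    passed = 0
    for input_text, expected in tests:
        try:
            result = subprocess.run(
                program, input=input_text, capture_output=True,
                text=True, timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            continue  # exceeded the time limit: no credit for this test
        if result.returncode == 0 and result.stdout.strip() == expected.strip():
            passed += 1
    return passed

# Hypothetical usage: grade a submission "sum.py" on two tests.
tests = [("2 3\n", "5"), ("10 20\n", "30")]
print(grade(["python3", "sum.py"], tests))
```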
The International Olympiad in Informatics (IOI) aspires to be a science olympiad alongside such international olympiads as those in mathematics, physics, chemistry, and biology. Informatics as a discipline is well suited to a scientific approach and offers numerous possibilities for competitions with a high scientific standing. We argue that, in its current form, the IOI fails to be scientific in the way it evaluates the work of the contestants.
In this paper, we describe the major ingredients of the IOI to guide further discussion. By presenting the results of an extensive analysis of two IOI competition tasks, we hope to create an awareness of the urgency of addressing the shortcomings. We offer some suggestions for raising the scientific quality of the IOI.
For many programming tasks we would be glad to have some kind of automatic evaluation process. For example, most programming contests use automatic evaluation of the contestants' submissions. While this approach is highly efficient, it also has drawbacks. Often the test inputs are not able to "break" all flawed submissions. In this article we show that the situation is worse than it may appear: for some programming tasks it is impossible to design good test inputs. Moreover, we discuss ways to recognize such tasks and consider other possibilities for doing the evaluation. The discussion is focused on programming contests, but the results apply to any programming task, e.g., assignments in school.
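As a generic illustration of the phenomenon this abstract describes (the example is not taken from the article itself), consider a primality-testing task. A submission based on a single base-2 Fermat test answers correctly on every prime and on most composites a test designer is likely to pick at random, yet it is fooled by base-2 pseudoprimes such as 341 = 11 * 31. A test set that happens to omit all pseudoprimes cannot break this flawed submission.

```python
# Hypothetical flawed submission: a single Fermat test with base 2.
# Correct for all primes and most composites, wrong on base-2 pseudoprimes.

def is_prime_flawed(n):
    """Fermat test with base 2."""
    if n < 2:
        return False
    if n == 2:
        return True
    return pow(2, n - 1, n) == 1

def is_prime_correct(n):
    """Trial division: slow but correct reference."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Typical-looking tests agree, but 341 = 11 * 31 exposes the flaw:
print(is_prime_flawed(341), is_prime_correct(341))  # True False
```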
Individuals vary across many dimensions due to gender-based, personality, and cultural differences. Consequently, programming contests with a limited and restrictive structure (e.g., scoring system, questioning style) are most favourable and attractive to a specific set of individuals whose characteristics best match this structure. We suggest that a more inclusive and flexible structure will allow contests to appeal to a wider range of participants by being less biased towards specific traits. Moreover, making contests more broadly appealing turns them into better post-secondary recruiting tools that can be used to attract under-represented populations to the discipline of computer science. In this paper, we focus on gender-based differences and the effect of a competition's structure on female participants.