The Lithuanian Informatics Olympiad (LitIO) is a problem-solving contest for high school students. The work of each contestant is evaluated against several criteria, each measured on its own scale (the same scale for every contestant). Several jury members are involved in the evaluation. This paper analyses the problem of how to calculate an aggregated score for a whole submission in this situation. The chosen methodology is Multiple Criteria Decision Analysis (MCDA). The outcome of the paper is a score aggregation method, developed using MCDA approaches, that is proposed for application in LitIO.
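As a concrete illustration of one common MCDA aggregation scheme, the sketch below applies simple additive weighting: each criterion score is normalized by its scale maximum and the normalized scores are combined in a weighted sum. The criteria, scales, and weights are hypothetical and are not taken from the paper, whose actual method may differ.

    public class SawAggregation {

        // Aggregates raw criterion scores into a single value on a 0..100 scale.
        // raw[i]      -- raw score on criterion i, on that criterion's own scale
        // scaleMax[i] -- maximum attainable score on criterion i
        // weights[i]  -- non-negative weight for criterion i; weights sum to 1
        static double aggregate(double[] raw, double[] scaleMax, double[] weights) {
            double total = 0.0;
            for (int i = 0; i < raw.length; i++) {
                double normalized = raw[i] / scaleMax[i]; // map each score to [0, 1]
                total += weights[i] * normalized;         // weighted sum of normalized scores
            }
            return 100.0 * total;                         // rescale to a common 0..100 range
        }

        public static void main(String[] args) {
            // Hypothetical criteria: correctness (0..100), efficiency (0..5), style (0..10).
            double[] raw      = {80.0, 3.0, 7.0};
            double[] scaleMax = {100.0, 5.0, 10.0};
            double[] weights  = {0.6, 0.25, 0.15};
            System.out.printf("Aggregated score: %.1f%n", aggregate(raw, scaleMax, weights));
            // prints: Aggregated score: 73.5
        }
    }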
The Lithuanian Informatics Olympiads (LitIO) is a problem-solving programming contest for students in secondary education. The work to be evaluated is an algorithm designed by the student and implemented as a working program. The current evaluation process involves both automated grading (for correctness and performance of programs on given input data) and manual grading (for programming style and the written motivation of an algorithm). However, it is based on tradition and has not been scientifically discussed and motivated. To create an improved and well-motivated evaluation model, we put together a questionnaire and asked a group of foreign and Lithuanian experts with experience in various informatics contests to respond. We identified two basic directions in the suggested evaluation models and made a choice based on the goals of LitIO. While designing the model presented in the paper, we reflected the suggestions and opinions of the experts as much as possible, even where they were not included in the proposed model. The paper presents the final outcome of this work: the proposed evaluation model for the Lithuanian Informatics Olympiads.
Automatic assessment of programming exercises is typically based on a testing approach. Most automatic assessment frameworks execute tests and evaluate the results automatically, but test data generation is usually not automated, even though automatic test data generation techniques and tools are available.
We have researched how the Java PathFinder software model checker can be adapted to the specific needs of test data generation in automatic assessment. The practical problems considered are how to derive test data directly from students' programs (i.e., without annotation) and how to visualize and abstract test data automatically for students. Interesting outcomes of our research are that, with minor refinements, generalized symbolic execution with lazy initialization (a test data generation algorithm implemented in PathFinder) can be used to construct test data directly from students' programs without annotation, and that intermediate results of the same algorithm can be used to provide novel visualizations of the test data.
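To illustrate how symbolic execution derives test data from a program's branch structure, the sketch below shows a toy student program together with the path condition a symbolic executor would collect on each branch; solving each condition with a constraint solver yields one concrete test input per feasible path. The example program and inputs are our own, chosen for illustration, and are not taken from the paper or from PathFinder's distribution.

    public class StudentSolution {

        // Classifies a triangle given its side lengths. Comments show the path
        // condition (PC) for each branch and one input that satisfies it -- the
        // test data a symbolic executor would derive by solving the condition.
        static String classify(int a, int b, int c) {
            if (a + b <= c || b + c <= a || a + c <= b) {
                return "not a triangle"; // PC: a+b<=c or b+c<=a or a+c<=b; e.g. (1, 1, 5)
            }
            if (a == b && b == c) {
                return "equilateral";    // PC: valid triangle and a==b==c; e.g. (2, 2, 2)
            }
            if (a == b || b == c || a == c) {
                return "isosceles";      // PC: valid, not equilateral, two sides equal; e.g. (2, 2, 3)
            }
            return "scalene";            // PC: valid, all sides distinct; e.g. (3, 4, 5)
        }

        public static void main(String[] args) {
            // The four derived inputs together cover every feasible path.
            System.out.println(classify(1, 1, 5)); // not a triangle
            System.out.println(classify(2, 2, 2)); // equilateral
            System.out.println(classify(2, 2, 3)); // isosceles
            System.out.println(classify(3, 4, 5)); // scalene
        }
    }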
Computer simulations seem to be among the most effective ways to use computers in physics education. They encourage students to carry out the processes used in physics research: to question, predict, hypothesise, observe, interpret results, and so on. Their effective use requires the availability of appropriate teaching resources that fit secondary school curricula.
This paper presents a set of computer simulations that cover the curriculum area of Mechanics and are designed to align directly with the curricula and textbooks used at Slovak grammar schools. All simulations are accompanied by brief instructions for teachers, including suggestions for learning activities and problem tasks for students. Some of them are designed as virtual laboratories.
The developed simulations were tested with a group of secondary school students and were also evaluated by groups of prospective and practising physics teachers. The paper presents and discusses findings and conclusions from both rounds of testing.
Multiple choice questions are a convenient and popular means of testing beginning students in programming courses. However, they are qualitatively different from traditional exam questions. This paper reports on a study of which types of multiple choice programming questions discriminate well on a final exam, and of how well they predict overall exam scores.