This paper examines results from a multiple-choice test given to novice programmers at twelve institutions, focusing on the annotations students made on their test papers. We found that question type affected both student performance and student annotations. Classifying student answers by question type, annotation type (tracing, elimination, other, or none), and institution, we found that tracing was most effective for one type of question and elimination for the other; overall, however, any annotation was better than none.
Interaction and feedback are key factors supporting the learning process. Therefore, many automatic assessment and feedback systems have been developed for computer science courses during the past decade. In this paper we present a new framework, TRAKLA2, for building interactive algorithm simulation exercises. Exercises constructed in TRAKLA2 are viewed as learning objects in which students manipulate conceptual visualizations of data structures in order to simulate the working of given algorithms. The framework supports randomized input values for the assignments, as well as automatic feedback on and grading of students' simulation sequences. Moreover, it supports automatic generation of model solutions as algorithm animations, and the logging of statistical data about the interaction process as students solve the exercises. The system has been used at two universities in Finland in several courses involving over 1000 students. Student response has been very positive.
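To make the grading idea concrete: a framework of this kind can execute the real algorithm on the randomized input to produce a model solution, then compare the student's simulation sequence against it step by step. The sketch below is a minimal, hypothetical illustration in Python, not TRAKLA2's actual API; the choice of insertion sort, the step granularity, and the fraction-of-matching-steps scoring rule are all assumptions made for the example.

    # Minimal sketch (not TRAKLA2's actual API) of grading a simulation
    # exercise: the student's sequence of states is compared step by step
    # against a model solution generated by running the real algorithm.
    import random

    def model_solution(values):
        """Expected sequence of states for insertion sort: one snapshot
        of the list initially and after each outer-loop step."""
        state = list(values)
        steps = [tuple(state)]
        for i in range(1, len(state)):
            key = state[i]
            j = i - 1
            while j >= 0 and state[j] > key:
                state[j + 1] = state[j]
                j -= 1
            state[j + 1] = key
            steps.append(tuple(state))
        return steps

    def grade(student_steps, values):
        """Score a submission as the fraction of steps that match
        the model solution for the same randomized input."""
        expected = model_solution(values)
        matches = sum(1 for s, e in zip(student_steps, expected)
                      if tuple(s) == e)
        return matches / len(expected)

    # Randomized input values, mirroring the randomized assignments.
    data = random.sample(range(100), 6)
    print(grade(model_solution(data), data))  # a perfect answer scores 1.0

Because the model solution is computed rather than stored, the same mechanism yields both the automatic grade and, by replaying the expected steps, an animation of the correct algorithm execution.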