There can be many reasons why students answer incorrectly on summative tests in advanced computer science courses: often the cause is a lack of prerequisites or misconceptions about topics presented in previous courses. One of the ITiCSE 2020 working groups investigated the possibility of designing assessments able to differentiate between fragilities in prerequisites (in particular, knowledge and skills related to introductory programming courses) and in advanced topics. This paper reports on an empirical evaluation of one of the instruments proposed by the ITiCSE working group, namely the one focusing on data structures. The evaluation aimed at understanding which fragile knowledge and skills the instrument is actually able to detect and to what extent it can differentiate them. Our results indicate that the instrument can distinguish some specific fragilities (e.g., value vs. reference semantics), but not all of those claimed in the original report. In addition, our findings highlight the role of skills at a level between prerequisite and advanced skills, such as program comprehension and reasoning about constraints. We also suggest ways to improve the questions in the instrument, both by improving the distractors of the multiple-choice questions and by slightly changing the content or phrasing of the questions. We argue that these improvements will increase the effectiveness of the instrument both in assessing prerequisites as a whole and in pinpointing specific fragilities.
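To illustrate the kind of fragility mentioned above (value vs. reference semantics), the following minimal Python sketch shows how a shared list is mutated through a parameter while rebinding a parameter has no effect on the caller; the snippet is an illustration of the concept only and is not taken from the instrument itself.

    def append_item(lst, n):
        lst.append(n)   # mutates the caller's list: the object is shared (reference semantics)
        n = n + 1       # rebinds only the local name: the caller's variable is unaffected

    numbers = [1, 2]
    count = 10
    append_item(numbers, count)
    print(numbers)  # [1, 2, 10] -- the shared list object was mutated
    print(count)    # 10 -- the caller's binding is unchanged

Students with a fragile grasp of this distinction typically predict either that both variables change or that neither does.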
Concurrency is often perceived as difficult by students. One reason may be that the abstractions used in concurrent programs leave more situations undefined than sequential programs do (e.g., the order in which statements are executed), which makes it harder to build a proper mental model of the execution environment. Students who try to explore these abstractions through testing are further hindered by the non-determinism of concurrent programs, since even incorrect programs may appear to work properly most of the time. In this paper we explore how students understand these abstractions by examining 137 solutions to two concurrency questions given on the final exam in two years of an introductory concurrency course. To highlight problematic areas of these abstractions, we present alternative abstractions under which each incorrect solution would be correct.
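A small sketch of the non-determinism referred to above, written in Python for consistency with the rest of this collection and not drawn from the exam questions, is the classic unsynchronized shared counter: the program usually prints the expected total, but may print less depending on how the threads are interleaved.

    import threading

    counter = 0

    def work():
        global counter
        for _ in range(100_000):
            counter += 1   # not atomic: read, add, and write may interleave with the other thread

    threads = [threading.Thread(target=work) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Often prints 200000, but lost updates can make the result smaller,
    # depending on the interpreter version and scheduling.
    print(counter)

Because the incorrect version "works" on most runs, testing alone gives students little feedback that their mental model of the execution order is wrong.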
Controlling complexity through the use of abstractions is a critical part of problem solving in programming. Thus, becoming proficient with procedural and data abstraction through the use of user-defined functions is important. Using functions properly for abstraction involves a number of other core concepts, such as parameter passing, scope, and references, which are known to be difficult. This paper therefore studies students' proficiency with these core concepts and their ability to apply procedural and data abstraction to solve problems. We collected data from two years of an introductory Python course, both from a questionnaire and from two lab assignments. The data show that students had difficulties with the core concepts and ran into a number of issues when solving problems with abstraction. We also investigate the impact of using a visualization tool when teaching the core concepts.
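As an illustration of the kind of scope difficulty at stake here, the hypothetical Python snippet below (not taken from the course material) shows a common misconception: assigning to a name inside a function makes that name local, so the module-level variable is never updated.

    balance = 100

    def deposit(amount):
        # Without a 'global' declaration, this assignment makes 'balance' a local name,
        # so the function fails instead of updating the module-level variable.
        balance = balance + amount
        return balance

    try:
        deposit(50)
    except UnboundLocalError as err:
        print(err)
    print(balance)  # 100 -- the module-level binding was never changed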