Concurrency is often perceived as difficult by students. One reason may be that the abstractions used in concurrent programs leave more situations undefined than in sequential programs (e.g., the order in which statements are executed), which makes it harder to build a proper mental model of the execution environment. Students who try to explore these abstractions through testing are further hindered by the non-determinism of concurrent programs, since even incorrect programs may seem to work properly most of the time. In this paper we explore how students understand these abstractions by examining 137 solutions to two concurrency questions given on the final exam in two years of an introductory concurrency course. To highlight problematic areas of these abstractions, we present alternative abstractions under which each incorrect solution would be correct.
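To make the testing pitfall concrete, here is a minimal sketch (ours, not taken from the exam questions) of an unsynchronized shared counter in Python: the program is incorrect under the usual thread-interleaving abstraction, yet many runs still produce the expected result.

```python
# Hypothetical illustration of non-determinism: several threads perform an
# unsynchronized read-modify-write on a shared counter. The race is real,
# but a test run may still print the expected total, hiding the bug.
import threading

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        counter += 1  # not atomic: load, add, and store can interleave

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; some runs lose updates and print less, others do not --
# which is precisely why testing alone gives students a misleading picture.
print(counter)
```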
Controlling complexity through the use of abstractions is a critical part of problem solving in programming. Becoming proficient with procedural and data abstraction through user-defined functions is therefore important. Using functions properly for abstraction involves a number of other core concepts, such as parameter passing, scope, and references, which are known to be difficult. This paper therefore studies students' proficiency with these core concepts and their ability to apply procedural and data abstraction to solve problems. We collected data from two years of an introductory Python course, both from a questionnaire and from two lab assignments. The data shows that students had difficulties with the core concepts and encountered a number of issues when solving problems with abstraction. We also investigate the impact of using a visualization tool when teaching the core concepts.
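As one illustration of why these core concepts trip students up, the following sketch (our example, not drawn from the course material) shows Python's reference semantics for parameters: mutating an argument is visible to the caller, while rebinding the parameter name is not.

```python
# Hypothetical example of parameter passing with references in Python.
def append_item(items, value):
    items.append(value)  # mutates the object the caller's name also refers to
    items = [value]      # rebinds only the local name; the caller is unaffected

xs = [1, 2]
append_item(xs, 3)
print(xs)  # [1, 2, 3] -- the mutation is visible, the rebinding is not
```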
Although there is no universal agreement that students should learn programming, many countries have reached a consensus on the need to expose K-12 students to Computational Thinking (CT). When, what, and how to teach CT in schools are open questions, and we attempt to address them by examining how well students around the world solved problems in recent Bebras challenges. We collected and analyzed performance data on Bebras tasks from 115,400 students in grades 3-12 in seven countries. Our study provides further insight into a range of questions addressed in smaller-scale inquiries, in particular the possible impact of school systems and gender on students' success rates.
In addition to analyzing the performance data of this large population, we classified the tasks in terms of CT categories, which reflect the learning implications of the challenge. Algorithms and data representation dominate the challenge, accounting for 75-90% of the tasks, while other categories such as abstraction, parallelization, and problem decomposition are sometimes represented by only one or two questions per age group. This classification can serve as a starting point for using online Bebras tasks to support the effective learning of CT concepts in the classroom.
In this paper, we analyze the errors novice students make when developing invariant-based programs. In addition to presenting the general error types, we also look at what students find difficult when expressing invariants. The results indicate that an introductory course utilizing the invariant-based approach is suitable from the very beginning of university studies in CS without being "too advanced". Although inventing the invariant was not found to be trivial, the main difficulty novices face when applying a correct-by-construction approach to program development seems to be weak skills in translating intuitive, informal statements into symbolic form using logical notation in general and quantifiers in particular.
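To illustrate the kind of translation involved (our example, not one of the study's exercises), consider a loop that finds the maximum m of an array a[0..n) with n >= 1, initialized with i = 1 and m = a[0]. The informal invariant "m is the largest element seen so far" becomes, in quantifier notation:

```latex
% Informal: "m is the largest element among a[0], ..., a[i-1]"
1 \le i \le n
  \;\wedge\; (\forall k \,.\, 0 \le k < i \Rightarrow a[k] \le m)
  \;\wedge\; (\exists k \,.\, 0 \le k < i \wedge a[k] = m)
```

The universal conjunct states that no scanned element exceeds m, and the existential conjunct that m was actually seen; this step from intuition to symbols is exactly where the study found novices struggling.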
This paper presents an approach for educators to evaluate student progress throughout a course, rather than merely on a final exam. We introduce progress reports and describe how they can be used as a tool to evaluate student learning and understanding during programming courses. Complemented with data from surveys and the exam, the progress reports can be used to build an overall picture of individual student progress in a course and to answer questions related to how students (1) understand program code as a whole, (2) understand individual constructs, and (3) perceive the difficulty level of different programming topics. We also present results from using this approach in introductory programming courses at the secondary level. Our initial experience with the progress reports is positive, as they provide valuable information during the course that would otherwise most likely remain hidden.