Spark, one of the products offered by MyQ (formerly Plethora), is a game-based platform designed to introduce students to the foundational concepts of computer science. By working through logical challenges, users explore topics such as abstraction, loops, and graph patterns. Spark sets itself apart from its counterparts with an innovative formal language and a rich set of features. Unlike traditional platforms, it emphasizes computational problem solving over programming syntax, making it accessible to learners at all levels. With progressively challenging levels and an intuitive graphical interface, students engage in problem solving, content creation, and collaboration within the MyQ community. Spark also makes it less likely that students will use generative AI (GAI) to solve challenges, sparing teachers the difficulty of assessing work that may have been produced with GAI.
In this paper, we examine Spark, its functionality, the challenges it addresses, its strengths and limitations, and its future directions.
This paper presents first experiences with the use of an online open-source repository of programming exercises. The repository is independent of any specific teaching approach. Students can search for and select an exercise that practices the programming concepts they want to train and that uses only the concepts they already know. They can then submit their solutions and receive automatic feedback from the system. We analyzed quantitatively how students used the system by inspecting their logged actions. We also carried out a qualitative analysis through interviews, to find out how students appreciated the use of the repository and to gather feedback for improvements. We focused on how students select exercises, since finding an exercise that fulfills a student's training needs is the innovative part of our repository.
Concurrency is a complex topic to learn that is becoming ever more relevant, and many undergraduate Computer Science curricula now introduce it in introductory programming courses. This paper investigates the combined use of Sonic Pi and Team-Based Learning to mitigate the difficulties of early exposure to concurrency. Sonic Pi, a domain-specific music language, provides strong support for “playing” with concurrency and “hearing” common problems such as data races and lack of synchronization among concurrent threads. More specifically, the paper focuses on students’ misconceptions about concurrency in Sonic Pi and compares them to those arising in traditional concurrent programming languages. In addition, it preliminarily explores knowledge transfer from Sonic Pi to C/C++. The approach has been applied in two teaching experiments with undergraduate students at our university, involving 184 participants. Our investigations highlight the need to address misconceptions through targeted interventions to reach a clear understanding of concurrent programming concepts. Sonic Pi’s simplified abstraction and domain-specific flavor have proven effective, especially for first-year students.
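To make the kind of problem mentioned above concrete, the following minimal sketch shows a classic data race caused by missing synchronization. It is written in Java purely for illustration (the paper's own material uses Sonic Pi and C/C++), and the class and field names are ours, not taken from the paper.

```java
// Illustrative only: a minimal data race analogous to the unsynchronized
// concurrent threads discussed above. Class and field names are hypothetical.
public class RaceSketch {
    private static int counter = 0;            // shared mutable state

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;                     // read-modify-write without synchronization
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but the unsynchronized increments typically lose updates.
        System.out.println("counter = " + counter);
    }
}
```

Running the sketch repeatedly usually prints different values below 200000, which is the observable symptom of the race.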
Even though working with data is as important as coding for understanding and dealing with complex problems across multiple fields, it has received very little attention in the context of Computational Thinking. This paper discusses an approach for bridging the gap between Computational Thinking and Data Science by employing and studying classification as a higher-order thinking process that connects the two. To achieve this, we designed and developed an online constructionist gaming tool called SorBET, which integrates coding and database design, enabling students to interpret, organize, and analyze data through game play and game design. The paper presents and discusses the results of a pilot study that aimed to investigate the data practices secondary students develop through playing and modifying SorBET games, and to determine the impact of game modding on students’ critical engagement with CT. According to the results, students developed and used data practices such as data interpretation and data model design to become better players or to design an interesting classification game. Moreover, the game-modding process motivated students to question the original games’ content, leading them to develop a critical stance towards the games’ data model and representations.
There can be many reasons why students answer incorrectly on summative tests in advanced computer science courses: often the cause is a lack of prerequisites or misconceptions about topics presented in previous courses. One of the ITiCSE 2020 working groups investigated the possibility of designing assessments suitable for differentiating between fragilities in prerequisites (in particular, knowledge and skills related to introductory programming courses) and in advanced topics. This paper reports on an empirical evaluation of an instrument focusing on data structures, among those proposed by the ITiCSE working group. The evaluation aimed at understanding which fragile knowledge and skills the instrument is actually able to detect and to what extent it is able to differentiate them. Our results indicate that the instrument can distinguish between some specific fragilities (e.g., value vs. reference semantics), but not all of those claimed in the original report. In addition, our findings highlight the role of relevant skills at a level between prerequisite and advanced skills, such as program comprehension and reasoning about constraints. We also suggest ways to improve the questions in the instrument, both by improving the distractors of the multiple-choice questions and by slightly changing the content or phrasing of the questions. We argue that these improvements will increase the effectiveness of the instrument both in assessing prerequisites as a whole and in pinpointing specific fragilities.
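As a concrete illustration of the value vs. reference semantics fragility mentioned above, the following short Java sketch (ours, not drawn from the instrument) contrasts copying a primitive value with aliasing an object:

```java
// Illustrative only: value vs. reference semantics, one of the fragilities
// the instrument detects. Variable names are ours, not from the instrument.
public class SemanticsSketch {
    public static void main(String[] args) {
        int a = 1;
        int b = a;                     // value semantics: b receives a copy of a
        b = 2;
        System.out.println(a);         // prints 1: a is unaffected

        int[] xs = {1, 2, 3};
        int[] ys = xs;                 // reference semantics: ys aliases the same array
        ys[0] = 99;
        System.out.println(xs[0]);     // prints 99: the change is visible through xs
    }
}
```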
Nowadays, few professionals understand the techniques and testing criteria needed to systematize software testing activities in the software industry. To shed some light on these problems and to promote software testing, professors in the area have established Massive Open Online Courses as educational initiatives. However, their main limitation is the lack of professor supervision of students. A conversational agent called TOB-STT has been proposed to mitigate this problem. A previous study introduced TOB-STT but did not analyze its efficacy. This article reports a controlled experiment that analyzed its efficacy and revealed that, in its current version, it was not substantial. We therefore conducted an in-depth analysis to find the causes of this result and provide a detailed discussion. The findings contribute to TOB-STT, since the experimental results show that improvements need to be made to the conversational agent before it is used in Massive Open Online Courses.
Teaching algorithmic thinking enables students to apply their knowledge in various contexts and to reuse existing solutions to algorithmic problems. The aim of this study is to examine how students recognize which algorithmic concepts can be used in a new situation. We developed a card-sorting task and investigated the ways in which secondary school students arranged algorithmic problems (Bebras tasks) into groups using the underlying algorithm as the criterion. Furthermore, we examined the students’ explanations for their groupings. The results of this qualitative study indicate that students may recognize underlying algorithmic concepts directly or by identifying similarities with a previously solved problem; direct recognition, however, was more successful. Our findings also identify factors that play a role in students’ recognition of algorithmic concepts, such as the degree of similarity to problems discussed during lessons. Our study highlights the importance of teaching students how to recognize the structure of algorithmic problems.
This paper analyses Bebras Challenge tasks to find Informatics tasks that develop abstract thinking. Our study seeks to determine which Bebras tasks develop abstraction and in what way. We analysed hundreds of tasks from the Czech contest to identify those requiring participants to abstract directly or to use abstract structures. The results show that agreement among experts on which tasks focus on abstraction is only moderate. We found that tasks focused on abstraction occur four to five times less frequently in sets of contest tasks than algorithmic tasks. Our findings also showed that contestants’ results on abstraction tasks did not differ from their results on algorithmic tasks in any age or gender group.
When we “think like a computer scientist,” we are able to systematically solve problems in different fields, create software applications that support various needs, and design artefacts that model complex systems. Abstraction is a soft skill embedded in all those endeavours and a main cornerstone of computational thinking. Our overview of abstraction is intended to be not so much systematic as thought-provoking, inviting the reader to (re)think abstraction from different – and perhaps unusual – perspectives. After presenting a range of its characterisations, we explore abstraction from a cognitive point of view. We then discuss the role of abstraction in a range of computer science areas, including whether and how abstraction is taught. Although it is impossible to capture the essence of abstraction in one sentence, one section or a single paper, we hope our insights into abstraction may help computer science educators to better understand, model and even dare to teach abstraction skills.
Object-oriented programming distinguishes between instance attributes and methods and class attributes and methods, the latter annotated with the static modifier. Novices encounter difficulty understanding the meaning and implications of static attributes and methods. The paper has two outcomes: (a) a detailed classification of aspects of understanding static, and (b) a collection of questions designed to serve as a learning/practice/diagnostic tool addressing those aspects. Providing answers requires learners to apply higher-order cognitive skills and, hence, to advance their understanding of the essential meaning of the concept. Each question is analyzed according to three characteristics: (a) the static aspects the question examines, according to the detailed classification the paper provides; (b) the question’s classification according to Bloom’s revised taxonomy and the Structure of Observed Learning Outcome (SOLO) taxonomy; and (c) the problem-solving keywords used in the question’s formulation. Several recommendations for teaching are presented.
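To make the instance/static distinction discussed above concrete, the following minimal Java sketch (ours, not one of the paper's questions; class and member names are hypothetical) contrasts an instance attribute and method with a class attribute and method:

```java
// Illustrative only: instance state belongs to each object, static state to the class.
public class Counter {
    private static int totalCreated = 0;   // class attribute: shared by all instances
    private int value = 0;                 // instance attribute: one per object

    public Counter() {
        totalCreated++;                    // every constructor call updates the shared field
    }

    public void increment() {              // instance method: requires a receiver object
        value++;
    }

    public static int getTotalCreated() {  // static method: invoked on the class itself
        return totalCreated;
    }

    public static void main(String[] args) {
        Counter c1 = new Counter();
        Counter c2 = new Counter();
        c1.increment();
        System.out.println(c1.value);                  // 1: only c1's instance state changed
        System.out.println(c2.value);                  // 0: c2 has its own value field
        System.out.println(Counter.getTotalCreated()); // 2: class-level state counts both objects
    }
}
```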