The article describes a study carried out with pupils aged 12–13 who had no prior programming experience. The study examined how they learn to use loops with a fixed number of repetitions. Pupils were given a set of programming tasks to solve in a block-based visual programming environment, without any preparatory or accompanying instruction or explanation. The pupils' programs were analyzed to identify possible misconceptions and the factors influencing them. Four misconceptions concerning the loop concept and the repeat command were detected. Some of these misconceptions were found to affect how often a pupil asked the computer to check the correctness of his or her program. Some of the changes made to the tasks affected the frequency of these misconceptions and may be factors influencing them. Teachers and course book writers can use the results of our research to create an appropriate curriculum, one that helps pupils recognize and subsequently overcome misconceptions that could prevent a correct understanding of the concepts being formed.
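For readers unfamiliar with the construct under study, the following minimal Python sketch shows the kind of fixed-repetition loop that a block-based "repeat 4" block expresses. It is not taken from the study's tasks, and the command name is a hypothetical stand-in for one of the pupils' blocks.

    def step_forward():
        # Hypothetical stand-in for a single command block used in the pupils' tasks.
        print("step")

    # Fixed-repetition loop: the body runs exactly four times,
    # mirroring a block-based "repeat 4" block enclosing one command.
    for _ in range(4):
        step_forward()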
Creativity has emerged as an important 21st-century competency. Although it is traditionally associated with arts and literature, it can also be developed as part of computing education. This article therefore presents a systematic mapping of approaches for assessing creativity based on the analysis of computer programs created by students. As a result, only ten approaches, reported in eleven articles, were identified. These reveal the absence of a commonly accepted definition of product creativity customized to computing education, confirming only originality as a well-established characteristic. Several approaches seem to lack clearly defined criteria for effective, efficient and useful creativity assessment. Diverse techniques are used, including rubrics, mathematical models and machine learning, supporting both manual and automated assessment. Few of the approaches have been comprehensively evaluated with respect to reliability and validity. These results can help instructors choose and adopt assessment approaches, and can guide researchers by pointing out shortcomings.
Coding and computational thinking have recently become compulsory skills in many school systems globally. Teaching these new skills presents a challenge for many teachers. A notable example of professional development designed using Constructionist principles to address this challenge is ScratchEd. Upon reflecting on her experiences designing and running ScratchEd, Karen Brennan identified five tensions faced by professional development providers, and proposed that these tensions could be used for scrutinising and critiquing professional development. In this paper we analyse, through the lens of Brennan's tensions, the process we have followed to design, evaluate and improve professional development. We argue that while we have experienced the same tensions, the extent to which we assess learning is a new tension that extends those identified by Brennan. There are strong reasons to assess teachers' knowledge; however, quantitative measures of learning could be at odds with Constructionism: as Papert argued in Mindstorms, constructionist educators should study their learning environments as anthropologists. Consequently, we have called this new tension the tension between anthropology and assessment.
The development of computational thinking is a major topic in K-12 education, and many of the associated classroom experiences focus on teaching programming using block-based languages. As part of these activities, it is important for students to receive feedback on their assignments. Yet, in practice, it may be difficult to provide personalized, objective and consistent feedback. In this context, automatic assessment and grading have become important. While diverse graders exist for text-based languages, support for block-based programming languages is still scarce. This article presents CodeMaster, a free web application that, in a problem-based learning context, automatically assesses and grades projects programmed with App Inventor and Snap!. It uses a computational thinking rubric applied through static code analysis. Students can use the tool to obtain feedback that encourages them to improve their programming competencies, and teachers can use it to assess whole classes, easing their workload.
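The abstract does not detail CodeMaster's rubric, so the following is only a minimal sketch of rubric-based static analysis under assumed criteria: the block-type names, categories and 0/1/2 point scale below are illustrative inventions, not the tool's actual rubric.

    # Hypothetical, simplified rubric for scoring a block-based project from a
    # flat list of its block-type names; categories, names and the point scale
    # are assumptions for illustration only.
    RUBRIC = {
        "loops": {"controls_repeat", "controls_forEach"},
        "conditionals": {"controls_if", "controls_ifElse"},
        "variables": {"variables_set", "variables_get"},
    }

    def score_project(block_types):
        """Award 0 points if a category is absent, 1 if used once, 2 if used more."""
        scores = {}
        for category, names in RUBRIC.items():
            count = sum(1 for b in block_types if b in names)
            scores[category] = min(count, 2)
        scores["total"] = sum(scores.values())
        return scores

    # Example: a project using one repeat loop, two conditionals and one variable assignment.
    print(score_project(["controls_repeat", "controls_if", "controls_if", "variables_set"]))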
The goal of this literature study is to give preliminary answers to the questions that aim to uncover the Pedagogical Content Knowledge (PCK) of Informatics Education, with a focus on programming. PCK has been defined as the knowledge that allows teachers to transform their knowledge of the subject into something accessible to their students. The core questions to uncover this knowledge are: what are the reasons to teach programming; which concepts do we need to teach programming; what are the most common difficulties and misconceptions students encounter while learning to program; and how should this topic be taught. Some of the answers found are, respectively: enhancing students' problem-solving skills; programming knowledge and programming strategies; general problems of orientation; and possible ideal chains for learning computer programming. Because the answers to the four questions are largely disconnected from one another, and because PCK is still an unexplored field in Informatics Education, research-based efforts are needed to study this field.