The main goal of this research is to improve the understanding of quality criteria for DB metadata for assessment and recognition as factors that increase their value in higher education (HE). To attain this goal, a case study approach centered on one HE institution was used, aiming (a) to analyse the status quo of the description metadata of DBs issued by the HE institution in order to identify the value of DBs in terms of assessment and recognition procedures, and (b) to propose a list of quality criteria for DB description metadata on the basis of academic research and expert interview results. The results demonstrate that, in the institution under study, these criteria are absent from most DB descriptions because teachers do not provide them. Distinct assessment and recognition criteria are an important quality factor for DBs to become valid and valued digital credentials in HE.
Creativity has emerged as an important 21st-century competency. Although it is traditionally associated with arts and literature, it can also be developed as part of computing education. Therefore, this article presents a systematic mapping of approaches for assessing creativity based on the analysis of computer programs created by students. As a result, only ten approaches reported in eleven articles were identified. These reveal the absence of a commonly accepted definition of product creativity tailored to computing education, confirming only originality as a well-established characteristic. Several approaches seem to lack clearly defined criteria for effective, efficient and useful creativity assessment. Diverse techniques are used, including rubrics, mathematical models and machine learning, supporting both manual and automated approaches. Few of the approaches were comprehensively evaluated with respect to their reliability and validity. These results can help instructors choose and adopt assessment approaches and guide researchers by pointing out shortcomings.
Computational thinking (CT) has been introduced in primary schools worldwide. However, rich classroom-based evidence and research on how to assess and support students’ CT through programming are particularly scarce. This empirical study investigates 4th grade students’ (N = 57) CT in a comparatively comprehensive and fine-grained manner by assessing their Scratch projects (N = 325) with a framework revised from previous studies with the aim of enhancing CT. The results demonstrate in detail the various coding patterns and code constructs the students programmed across their projects throughout a programming course and the extent to which they had conceptual encounters with CT. Notably, the projects reflected CT in diverse ways, and, taken together, the students encountered dissimilar areas of CT. To target the acquisition of CT broadly, manifold programming activities need to be introduced in the classroom. Furthermore, we discuss the possibilities of applying the assessment framework employed herein to support CT education through Scratch in classrooms.
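To illustrate what counting coding patterns and code constructs in Scratch projects can look like in practice, the following is a minimal Python sketch. It assumes the Scratch 3 project.json format (a "targets" list whose entries hold a "blocks" dictionary keyed by block id, each block carrying an "opcode" field); the construct categories and the opcode-to-category mapping below are simplified assumptions for illustration and are not the assessment framework used in the study.

```python
import json
from collections import Counter

# Illustrative mapping from Scratch 3 opcode prefixes to coarse construct
# categories (assumed for this sketch, not the study's framework).
CATEGORIES = {
    "control_repeat": "loops",
    "control_forever": "loops",
    "control_if": "conditionals",
    "event_when": "events",
    "data_setvariableto": "variables",
    "data_changevariableby": "variables",
    "operator_": "operators",
}

def categorize(opcode: str) -> str:
    """Map a block opcode to a coarse construct category."""
    for prefix, category in CATEGORIES.items():
        if opcode.startswith(prefix):
            return category
    return "other"

def count_constructs(project_json_path: str) -> Counter:
    """Count construct categories used across all sprites of a Scratch 3 project."""
    with open(project_json_path, encoding="utf-8") as f:
        project = json.load(f)
    counts = Counter()
    for target in project.get("targets", []):      # stage and sprites
        for block in target.get("blocks", {}).values():
            if isinstance(block, dict) and "opcode" in block:
                counts[categorize(block["opcode"])] += 1
    return counts

if __name__ == "__main__":
    print(count_constructs("project.json"))
```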
As computing has become an integral part of our world, demand for teaching computational thinking in K-12 has increased. One of its basic competences is programming, which is often taught through learning activities without a predefined solution using block-based visual programming languages. Automatic assessment tools can support teachers with assessment and grading as well as guide students throughout their learning process. Although such tools are already widely used in higher education, it remains unclear whether comparable approaches exist for K-12 computing education. Thus, in order to obtain an overview, we performed a systematic mapping study. We identified 14 approaches, which focus on analysing the code created by the students to infer computational thinking competencies related to algorithms and programming. However, an evident lack of consensus on assessment criteria and instructional feedback indicates the need for further research to support a wide application of computing education in K-12 schools.
The development of computational thinking is a major topic in K-12 education. Many of these educational experiences focus on teaching programming using block-based languages. As part of these activities, it is important for students to receive feedback on their assignments. Yet, in practice it may be difficult to provide personalized, objective and consistent feedback. In this context, automatic assessment and grading has become important. While diverse graders exist for text-based languages, support for block-based programming languages is still scarce. This article presents CodeMaster, a free web application that, in a problem-based learning context, automatically assesses and grades projects programmed with App Inventor and Snap!. It uses a rubric that measures computational thinking based on static code analysis. Students can use the tool to obtain feedback that encourages them to improve their programming competencies. Teachers can also use it to assess whole classes, easing their workload.
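To make the idea of rubric-based grading from static code analysis concrete, here is a minimal Python sketch. The dimension names, feature counts, and thresholds are assumptions chosen for illustration; they do not reproduce CodeMaster's published rubric or its App Inventor/Snap! analysis.

```python
from dataclasses import dataclass

# Hypothetical evidence extracted from a project by static code analysis.
@dataclass
class ProjectFeatures:
    loops: int = 0
    conditionals: int = 0
    variables: int = 0
    procedures: int = 0   # user-defined blocks

def score_dimension(count: int, thresholds=(1, 3, 6)) -> int:
    """Score a CT dimension 0-3 depending on how much evidence the project shows."""
    return sum(count >= t for t in thresholds)

def grade(features: ProjectFeatures) -> dict:
    """Score assumed CT dimensions and derive an overall 0-10 grade."""
    scores = {
        "flow_control": score_dimension(features.loops),
        "logic": score_dimension(features.conditionals),
        "data_representation": score_dimension(features.variables),
        "abstraction": score_dimension(features.procedures),
    }
    max_total = 3 * len(scores)
    scores["grade_0_to_10"] = round(10 * sum(scores.values()) / max_total, 1)
    return scores

print(grade(ProjectFeatures(loops=4, conditionals=2, variables=1, procedures=0)))
```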
In this article we report on a study assessing Dutch teachers' Pedagogical Content Knowledge (PCK), with a special focus on programming as a topic in secondary school Informatics education. For this research, we developed an online research instrument: the Online Teacher PCK Analyser (OTPA). The results show that Dutch teachers' PCK scores between low and medium. We also enquired whether there is any relation between teachers' PCK and the textbooks they use by comparing the results of this study with those of a previous one in which the PCK of textbooks was assessed. The results show that there is no strong relation. Finally, we looked for trends between teachers' PCK and their educational backgrounds, as most Dutch teachers have a background other than Informatics. The results show that in this case, too, there is no strong relation.