Critical thinking is a fundamental skill for 21st-century citizens; it should be promoted from elementary school onward and developed in computing education. However, assessing the development of critical thinking in educational contexts presents unique challenges. In this study, a systematic mapping was carried out to investigate how to assess the development of critical thinking, or some of its skills, in K-12 computing teaching. The results indicate that primary studies on the development of critical thinking in K-12 computing education are concentrated in Asian countries and focus mainly on teaching concepts such as algorithms and programming. Moreover, the studies do not assess a fixed set of critical thinking skills; instead, skills are selected according to specific teaching and research needs. Most of the studies adopted student self-assessment using instruments that are well established in the literature for assessing critical thinking. Many studies evaluated the quality of the instruments they used, obtaining favorable results and demonstrating consistency. However, the research points to a need for more diversity in assessment methods beyond student self-assessment. The findings suggest a need for more comprehensive and diverse critical thinking assessments in K-12 computing education, covering different educational stages and computing education concepts. This research aims to guide educators and researchers in developing more effective critical thinking assessments for K-12 computing education.
The insertion of Machine Learning (ML) into everyday life demonstrates the importance of popularizing an understanding of ML already at school. Accompanying this trend is the need to assess students’ learning. Yet, so far, few assessments have been proposed, and most lack an evaluation. Therefore, we evaluate the reliability and validity of an automated assessment of students’ learning of an image classification model created as a learning outcome of the “ML for All!” course. Results based on data collected from 240 students indicate that the assessment can be considered reliable (omega coefficient ω = 0.834; Cronbach's alpha α = 0.83). We also identified moderate to strong convergent and discriminant validity based on the polychoric correlation matrix. Factor analyses indicate two underlying factors, “Data Management and Model Training” and “Performance Interpretation”, that complement each other. These results can guide the improvement of assessments, as well as decisions on applying this model to support ML education as part of a comprehensive assessment.
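For readers unfamiliar with the reliability coefficient reported above, the following is a minimal sketch of how Cronbach's alpha is conventionally computed, assuming item scores arranged in a students-by-items matrix; it illustrates the standard formula only and is not the authors' analysis code.

    import numpy as np

    def cronbach_alpha(scores):
        # Cronbach's alpha for an (n_students x n_items) score matrix.
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items
        item_var = scores.var(axis=0, ddof=1)        # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1 - item_var.sum() / total_var)

    # Hypothetical usage: 240 students answering 10 items scored 0-3.
    rng = np.random.default_rng(42)
    print(cronbach_alpha(rng.integers(0, 4, size=(240, 10))))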
This study aims to provide a deeper understanding of Bebras tasks, one type of unplugged computational thinking (CT) activity, in terms of age level, task category, and CT skills. An explanatory sequential mixed-methods design was adopted to collect data addressing the research questions. The participants were 113,653 school students from different age levels. Anonymized data were collected electronically from the 2019 Bebras challenge in Turkey. Factor analysis was employed to examine construct validity, i.e., how accurately the tool measured the abstract psychological characteristics of the participants. In addition, the item discrimination index was calculated to measure how well the items in the challenge discriminated between high- and low-performing students. Qualitative data gathered through the national Bebras workshop were analysed using content analysis. The findings highlight some interesting points about the implications of the Bebras challenge for Turkey, which are discussed in detail. Furthermore, common problems of Bebras tasks are identified and possible suggestions for improvement are listed.
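The item discrimination index mentioned above is commonly computed with the upper-lower groups method; the sketch below illustrates that conventional method under assumed dichotomous (0/1) response data and is not the study's actual computation. The 27% split is the classical choice in item analysis, and values above roughly 0.3 are conventionally read as acceptable discrimination.

    import numpy as np

    def discrimination_index(responses, group_frac=0.27):
        # Upper-lower discrimination index per item, given an
        # (n_students x n_items) 0/1 correctness matrix.
        responses = np.asarray(responses, dtype=float)
        order = np.argsort(responses.sum(axis=1))    # sort students by total score
        n = max(1, int(round(group_frac * len(order))))
        lower = responses[order[:n]]                 # weakest group_frac of students
        upper = responses[order[-n:]]                # strongest group_frac of students
        return upper.mean(axis=0) - lower.mean(axis=0)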
Although Machine Learning (ML) is already used in our daily lives, few are familiar with the technology. This poses new challenges for students to understand ML, its potential, and its limitations, as well as to empower them to become creators of intelligent solutions. To effectively guide the learning of ML, this article proposes a scoring rubric for the performance-based assessment of the learning of concepts and practices regarding image classification with artificial neural networks in K-12. The assessment is based on the examination of student-created artifacts as part of open-ended applications in the Use stage of the Use-Modify-Create cycle. An initial evaluation of the scoring rubric through an expert panel demonstrates its internal consistency as well as its correctness and relevance. Providing a first step for the assessment of image recognition concepts, the results may support the progress of learning ML by providing feedback to students and teachers.
In today’s society, creativity plays a key role, emphasizing the importance of its development in K-12 education. Computing education may be an alternative for students to extend their creativity by solving problems and creating computational artifacts. Yet, there is little systematic evidence available to support this claim, partly due to the lack of assessment models. This article presents SCORE, a model for the assessment of creativity in the context of computing education in K-12. Based on a mapping study, the model and a self-assessment questionnaire were systematically developed. The evaluation, based on 76 responses from K-12 students, indicates high internal reliability (Cronbach’s alpha α = 0.961) and confirms the validity of the instrument, suggesting only the exclusion of three items that do not seem to measure the construct. As such, the model represents a first step toward the systematic improvement of teaching creativity as part of computing education.
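Items that "do not seem to measure the construct" are typically flagged via corrected item-total correlations; the following minimal sketch illustrates that conventional screening step under the same assumed students-by-items layout, and is not the SCORE evaluation code.

    import numpy as np

    def corrected_item_total(scores):
        # Correlate each item with the sum of the remaining items; values
        # below ~0.3 often flag items that may not measure the same
        # construct as the rest of the questionnaire.
        scores = np.asarray(scores, dtype=float)
        totals = scores.sum(axis=1)
        return np.array([
            np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
            for j in range(scores.shape[1])
        ])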
Although Machine Learning (ML) has already become part of our daily lives, few are familiar with this technology. Thus, in order to help students understand ML, its potential, and its limitations, and to empower them to become creators of intelligent solutions, diverse courses for teaching ML in K-12 have emerged. Yet, a question less considered is how to assess the learning of ML. Therefore, we performed a systematic mapping, identifying 27 instructional units that also present a quantitative assessment of the students’ learning. Assessments range from simple quizzes to performance-based assessments, covering basic ML concepts and approaches and, in some cases, ethical issues and the impact of ML, mostly at lower cognitive levels. Feedback is mostly limited to indicating the correctness of the answers, and only a few assessments are automated. These results indicate a need for more rigorous and comprehensive research in this area.
The main goal of this research is to enhance the understanding of quality criteria for digital badge (DB) metadata for assessment and recognition as factors increasing their value in higher education (HE). To attain this goal, a case study approach centered on one HE institution was used, aiming (a) to analyse the status quo of the metadata descriptions of DBs issued by the HE institution in order to identify the value of DBs in terms of assessment and recognition procedures, and (b) to propose a list of quality criteria for DB description metadata on the basis of academic research and expert interview results. The results demonstrate that, in the institution under study, these criteria are absent from most DB descriptions, as teachers do not provide them. Distinct assessment and recognition criteria are an important quality factor for DBs to become valid and valued digital credentials in HE.
Creativity has emerged as an important 21st-century competency. Although it is traditionally associated with arts and literature, it can also be developed as part of computing education. Therefore, this article presents a systematic mapping of approaches for assessing creativity based on the analysis of computer programs created by students. As a result, only ten approaches, reported in eleven articles, were found. These reveal the absence of a commonly accepted definition of product creativity customized to computing education, confirming only originality as one of the well-established characteristics. Several approaches seem to lack clearly defined criteria for effective, efficient, and useful creativity assessment. Diverse techniques are used, including rubrics, mathematical models, and machine learning, supporting both manual and automated approaches. Few performed a comprehensive evaluation of the proposed approach regarding its reliability and validity. These results can help instructors choose and adopt assessment approaches and can guide researchers by pointing out shortcomings.
Computational thinking (CT) has been introduced in primary schools worldwide. However, rich classroom-based evidence and research on how to assess and support students’ CT through programming are particularly scarce. This empirical study investigates 4th-grade students’ (N = 57) CT in a comparatively comprehensive and fine-grained manner by assessing their Scratch projects (N = 325) with a framework revised from previous studies with the aim of enhancing CT. The results demonstrate in detail the various coding patterns and code constructs the students programmed in assorted projects throughout a programming course and the extent to which they had conceptual encounters with CT. Notably, the projects indicated CT in diverse ways, and different students encountered different areas of CT. To target the acquisition of CT broadly, manifold programming activities need to be introduced in the classroom. Furthermore, we discuss the possibilities of applying the assessment framework employed herein to support CT education through Scratch in classrooms.
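Although the study does not publish its analysis code, code-construct evidence of the kind described above can be extracted programmatically from Scratch 3.0 projects: an .sb3 file is a zip archive containing a project.json in which every block carries a category-prefixed opcode (e.g. control_repeat). The following is a minimal counting sketch, not the framework itself.

    import json
    import zipfile
    from collections import Counter

    def count_block_categories(sb3_path):
        # Count blocks per category (control, event, operator, ...) in a
        # Scratch 3.0 project file.
        with zipfile.ZipFile(sb3_path) as archive:
            project = json.loads(archive.read("project.json"))
        counts = Counter()
        for target in project["targets"]:        # the stage and all sprites
            for block in target["blocks"].values():
                if isinstance(block, dict):      # arrays are bare variable reporters
                    counts[block["opcode"].split("_")[0]] += 1
        return counts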
As computing has become an integral part of our world, demand for teaching computational thinking in K-12 has increased. One of its basic competences is programming, often taught through learning activities without a predefined solution using block-based visual programming languages. Automatic assessment tools can support teachers with assessment and grading as well as guide students throughout their learning process. Although such tools are already widely used in higher education, it remains unclear whether comparable approaches exist for K-12 computing education. Thus, in order to obtain an overview, we performed a systematic mapping study. We identified 14 approaches, focusing on the analysis of the code created by the students to infer computational thinking competencies related to algorithms and programming. However, an evident lack of consensus on assessment criteria and instructional feedback indicates the need for further research to support a wide application of computing education in K-12 schools.
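As a purely hypothetical illustration of how such tools turn code analysis into competency judgments (in the spirit of rubric-based graders such as Dr. Scratch), construct counts like those from the sketch above could be thresholded into per-dimension proficiency levels; the dimensions and cut-offs below are invented for illustration and are not taken from any of the 14 mapped approaches.

    def score_dimension(count, thresholds=(1, 3, 6)):
        # Map a construct count onto a 0-3 proficiency level;
        # the thresholds are hypothetical.
        return sum(count >= t for t in thresholds)

    # Hypothetical usage with counts from count_block_categories():
    counts = {"control": 7, "event": 2, "operator": 0}
    print({dim: score_dimension(n) for dim, n in counts.items()})
    # -> {'control': 3, 'event': 1, 'operator': 0}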