The growing presence of Machine Learning (ML) in everyday life demonstrates the importance of popularizing an understanding of ML already in school. Accompanying this trend arises the need to assess students’ learning. Yet, so far, few assessments have been proposed, and most lack an evaluation. Therefore, we evaluate the reliability and validity of an automated assessment of students’ learning based on an image classification model created as a learning outcome of the “ML for All!” course. Results based on data collected from 240 students indicate that the assessment can be considered reliable (omega coefficient ω = 0.834; Cronbach’s alpha α = 0.83). We also identified moderate to strong convergent and discriminant validity based on the polychoric correlation matrix. Factor analyses indicate two underlying factors, “Data Management and Model Training” and “Performance Interpretation”, complementing each other. These results can guide the improvement of assessments, as well as decisions on applying this model to support ML education as part of a comprehensive assessment.
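For reference, the reliability coefficients reported above follow their standard definitions; the sketch below assumes an instrument with $k$ items, item variances $\sigma_i^2$, total score variance $\sigma_X^2$, and, for omega, standardized factor loadings $\lambda_i$ and error variances $\theta_i$ from a single-factor model:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right),
\qquad
\omega = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\theta_i}.
\]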
In today’s society, creativity plays a key role, emphasizing the importance of its development in K-12 education. Computing education may be an alternative for students to extend their creativity by solving problems and creating computational artifacts. Yet, there is little systematic evidence available to support this claim, partly due to the lack of assessment models. This article presents SCORE, a model for the assessment of creativity in the context of computing education in K-12. Based on a mapping study, the model and a self-assessment questionnaire were systematically developed. The evaluation, based on 76 responses from K-12 students, indicates high internal reliability (Cronbach’s alpha α = 0.961) and confirms the validity of the instrument, suggesting only the exclusion of three items that do not seem to measure the concept. As such, the model represents a first step toward the systematic improvement of teaching creativity as part of computing education.
Although Machine Learning (ML) has already become part of our daily lives, few are familiar with this technology. Thus, in order to help students understand ML, its potential, and its limitations, and to empower them to become creators of intelligent solutions, diverse courses for teaching ML in K-12 have emerged. Yet, a question less considered is how to assess the learning of ML. Therefore, we performed a systematic mapping study, identifying 27 instructional units that also present a quantitative assessment of students’ learning. The assessments range from simple quizzes to performance-based assessments, assessing the learning of basic ML concepts and approaches and, in some cases, ethical issues and the impact of ML, mostly at lower cognitive levels. Feedback is mostly limited to indicating the correctness of the answers, and only a few assessments are automated. These results indicate a need for more rigorous and comprehensive research in this area.
Computational thinking (CT) has been introduced in primary schools worldwide. However, rich classroom-based evidence and research on how to assess and support students’ CT through programming are particularly scarce. This empirical study investigates 4th grade students’ (N = 57) CT in a comparatively comprehensive and fine-grained manner by assessing their Scratch projects (N = 325) with a framework revised from previous studies with the aim of enhancing CT. The results demonstrate in detail the various coding patterns and code constructs the students programmed in assorted projects throughout a programming course and the extent to which they had conceptual encounters with CT. Notably, the projects reflected CT in diverse ways, and the students collectively encountered different areas of CT. To target the acquisition of CT broadly, a wide variety of programming activities needs to be introduced in the classroom. Furthermore, we discuss the possibilities of applying the assessment framework employed herein to support CT education through Scratch in classrooms.
Although Machine Learning (ML) is integrated today into various aspects of our lives, few understand the technology behind it. This presents new challenges for extending computing education early to ML concepts, helping students to understand its potential and limits. Thus, in order to obtain an overview of the state of the art on teaching Machine Learning concepts from elementary to high school, we carried out a systematic mapping study. We identified 30 instructional units, mostly focusing on ML basics and neural networks. Considering the complexity of ML concepts, several instructional units cover only the most accessible processes, such as data management, or present model training and testing at an abstract level, black-boxing some of the underlying ML processes. Results demonstrate that teaching ML in school can increase understanding of and interest in this knowledge area, as well as contextualize ML concepts through their societal impact.