Although Machine Learning (ML) is already used in our daily lives, few people are familiar with the technology. This poses new challenges for students to understand ML, its potential and limitations, as well as to empower them to become creators of intelligent solutions. To effectively guide the learning of ML, this article proposes a scoring rubric for the performance-based assessment of the learning of concepts and practices regarding image classification with artificial neural networks in K-12. The assessment is based on the examination of student-created artifacts as part of open-ended applications in the Use stage of the Use-Modify-Create cycle. An initial evaluation of the scoring rubric by an expert panel demonstrates its internal consistency as well as its correctness and relevance. As a first step toward the assessment of image recognition concepts, the results may support the learning of ML by providing feedback to students and teachers.
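For context, the student-created artifacts such a rubric examines are simple image classifiers built with artificial neural networks. A minimal sketch of such a classifier, using scikit-learn's built-in digits dataset and a small feed-forward network; this example is illustrative only and is not taken from the article, whose rubric targets richer, open-ended student projects:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Load a small image dataset (8x8 grayscale digits), already flattened to vectors
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# A small feed-forward neural network, in the spirit of classroom projects
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```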
In today’s society, creativity plays a key role, emphasizing the importance of its development in K-12 education. Computing education may offer a way for students to extend their creativity by solving problems and creating computational artifacts. Yet, there is little systematic evidence available to support this claim, partly due to the lack of assessment models. This article presents SCORE, a model for the assessment of creativity in the context of computing education in K-12. Based on a mapping study, the model and a self-assessment questionnaire are systematically developed. The evaluation, based on 76 responses from K-12 students, indicates high internal reliability (Cronbach’s alpha = 0.961) and confirms the validity of the instrument, suggesting only the exclusion of 3 items that do not seem to measure the concept. As such, the model represents a first step toward the systematic improvement of teaching creativity as part of computing education.
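For reference, Cronbach's alpha quantifies how consistently a set of questionnaire items measures the same underlying construct. A minimal sketch of its computation, assuming responses are stored as a NumPy matrix of shape (respondents, items); the sample data below are made up for illustration and are not the study's data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 5 respondents answering 4 Likert-scale items
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")
```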
Creativity has emerged as an important 21st-century competency. Although it is traditionally associated with arts and literature, it can also be developed as part of computing education. Therefore, this article presents a systematic mapping of approaches for assessing creativity based on the analysis of computer programs created by students. As a result, only ten approaches, reported in eleven articles, were identified. These reveal the absence of a commonly accepted definition of product creativity customized to computing education, confirming only originality as a well-established characteristic. Several approaches seem to lack clearly defined criteria for effective, efficient, and useful creativity assessment. Diverse techniques are used, including rubrics, mathematical models, and machine learning, supporting both manual and automated assessment. Few articles report a comprehensive evaluation of the proposed approach regarding its reliability and validity. These results can help instructors choose and adopt assessment approaches and guide researchers by pointing out shortcomings.
As computing has become an integral part of our world, the demand for teaching computational thinking in K-12 has increased. One of its basic competencies is programming, often taught through open-ended learning activities without a predefined solution, using block-based visual programming languages. Automatic assessment tools can support teachers with assessment and grading as well as guide students throughout their learning process. Although such tools are already widely used in higher education, it remains unclear whether similar approaches exist for K-12 computing education. Thus, in order to obtain an overview, we performed a systematic mapping study. We identified 14 approaches that analyze the code created by students in order to infer computational thinking competencies related to algorithms and programming. However, an evident lack of consensus on assessment criteria and instructional feedback indicates the need for further research to support a wide application of computing education in K-12 schools.
The development of computational thinking is a major topic in K-12 education. Many of these experiences focus on teaching programming using block-based languages. As part of these activities, it is important for students to receive feedback on their assignments. Yet, in practice, it may be difficult to provide personalized, objective, and consistent feedback. In this context, automatic assessment and grading have become important. While diverse graders exist for text-based languages, support for block-based programming languages is still scarce. This article presents CodeMaster, a free web application that, in a problem-based learning context, automatically assesses and grades projects programmed with App Inventor and Snap!. It uses a rubric that measures computational thinking based on static code analysis. Students can use the tool to obtain feedback that encourages them to improve their programming competencies. Teachers can also use it to assess whole classes, easing their workload.
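To illustrate the general idea of rubric-based static analysis of block-based code (a hypothetical sketch under our own assumptions, not CodeMaster's actual implementation; the block names and rubric levels are invented for illustration), the following walks a nested block structure, counts the constructs that evidence each computational thinking concept, and maps the counts to capped rubric levels:

```python
from collections import Counter

# Hypothetical rubric: maps a CT concept to the block types that evidence it.
# Block names are illustrative, not the real Snap!/App Inventor identifiers.
RUBRIC = {
    "loops": {"repeat", "forever", "for_each"},
    "conditionals": {"if", "if_else"},
    "variables": {"set_variable", "change_variable"},
    "events": {"when_clicked", "when_key_pressed"},
}

def count_blocks(block: dict, counts: Counter) -> None:
    """Recursively count block types in a nested project structure."""
    counts[block.get("type", "")] += 1
    for child in block.get("children", []):
        count_blocks(child, counts)

def score_project(project: dict) -> dict:
    """Score each CT concept from 0 to 3 based on how often it is evidenced."""
    counts = Counter()
    count_blocks(project, counts)
    return {
        concept: min(sum(counts[b] for b in block_types), 3)  # cap at level 3
        for concept, block_types in RUBRIC.items()
    }

# Example: a tiny project with one event handler, a loop, and a conditional
project = {"type": "when_clicked", "children": [
    {"type": "repeat", "children": [{"type": "if", "children": []}]},
]}
print(score_project(project))
# {'loops': 1, 'conditionals': 1, 'variables': 0, 'events': 1}
```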