Critical thinking is a fundamental skill for 21st-century citizens, and it should be fostered from elementary school onward, including in computing education. However, assessing the development of critical thinking in educational contexts presents unique challenges. In this study, a systematic mapping was carried out to investigate how the development of critical thinking, or some of its skills, is assessed in K-12 computing education. The results indicate that primary studies on the development of critical thinking in K-12 computing education are concentrated in Asian countries and focus mainly on teaching concepts such as algorithms and programming. Moreover, the studies do not assess a fixed set of critical thinking skills; rather, skills are selected according to specific teaching and research needs. Most studies adopted student self-assessment using instruments well established in the literature for assessing critical thinking. Many studies also evaluated the quality of these instruments in their research context, obtaining favorable results and demonstrating consistency. Nevertheless, the findings point to a need for more comprehensive and diverse assessment methods in K-12 computing education, going beyond student self-assessment and covering different educational stages and computing education concepts. This research aims to guide educators and researchers in developing more effective critical thinking assessments for K-12 computing education.
The pervasiveness of Machine Learning (ML) in everyday life demonstrates the importance of popularizing an understanding of ML already at school. This trend is accompanied by the need to assess students' learning. Yet, so far, few assessments have been proposed, and most lack an evaluation. Therefore, we evaluate the reliability and validity of an automated assessment of students' learning of an image classification model created as a learning outcome of the "ML for All!" course. Results based on data collected from 240 students indicate that the assessment can be considered reliable (coefficient omega ω = 0.834; Cronbach's alpha α = 0.83). We also identified moderate to strong convergent and discriminant validity based on the polychoric correlation matrix. Factor analyses indicate two underlying factors, "Data Management and Model Training" and "Performance Interpretation", which complement each other. These results can guide the improvement of assessments, as well as decisions on applying this assessment model to support ML education as part of a comprehensive assessment.
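For reference, Cronbach's alpha for k items is computed as α = (k / (k − 1)) · (1 − Σσ²_item / σ²_total), where σ²_item are the per-item score variances and σ²_total is the variance of the students' total scores. The following is a minimal Python sketch of this computation for illustration only; the study's actual analysis code and data are not part of the abstract:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

A value near 0.83, as reported above, is conventionally read as good internal consistency for an educational assessment.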
Knowledge about Machine Learning (ML) is becoming essential, yet it remains a restricted privilege that may not be available to students from a low socio-economic status background. Thus, in order to provide equal opportunities, we taught ML concepts and applications to 158 middle and high school students from a low socio-economic status background in Brazil. Results show that these students can understand how ML works and execute the main steps of a human-centered process for developing an image classification model. No substantial differences regarding class period, educational stage, or sex assigned at birth were observed. The course was perceived as fun and motivating, especially by girls. Despite the limitations in this context, the results show that they can be overcome. Mitigating solutions involve partnerships between social institutions and universities, an adapted pedagogical approach, and increased one-on-one assistance. These findings can be used to guide the design of courses for teaching ML to underprivileged students from a low socio-economic status background and thus contribute to the inclusion of these students.
The management of contemporary software projects is unfeasible without the support of a Project Management (PM) tool. To enable the adoption of PM tools in practice, teaching their usage is an important part of computing education. Several approaches to teaching PM tools have been proposed, such as the development of educational PM tools. However, such approaches are typically limited with respect to content coverage and instructional support. In this context, an important technique is the provision of instructional feedback, which is essential to help students learn from the evaluation of their own actions. Taking advantage of this technique, this article proposes its employment in an Instructional Unit integrated into the PM tool dotProject+, providing automated feedback based on the project plan being developed with the tool. The technique has been evaluated through a series of case studies.
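To illustrate what plan-based instructional feedback can look like, here is a hypothetical Python sketch; the ProjectPlan structure and the rules shown are assumptions made for illustration and do not reflect dotProject+'s actual data model or feedback rules:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectPlan:
    """Assumed, simplified representation of a student's project plan."""
    activities: list = field(default_factory=list)
    milestones: list = field(default_factory=list)

def instructional_feedback(plan: ProjectPlan) -> list[str]:
    """Return feedback messages derived from the current state of the plan."""
    messages = []
    if not plan.activities:
        messages.append("Add activities: a schedule needs at least one activity.")
    if not plan.milestones:
        messages.append("Define milestones to mark major deliverables.")
    return messages

# Example: an empty plan triggers both feedback messages.
print(instructional_feedback(ProjectPlan()))
```

The design idea is that feedback is generated automatically from the evolving plan artifact itself, so students receive it while working in the tool rather than only after instructor review.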