SPOCs (Small Private Online Courses) are increasingly used as a complementary method to support classroom teaching. SPOCs apply the resources of MOOCs (Massive Open Online Courses) to blend classroom and online education, making them an attractive alternative for contexts such as emergency remote teaching. Although SPOCs have repeatedly been proposed for software engineering education, it is crucial to assess their practical applicability by measuring their effectiveness in the teaching-learning process. In this context, this paper presents an experimental evaluation of the applicability of a SPOC in a Verification, Validation, and Software Testing course taught during the emergency remote education period of the COVID-19 pandemic in Brazil. To this end, we conducted a controlled experiment comparing teaching through a SPOC with teaching through traditional lectures. The two methods are compared by analyzing students' performance on practical activities and essay questions covering the course content. In addition, we used questionnaires to analyze students' motivation during the course. The results indicate an improvement in both the motivation and the performance of the students participating in the SPOC, which corroborates its applicability to software testing education.
We describe a collaboration between Marelli and Università degli Studi di Milano that allowed the latter to add a course on "Architectures for Big Data" to its Master's programme in Computer Science, with the aim of providing a teaching approach characterized by an intertwined exposition of discipline, methodology, and practical tools. We were motivated by the need to fill, at least in part, the gap between the expectations of employers and the competences acquired by students: several big-data tools and patterns in widespread industrial use are seldom taught in an academic context. The course also allowed us to expose students to company-related processes and topics. So far, the course has run for two editions, and a third is currently ongoing. Using both quantitative and qualitative approaches, we show that students appreciated this new form of learning activity, in terms of enrollments, exam marks, and external theses activated. We also used the feedback we received to slightly adjust the content and structure of the course.
Creativity plays a key role in today's society, underlining the importance of developing it in K-12 education. Computing education may offer students a way to extend their creativity by solving problems and creating computational artifacts. Yet there is little systematic evidence to support this claim, partly due to the lack of assessment models. This article presents SCORE, a model for assessing creativity in the context of K-12 computing education. Based on a mapping study, the model and a self-assessment questionnaire were systematically developed. An evaluation based on 76 responses from K-12 students indicates high internal reliability (Cronbach's alpha = 0.961) and confirms the validity of the instrument, suggesting only the exclusion of 3 items that do not appear to measure the concept. As such, the model represents a first step toward the systematic improvement of the teaching of creativity as part of computing education.
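For readers less familiar with the reliability statistic cited above, Cronbach's alpha for a questionnaire of k items is conventionally defined as shown below. This is the standard textbook formula, not a detail taken from the article itself; the symbols are ours.

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{t}^{2}}\right)
\]

Here \(\sigma_{i}^{2}\) is the variance of the scores on item \(i\) and \(\sigma_{t}^{2}\) is the variance of respondents' total scores. Values above 0.9, such as the 0.961 reported here, are commonly interpreted as very high internal consistency.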
Although Machine Learning (ML) has become part of our daily lives, few people are familiar with the technology. Thus, to help students understand ML, its potential, and its limitations, and to empower them to become creators of intelligent solutions, diverse courses for teaching ML in K-12 have emerged. A less considered question, however, is how to assess the learning of ML. We therefore performed a systematic mapping and identified 27 instructional units that also include a quantitative assessment of students' learning. The assessments range from simple quizzes to performance-based assessments, targeting the learning of basic ML concepts and approaches and, in some cases, ethical issues and the impact of ML, mostly at lower cognitive levels. Feedback is mostly limited to indicating whether answers are correct, and only a few assessments are automated. These results indicate a need for more rigorous and comprehensive research in this area.
As part of a wide-ranging phenomenographic study of computing teachers, we explored their varying understandings of the lab practical class and discovered four distinct categories of description of lab practicals. We consider which of these categories appear comparable with non-lecture classes in other disciplines, and which appear distinctive to computing. An awareness of this range of approaches to conducting practical lab classes will better enable academics to consider which is best suited to their own purposes when designing courses.