We describe a collaboration between Marelli and Università degli Studi di Milano that allowed the latter to add a course on «Architectures for Big Data» to its Master's programme in Computer Science, with the aim of providing a teaching approach characterized by an intertwined exposition of discipline, methodology, and practical tools. We were motivated by the need to fill, at least in part, the gap between the expectations of employers and the competences acquired by students. Indeed, several big-data-related tools and patterns in widespread use in working environments are seldom taught in the academic context. The course also allowed us to expose students to company-related processes and topics. So far, the course has been taught for two editions, and a third one is currently ongoing. Using both a quantitative and a qualitative approach, we show that students appreciated this new form of learning activity, in terms of enrollments, exam marks, and activated external theses. We also used the feedback received to slightly modify the content and structure of the course.
Although Machine Learning (ML) is already used in our daily lives, few are familiar with the technology. This poses new challenges in helping students understand ML, its potential, and its limitations, as well as in empowering them to become creators of intelligent solutions. To effectively guide the learning of ML, this article proposes a scoring rubric for the performance-based assessment of the learning of concepts and practices regarding image classification with artificial neural networks in K-12. The assessment is based on the examination of student-created artifacts produced as part of open-ended applications in the Use stage of the Use-Modify-Create cycle. An initial evaluation of the scoring rubric by an expert panel demonstrates its internal consistency as well as its correctness and relevance. Providing a first step towards the assessment of image-recognition concepts, the results may support the progress of learning ML by providing feedback to students and teachers.
Although Machine Learning (ML) has already become part of our daily lives, few are familiar with this technology. Thus, in order to help students understand ML, its potential, and its limitations, and to empower them to become creators of intelligent solutions, diverse courses for teaching ML in K-12 have emerged. Yet, a question less often considered is how to assess the learning of ML. Therefore, we performed a systematic mapping identifying 27 instructional units that also present a quantitative assessment of the students' learning. The assessments range from simple quizzes to performance-based assessments, covering basic ML concepts and approaches and, in some cases, ethical issues and the impact of ML, mostly at lower cognitive levels. Feedback is mostly limited to indicating whether the answers are correct, and only a few assessments are automated. These results indicate a need for more rigorous and comprehensive research in this area.
Creativity has emerged as an important 21st-century competency. Although it is traditionally associated with arts and literature, it can also be developed as part of computing education. Therefore, this article presents a systematic mapping of approaches for assessing creativity based on the analysis of computer programs created by students. As a result, only ten approaches, reported in eleven articles, were identified. These reveal the absence of a commonly accepted definition of product creativity customized to computing education, confirming only originality as one of the well-established characteristics. Several approaches seem to lack clearly defined criteria for effective, efficient, and useful creativity assessment. Diverse techniques are used, including rubrics, mathematical models, and machine learning, supporting both manual and automated approaches. Few studies performed a comprehensive evaluation of the proposed approach regarding its reliability and validity. These results can help instructors choose and adopt assessment approaches, and they can guide researchers by pointing out shortcomings.
As part of a wide-ranging phenomenographic study of computing teachers, we explored their varying understandings of the lab practical class and identified four distinct categories of description of lab practicals. We consider which of these categories appear comparable to non-lecture classes in other disciplines and which appear distinctive to computing. An awareness of this range of approaches to conducting practical lab classes will better enable academics to consider which is best suited to their own purposes when designing courses.