Nowadays, SPOCs (Small Private Online Courses) have been used as complementary methods to support classroom teaching. SPOCs build on the resources of MOOCs (Massive Open Online Courses), combining classroom and online education, which makes them an attractive alternative for contexts such as emergency remote teaching. Although SPOCs have been continuously proposed in the software engineering teaching area, it is crucial to assess their practical applicability by measuring the effectiveness of this resource in the teaching-learning process. In this context, this paper presents an experimental evaluation investigating the applicability of a SPOC in a Verification, Validation, and Software Testing course taught during the period of emergency remote education caused by the COVID-19 pandemic in Brazil. To that end, we conducted a controlled experiment comparing teaching through a SPOC with teaching carried out via lectures. The two teaching methods are compared by analyzing the students' performance in solving practical activities and essay questions on the content covered. In addition, we used questionnaires to analyze the students' motivation during the course. The results indicate an improvement in both the motivation and the performance of the students participating in the SPOC, which corroborates its applicability to software testing education.
Nowadays, few professionals in the software industry understand the techniques and testing criteria needed to systematize the software testing activity. To shed some light on this problem and promote software testing, professors in the area have established Massive Open Online Courses (MOOCs) as educational initiatives. However, their main limitation is the lack of supervision of students by the professor. A conversational agent called TOB-STT has been proposed to address this problem. A previous study introduced TOB-STT; however, it did not analyze its efficacy. This article reports a controlled experiment that analyzed its efficacy and revealed that it was not significant in its current version. We therefore conducted an in-depth analysis to identify the causes of this result and provide a detailed discussion. The findings contribute to TOB-STT, since the experimental results show that improvements must be made to the conversational agent before it is used in Massive Open Online Courses.
Learning Objects (LOs) have been one of the main research topics in the e-learning community in recent years. In this context, granularity is a key factor for LO reuse. This paper presents a methodology to define the granularity of learning objects in the computing area, as well as a case study in software testing. We carried out five experiments to evaluate the learning potential of the produced learning objects and to demonstrate the possibility of LO reuse. The results show that the LOs promote the understanding and application of the concepts. In addition, the set of LOs identified through the proposed methodology allowed their reuse in different contexts.