Automatic program evaluation is a technique for assessing source program files. Such techniques are used in learning management environments, programming examinations, and contest systems. However, automated program evaluation runs into problems: some evaluations are not clear to students, and the system messages do not show the reasons for lost points. The author proposes several ideas for possible improvements in black-box testing, which can lead to better service for the users of automatic evaluation systems.
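The abstract contains no code, but the kind of black-box check it discusses can be sketched briefly. The class name, the tested command, and the test case below are hypothetical, not taken from the paper; the point of the sketch is the feedback style it argues for: a failing test reports the input, the expected output, and the actual output, rather than only a lost point.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    /** Minimal black-box test runner: feeds input to a student program and
     *  reports WHY a test failed, not just that a point was lost. */
    public class BlackBoxCheck {

        /** Runs the given command, writes 'input' to its stdin, returns its output. */
        static String run(List<String> command, String input)
                throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(true);
            Process p = pb.start();
            try (OutputStream stdin = p.getOutputStream()) {
                stdin.write(input.getBytes(StandardCharsets.UTF_8));
            }
            String output = new String(p.getInputStream().readAllBytes(),
                                       StandardCharsets.UTF_8);
            p.waitFor();
            return output.trim();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical student program that should print the sum of two integers.
            List<String> cmd = List.of("java", "StudentSum");
            String input = "2 3\n";
            String expected = "5";
            String actual = run(cmd, input);
            if (expected.equals(actual)) {
                System.out.println("PASS");
            } else {
                // The message names the input and both outputs, so the student
                // can see the reason for the lost point.
                System.out.printf("FAIL: input=%s expected=%s actual=%s%n",
                                  input.trim(), expected, actual);
            }
        }
    }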
Because of the potential for methodological reviews to improve practice, this article presents the results of a methodological review and meta-analysis of kindergarten through 12th grade (K-12) computer science education evaluation reports published before March 2005. A search of major academic databases, the Internet, and a query to computer science education researchers resulted in 29 evaluation reports that met stringent criteria for inclusion. Those reports were coded in terms of their demographic characteristics, program characteristics, evaluation characteristics, and evaluation findings.
It was found that most of the programs offered direct computer science instruction to North American high school students. Stakeholder attitudes, program enrollment, academic achievement in core courses, and achievement in computer science courses were the most frequently measured outcomes. Questionnaires, existing sources of data, standardized tests, and teacher- or researcher-made tests were the most frequently used types of measures. Based on eight programs that offered direct computer science instruction, the average increase on tests of computer science achievement over the course of the program was 1.10 standard deviations, or the statistical equivalent of 73 out of 100 program participants having shown improvement. Among the main challenges for the evaluation of computer science education programs are the absence of standardized, reliable, and valid measures of K-12 computer science education and the difficulty of understanding the causal links between program activities, gender, and program outcomes.
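The abstract does not say how the effect size was converted into a head count. One common conversion, Rosenthal and Rubin's binomial effect size display (an assumption here, not a method the report names), lands within about a point of the reported figure:

\[ r = \frac{d}{\sqrt{d^{2}+4}} = \frac{1.10}{\sqrt{1.21+4}} \approx 0.48, \qquad \text{success rate} \approx 0.5 + \frac{r}{2} \approx 0.74 \]

That is, roughly 74 of 100 participants would count as improved under this display, close to the 73 reported.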
Mathematical logic is a discipline used in the sciences and the humanities, from differing points of view. Although it has a solid place in tertiary-level computer science education, the same does not hold for secondary-level education. We present a heterogeneous study, both theoretically and empirically based, which points out the key role of logic in computer science, computer science education, and knowledge representation. We focus on the key contrast between semantics and syntax and on the resolution principle as a leading inference technique (also giving an interesting non-clausal generalization of the rule). We further discuss the possibilities of including non-classical (many-valued) logics in education, together with an original generalization of the non-clausal resolution rule to fuzzy logic. The last part describes partial results of research on secondary education in the Czech Republic, especially in the field of mathematical logic. A generalization of the presented ideas concludes the article.
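For readers who do not have the rule at hand, the classical binary resolution principle that the paper generalizes can be stated as follows (this is only its textbook clausal form; the paper's non-clausal and fuzzy variants extend this schema):

\[ \frac{A \lor C \qquad B \lor \lnot C}{A \lor B} \]

For example, resolving $p \lor q$ with $\lnot q \lor r$ on the complementary literals $q$ and $\lnot q$ yields $p \lor r$; refutation proofs negate the goal, convert to clauses, and repeat this step until the empty clause is derived.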
Automatic assessment of programming exercises is typically based on a testing approach. Most automatic assessment frameworks execute tests and evaluate the results automatically, but the test data generation is not automated, even though automatic test data generation techniques and tools are available.
We have researched how the Java PathFinder software model checker can be adapted to the specific needs of test data generation in automatic assessment. The practical problems considered are how to derive test data directly from students' programs (i.e., without annotation) and how to visualize and abstract the test data automatically for students. Interesting outcomes of our research are that, with minor refinements, generalized symbolic execution with lazy initialization (a test data generation algorithm implemented in PathFinder) can be used to construct test data directly from students' programs without annotation, and that intermediate results of the same algorithm can be used to provide novel visualizations of the test data.
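The abstract does not show Java PathFinder's API, and the sketch below does not use it. It only illustrates the idea behind lazy initialization in plain Java: when the generator reaches an unexplored reference field, it branches on what that field could be (null, or a fresh node), so input structures are materialized only as far as the tested code would touch them. All names here (Node, studentLength, generateShapes) are hypothetical, and the enumeration is restricted to acyclic shapes; the real algorithm also branches on aliases to already-created nodes.

    import java.util.ArrayList;
    import java.util.List;

    /** Illustration (not JPF code) of lazy-initialization-style test data
     *  generation: list shapes are built by branching on each next field. */
    public class LazyInitSketch {

        static class Node {
            int value;
            Node next;
            Node(int value, Node next) { this.value = value; this.next = next; }
        }

        /** Hypothetical student method under test. */
        static int studentLength(Node head) {
            int n = 0;
            for (Node cur = head; cur != null; cur = cur.next) n++;
            return n;
        }

        /** Enumerates all acyclic list shapes with at most 'depth' nodes.
         *  Full lazy initialization would also branch on aliases to nodes
         *  already materialized, which yields cyclic shapes as well. */
        static List<Node> generateShapes(int depth) {
            List<Node> shapes = new ArrayList<>();
            shapes.add(null);                          // branch 1: the field is null
            if (depth > 0) {
                for (Node tail : generateShapes(depth - 1)) {
                    shapes.add(new Node(depth, tail)); // branch 2: a fresh node
                }
            }
            return shapes;
        }

        public static void main(String[] args) {
            // Each generated shape becomes one test input for the student method.
            for (Node head : generateShapes(3)) {
                System.out.println("length = " + studentLength(head));
            }
        }
    }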
Interaction and feedback are key factors supporting the learning process. Therefore, many automatic assessment and feedback systems have been developed for computer science courses during the past decade. In this paper we present a new framework, TRAKLA2, for building interactive algorithm simulation exercises. Exercises constructed in TRAKLA2 are viewed as learning objects in which students manipulate conceptual visualizations of data structures in order to simulate the working of given algorithms. The framework supports randomized input values for the assignments, as well as automatic feedback and grading of students' simulation sequences. Moreover, it supports automatic generation of model solutions as algorithm animations and the logging of statistical data about the interaction process as students solve exercises. The system has been used at two universities in Finland on several courses involving over 1000 students. Student response has been very positive.
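TRAKLA2's internal grading format is not described in the abstract; the hypothetical sketch below only illustrates the general idea of grading a simulation sequence: replay a model algorithm on the randomized input, record each intermediate state, and score the student's submitted sequence by how many states match the model solution step by step. The class and method names are invented for the illustration.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /** Sketch (not TRAKLA2 code): grade a student's simulation sequence by
     *  comparing it, state by state, against a generated model solution. */
    public class SimulationGraderSketch {

        /** Model solution: the sequence of array states produced by selection sort. */
        static List<int[]> modelStates(int[] input) {
            int[] a = input.clone();
            List<int[]> states = new ArrayList<>();
            states.add(a.clone());               // initial state
            for (int i = 0; i < a.length - 1; i++) {
                int min = i;
                for (int j = i + 1; j < a.length; j++) {
                    if (a[j] < a[min]) min = j;
                }
                int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
                states.add(a.clone());           // one state per simulation step
            }
            return states;
        }

        /** Score = number of student states that match the model, in order. */
        static int grade(List<int[]> model, List<int[]> student) {
            int score = 0;
            for (int i = 0; i < Math.min(model.size(), student.size()); i++) {
                if (Arrays.equals(model.get(i), student.get(i))) score++;
            }
            return score;
        }

        public static void main(String[] args) {
            int[] randomizedInput = {3, 1, 2};   // per-student randomized input
            List<int[]> model = modelStates(randomizedInput);
            List<int[]> student = new ArrayList<>(model);
            student.set(1, new int[] {1, 2, 3}); // a wrong intermediate step
            System.out.println("score: " + grade(model, student) + "/" + model.size());
        }
    }

A state-by-state comparison like this also makes feedback concrete: the first mismatching step tells the student exactly where their simulation diverged from the algorithm.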