Because of the potential for methodological reviews to improve practice, this article presents the results of a methodological review and meta-analysis of kindergarten through 12th-grade (K-12) computer science education evaluation reports published before March 2005. A search of major academic databases and the Internet, together with a query to computer science education researchers, yielded 29 evaluation reports that met stringent inclusion criteria. Those reports were coded in terms of their demographic characteristics, program characteristics, evaluation characteristics, and evaluation findings.
It was found that most of the programs offered direct computer science instruction to North American high school students. Stakeholder attitudes, program enrollment, academic achievement in core courses, and achievement in computer science courses were the most frequently measured outcomes. Questionnaires, existing sources of data, standardized tests, and teacher- or researcher-made tests were the most frequently used types of measures. Based on the eight programs that offered direct computer science instruction, the average increase on tests of computer science achievement over the course of the program was 1.10 standard deviations, or the statistical equivalent of 73 out of 100 program participants having shown improvement. Among the main challenges for the evaluation of computer science education programs are the absence of standardized, reliable, and valid measures for K-12 computer science education and the need to understand the causal links among program activities, gender, and program outcomes.
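The abstract does not state which conversion underlies the 73-out-of-100 figure; one conversion that reproduces it to within rounding is Rosenthal and Rubin's binomial effect size display (BESD), sketched here as an assumption rather than as the authors' method. The standardized mean difference d is first converted to a correlation r (for equal group sizes), and the success rate is then read off r:

$$ r = \frac{d}{\sqrt{d^{2} + 4}} = \frac{1.10}{\sqrt{1.10^{2} + 4}} \approx 0.48, \qquad \text{success rate} = 0.50 + \frac{r}{2} \approx 0.74. $$

Under this reading, roughly 73 to 74 of every 100 participants would be expected to show improvement, consistent with the figure reported above.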
Although there are many high-quality models for program and evaluation planning, these models are often too intensive to use when time and resources are scarce. Additionally, an elaborate and expensive program and evaluation planning procedure adds little value when a program is small or intended to be short-lived. To meet the need for simplified models for program and evaluation planning, we describe a model that includes only what we consider to be the most essential outcomes-based program and evaluation planning steps: (a) how to create a logic model that shows how the program is causally expected to lead to outcomes; (b) how to use the logic model to identify the goals and objectives for which the program is responsible; (c) how to formulate measures, baselines, and targets from those goals and objectives; and (d) how to construct program activities that align with the program's targets.
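As an illustration of steps (a) through (d), the following sketch encodes a chain-style logic model as a small data structure. It is a hypothetical example under our own assumptions about the model's shape; every name and value in it is ours, not the article's.

    from dataclasses import dataclass

    @dataclass
    class Objective:
        goal: str        # (b) a goal or objective the program is responsible for
        measure: str     # (c) the measure used to track the objective
        baseline: float  # (c) the pre-program value on that measure
        target: float    # (c) the value the program commits to reaching

    @dataclass
    class LogicModel:
        activities: list[str]      # (d) what the program will actually do
        outputs: list[str]         # immediate, countable products of the activities
        outcomes: list[Objective]  # (a) results the activities are expected to cause

    # Hypothetical content; none of these names or values come from the article.
    model = LogicModel(
        activities=["weekly after-school programming sessions"],
        outputs=["30 students complete a 12-week curriculum"],
        outcomes=[
            Objective(
                goal="raise computer science achievement",
                measure="score on a teacher-made end-of-course CS test",
                baseline=55.0,
                target=70.0,
            )
        ],
    )

    # Step (d) as a mechanical check: every outcome must give the
    # activities a concrete, measurable target to move.
    for obj in model.outcomes:
        assert obj.target > obj.baseline, f"no measurable target for: {obj.goal}"

Keeping the measure, baseline, and target on the objective itself makes step (d) a mechanical check: an activity is aligned only if some objective gives it a concrete target to move.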
A high-quality review of the distance learning literature from 1992 to 1999 concluded that most of the research on distance learning had serious methodological flaws. This paper presents the results of a small-scale replication of that review. A sample of 66 articles was drawn from three leading distance education journals. Those articles were categorized by study type, and the experimental and quasi-experimental articles were analyzed in terms of their research methodologies. The results indicated that the sample of post-1999 articles exhibited the same methodological flaws as the sample of pre-1999 articles: most participants were not randomly selected, extraneous variables and reactive effects were not controlled for, and the validity and reliability of measures were not reported.