In recent years a small number of web-based tools have been proposed to help students learn to write SQL queries and to assess their SQL writing skills. SQLify (pronounced "squalify") is a new SQL teaching and assessment tool that extends the current state of the art by incorporating peer review and enhanced automatic assessment grounded in database theory, producing more comprehensive feedback for students. SQLify is intended to yield a richer learning experience for students and to reduce the marking load on instructors. In this paper SQLify is compared with existing tools and its most important new features are demonstrated.
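The abstract does not detail SQLify's assessment algorithm, so the following is only a minimal, hypothetical sketch of what automatic SQL assessment can involve: a student's query is run against a reference solution on sample data and the result multisets are compared. The schema, data, and feedback wording are invented for illustration and are not SQLify's.

```python
import sqlite3
from collections import Counter

# Hypothetical sample schema and data; not SQLify's actual test suite.
SETUP = """
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, gpa REAL);
INSERT INTO student VALUES (1, 'Ana', 3.9), (2, 'Ben', 2.7),
                           (3, 'Cho', 3.4), (4, 'Dee', 3.0);
"""

REFERENCE = "SELECT name FROM student WHERE gpa >= 3.0"

def assess(student_sql: str) -> str:
    """Execute the student's query and the reference query on the same
    sample database and compare results as multisets (row order is
    ignored; duplicate rows are not)."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SETUP)
    try:
        expected = Counter(conn.execute(REFERENCE).fetchall())
        actual = Counter(conn.execute(student_sql).fetchall())
    except sqlite3.Error as exc:
        return f"Query did not execute: {exc}"
    finally:
        conn.close()
    if actual == expected:
        return "Output matches the reference solution on the sample data."
    missing, extra = expected - actual, actual - expected
    return (f"Output differs: missing rows {sorted(missing)}, "
            f"unexpected rows {sorted(extra)}")

# A near-miss: '>' instead of '>=' silently drops the boundary case.
print(assess("SELECT name FROM student WHERE gpa > 3.0"))
```

Note that agreement on sample data is only a heuristic (query equivalence is undecidable in general); richer, theory-based analysis and peer review of the kind the paper describes go beyond such testing.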
A high-quality review of the distance learning literature from 1992 to 1999 concluded that most research on distance learning had serious methodological flaws. This paper presents the results of a small-scale replication of that review. A sample of 66 articles was drawn from three leading distance education journals. The articles were categorized by study type, and the experimental or quasi-experimental articles were analyzed in terms of their research methodologies. The results indicate that the sample of post-1999 articles exhibited the same methodological flaws as the pre-1999 sample: most participants were not randomly selected, extraneous variables and reactive effects were not controlled for, and the validity and reliability of measures were not reported.
Although there are many high-quality models for program and evaluation planning, these models are often too intensive to be used in situations where time and resources are scarce. Additionally, there is little added value in using an elaborate and expensive program and evaluation planning procedure when programs are small or are planned to be short-lived. To meet the need for simplified models for program and evaluation planning, we describe a model that includes only what we consider the most essential outcomes-based planning steps: (a) how to create a logic model that shows how the program is causally expected to lead to outcomes; (b) how to use the logic model to identify the goals and objectives for which the program is responsible; (c) how to formulate measures, baselines, and targets from those goals and objectives; and (d) how to construct program activities that align with the program targets.
Reflective practice is considered to play an important role in students' learning as they encounter difficult material. However, students in this situation sometimes do not behave reflectively, but in less productive and more problematic ways. This paper investigates how educators can recognize and analyze students' confusion and determine whether students are responding reflectively or defensively. Qualitative data for the investigation comes from an upper-level undergraduate software engineering and design course that students invariably find quite challenging. A phenomenological analysis of the data, based on Heidegger's dynamic of rupture, provides useful insight into students' experience. A comparison between that approach and a sampling of classic sources in the scholarship on learning, reflectiveness, and defensiveness has implications for teaching and education research in software design, and more generally. In addition, a clearer understanding of the concepts presented in this paper should enable faculty to bring a more sophisticated analysis to student feedback, leading to more informed and productive interpretation by both instructors and administrators.
As part of a wide-ranging phenomenographic study of computing teachers, we explored their varying understandings of the lab practical class and discovered four distinct categories of description of lab practicals. We consider which of these categories appear comparable with non-lecture classes in other disciplines, and which appear distinctive to computing. An awareness of this range of approaches to conducting practical lab classes will better enable academics to consider which is best suited to their own purposes when designing courses.
This paper describes a didactic Computer Aided Software Engineering (CASE) tool developed for use within a course on object-oriented domain modelling. In particular, the tool was designed to address several obstacles that hinder the realisation of the course objectives: (a) the number of students enrolled does not allow for individual feedback; (b) students have little opportunity to build a concrete information system, and therefore fail to foresee the consequences of different choices when building a conceptual model; (c) students lack examples of, and practice in, converting a conceptual model into a concrete information system; and (d) at the beginning of the course students have very different levels of prior knowledge, leading to major differences in motivation and learning outcomes.
The tool was evaluated positively by the students and was shown to have a positive impact on students' ability to construct object-oriented models.
It is argued that even better learning results can be achieved by capitalising on the opportunities for social interaction in an educational context.
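To make point (c) from the abstract above concrete, here is a minimal, hypothetical sketch of the kind of conceptual-to-concrete mapping such a course practises: a toy conceptual model rendered as SQL DDL, where each entity type becomes a table and each many-to-one association becomes a foreign key. The model, naming scheme, and mapping rules are invented for illustration and are not the described tool's own code generation.

```python
import sqlite3

# A toy conceptual model: two entity types and one many-to-one
# association. Entity names, attribute types, and the mapping rules
# below are all invented for illustration.
MODEL = {
    "Customer": {"attributes": {"name": "TEXT", "email": "TEXT"}},
    "Invoice":  {"attributes": {"total": "REAL"},
                 "many_to_one": {"customer": "Customer"}},
}

def to_ddl(model: dict) -> str:
    """Map the conceptual model to SQL DDL: each entity type becomes a
    table with a surrogate key; each many-to-one association becomes a
    foreign-key column on the 'many' side."""
    statements = []
    for entity, spec in model.items():
        cols = [f"{entity.lower()}_id INTEGER PRIMARY KEY"]
        cols += [f"{name} {sqltype}"
                 for name, sqltype in spec.get("attributes", {}).items()]
        for role, target in spec.get("many_to_one", {}).items():
            cols.append(f"{role}_id INTEGER "
                        f"REFERENCES {target.lower()}({target.lower()}_id)")
        statements.append(f"CREATE TABLE {entity.lower()} (\n  "
                          + ",\n  ".join(cols) + "\n);")
    return "\n".join(statements)

ddl = to_ddl(MODEL)
print(ddl)
sqlite3.connect(":memory:").executescript(ddl)  # sanity check: the DDL runs
```

Seeing a generated schema side by side with the conceptual model is the kind of concrete feedback that can help students foresee the consequences of their modelling choices, in the spirit of points (b) and (c) above.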