This paper presents a systematic literature review of the coordinated use of Learning Analytics and Computational Ontologies to support educators in evaluating students' academic performance. The aim is to give researchers a general overview of the current state of the relationship between Learning Analytics and Ontologies and of how the two have been applied in a coordinated way. We selected 31 of a total of 1230 studies related to the research questions. The retrieved studies were analyzed from two perspectives: first, we analyzed approaches in which researchers used Learning Analytics and Ontologies in a coordinated way to describe some Taxonomy of Educational Objectives; second, we sought to identify which models or methods have been used as analytical tools for educational data. The results of this review suggest that: 1) few studies consider that student interactions in the Learning Management System can represent students' learning experiences; 2) most studies use ontologies in the context of learning object assessment to enable learning sequencing; 3) we identified no methods for evaluating academic performance guided by Taxonomies of Educational Objectives; and 4) no studies were identified that report the coordinated use of Learning Analytics and Computational Ontologies in the context of academic performance monitoring. We therefore identify future research directions, such as the proposal of a new model for evaluating academic performance.
This paper presents an application of Educational Data Mining, in particular Case-Based Reasoning (CBR), for student profiling and, building on that, for the design of a personalised intelligent learning system. The main aim is to develop a recommender system that helps learners create the learning units (scenarios) most suitable for them. First, a systematic literature review on the application of CBR and its possible use in personalising learning is performed. Next, a methodology for applying CBR to personalise learning is presented, in which learning styles play a dominant role as the key factor in the proposed personalised intelligent learning system model, based on student profiling and a personalised learning process model. The algorithm (the sequence of steps) for implementing this model is also presented.
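The core CBR step behind such a recommender, retrieving the most similar past student profile and reusing its learning scenario, can be sketched roughly as follows. The profile features, the similarity measure, and the case data are illustrative assumptions for this sketch, not details taken from the paper:

```python
# Hypothetical sketch of the CBR "retrieve and reuse" step: a new
# student's learning-style profile is matched against a case base of
# past students, and the scenario of the nearest case is recommended.
# Feature names and case data are invented for illustration.
from math import sqrt

def similarity(a, b):
    """Similarity as inverse Euclidean distance between style vectors."""
    dist = sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return 1.0 / (1.0 + dist)

def recommend(case_base, new_profile):
    """Retrieve the most similar stored case and reuse its scenario."""
    best = max(case_base, key=lambda c: similarity(c["profile"], new_profile))
    return best["scenario"]

case_base = [
    {"profile": {"visual": 0.9, "active": 0.2}, "scenario": "video-first unit"},
    {"profile": {"visual": 0.1, "active": 0.8}, "scenario": "hands-on lab unit"},
]

print(recommend(case_base, {"visual": 0.8, "active": 0.3}))  # video-first unit
```

A full CBR cycle would also revise the reused scenario against the learner's actual results and retain the outcome as a new case; only the retrieval step is shown here.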
This paper considers the use of log data provided by learning management systems to study whether students follow the problem-based learning (PBL) method. Log analysis turns out to be a valuable tool for measuring the use of the learning material of interest: it gives reliable figures not only on the number of use sessions but also on the interlocking of various course activities. The longitudinal study based on log analysis makes use of a new software tool, SPY US. Our study concentrates on using log data analysis to improve the PBL method used in learning diagnostic skills with the help of Virtual Patients.
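Counting "use sessions" from raw LMS logs, as this study does, typically means grouping a student's events by an inactivity timeout. A minimal sketch under assumed inputs (simple `(student, timestamp)` rows and a 30-minute timeout, neither taken from the paper):

```python
# Minimal session-counting sketch for LMS log data: consecutive events
# by the same student belong to one session while the gap between them
# stays under an inactivity timeout. The 30-minute timeout and the
# log format are assumptions for illustration.

TIMEOUT = 30 * 60  # seconds of inactivity that ends a session

def count_sessions(events):
    """events: iterable of (student_id, unix_timestamp), in any order."""
    by_student = {}
    for sid, ts in events:
        by_student.setdefault(sid, []).append(ts)
    sessions = 0
    for times in by_student.values():
        times.sort()
        sessions += 1  # the first event opens a session
        for prev, cur in zip(times, times[1:]):
            if cur - prev > TIMEOUT:
                sessions += 1
    return sessions

log = [("s1", 0), ("s1", 600), ("s1", 4000), ("s2", 100)]
print(count_sessions(log))  # 3
```

The same grouping generalises to the "interlocking" question: once events carry an activity label, one can inspect which activities co-occur within a session.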
Student evaluations measuring the teaching effectiveness of instructors have been widely used in higher education for many years. This study investigates the factors associated with the assessment of instructors' teaching performance using two different data mining techniques: stepwise regression and decision trees. The data were collected anonymously from students' evaluations in the Management Information Systems department at Bogazici University. Variables related to other instructor and course characteristics were also included in the study. The results show that a factor summarizing the instructor-related questions in the evaluation form, the employment status of the instructor, the workload of the course, the attendance of the students, and the percentage of students filling out the form are significant dimensions of instructors' teaching performance.
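The decision-tree side of such an analysis reduces to repeatedly picking the split that best separates evaluation outcomes. A self-contained sketch of one split search, using Gini impurity on an invented attendance factor (the data, threshold, and labels are illustrative, not the study's results):

```python
# Sketch of how a decision-tree learner picks a split on evaluation
# data: for each candidate threshold on a numeric factor (here, an
# invented attendance percentage), compute the weighted Gini impurity
# of the partition and keep the best threshold.

def gini(labels):
    """Gini impurity of a list of class labels."""
    if not labels:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((n / len(labels)) ** 2 for n in counts.values())

def best_split(xs, ys):
    """Return the threshold on xs minimising weighted Gini impurity."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

attendance = [90, 85, 75, 40, 35, 20]          # % of classes attended
rating = ["high", "high", "high", "low", "low", "low"]
print(best_split(attendance, rating))  # 40
```

A full tree learner applies this search recursively to each partition; libraries such as scikit-learn implement the same criterion in `DecisionTreeClassifier`.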
A high-quality review of the distance learning literature from 1992 to 1999 concluded that most research on distance learning had serious methodological flaws. This paper presents the results of a small-scale replication of that review. A sample of 66 articles was drawn from three leading distance education journals. Those articles were categorized by study type, and the experimental or quasi-experimental articles were analyzed in terms of their research methodologies. The results indicated that the sample of post-1999 articles had the same methodological flaws as the sample of pre-1999 articles: most participants were not randomly selected, extraneous variables and reactive effects were not controlled for, and the validity and reliability of measures were not reported.