The long and winding road: Insights from student misconceptions
Although online resources can complement and support face-to-face teaching, they run the risk of creating a distance between the student and the teacher. This can, however, be counteracted through careful design of structure and content. In particular, interactive tasks can provide the student with much-needed feedback while at the same time giving the teacher access to both state and event data from student replies, which can be used to evaluate and improve the course content.

This winter, the University Library at UiT launched a MOOC on information literacy, iKomp (www.edx.bibsys.no, with an estimated launch of the English version in May this year). Built on the open-source platform Open edX, iKomp consists of four modules: learning strategies, source evaluation, information searching, and academic integrity. As with most MOOCs, the content is a mix of text, videos, learning activities, and tests. The final exam is a 40-question multiple-choice test.

The lack of direct teacher-student interaction makes the assessment of learning outcomes from MOOCs and other web-based courses a challenge. While it is important that the selected assessment method tests the students' understanding of the course content, we consider it equally important that the exam itself promotes learning. Each question in the multiple-choice test closing iKomp has four alternative answers, and each distractor (wrong answer) is formulated to be plausible, thus encouraging thoughtful deliberation in the student.

An advantage of using event-recording technology in teaching is the possibility of gaining insight into student learning through usage data. In this paper, we present the results from a deep log analysis of the exam results over a period of six months. The study has a twofold objective. By examining the students' performance, we aim to evaluate the exam content.
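The event data described above can be reduced to per-question, first-attempt answer distributions. The following is a minimal sketch in Python, assuming a simplified, hypothetical record format (student, question, chosen alternative, timestamp); the actual Open edX tracking-log schema differs:

```python
from collections import defaultdict

# Hypothetical event records: (student_id, question_id, chosen_alternative, timestamp).
# Field names and values are illustrative only, not the real log schema.
events = [
    ("s1", "q1", "B", 10), ("s1", "q1", "A", 25),  # s1 retried q1
    ("s2", "q1", "A", 12),
    ("s1", "q2", "C", 30), ("s2", "q2", "C", 31),
]

def first_attempts(events):
    """Tally each student's earliest answer per question."""
    first = {}
    for student, question, answer, ts in sorted(events, key=lambda e: e[3]):
        # setdefault keeps the first (earliest) answer and ignores retries
        first.setdefault((student, question), answer)
    counts = defaultdict(lambda: defaultdict(int))
    for (student, question), answer in first.items():
        counts[question][answer] += 1
    return {q: dict(a) for q, a in counts.items()}

print(first_attempts(events))
# → {'q1': {'B': 1, 'A': 1}, 'q2': {'C': 2}}
```

Restricting the tally to first attempts matters: later attempts are informed by feedback on earlier ones, so only the first choice reflects the student's initial understanding.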
Specifically, by analysing response patterns, we can assess whether some of our alternative answers are confusing or whether the questions themselves are easily misunderstood. By examining the type and rate of errors, we also aim to identify the areas where students need more input. By filling these gaps, we address the students' needs and thereby improve the overall value of the course.

The analysis of the exam answer distribution log reveals that several questions are too easy, as the majority of students succeed on the first attempt. Other questions show a more even distribution, with more than one answer alternative selected quite frequently. This paper presents the patterns, or lack thereof, between answer distributions and course content. Specifically, do certain content areas stand out in terms of error rates? We discuss whether a revision of the exam or the course content is called for, and whether some areas may be harder to teach online than others. We consider this type of analysis to have at least three possible benefits: (i) improving our information literacy courses, (ii) refining our understanding of student learning, and (iii) increasing student-teacher interaction online.
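Screening questions for the two patterns just described (too easy, or a distractor drawing many students) can be sketched as follows; the thresholds are illustrative choices of our own, not values from the study:

```python
def classify_question(counts, correct, easy_threshold=0.9, spread_threshold=0.25):
    """Classify a question from its first-attempt answer distribution.

    counts: mapping from answer alternative to number of students choosing it.
    correct: the key of the correct alternative.
    Thresholds are illustrative, not empirically derived.
    """
    total = sum(counts.values())
    correct_rate = counts.get(correct, 0) / total
    if correct_rate >= easy_threshold:
        return "too easy"
    # A distractor chosen by a large share of students may signal a
    # widespread misconception or a confusingly worded alternative.
    distractor_rates = (n / total for a, n in counts.items() if a != correct)
    if any(rate >= spread_threshold for rate in distractor_rates):
        return "popular distractor"
    return "ok"

print(classify_question({"A": 95, "B": 3, "C": 1, "D": 1}, correct="A"))   # → too easy
print(classify_question({"A": 50, "B": 35, "C": 10, "D": 5}, correct="A")) # → popular distractor
```

The two outcomes map onto the two follow-up actions discussed above: a "too easy" question is a candidate for exam revision, while a popular distractor points to course content that may need more input.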