Students Create Tests - Case Study

We like students to create questions because it helps us think and learn from a different perspective. Creating questions uses a different part of our minds than studying and taking exams or quizzes does. The quiz show Jeopardy! is famous for this: contestants must phrase their responses in the form of questions. Sometimes your questions can even be added to our Test Pool collection of questions, which helps us all learn.

The Art and Pedagogy of Creating Quiz Questions

Since Socrates' time, asking good questions has been regarded as a strong indicator of knowledge and intelligence. Formative use of multiple choice tests facilitates learning and can motivate and guide the student. Still, the task of creating tests, with their questions and alternatives, can be difficult. The process towards a successful test requires a certain amount of background knowledge, an overview of the field in question, and the ability to view the knowledge domain from a different perspective. The art and science of creating tests, or educational assessment, is also known as psychometrics. This article discusses how learning can be enhanced by engaging students in the creative task of making their own tests.

Introduction

Multiple choice tests can be used both in assessment and in the learning process. The straightforward approach is for the teacher to develop multiple choice tests and then use them either in summative assessment or in mandatory exercises for self-evaluation.

This article presents a way to use tests as an alternative pedagogical activity. Every teacher who has tried to make a multiple choice test knows that the task can be challenging. The process requires knowledge and understanding of the field, and reflection, self-evaluation and quality assurance of the work are necessary to succeed. Hence, it should not only be possible to enhance learning by implementing test creation as a student activity; it should also be an interesting pedagogical exercise for both the student and the teacher.

Theoretical foundation and motivation

According to William Horton, making a good test consists of four iterative tasks: design, distribution, grading and improvement. Several challenges exist in each phase. Test design concerns how to write questions of high quality, but also involves other test properties such as feedback strategy. Distribution involves, for instance, how to deliver the test, the number of attempts allowed and how to deal with cheating. The grading phase involves calculation of the score and a grading strategy, a task that is normally difficult for the average teacher and should be assisted by the functionality of a test tool. The final phase involves improvement of the test through analysis of the results, so that, for instance, bad questions and alternatives can be identified.
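
To make the improvement phase concrete, the sketch below flags questions that nearly everyone passes or fails. This is a common form of item analysis rather than a procedure taken from Horton; the response format (one row of correct/incorrect flags per student) and the thresholds are assumptions for illustration.

```python
# Sketch of the improvement phase: flag questions that nearly everyone
# passes or fails. The data layout and thresholds are assumptions.

def item_difficulty(responses: list[list[bool]]) -> list[float]:
    """Proportion of students who answered each question correctly."""
    n_students = len(responses)
    n_items = len(responses[0])
    return [sum(row[i] for row in responses) / n_students for i in range(n_items)]

def flag_weak_items(responses: list[list[bool]], low: float = 0.2, high: float = 0.9) -> list[int]:
    """Indices of questions almost everyone fails (possibly flawed)
    or almost everyone passes (non-discriminating)."""
    return [i for i, p in enumerate(item_difficulty(responses)) if p < low or p > high]

# Four students, three questions.
results = [
    [True, True, False],
    [True, False, False],
    [True, True, True],
    [True, True, False],
]
print(item_difficulty(results))  # [1.0, 0.75, 0.25]
print(flag_weak_items(results))  # [0] -> everyone got question 0 right
```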

When creating a multiple choice test, the teacher meets challenges already in the first phase. How does one write questions of high quality that test not only knowledge, but also understanding, application and the ability to analyze a problem? Bloom's taxonomy categorizes cognitive levels of competence. It is quite possible to create multiple choice questions that test the student on different levels of Bloom's taxonomy, thus making the test more demanding, valid and relevant. By providing the candidate with stimuli prior to a question, for instance an image, a sound or video clip, some text or other resources (or even activities), it is easier to test, for instance, the student's understanding, ability to apply knowledge or ability to analyze a problem. It is important to note that what demands deep insight from a first-year student may be routine recall for a master's student.
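
As an illustration of how one topic can be questioned at different cognitive levels, the hypothetical stems below (invented here, not taken from the exercises described later) range from simple recall to analysis; note how the higher levels rely on a stimulus such as command output or a log.

```python
# Hypothetical multiple choice stems for one topic (Linux processes),
# tagged with levels from Bloom's taxonomy. Invented for illustration.
QUESTIONS = [
    {"bloom": "knowledge",
     "stem": "Which command lists running processes?"},
    {"bloom": "comprehension",
     "stem": "Which statement best describes a zombie process?"},
    {"bloom": "application",
     "stem": "Given the ps output above, which signal terminates process 4711 cleanly?"},
    {"bloom": "analysis",
     "stem": "The attached shell log shows a fork bomb. Which line reveals it?"},
]
```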

Furthermore, how can a teacher create questions and alternatives that are not self-explanatory, and that allow the test to accurately depict the level of the student's knowledge and understanding? Identifying good distractors, that is, wrong alternatives that are still plausible, is a demanding task. To summarize, making good questions requires concentration, creativity, reflection, a critical eye and, ideally, an external evaluator for quality assurance.
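
A standard check on distractor quality, once a test has been run, is simply to count how often each alternative was chosen; a distractor that almost no one picks adds nothing to the question. The sketch below uses hypothetical answer data; the check itself is common psychometric practice, not a method from this article.

```python
from collections import Counter

def distractor_counts(choices: list[str]) -> Counter:
    """Count how often each alternative was selected."""
    return Counter(choices)

# Hypothetical answers from ten students for one question; "B" is correct.
answers = ["B", "B", "A", "B", "C", "B", "B", "A", "B", "B"]
counts = distractor_counts(answers)
for alt in "ABCD":
    print(alt, counts[alt])
# "D" was never chosen: a non-functioning distractor that should be
# replaced with a more plausible wrong alternative.
```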

It is worth noting that a large question pool will be useful to the teacher when creating future tests and reusing existing questions. The pool can also assist the tasks of result analysis and quality assurance in the improvement phase. It obviously takes time to build a high-quality question pool, but once populated, it is very useful for creating a variety of tests for both formative and summative assessment.
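
To make the idea of a reusable pool concrete, here is a minimal sketch of what a pool entry might look like. The fields (Bloom level, topic, per-alternative explanations) are assumptions chosen to support the uses mentioned above; they do not describe any particular test tool.

```python
from dataclasses import dataclass, field

# Sketch of one entry in a question pool. The fields are assumptions,
# chosen to support reuse across tests, tagging by Bloom level and topic,
# and recording why each alternative is right or wrong.
@dataclass
class PoolQuestion:
    stem: str
    alternatives: list[str]
    correct: int        # index of the correct alternative
    bloom: str          # e.g. "knowledge", "application"
    topic: str
    explanations: list[str] = field(default_factory=list)

pool = [
    PoolQuestion(
        stem="Which command shows free disk space?",
        alternatives=["df", "du", "ps", "top"],
        correct=0,
        bloom="knowledge",
        topic="Linux basics",
        explanations=["df reports filesystem usage", "du sums file sizes",
                      "ps lists processes", "top monitors processes"],
    ),
]

# Reuse: select all knowledge-level questions on a topic for a new test.
subset = [q for q in pool if q.bloom == "knowledge" and q.topic == "Linux basics"]
print(len(subset))  # 1
```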

Exercise: The students create their own tests

Creating something often results in learning and involves new cognitive processing of existing knowledge. This applies to written texts, presentations and learning material, and it should also be true for creating tests. In order to succeed when creating a test, with its questions and alternatives, the test creator must view and process the theory and knowledge domain from a different angle than when explaining or reproducing something. Both detail-specific knowledge and a generic overview and understanding of the knowledge domain are necessary.

Given that the teacher faces challenging mental processing when creating multiple choice tests, the students should meet the same issues. Hence, making a pedagogical activity out of test and question creation seems interesting. In order to succeed with their questions, students have to read, process and understand the course material in a new way, different from before, and this activity should, in theory, enhance learning. The task of creating obviously requires mental processing at different levels, so one might also relate the creation activity itself to Bloom's taxonomy in general, and to the higher evaluation level (within Bloom's model) in particular. Self-evaluation of test and question quality should also enrich learning, since self-evaluation is regarded as an important part of the overall learning process.

Explaining the Answers

The article The Quiz Game: Writing and Explaining Questions Improve Quiz Scores, by Kerkman and Lewis, proposes a "quiz game" in which students write multiple choice questions and must explain why each answer is correct or incorrect, as a means of improving student performance and increasing learning. The authors compare two psychology classes that took the same in-class quizzes, one of which participated in the "quiz game" and one of which did not. The class that participated in the "quiz game" received higher quiz scores for the course, even after controlling for GPA. The authors suggest that the higher-level processing required to explain the answers to the questions they wrote promoted elaboration and improved recall. In comparing the two classes, the authors tested three major hypotheses and found that (1) writing multiple choice questions and explanations for each answer did improve quiz scores; (2) students whose questions met minimum requirements for grammar, length, structure and explanation had higher scores than those whose questions did not; and (3) the conceptual quality of student questions was not significantly correlated with quiz scores.

Individual Exercise

As a pedagogical learning experiment, the students in the courses Internet Publishing and Operating systems with Linux (Spring 2006) were given a mandatory exercise: create their own multiple choice tests on a given subject. A note explaining the importance of Bloom's taxonomy was attached to the exercise. The introduction to the exercise stated well-defined learning goals: process the knowledge domain from a different perspective, increase the ability to problematize knowledge, and work with the cognitive levels of Bloom in mind. The exercise text also gave a few examples of questions at different levels of Bloom's taxonomy, and the students had previously taken tests made by the teacher.

The students were also asked to explain why their distractors were wrong and why their correct alternatives were correct; the reflective nature of this task should improve question quality. After finishing the test, each student evaluated her own learning process. Given the small amount of empirical data, it is of course difficult to conclude that such an activity actually increases learning, but student satisfaction and feedback give some indication of success. Almost all of the roughly 30 students in each course enjoyed this form of exercise and claimed to have learned more than by just answering tests or writing texts as in traditional exercises. Many students noted that it was difficult to create good questions.

The exercise was intended to be individual, but some students still chose to collaborate. Many questions were on the lowest level of Bloom's taxonomy; the students who collaborated produced better questions in this respect. It was also interesting to note that some questions were very creatively formulated. The students created their tests as Microsoft Word documents, which only the teacher read. Clearly, there was potential for improvement.

Collaborative Exercise

Building on the experiences from the first exercise a year earlier, the students in the course Operating systems with Linux (Spring 2007) were first given a wiki exercise in which each group wrote self-chosen texts into a common wiki. They then had to pass a digital multiple choice test provided by the teacher through the LMS it's learning; this test was individual, to ensure participation by everyone. Some weeks later, they were given the pedagogical exercise of creating their own multiple choice tests in groups. In addition to these exercises, regular lectures were given, so altogether the students were offered several different ways to learn throughout the course.

The exercise of student test creation was improved based on the weaknesses discovered the previous year. The students were told to work in their well-established groups and create a common multiple choice test of at least 10 questions. Each group was given privileges in the LMS to create and manage its own tests, and each group also wrote a reflection note evaluating its work and the learning effect. Finally, the teacher collected all the tests and made them available to the entire class as preparation for the final exam.

Conclusions

American psychiatrist William Glasser claims that we learn "10% of what we read, 20% of what we hear, 30% of what we see, 50% of what we see and hear, 70% of what is discussed with others, 80% of what is experienced personally, and 95% of what we teach to someone else." Facilitating student-made quizzes can expose students to all of these forms of learning. Students benefit especially from the experience of teaching others, not only because it reinforces previously known and newly found knowledge, but also because it supports collaborative learning and is a form of empowerment learning. By facilitating student-made quizzes in the classroom, a teacher is not only giving students knowledge, but also showing them how to use it.

Tests can be used formatively to enhance student motivation, activity and learning. As many teachers have experienced, a lot of learning and mental processing is involved in the task of creating a test, so it makes sense to challenge students to create their own tests. The experiments described here indicate that many benefits can be obtained when implementing this kind of activity as part of the curriculum, regarding both enhanced learning and important testing aspects such as quality assurance, question pool development and student acceptance. The final paper will explain in detail the various exercises briefly introduced in this "abstract paper", include another variation of such an exercise, and discuss and analyze the benefits and learning effects of this kind of pedagogical use of tests.

References

  1. Horton, W. Designing Web-Based Training. Wiley, 2000; 273-332.
  2. Bloom, B. Taxonomy of Educational Objectives. 1956.
  3. Brown, G. Assessment: A Guide for Lecturers. November 2001.
  4. Sirnes, S. Flervalgstester, konstruksjon og analyse [Multiple choice tests, construction and analysis]. Fagbokforlaget, 2005.
  5. Horgen, Larsen, Hjertø. Improving the Quality of Online Tests and Assessments. Educa Berlin 2005 Book of Abstracts, 2005; 338-341.
  6. SAPHE project. Self Assessment in Professional and Higher Education Project.