

Volume 54     Number 4    April 2017      Editor: Tara Behrend


Max Classroom Capacity: To Pop or Not? A Discussion of Pop Quizzes as Learning Assessments

Loren Naidoo

Welcome, readers! Pop quiz: Who was the first president of SIOP?

 

It was Bruce V. Moore, 1945–46! By the way, for a taste of the swashbuckling early days of I-O psychology, check out his SIOP presidential autobiography—I love how he casually tosses around names like Thorndike, James, and Thurstone!

 

Anyway, as I just clearly demonstrated, everyone loves pop quizzes! OK, fine, some people love trivia; nobody loves pop quizzes. Or do they? Or, if they hate them, maybe that’s OK, because they work? These are the questions I’m going to try to answer in this column!

 

Let me describe why I’m writing about this. I often teach large undergraduate classes of about 115 students. Years ago, I invested a lot of time developing multiple-choice exams (usually four per semester) that require more than memorization of definitions. It was a huge amount of work initially, though the exams were relatively easy to administer and grade. Here’s an example item:

 

Hari has just moved to New York City. All his friends from Ohio told him that cab drivers are really unfriendly. As a result, Hari tends to expect unfriendliness from cab drivers, to notice when they act unfriendly, and to forget the cab drivers who were friendly. Which bias/heuristic best describes this scenario?

                        (a) Availability heuristic
                        (b) Implicit person theory
                        (c) Confirmation bias
                        (d) Hindsight bias

 

Although I liked the objectivity of these exams and was satisfied with the item design, many students found them tricky, and some seemed unable to improve their scores no matter how much guidance I gave them on appropriate studying and test-taking strategies. This was very discouraging for them and for me. It made me wonder to what extent I was assessing intelligence, test-wiseness, or reading ability rather than understanding and application of the material, the outcomes I actually wanted to assess. Moreover, I was concerned about students’ tendency to “cram” for infrequent, high-stakes exams, which tends to inhibit deep encoding and long-term retention of information. Many students expressed anxiety about their upcoming midterm exams, despite the fact that I offered an optional cumulative final that they could take to replace their lowest midterm grade. I believe, informed in part by research on goal orientation, that if you value deep processing and long-term retention, then you need to de-emphasize performance and grades and emphasize content mastery. For me, multiple-choice exams undermined these aims.

 

So I decided to use pop quizzes instead. I think I stole the idea from Professor Rich Koestner, whose undergraduate psychology of motivation class at McGill University had an enormous positive impact on my research interests and career.

 

By pop quizzes I mean assessments that are (a) high frequency (~10 per semester), (b) brief (~15 minutes), (c) unannounced—students are informed that they may be quizzed at the beginning of any class, (d) required (i.e., not “bonus” assignments), and (e) graded. Specifically, the idea was to develop assessments that encourage students to distribute their studying more evenly (rather than cram), reduce anxiety because each individual quiz is relatively low stakes, and minimize the influence of test-wiseness. Together the quizzes make up 50% of the final grade, but the lowest quiz grade is dropped, so each individual quiz is worth about 5% (a worked-out sketch of the arithmetic follows below). Students are informed of these policies in the syllabus and in the first class meeting, in which I discuss all of the reasons why I switched from multiple-choice exams to pop quizzes. I really emphasize that I care about students’ learning and that the quizzes are designed to encourage healthy study habits and give students immediate feedback on their learning.
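For readers who like to see the weighting worked out, here is a minimal sketch (in Python, purely for illustration) of how a drop-the-lowest scheme like the one described above plays out arithmetically. The scores, function name, and component weight are made-up examples, not drawn from my actual gradebook.

    def quiz_component(scores, component_weight=0.50):
        """Drop the lowest quiz score, average the rest, and scale the result
        to the quiz component's share of the final grade (50% in this scheme)."""
        counted = sorted(scores)[1:]           # drop the single lowest quiz
        avg = sum(counted) / len(counted)      # average of the remaining quizzes
        return avg * component_weight          # contribution to the final grade

    # Example: 10 quizzes scored out of 1.0. With the lowest dropped, each of
    # the 9 counted quizzes is worth 50% / 9, roughly 5.6% of the final grade.
    scores = [0.9, 0.8, 1.0, 0.7, 0.95, 0.85, 0.6, 0.9, 1.0, 0.75]
    print(round(quiz_component(scores), 3))    # quiz contribution out of 0.50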

 

Here’s how it works. Before each class I decide whether I want to administer a pop quiz, what material will be covered, and how to score answers. Quiz topics are pulled from the prior class or two and/or the assigned readings. Usually the quizzes are written as scenarios that require students to apply their knowledge of relevant theories or research. I try to make them fun. Quizzes are administered during the first 15 minutes of class, are closed book, and no collaboration is allowed. We discuss the answers immediately afterwards. I grade them myself by hand—up to 1 minute per quiz × 115 students, or roughly 2 hours per quiz. Here’s an example quiz:

 

Xenia notices her prof carrying a pile of papers, which she takes to mean that there will be a pop quiz in 10 minutes! She immediately panics: her heart races, she sweats, and she feels extremely anxious. Soon after, she thinks: “I have 10 minutes to study, I already read the material once last night, I can ask Raj who sits next to me about the one theory that I’m confused about—I can do this!” And she no longer feels panicked.

  1. From the research on stress, what two aspects of the appraisal process are illustrated here? Please use the appropriate labels, and explain which is more automatic and which is more subject to cognitive control.
  2. According to Lazarus and Folkman’s demands-resources model, is Xenia experiencing stress?

 

Although the switch to pop quizzes wasn’t a completely uninformed decision on my part, it certainly would have been helpful to have a reference guide on their pros and cons, informed by the experiences of fellow I-O psychologists and relevant empirical research. That’s what I hope this column will be for you.

 

So I e-mailed an informal survey to some respected colleagues who teach I-O at various levels. Many of them responded immediately, for which I am very grateful! Three themes emerged. First, many considered (or inferred) the purpose of pop quizzes to be holding students accountable for completing assigned readings or attending class, and they felt that pop quizzes generally work well for those outcomes. Second, some used pop quizzes spontaneously, as a threat to increase accountability or as a punishment for poor student effort or preparation, rather than building them into the grading scheme described in the syllabus. Third, many faculty members were concerned that pop quizzes produce anxiety among students, in part due to the uncertainty around timing and in part because students’ busy schedules don’t permit them to study before each class. A few felt more positively about pop quizzes, but they typically used them as part of students’ participation grades or didn’t grade them at all. On the whole, faculty views of pop quizzes showed very little overlap with my rationale for using them. That makes me a bit nervous, but it was a small, nonrandom sample of faculty. What does the research literature say?

 

I conducted a quick, nonexhaustive search of the psychology and education literatures for empirical research on “pop quizzes,” “unannounced exams,” and so forth. I found very little research on the topic overall, and even less that used structured research designs with adequate sample sizes. Most of it centered on the question of how to improve compliance with reading assignments. For example, Ruscio (2001) examined performance on randomly administered pop quizzes that tested whether students had read and understood the assigned readings and found that the quizzes were passed 70–90% of the time. However, as there was no control group, these results are difficult to interpret.

Only one study, Anderson (1984), proposed quizzes as a means of reducing cramming, although Anderson examined announced rather than pop quizzes. The effects of weekly quizzes versus no quizzes on studying and exam performance were tested using a within-subjects ABA experimental design with 13 students across 15 weeks of a behavioral science course for medical students. Students studied more during quiz weeks than during nonquiz weeks, but there was no effect on subsequent exam performance.

 

Graham (1999) tested the effects of pop quizzes versus no quizzes in a similar within-subjects experimental design using students from two psychology classes. Graham found that grades on high-stakes exams preceded by quizzes were significantly higher than grades on exams not preceded by quizzes, though the effect was small. However, the positive effect of quizzes on exam performance was larger for students with B–D averages than for those with A averages (presumably due to a ceiling effect) or F averages.

 

Several other researchers advocated for the use of pop quizzes based on anecdotal evidence (Bebrend, 2013; Carter & Gentry, 2000; Thorne, 2000), nonexperimental designs (Sappington, Kinsey, & Munsayac, 2002), or self-reported studying and attendance behavior (Kouyoumdjian, 2004).

 

In sum, there’s little empirical evidence on the effects of pop quizzes on learning, and I found nothing that investigated the question of whether they produce greater anxiety than other assessment methods.

 

Interestingly, as you may have noticed, one of my goals in adopting pop quizzes in place of high-stakes multiple-choice exams was to reduce student anxiety—the very outcome that most of the surveyed faculty members associated with pop quizzes. How do actual students feel about this? Well, I asked my class. At that point they had taken two pop quizzes, neither of which had yet been graded. To be clear, I simply asked them in class after their last quiz—this was not an anonymous survey.

 

Several different opinions were expressed, but I can characterize the general response as a kind of grudging, semiresentful acknowledgement that being forced to read and study regularly is probably a good thing. Bingo! One or two students quite passionately expressed their dislike of pop quizzes, and their concerns were the pressure, lack of scheduling flexibility, and anxiety about unpredictability—all consistent with many faculty members’ concerns. I initially concluded that students on the whole did not like pop quizzes. But, at the very end, I asked students to indicate by show of hands how many disliked pop quizzes more than liked them, and how many liked pop quizzes more than disliked them. To my surprise about ¾ of the class reported liking pop quizzes more than disliking them! Only a handful of students (albeit a very vocal minority) disliked them more than liked them.

 

I have three concluding points:

  1. Here’s why I like pop quizzes: They give me (and, hopefully, students) insight into blind spots and inaccurate mental models. Students get immediate feedback, and we can discuss confusions or misconceptions on the spot. They require students to distribute their effort more evenly over time rather than cram, which should benefit long-term retention and deep processing of information to the extent that cramming undermines these things. They are low stakes, which should reduce anxiety, though maybe it’s a wash due to increased anxiety from the uncertainty and scheduling inflexibility. Finally, they give students more experience writing compared with multiple-choice tests.
  2. Here’s what I don’t like about pop quizzes: They are often onerous to grade, and although any assessment method is likely to be hated by a minority of students, perhaps no assessment method will be hated quite so intensely.
  3. It’s very clear that this is an area that could benefit from more empirical research. If anyone is interested in working on this, please let me know! 

As always, please e-mail me if you have questions or comments, or even just to say hello. Thanks for reading!

 

References

Anderson, J. E. (1984). Frequency of quizzes in a behavioral science course: An attempt to increase medical student study behavior. Teaching of Psychology, 11, 34.

Bebrend, A. H. (2013). Teaching note: The one-question pop quiz. Teaching History: A Journal of Methods, 38(1), 39–40.

Carter, C., & Gentry, J. A. (2000). Use of pop quizzes as an innovative strategy to promote critical thinking in nursing students. Nurse Educator, 25(4), 155.

Graham, R. B. (1999). Unannounced quizzes raise test scores selectively for mid-range students. Teaching of Psychology, 26(4), 271.

Kouyoumdjian, H. (2004). Influence of unannounced quizzes and cumulative exam on attendance and study behavior. Teaching of Psychology, 31(2), 110–111.

Ruscio, J. (2001). Administering quizzes at random to increase students’ reading. Teaching of Psychology, 28(3), 204–206.

Sappington, J., Kinsey, K., & Munsayac, K. (2002). Two studies of reading compliance among college students. Teaching of Psychology, 29(4), 272–274.

Thorne, B. M. (2000). Extra credit exercise: A painless pop quiz. Teaching of Psychology, 27(3), 204.
