I don’t believe many would fault teachers for believing uncertainty is the kryptonite of learning. It makes sense that teachers are here to provide clarity in a murky world. We should clear up any misconceptions and provide answers to questions when our students are confused. I mean, the faster our students know the correct answer, the sooner we can move on to tackling the next standard…right? But is there a place for uncertainty in the classroom? Is there power in letting your students hang out on the edge of that educational canyon for a bit? Just to see if they can figure it out themselves? In their paper, Harnessing the Power of Uncertainty to Enhance Learning, Overoye and Storm leap from that canyon wall to explore how uncertainty could lead to real gains in learning. They contend that it is in “the attempting to overcome the uncertainty that learning has the greatest opportunity to occur.”
Uncertainty is discussed through two conditions: uncertainty through inquiry and uncertainty through contradiction. Uncertainty through inquiry involves students being required to generate information, as opposed to receive it. Uncertainty through contradiction occurs when students are confronted with information contradictory to their belief or understanding and have to distinguish the correct from the incorrect. No matter the condition, Overoye and Storm set forth a basic principle: uncertainty is not a state to be avoided. When used correctly, it should be employed “as a vehicle to effectively engage and enhance student learning.”
The authors further dissect these two conditions of uncertainty, describing the types of inquiry that can lead to gains when students must create answers to questions or prompts, and the ways learning benefits when students meet information that contradicts their beliefs.
Uncertainty Through Inquiry
Testing is a first type of uncertainty through inquiry. Using the retrieval process of memories to recall information is a researched strategy that can lead to gains across many content areas and testing types. This testing effect often leads to greater gains on more difficult tests than on easier tests, particularly when testing is delayed. When creating answers for either formative or summative assessments, students may experience uncertainty. Please see The Learning Scientists for more information on differing types of testing and their positive effects.
A second type of uncertainty through inquiry is interrogative questioning. This strategy requires students to answer questions while they are studying. For example, when using elaborative interrogation, students are led to generate their own explanations and reasons about certain concepts. Also, self-explanation, as a type of interrogative questioning, requires students to relate their current knowledge to new information or discuss how they solved a prompt. This questioning or self-explanation forces students to interact with and focus on the text or material, which can create an uncertainty that sparks curiosity to further research or re-read the material for clarification.
A last form of uncertainty through inquiry is generation, which asks students to create new information, with uncertainty often arising as a byproduct. Generated material is better recalled later, perhaps because of the intrinsic nature of the generation. Research has also shown that generation enables students to be better learners in the future. In a way that other forms of practice don’t allow, generation gives students a taste of learning how to learn.
Uncertainty Through Contradiction
Through inquiry (testing, interrogative questioning, and generating), students create answers, ideas, concepts, etc. With this level of thinking, students are apt to make mistakes. When students are allowed to make mistakes and are later made aware of their errors, research has shown that learning is actually facilitated, a process called error-facilitating learning. Through error-facilitating learning, when students are in a place of uncertainty because of their incorrect answer, they are in a position to more effectively encode and remember the correct information. Furthermore, when students generate a guess before being told the correct answer, they perform better when later tested than if they had simply studied without generating incorrect responses. Does the level of confidence the student has in their answer matter? According to research, and perhaps contrary to intuitive thought, incorrect answers given with high confidence are actually more readily corrected by feedback than those given with low confidence.
Doesn’t this contradiction cause a confusion in the student’s understanding that can create a certain level of proactive interference? Actually, no. As long as the correct answer is eventually given, the confusion of the incorrect belief enhances learning. The confusion appears to challenge the student’s schema and force the creation of a new understanding, a process theorized as cognitive disequilibrium. Another case for contradictions and confusion is the idea that critical thinking skills and scientific reasoning are enhanced. Allowing for confusion lets students challenge their beliefs and create new ideas or ways of thinking that may be difficult to cultivate otherwise. This idea of learning how to learn and assimilating new information to accommodate schemas is also very transferable from subject to subject and is a much needed skill throughout life.
Allowing for uncertainty in the classroom seems ironic, even hypocritical, when measured against outdated beliefs about school. Students enter our classroom to learn. Teachers should show them the correct answers so they can commit them to memory and regurgitate the answer when needed. While this may lead to more memorization, it doesn’t necessarily lead to a better learner unless certain learning strategies are present. Placing students in a state of limbo with their beliefs and allowing them to wrestle with the uncertainty of their knowledge leads to better learning and to the cultivation of better thinkers. As Overoye and Storm conclude, “Education is about more than learning facts; it is about teaching students to evaluate what they know and do not know and to think critically about what they learn.”
[This post first appeared on Blake's own excellent blog, The Effortful Educator. We highly encourage you to visit, read, and subscribe.]
Blake Harvard is an AP Psychology teacher at James Clemens High School in Madison, Alabama. He has been teaching for about a decade and received his M. Ed. and B. S. degrees from the University of Montevallo. Blake has a particular affinity for all things cognition and psychology; especially when those areas are also paired with education and learning. He started his blog The Effortful Educator to highlight research being done on learning, memory, and cognition and their connections to the classroom.
Overoye, Acacia L., and Benjamin C. Storm. “Harnessing the Power of Uncertainty to Enhance Learning.” Translational Issues in Psychological Science 1.2 (2015): 140-48.
Each semester I attempt to overhaul a single course based on my assessment of the previous semester. In fall 2014, that class was Learning and Memory. As a memory researcher, I know that frequent testing is a powerful memory enhancer (1), so I incorporated daily quizzes. I also wanted to engage students in psychology research, so students designed their own memory experiments. I saw huge gains in student learning from the beginning of the semester to the end and assumed that students loved the course as much as I did. (Because why wouldn’t they?) On that dreaded day when course evaluations are released and faculty are crying over that one negative comment, I was eager for confirmation that my hard work paid off… only to be completely blindsided by the number of negative reviews from students.
One comment read, very simply, “I did not like the quizzes or the paper we had to write.”
“But it was for your own good!” I yelled at the computer screen.
While there was satisfaction in seeing gains in student learning, it was completely overshadowed by their distaste for the course. Consumed by the need to be liked (a feeling that plagues most junior faculty), I concluded that another overhaul of the course was necessary. Inspired by a Psychonomics talk by Dr. Steven Luck on “Using Cognitive Psychology to Improve the Teaching of Cognitive Psychology,” I decided to give students more of what they seemed to hate. After careful research and reflection, I incorporated even more empirically supported techniques into my course, but did so in a way that would garner student buy-in. The six learning techniques described below became the essence of my new and improved Learning and Memory course.
Learning and Memory Revised
Six learning-enhancing techniques were used to improve student learning:
1. Daily Low-Stakes Quizzes. Testing/quizzing is a very powerful memory enhancer (when compared to re-study (1)). There were pop-quizzes at the end of most class periods. These quizzes were a small proportion of their final grade and covered the material presented that day. Students were allowed to use hand-written notes to look up answers, but were encouraged to attempt to retrieve the answer from memory before looking it up.
2. Hand-Written Notes. Hand-writing (as opposed to typing) notes leads to a better understanding of the material (2). Although students were free to type their notes, only hand-written notes could be used on the quizzes.
3. Self-Correcting Exams. Students took the multiple-choice portion of each exam once in class and again at home (with notes) for a chance to improve their score. The average score (from both attempts) was used to calculate the final exam grade. This self-correcting method improves learning because students need to spend more time on the material and they are challenged to find the correct answer on their own (3). Furthermore, research indicates that errors made in high confidence will lead to better learning of the correct answer— a phenomenon called hypercorrection (4). The self-correcting method ensures that students will take a careful second look at their exams.
4. Distributed Practice. At the beginning of each class period, I asked students questions about key concepts from the class period before. Distributed practice helps connect the material and improves retention of information (5).
5. Elaborative Processing. Relating the information learned in class and from the text to one’s own life can improve learning (6). Reflections, in-class activities, experiments, and the group research project reinforced this type of deeper understanding.
6. Collaborative Review Sessions. Students answered practice test questions first on their own, then with a partner, followed by corrective feedback from the instructor. Incorporating both testing and elaborative processing during review sessions increases student understanding of the material by providing an initial retrieval opportunity followed by immediate feedback from peers (7).
These learning-enhancing methods became the theme of the course. Students developed their own experiments related to improving memory, created flyers about study methods and posted them around campus before finals week, and reflected on the process of learning in these ways at several points in the semester. Reflection was guided by the use of prompts that encouraged students to apply the learning-enhancing methods to their studying for the course and their learning in other courses.
Cultivating Student Buy-in
One of the biggest challenges in any course re-design is getting students to buy into the process. On day one of the semester I talked about the six learning-enhancing techniques and how they would be used in this course. I included this information (citations and all) in the syllabus and quizzed them on it.
In the past I had used some of the learning-enhancing techniques, but I never explained why I used them. I foolishly assumed that students would make the connection between the testing effect that we discussed in class and the fact that they took a quiz each day. This time was different. I let the students see behind the curtain.
Although it is a great start, telling students why we do what we do is not enough to get buy-in. Throughout the semester, students read the original research behind these learning-enhancing techniques and reflected on their application to learning in this course and beyond.
Reflection is also a great way to get feedback. The general consensus from my students was that these methods were beneficial to their learning. In their reflections, they even developed interesting ways to incorporate other learning-enhancing techniques into the course such as interleaving (8) and dual-coding (9).
It was course evaluation day. Again. After the last time, I was cautiously optimistic. I opened the link to find that I had done it. Not only did I see student learning increase throughout the semester, but their ratings of the course increased significantly as well. They had learned something and enjoyed learning it! When asked what helped their learning, nearly every student comment mentioned the learning-enhancing techniques—especially the daily quizzes. Yes, they liked the daily quizzes, because now they could see the value in them.
Applying These Methods to Other Courses
This course design can easily be applied to any cognition-related course, and these techniques can also be used in other courses on a smaller scale. Let your students in on why they take quizzes every day in your class, why they are required to take hand-written notes, or why they are engaging in research in a psychology class. At midterm and the end of the semester, have students reflect on their learning and how the daily quizzes, review sessions, and projects they completed helped them understand the material. By implementing some of these techniques and learning reflections, you may improve student learning and be pleasantly surprised when examining those pesky end-of-semester course evaluations.
Jessica LaPaglia is an Assistant Professor of Psychology at Morningside College in Sioux City, IA, where she lives with her husband and daughter. She received her B.A. in psychology from Augsburg College (Minneapolis, MN) and M.S. and Ph.D. in psychology from Iowa State University (Ames, IA). She teaches a variety of courses including cognitive psychology, brain and behavior, and research methods.
Special Thanks: To Jason Chan for helping me discover the awesomeness that is the testing effect.
(1) Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319, 966–968.
(2) Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25, 1159-1168.
(3) Gruhn, D., & Cheng, Y. (2014). A self-correcting approach to multiple choice exams improves students’ learning. Teaching of Psychology, 41, 335-339.
(4) Metcalfe, J., & Finn, B. (2011). People’s hypercorrection of high-confidence errors: Did they know it all along? Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 437-448.
(5) Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132, 354-380.
(6) Craik, F. I. M., & Tulving E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104, 268-294.
(7) Maxwell, E. J., McDonnell, L., & Wieman, C. E. (2015). An improved design for in-class review. Journal of College Science Teaching, 44, 48-52.
(8) Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: Is spacing the “enemy of induction”? Psychological Science, 19, 585–592.
(9) Paivio, A., & Csapo, K. (1973). Picture superiority in free recall: Imagery or dual coding? Cognitive Psychology, 5, 176-206.
In my five years as a graduate student, I’ve asked many students what originally drew them to psychology. To date, none have enthusiastically responded, “Statistics!” If there are students who gush about hand-calculating F-values, they must be attending other schools. More commonly, I get students who say they have to (or hate to!) take statistics. All manner of groans, squirming, and eye-rolling follow. They matter-of-factly inform me that they got into this field for the people, not the numbers. They seem to believe that they “don’t need to know this to be successful in psychology!”
Sadly, they do. That said, I don’t blame them for their complaints. Very few students sign up for a psychology major with the end-goal of a quantitative career. Usually, they either want to help people or to understand them (or perhaps even explore themselves.)
Some studies have sought to understand why students come in hating psychology-statistics courses. Researchers often arrive at the explanation that people range greatly in math experience; other experts note greater anxiety as a function of low interest in statistics. Finally, it is possible that beliefs about math ability may play a role; social psychology research suggests women might be anxious about confirming stereotypes (here, being ‘bad at math’)—keeping them from performing at their best. These factors and more can lead students to have a distaste for their required stats course.
In my experience, complaints fall into two general types: “stats class is not fun” and “stats class is not useful to me.” It’s challenging to address these grievances, largely because of the way courses are designed. Even the labs, which should be more hands-on and engaging, are a source of frustration or boredom for many students.
Harry Potter and the Sorcerer’s Stats
How can you make lab something that a student would look forward to each week? A little Harry Potter goes a long way. The smaller nature of lab allows me to sort people into small groups, so I bring in a Sorting Hat on the first day. Students pick from the hat to determine their House, and they’re seated near their Housemates. In case you are not familiar with Harry Potter, there are four Houses into which students are sorted—often, members of these Houses have common characteristics. I use the story’s frame to my advantage as an instructor. Students will explore a psychological construct relevant to their House for the duration of lab.
The Data Hallows
In the first week, Houses receive a short survey. They can add to it (with approval) and then collect data from around the university: about 45-50 participants per House. Since the research is not to be published and for educational purposes, our IRB requires no protocol, but you should check with yours.
All surveys should contain the following items: age, gender, grade point average, and any other scales you’d like to use as predictors. However, each House administers a survey that has unique items—5 items each from a validated scale involving their topic above. Hufflepuff might have a few items from the Big Five. Ravenclaw might have IQ-relevant questions. You could use whatever items you would like, but keep in mind that students are more interested in significant effects!
They have a week to collect these data, and then they return to the next lab to enter them into SPSS together. Students love this about the course—their evaluations often mention the data collection. Their data is not invented, like every exercise they see in the book. It’s real! Learning statistics suddenly is no longer a hypothetical exercise; students can now take ownership and carry out their first study by following the course’s guidance. From day one, they’ve gone out into the world and collected new knowledge. And every lesson can unlock a new piece of the puzzle—a veritable alohomora.
This wizarding frame and their House dataset follow them throughout the course. Here are a few specific approaches I used with these critical elements to keep students interested.
A nice thematic touch for giving quizzes is to allow 20 minutes to do something basic in SPSS (or your platform of choice). If you can make the quiz House-relevant, that’s even better. Remember, you do not have to change the data set for each House—just change the wording of the quiz question. For example, one of our quizzes involved correlation. For Ravenclaw House, the two variables were standardized test scores and number of books in the home—playing on Ravenclaw’s thirst for knowledge. Meanwhile, Hufflepuff students had conscientiousness and loyalty: two qualities the House values. The data sets, though, were numerically identical, so the quiz was equal in difficulty for both Houses.
Flourishes and Blots
I stress to my students that much of science flies or flops in the writing, and that I’ll be taking them through all the steps of the process. Writing assignments need not be long (2-3 pages), but they should be somewhat APA-structured: Introduction, Method, Results, and Discussion. Each paper tackles one question about the data they collected at the semester’s start: for example, what the typical participant is like, if there are gender differences, and potential relationships between their variable and GPA.
I give them an opportunity in lab to peer-edit rough drafts—I lead these sessions to teach them what to look for. Like our own process for publication, the students rely on their peers initially to catch glaring issues or limitations with the paper. It’s a friendly environment to introduce them to peer-review. But I still remind them that the final draft heads to the editor—me! At the end of the semester, each student should have three good papers to work with.
The House Cup
Writing these papers becomes meaningful, because at the end of the semester, some professors are gracious enough to hold a mock-conference in lecture. In the last weeks of lab, I teach students how to construct a poster from their paper(s) and present it—I encourage students to have fun with it. Each House chooses their favorite analyses. Professors can award points, extra credit, or prizes. I’ve found that students get very excited about their posters. I’m not sure I’ve gotten all the glitter off from last semester.
What’s interesting about using lecture time is that all five labs come together. Maybe Slytherin groups from all labs will have the same results. In that case, the professor can talk about reliability. If results differ, the professor can talk about difficulty in replication, something very topical to our science.
Fantastic Stats and Where to Find Them
With this structure, students conduct science, making the numbers about logic and truth rather than calculations. Statistics becomes a tool in their skillset, not an obstacle. By the end of class, students begin to learn about psychology from their peers—not passively from professors or assigned readings. They walk through each stage of our research process, from data collection to presentation. And many of them find that it is, well, kind of fun!
Rather than dreading statistics and avoiding it like the Dark Mark, they’ll begin to understand that statistics are a psychologist’s “most inexhaustible source of magic.” Maybe it won’t convert them all to statistics wizardry, but they’ll remember the course for years after.
Tom Tibbett is a PhD candidate in Social and Personality Psychology at Texas A&M University, graduating with an Advanced Statistics certification. He loves his alma mater, the College of William and Mary, where he graduated with a Bachelor’s degree in Psychology and English. His interests include teaching, number-crunching, puppies, and wizards. He can be contacted at email@example.com.
To my colleagues teaching PSYC 203, especially Heather Lench: you’ve been an inspiration and given me the foundation for these ideas. To Kaileigh Byrne: after meeting you, I don’t think I could avoid thinking about Harry Potter if I tried.
For more information
Conners, F. A., McCown, S. M., & Roskos-Ewoldsen, B. (1998). Unique challenges in teaching undergraduate statistics. Teaching of Psychology, 25, 40-42.
Hudak, M. A., & Anderson, D. E. (1990). Formal operations and learning styles predict success in statistics and computer science courses. Teaching of Psychology, 17, 231–234.
Sciutto, M. J. (1995). Student-centered methods for decreasing anxiety and increasing interest level in undergraduate statistics courses. Journal of Instructional Psychology, 22, 277–280.
Steele, J., James, J. B., & Barnett, R. C. (2002). Learning in a man's world: Examining the perceptions of undergraduate women in male-dominated academic areas. Psychology of Women Quarterly, 26, 46–50.
Tibbett, T. P. (2017, January). Making statistics psychology: Engaging students with relevant applications. Presented at the National Institute on the Teaching of Psychology (NITOP) Annual Meeting.
Wilson, S. G. (2013). The flipped class: A method to address the challenges of an undergraduate statistics course. Teaching of Psychology, 10, 1-7.
"I get paid to teach them physics. It's not my job to care about their personal lives."
This was what I heard from a colleague during a professional development conference. The session emphasized understanding the personal needs of our students, and during an experiential demonstration this is what a professor of physics had to say. To say I was astounded would be an understatement. Flabbergasted? Getting closer. Disgusted? Bingo! How can a professional educator be so obtuse? We are not babysitters, but the suggestion that we need have no interest or investment in the personal struggles of our students was so antithetical to everything I thought my job was about that it left me wondering how this man was still employed. (Hint: tenure isn't always a good thing!)
On the other side of the coin, however, we have the student who does feel that their teacher is their own personal sounding board. You probably know that student if you've been teaching for any length of time. She always stays after class to talk about how today's lesson really hits home for her. He always wants to discuss a family member who he "thinks has those exact symptoms." They want you to diagnose them, to second-guess their doctor, to fix their friend, or to tell them if they are on the right prescription. These situations call into question the professional boundaries that must be established and maintained with students and, if we are not prepared, can leave us fumbling for a way to tell the student that they have crossed the line.
At the 2016 National Institute on the Teaching of Psychology (NITOP) I presented a Participant Idea Exchange that was meant to focus on the over-disclosing student. I had envisioned an interactive discussion about the student who monopolizes class time, often to the chagrin of other students. The idea was born from a section of abnormal psychology I had instructed that included "that student." She was very nice, but did not seem to understand that self-identifying a history of at least one diagnosis in every – EVERY – DSM category had gone beyond useful personal revelation and into the realm of "now the class is your own personal group therapy." But the attendees of this PIE also raised the issue of students who do not respect boundaries during office hours, turning that time into a personal psychotherapy session. Too often, such a student is unreceptive to subtle cues and fine-spun hints that the conversation is inappropriate. In point of fact, they may be equally dismissive of obvious, direct statements that their needs are more appropriately addressed in a different office.
What are some strategies for dealing with these delicate situations? How do we balance appropriate caring and concern with professionally necessary distance? Where is the demarcation line between trying to help and becoming a de facto psychotherapist? How do we support without offending? The ultimate outcome of this PIE was that there appears to be no singular technique. Some of the suggestions that came forth were as follows:
IN THE CLASS
It all begins with the messages that are given in class and in the course syllabus. A clear statement of what class time and office hours are for is essential, but it may also be necessary to make an equally clear establishment of boundaries.
Dealing with these issues when they first arise, though sometimes uncomfortable, is essential. As we all know, behaviors that are repeated can become entrenched. Thereafter, they become harder to disrupt. So sparing the student a bit of discomfort early can actually set them up for more embarrassment or awkwardness later. Addressing a student in private and making a clear statement of appropriate in-class boundaries is well-advised. Including a plan for how this can be accomplished is also helpful. In the semester following this PIE, I had another student who presented in a similar way. We spoke privately and agreed on a very subtle facial cue from me (an extended eye blink in her direction) that would be a code for "It's time for you to pull back." This cue was only needed twice, and her behavior improved within two weeks.
IN THE OFFICE
Keeping the office door open prevents creating a private space where inappropriate disclosures are likely to happen. When a student asks if they can close the door, you may respond by saying, "I prefer to keep it open." There is still some privacy, but not the intimate space that resembles a therapeutic setting. In an era where faculty members must also be cognizant of the appearance of impropriety with students, this is doubly important.
Operating Within Your Expertise
The risk of bruising a student's feelings must sometimes be met head on. Statements like, "This time is really reserved for classroom issues," "perhaps we can walk together to the counseling center" (if your campus has one), or "these are issues that are not really appropriate for us to be discussing" might be met with some displeasure. At the same time, they provide an unequivocal message to the student about what topics you are and are not comfortable discussing.
I strongly advise that a professor avoid using phrases like, "I'm sorry, but…" before setting a boundary. It sends a mixed message that you may, in fact, be receptive to a topic even if your words are suggesting otherwise. Students with boundary issues are likely to receive a mixed message but to attend primarily to the part of that message that serves their immediate needs.
Kind, but Firm
Be prepared for pushback, and be willing to re-assert the boundary. Students may not be intentionally pushing you into an uncomfortable place, and may need to hear where the line is more than once, in different words.
This list is certainly far from exhaustive, and is open to interpretation based on one's personal style, theory of teaching, and desire to attend to the students' academic and personal needs. Demonstrating sensitivity to our students' personal challenges will only enhance their engagement and ultimately their success in the classroom. But for the student who demonstrates diffuse (or absent) boundaries, the academic professional is well-advised to have some appropriate interventions at the ready.
Jason S. Spiegelman is an Associate Professor of Psychology at The Community College of Baltimore County in Baltimore, MD, where he lives with his wife and three sons. He teaches a variety of courses, including introductory, abnormal, social, and developmental psychology among others. He is an expert in the preparation and revision of psychology textbook supplements, having worked on such projects for over 150 textbooks over the years. He also serves as an advisor and contributor to The Noba Project.
The end of term is a stressful time. There are countless papers to mark. There are students eager to challenge the wording of each item on the final. There are deadlines to submit grades. During the crush of work that comes at the close of the academic season a few instructors have been known to complain. I am one of them. I have made jokes about doing my grading at a local bar and I have dished on my students’ failings. I have to admit, however, that complaining about my work and about my students has done little to boost my quality of life.
It is ironic that I so easily fall prey to the temptation to kvetch because I am a subjective well-being researcher. That is, I study happiness and know a fair amount about which behaviors and thinking styles are most likely to pay happiness dividends. In this post, I would like to share with you just one of the many practical suggestions that have emerged from studies on happiness. I do so now—at the beginning of the academic term—so that you have the mental resources to hear the message and use this information to manage stress months from now at the end of the term.
The tip I would like to share is deceptively simple: it is savoring. Simply put, savoring is the act of mentally extending a pleasant moment. Instead of gobbling down a chocolate, for example, you can stretch your pleasure by taking your time and letting it melt in your mouth a bit and appreciating the flavor. The same holds true for your teaching. Instead of focusing on every late student or every bungled Power Point slide you can start cataloging everything that goes right.
Interestingly, experts in savoring suggest that this phenomenon takes a unique form depending on whether you are focusing on yourself or on others. When the focus is internal we call that form of savoring “basking” or “luxuriating.” These are your proudest moments. They include the e-mails you save from grateful students or that minor brag about how you really helped guide someone at this morning’s office hours. When the focus is external, on the other hand, we call that “thanksgiving” or “marveling.” Examples include appreciating a particularly astute comment in class or being awed by an exceptional piece of student writing.
Taking the time to catalog pleasant moments and to share and remember them has been shown to boost happiness (Bryant & Veroff, 2007). This line of positive thinking is the counterpart to so-called “dampening” strategies. That’s right, we all engage in ways of thinking and behaving that drive our happiness right into the ground. Two of the most common dampening strategies are: 1) “fault finding” (spending time noticing and complaining about all your students do wrong and how difficult your job is and how little you get paid and…. Well, you get the idea); and 2) “negative mental time travel” (remembering all the unpleasant moments such as how you made a mistake on the syllabus or how the class was so confused when you tried to explain misattribution of arousal or how you rescheduled to meet a student at office hours and then he didn’t even show up!).
Right now, fortunately, you have a clean mental slate. The term is just beginning and you have countless opportunities ahead of you to log many positive and pleasant experiences. I am not suggesting that you pretend life is exclusively pleasant or that you ignore problems. Not at all. Instead, I am recommending that you take the time—even during the stressful moments at the end of the term when self-care is the first thing to fall by the wayside—to appreciate the many wonderful moments, awesome teaching and student contributions that are really why we are all instructors in the first place.
Dr. Robert Biswas-Diener is the senior editor of the Noba Project and author of more than 50 publications on happiness and other positive topics. His latest book is The Upside of Your Dark Side.
By Tabitha Kirkland, University of Washington & Deepti Karkhanis, Bellevue College
We teach with the hope that our students are learning, and we test to find out whether or not they have. At least, this is the traditional approach. We certainly give our students active opportunities to learn during class sessions, but we wondered: wouldn’t it be great if we allowed students to continue learning throughout the class? Traditionally, exams assess rather than create learning. And in introductory classes, these exams are better at evaluating recall and recognition than at evaluating, let alone promoting, deeper levels of understanding. We suggest that learning can occur throughout the course, even during a test. And if we really believe that deep learning is more important than memorization and regurgitation, we should be willing to reconceptualize the way we approach testing.
So, we’d like to share our adventures in group testing.
How group testing works
We did a series of little experiments at a large community college in Washington, where all classes are for first- and second-year students. We tried several variations on this basic theme, but what all of these had in common was that psychology students had the opportunity to review and discuss their multiple-choice exams with peers during class before submitting them for a grade.
In the first round, we gave students the same multiple-choice exam twice: once alone, and then immediately afterward in assigned small groups of 3-5 students. We let students complete individual response forms during both portions, so they were still responsible for their own scores and were not required to agree with the group. We then averaged both their individual and group efforts to calculate their exam score. The results? Almost 20% improvement from individual to group exams -- with some students improving by as much as 40%! (Karkhanis & Kirkland Turowski, 2015)
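The grading scheme described above is simple arithmetic: each student's exam score is the average of their individual and group attempts. A minimal sketch, with an illustrative function name that is not from the original study:

```python
def exam_score(individual: float, group: float) -> float:
    """Average a student's individual and group exam percentages."""
    return (individual + group) / 2

# Example: a student who scores 70% alone and 90% with the group
print(exam_score(70, 90))  # 80.0
```

Under this scheme a student's grade can only be pulled up (or down) halfway toward the group result, which keeps individual preparation consequential.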
Importantly, this improvement in learning did not seem to be due to one high-performing student dominating the group conversation. We noticed that almost everyone seemed to be contributing about equally during group exams, and students’ feedback lined up with these observations: in a follow-up survey, they reported that almost everyone contributed equally to discussion.
Scores also improved across the grade distribution: even the highest-performing individual students benefited from the group conversation (improved scores from individual to group exams), suggesting these overall improvements were not simply due to the lower-performing students catching up.
We tried mixing up who was in the groups. It didn’t seem to matter whether the groups were carefully pre-sorted to distribute members across the grade spectrum, or assigned randomly, or even chosen by the students themselves. It seems that what matters for performance is simply the group format, not the specific makeup of the group membership. (Group size probably does matter, however -- smaller groups give more opportunity for individual contribution.)
We also checked to see if there were reliable differences based on class topic (e.g., general psychology vs. lifespan psychology) or instructor or even instructional quarter. Nope. So this suggests that this approach would be appropriate for a variety of psychology classes.
One of us also tried this neat variation: group quizzes followed by individual exams. The rationale was that if students are learning more in the group atmosphere, then that learning should show when being tested on those concepts later. She put students in randomly-assigned pairs for weekly quizzes, then compared their individual exam performance with another class (same instructor) that had taken their weekly quizzes alone. Students in the paired-quiz class performed an average of 10% better on the individual exams!
Why do group exams work so well?
We asked our students what worked for them about these exams.
First, our students enjoyed the opportunity to discuss each answer choice with one another. They found the process of talking, debating, and bouncing ideas off each other to be useful, and they were able to see different ways of reasoning through these conversations. Students reported that they learned more through teaching others and being taught, and sharing these answers boosted their confidence in their own knowledge. Groups used different ways of arriving at a conclusion: some voted and went with the majority, others discussed until a consensus was reached, and others chose their own answers after discussion (remember, they had individual response forms). Regardless, almost everyone enjoyed the group exam format, which can positively affect the entire class experience (Stearns, 1996).
Second, group testing relieved the stress and anxiety that evaluative situations often cause. Our class sessions tend to be interactive, experiential, and collaborative, and we try to build community to encourage a positive learning environment. But when it comes time to test them, that environment changes. Tests can create high levels of anxiety for many students, yet we know that the best performance under challenging circumstances comes from moderate levels of arousal (Yerkes & Dodson, 1908; Broadhurst, 1959). How do we manage this traditionally? We just tell our students not to be anxious and hope it works out. Building on notions of context- and state-dependent memory, this means we are actually creating incongruent contexts for our students to perform in, while still hoping that they will be able to succeed. We teach this science to our students, so we should really be implementing practices based on it. Some of our students explicitly discussed the stress relief provided by the group-testing environment, and how that boosted their performance and recall. Group testing then not only mitigates test anxiety, it also makes in-class testing a more positive educational experience (just like the regular class sessions) (Ioannou & Artino, 2010).
A third reason group exams work: they give students an opportunity to explicitly review their answers and reasoning. If you’re not yet on board with testing in groups, you might just have students take the exam individually, then get them into groups to discuss the exam afterward and figure out why their answers were correct or incorrect together. This would be a more traditional approach -- collaboration during review -- in which one major benefit is simply having students look at the exam again instead of the more common practice of tossing it in their bags and never looking at it again. Of course, we’d like to think that some of the benefits of the group format would carry over to this also, like students learning from one another. But in this format, learning would occur after the fact, so this might work best if you plan to give a comprehensive final or something else cumulative to reward that effort.
Some of you must be wondering about social loafing and individual accountability. We do think it’s important to include an individual component of the grade so as to encourage students to prepare for exams with the same rigor. One way to address this is to implement some minimum cut-off grade that students must earn individually in order to be eligible for group benefits. For example, one of us set a minimum of 70%. Students who scored lower than this on the individual exam did not have their group score count toward their grade. Another strategy is lower-stakes group quizzing followed by higher-stakes individual testing (the variation described above).
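The accountability rule above can be sketched in a few lines. This is an illustrative sketch, not the authors' actual grading code; the 70% cutoff comes from the example in the text, and the function and constant names are hypothetical:

```python
MIN_INDIVIDUAL = 70.0  # cutoff from the example above (assumption: percentages)

def final_exam_score(individual: float, group: float,
                     cutoff: float = MIN_INDIVIDUAL) -> float:
    """Average in the group score only if the individual score meets the cutoff;
    otherwise the student keeps just their individual score."""
    if individual >= cutoff:
        return (individual + group) / 2
    return individual

print(final_exam_score(75, 95))  # eligible: averaged -> 85.0
print(final_exam_score(60, 95))  # below cutoff: individual score only -> 60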
We recognize that our suggestions have some possible limitations:
First, we tried all this stuff at a community college, where competition is limited and stakes are lower, and class sizes are fairly small (30-40 students). We think the possibility of group dynamics encouraging collaboration over competition would definitely benefit students at more competitive schools, provided the grading scale allowed for this. And we think it would probably be feasible to do in moderately large classes (e.g., 100 students), provided there is additional instructional support staff. One of us has actually just moved to a large university, and plans to try this out in a larger class, so we will keep you posted on how well this works!
We also did this with multiple-choice exams, on which performance can be more easily quantified. Of course, exams were rigorous -- using a variety of difficulty levels and skewing toward conceptual/application questions -- but we have not tested this out with short-answer or essay exams. We’d imagine that you would have to vary your approach if this is more your exam style, like designating a single group scribe (thus sacrificing a bit of independence) or having meticulous rubrics.
Group exams are good for performance. Students performed significantly better on exams taken in small groups relative to exams taken individually (Bloom, 2009; Karkhanis & Kirkland Turowski, 2015). This improvement in performance was due to a number of factors, including the opportunity to think carefully about the concepts being tested and the ability to teach and learn from one’s peers.
More importantly, group exams are good for learning. Students reported that the group-learning atmosphere helped to relieve stress, increase confidence, and solidify their understanding of course material. And students who completed frequent group quizzes scored higher on midterm and final exams, suggesting that they learned more during those collaborative quizzes than students taking them alone.
We’d love to study this further, including how these effects might change when we take into account question difficulty and level of abstraction. But we’re pretty clear on one thing -- if your class size permits it, you should definitely consider implementing group exams.
Tabitha Kirkland is a lecturer in psychology at the University of Washington. She received her B.A. from the University of California, San Diego, and her M.A. and Ph.D. from The Ohio State University. Tabitha lives in Seattle with her family and enjoys traveling, outdoor adventures, and a strong cup of coffee.
Deepti Karkhanis is an Associate Professor and Department Chair of Psychology at Bellevue College. She received her Bachelor’s and Master’s degree from Delhi University in India, and her Ph.D. in Applied Developmental Psychology from George Mason University in Fairfax, VA. She is a developmentalist whose teaching interests include lifespan psychology, adolescent and youth psychology, cross-cultural psychology and positive psychology. Dr. Karkhanis explores a variety of pedagogical topics such as collaborative testing, student-teacher rapport, positive psychology in classroom curriculum, and teacher training on social justice and educational equity.
1. Bloom, D. (2009). Collaborative test taking: Benefits for learning and retention. College Teaching, 57(4), 216-220.
2. Broadhurst, P. L. (1959). The interaction of task difficulty and motivation: The Yerkes-Dodson Law revived. Acta Psychologica, 16, 321-338.
3. Ioannou, A., & Artino Jr, A. R. (2010). Learn more, stress less: Exploring the benefits of collaborative assessment. College Student Journal, 44(1), 189-199.
Imagine a hotshot graduate student who fancies himself quite the professor-in-waiting. Given the opportunity to independently instruct a section of Introductory Psychology, he selects the perfect book, plans lectures, practices the lectures, practices the lectures, and then practices some more. What he produces is, in some cases, fairly impressive. For example, during the lesson on motivation, he uses the slide below to lecture on Maslow’s hierarchy of needs, giving creative examples of people at each stage, and engaging students with some discussion questions. Clear? Professional? Engaging? Check, check, and check.
“But what is the learning objective?” Despite the apparent quality of the lesson, if the graduate student could not answer this question, it would be some seriously bad pedagogical news.
To illustrate why that would be such bad news, take a closer look at the graduate student’s lecture slide and reconsider how you would study it if you were preparing for a test.
Would you memorize the stages? Would you test your ability to describe stages? Would you memorize the teacher’s examples or produce your own? Would you form an opinion about the theory’s validity? Now, switching to the teacher’s perspective, which one of those topics would be on the test? Without a learning objective, all options are equally right and wrong – hence, the very bad pedagogical news.
Based on recent work sponsored by the Society for the Teaching of Psychology, being a model teacher is, at least partially, defined by the use of learning objectives (An Evidence-based Guide to College and University Teaching; Richmond, Boysen, & Gurung, 2016). After reading this post, you should know how to write a learning objective and be able to explain why they are essential to model teaching.
Defining Learning Objectives
Learning objectives are specific, behavioral definitions of the knowledge, skills, or attitudes that students are supposed to take away from a learning experience. Unlike learning goals, which are often so broad as to apply to an entire curriculum, learning objectives narrowly define what students should be able to do with regard to just one, specific educational experience. A course might have only a handful of broad learning goals – they are typically listed on the syllabus – but the number of learning objectives corresponds to the individual lessons that occur over the entire course. Yes, that means you may have 100 or more learning objectives for a course!
Writing 100 or more objectives may seem like an insurmountable amount of work, but it is as easy as ABC: Audience, Behavior, and Condition (Boysen, 2012).
Audience (A): Who will be learning? This is the easy part for most college teachers because it only has to be written once for each course.
Students in [fill in course name] will…
Behavior (B): What will be learned? Here is where the hard work occurs. Teachers must decide what they want students to be able to do after each lesson and put it into terms that can be assessed. Bloom’s taxonomy is an essential tool for writing learning objectives because it defines the levels of cognitive complexity of the learning and can be associated with action verbs that are perfect for plugging into learning objective statements (see Figure).
Students in Introductory Psychology (A) will differentiate classical and operant conditioning (B);
Students in Statistics (A) will calculate t tests (B); and
Students in Biology (A) will construct pedigrees (B).
Condition (C): In what situation will the behavior occur? Conditions help determine what information or tools students will use when they engage in the behavior. They are not always necessary but can add a much-needed level of specificity.
For example, continuing with the objectives outlined above:
Students in Introductory Psychology (A) will differentiate classical and operant conditioning (B) when given real-world examples (C);
Students in Statistics (A) will calculate t tests (B) using SPSS (C); and
Students in Biology (A) will construct pedigrees (B) using human genetic information (C).
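The ABC template above is essentially a fill-in-the-blanks formula, which makes it easy to mechanize. Here is a minimal sketch (names are illustrative, not from the original) that assembles an objective from its Audience, Behavior, and optional Condition components:

```python
def learning_objective(audience: str, behavior: str, condition: str = None) -> str:
    """Compose an ABC-style learning objective:
    Audience (who learns) + Behavior (what they will do) + optional Condition."""
    objective = f"Students in {audience} will {behavior}"
    if condition:
        objective += f" {condition}"
    return objective + "."

# Example, reusing one of the objectives from the text:
print(learning_objective("Statistics", "calculate t tests", "using SPSS"))
# Students in Statistics will calculate t tests using SPSS.
```

Writing 100 objectives for a course then reduces to filling in the Behavior (and, where needed, the Condition) for each lesson, since the Audience stays constant.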
Why Learning Objectives Matter
Writing learning objectives is time consuming, but it is a worthwhile investment in model teaching. For the last few years, I have been part of a team effort to define model teaching (Richmond et al., 2016). We define model teaching as the possession of a set of evidence-based characteristics related to pedagogical training, instructional methods, course content, assessment, syllabus construction, and student evaluations. These characteristics are not the ineffable stuff that separates renowned master teachers from the rest of us mere mortals (Buskist, Sikorski, Buckley, & Saville, 2002). Rather, they are a list of nuts-and-bolts fundamentals that anyone can develop and incorporate into their pedagogy.
Why did we include the use of learning objectives as a characteristic of model teaching? Here are just a few of their advantages, with the best saved for last.
Selection of teaching methods. Pop quiz: Why did you choose the specific pedagogical approach you took in the last class that you taught? If this was a difficult question to answer, learning objectives can help. Classes should be designed backwards; decide what students will learn first, then match the teaching method to the learning objective.
Intentional selection of evaluation methods. Imagine that the learning objective for the graduate student’s lesson on Maslow was for students to devise improvements to the theory. What should the test question for this objective look like? Going back to Bloom’s taxonomy, the cognitive skill represented in a learning objective helps determine its evaluation. If the goal is memorization of facts, multiple choice might suffice; evaluation of those facts, however – a high-level cognitive ability – would likely require a detailed written response.
Useful evaluations of learning. Have you ever tried to teach someone to shoot free throws while blindfolded? Me neither, because accurate feedback is needed for learning to occur. Both you and the learner need to know if efforts are hitting the mark. Teachers who have a list of learning objectives can directly evaluate achievement of objectives and then provide detailed feedback. For example, if the objective is to summarize all of Maslow’s stages and a student can only summarize half, then the focus of future teaching and learning efforts is abundantly clear.
Increased learning. The 1970s gave us more than classic rock; they also gave us classic educational research showing that students learn more when they have specific learning objectives (Duell, 1974; Kaplan & Simmons, 1974; Rothkopf & Kaplan, 1972). I told you I saved the best for last!
How would you answer the question?
How would the hotshot graduate student have answered the question “What is your learning objective?” In fact, he couldn’t have answered it. I know because it was me. Although I had enthusiasm and dedication at that point in my career, model teaching requires so much more. After many years of development, I can answer that important pedagogical question before I walk into any classroom – I hope you can as well!
Guy A. Boysen is an Associate Professor of Psychology at McKendree University. He received his Bachelor’s degree from St. John’s University in Collegeville, MN and his PhD from Iowa State University in Ames, IA. He is a generalist whose teaching emphasizes clinical topics and the mentorship of student research. Dr. Boysen has studied a wide variety of pedagogical topics including student teaching evaluations, bias in the classroom, and teacher training.
Buskist, W., Sikorski, J., Buckley, T., & Saville, B. K. (2002). Elements of master teaching. In S. F. Davis & W. Buskist (Eds.), The teaching of psychology: Essays in honor of Wilbert J. McKeachie and Charles L. Brewer (pp. 27–39). Mahwah, NJ: Lawrence Erlbaum.
Duell, O. P. (1974). Effect of type of objective, level of test questions, and the judged importance of tested materials upon posttest performance. Journal of Educational Psychology, 66, 225-232.
Kaplan, R., & Simmons, F. G. (1974). Effects of instructional objectives used as orienting stimuli or as summary/review upon prose learning. Journal of Educational Psychology, 66, 614-622.
Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide for college and university teaching: Developing the model teacher. Routledge.
Rothkopf, E. Z., & Kaplan, R. (1972). Exploration of the effect of density and specificity of instructional objectives on learning from text. Journal of Educational Psychology, 63, 295-302.
A few years back I taught my first service learning class. From that experience I learned two things. The first was that incorporating service into my teaching left my students and me enriched, both educationally and personally. The second was that teaching a service learning class can be very difficult.
During the period of our collaboration, my service-learning partners experienced a great deal of turnover in leadership. This led to frustration as students’ experiences were often discrepant and at times their service efforts did not align with the material we were discussing in class. All in all, the benefits outweighed the costs, but it wasn’t long before I started thinking, “There has to be an easier way.”
My Done-in-a-Day Activity
The desire to incorporate service in a more manageable fashion led to the development of a number of possibilities, including a one-day service activity in my Introduction to Psychology course. When covering the material on altruism and pro-social behaviors in the social psychology chapter, I ask students to select and participate in a hands-on demonstration of helping.
Past projects have included donating holiday gifts through the Salvation Army’s Angel Tree program, sending care-packages to victims of Hurricane Sandy, collecting school supplies for a local elementary school, creating coloring books for the Children’s Miracle Network, making blankets for local hospitals through Project Linus, and sending over a thousand thank you cards and letters of support to US troops stationed around the globe.
Each semester I observed my students’ interest and engagement in the activity and noted how it also appeared to connect them with the course. It wasn’t long until I began to tout my efforts as a success. Unfortunately (but in hindsight, perhaps quite fortunately), my bravado was diminished when a colleague asked the million dollar question, “Do you have any data to show that this activity is actually benefiting your students?” At the time I did not.
Does It Work?
Since then, I have collected Scholarship of Teaching and Learning (SoTL) data on the impact of my Introduction to Psychology Helping Behavior Activity. I’m thrilled to say that this project, which requires merely one class period, can yield a number of the benefits associated with a semester-long service learning course. For instance, following participation in this activity students reported more ethical awareness and increased participation in volunteer activities. Students also reported higher intrinsic motivation for the class and greater endorsement of the belief that the course contributed to their overall development.
If you are like me and love the idea of teaching students through the incorporation of service, but are not able to devote an entire semester to a service learning partnership, you are in luck. Instructors may be able to harness some of the important benefits affiliated with experiential and service-based instruction by employing a much more manageable Done-in-a-Day Activity. Although I have yet to collect data on the sustainability of the benefits associated with my Done-in-a-Day Activity, this work suggests that it may be easier for instructors to encourage a sense of social responsibility and greater ethical awareness in their students than previously believed.
Suggestions for Implementation
The beauty of this project is its simplicity. If you can spare one day of class, then you have the potential to substantially enhance not only your students’ academic growth but also their personal growth and development. Although I have used this Done-in-a-Day Activity in my Introduction to Psychology course, I feel confident that it can be applied to a number of classes. For those interested in trying it out, here are a few suggestions:
Allow students to select the service activity, but provide guidance about what is realistic for one class period.
Help students explicitly make connections between the class material and the service activity.
When possible, partner with an established organization so your efforts are part of something sustainable.
Know that even this activity will be more work than you imagine, but it’s worth it!
In order to make my Done-in-a-Day Activity a success, I start by building excitement for it from the very first day of class. I then have several days where students can nominate potential class activities, and periodically we have brief discussions about the feasibility of and interest level in the suggested projects. Finally, we pick the activity and I work to explicitly tie it to the course material, both before and after participation. Where possible, I even try to bring in speakers that help students understand the impact of their efforts. Although students are quite literally done with this activity in a day, the impact that participation has on them appears to be extensive. In past semesters, I have even had students begin working with the organizations that we’ve partnered with, sometimes for a number of months or years.
Sadie Leder-Elder received her Ph.D. in Social/Personality Psychology from the State University of New York at Buffalo. She is an Assistant Professor of Psychology at High Point University in High Point, North Carolina. Her research and teaching are focused on the study of close relationships. Sadie is a celebrated educator, having received numerous teaching awards, including being named the national recipient of the Society for the Teaching of Psychology’s Wilbert J. McKeachie Teaching Excellence Award in 2010 and the Jane S. Halonen Teaching Excellence Award in 2014.
Leder-Elder, S. (2016, January). Teaching psychology through social action: Even a done-in-a-day activity can make a difference. Poster presented at the annual meeting for the National Institute on the Teaching of Psychology, St. Pete Beach, FL.
Last semester I taught a course on human sexuality.
I love teaching this class. I talk with my students about issues related to gender identity, sexual orientation, sexual behavior, and more. As I was prepping my course, I wanted to revise my standard “opinion paper” assignment. In the traditional version of the paper, students are assigned a controversial topic (e.g., marriage equality). In it, they are expected to put forth an opinion on the topic and then justify their opinion using empirical literature. But in classes that address complex social issues, people’s opinions and attitudes are not based exclusively on empirical evidence.
The problem with the standard opinion paper
The problem with an opinion paper that emphasizes only empirical evidence is that it implicitly assumes that:
1) students will be convinced to adopt the attitude most strongly supported by evidence
2) students who aren’t convinced by empirical evidence have somehow failed at a critical element of the course.
With the standard opinion paper guidelines, I was concerned about students feeling attacked for their beliefs or creating an assignment for which they were literally unable to satisfy expectations. When students leave my class, I want them to own their attitudes about these complex issues and understand why they hold the attitudes that they do. And I recognize that some students' opinions may be grounded in factors that cannot be empirically supported (e.g., religion) and thus they simply can’t use empirical research to support their arguments.
Getting the perspective of instructors
In an effort to improve the traditional approach to this type of assignment I appealed to my colleagues for their thoughts. Specifically I said that the goals of the assignment were:
1) to get students to form an opinion about issues relevant to the class
2) to challenge them to express and argue for their opinions with evidence (i.e., research), while being sensitive to opinions that may not have empirical support behind them.
My colleagues were as gracious as always in providing their valuable insights. However, it was striking that the opinions I received consistently advocated for students to support their opinions with empirical evidence because psychology is, after all, a science.
Here’s the thing: although psychology courses are grounded in science they also deal with important and complex real-life issues. And like most of us, our students’ opinions are not grounded solely in empirical evidence, nor will empirical evidence be the only factor that persuades them. So I shifted my goal for the course. Instead of asking my students to identify the opinion with the most empirical evidence, I asked them to give me their own opinion, and to reflect critically on the existing empirical evidence (even if it did not support their opinion) as well as any other factors that influenced their beliefs.
This approach is directly in line with the research on attitudes. We know that attitudes are based not only on thoughtful cognitive evaluations of evidence, but also on our affective responses to an object, on the function an attitude serves, and on our past experiences with the object, among other factors (Fabrigar & Wegener, 2010). If our students’ attitudes ultimately contradict what is supported by evidence, don’t we want them to have intentionally and thoughtfully considered all the issues involved and consciously decided that in spite of what the evidence says, they are willing to give greater weight to other factors? This might seem contradictory to the goals of higher education where we advocate for rational, critical thought. But if we want to challenge students to expand or critique their beliefs, we have to first let them identify those beliefs.
In the end, I gave students a list of controversial topics. Their task was to write a paper that reflected their authentic personal opinion. They had to include a clear, nuanced opinion on the issue, review empirical research on the topic, and present their arguments for their opinion (with relevant evidence, whatever that may be). If the empirical research contradicted their opinion, they had to explain why they were discounting it. They were told that they would not be graded on their opinion, even if it was not supported by empirical evidence. Instead, their grade reflected the thoughtfulness and complexity of their opinion and their reflection on the factors that underlie it.
In the final course evaluation, a few students spontaneously reflected on the opinion papers. They talked about how the papers allowed them to think about these topics more deeply than they might have otherwise, and recognize the complexity of the issues in ways they hadn’t before. And importantly, multiple students stated that they felt free to express their opinions, without judgment.
Attitudes are important because they (can) predict behavior (see Glasman & Albarracin, 2016 for a review). And they are complex because they are influenced by myriad factors (Fabrigar & Wegener, 2010). If our goal is to have students think critically about their worldviews, we need to stop limiting acceptable opinions to only those supported empirically. Our students will hold attitudes that contradict empirical evidence. And when we limit discussion and reflection to only the empirical, we create an environment where students may not be personally empowered by their learning. By asking students to identify their attitudes and reflect on why they hold those attitudes, without fear of judgment or a poor grade, we are asking them to take responsibility for their attitudes and their behavior.
Sara Branch received her Ph.D. in Social Psychology from Purdue University. She is an Assistant Professor of Personality Psychology at Hobart and William Smith Colleges. Her research focuses on the intersection of social and cognitive psychology as an approach to scholarship of teaching and learning.
We’ve all been there. Staring out into the sea of blank, tired student faces. We tell ourselves it’s the time of day (Too early! Need an after lunch nap!) or the students (Senioritis! Non-major!). It couldn’t possibly be the material, right? I mean, who doesn’t love PSYCHOLOGY! Especially SOCIAL psychology! This material is so applicable and interesting that there must be another reason for their lack of attention. Although we may think that just talking about the material will cause fireworks and lightbulbs in our students’ heads, sometimes we need to acknowledge that our teaching style may need to be adjusted in order to facilitate said fireworks. How can we make the material we cover more relevant to our students and help them connect it to the real world?
This question led us to a brainstorming session while walking on the beach in Florida during a break at the National Institute on the Teaching of Psychology (NITOP) conference. Since we both teach Social Psychology, we started sharing ideas to get our students more involved and invested. We talked about the long history of social psychologists conducting research to understand and address societal problems. We thought it would be great if we could bring this important tradition into our classrooms. As our conversations continued, they evolved into an assignment we called the Application Challenge Project, where students work in groups to identify a real-life problem and then use social psychological concepts to develop a solution.
Two of the most essential elements of the project are a high degree of autonomy for the students and the requirement to apply social psych in meaningful ways to the world outside our classroom. When students can choose a topic that interests them, they become more invested and engage in deeper learning (e.g., Heilman et al., 2010; Walkington, 2013). Then, by challenging students to come up with solutions, we help them realize the real-world implications and see how the course material can truly make a difference.
We have used the Application Challenge Project now for two semesters of Social Psychology and in the opinions of both instructors and students it has been a clear success. So successful that we are inspired to share the idea with you, our colleagues. We’d like to offer key steps and suggestions for implementing this project in your classroom.
1. The dreaded ‘C’ word: Getting students to care!
Our students are at an idealistic age, so when given an opportunity to do something good in the world they’ll usually jump at the chance.
We leverage this on the first day of class by showing a series of attention-getting videos addressing social problems (traffic deaths, hygiene, public health, etc.). Then we discuss the purpose of each and the ways they were successful in delivering their messages (catchy songs, emotion, humor). Viewing these videos puts students in the right frame of mind to think about social change.
Then on the second day we task them with identifying other current issues that social psychologists might be interested in addressing. We encourage them to think about local issues as well as broader issues in the state, country, and world. The more personally relevant the issues are, the better!
With ideas starting to flow in their heads, it’s time to get them into groups and on to the real work of the project.
2. A little bit of logistics goes a long way
Dividing students into groups can be a challenging task. We decided on 4 - 6 students per group so the division of labor would be appropriate but groups would not be so big that students would have trouble coordinating their efforts.
Although students tend to prefer to pick partners themselves, we assigned them to groups to avoid a number of typical issues (shyness, pressure to work with friends, etc.).
If this sounds like too much work for you, then feel free to allow your students to pick their own groups.
3. Finding a real-world problem: So many to choose from! Where do we even start?!
We found that students typically generated ideas of problems that are quite large and multi-faceted (e.g., improve people’s health) and needed guidance in focusing their topic to something manageable within a semester (e.g., persuading parents to vaccinate their children). So we created a topic statement assignment.
As a group, students submitted a brief one to two paragraph topic/problem statement that explained the importance of the problem and described the major tenets/components of the problem. The problem statement helped students come to an agreement on what they were going to address and also gave us a chance to help students make specific enough choices.
Here are some examples of the questions they chose to investigate:
How can California residents reduce their water consumption during the drought?
How can police brutality be reduced?
How can the stigma associated with mental health problems be reduced?
Are there ways to reduce students’ stress during finals week?
How can universities reduce binge drinking among first-year students?
4. The nuts and bolts of the process and the final reveal!
With topics decided and approved, each individual group member then conducted a literature search and found two relevant empirical peer-reviewed research articles that supported the group’s topic. In an attempt to reduce social loafing, each group member wrote a short summary (i.e., one to two paragraphs) of each article and how it was relevant to the group’s problem. We made a point to tell students that group members should coordinate to make sure they were not reporting on the same empirical articles.
Each group submitted a short proposed solution for their topic/problem. We told students we wanted to see that they had thought critically about the problem and potential obstacles to rolling out their solution. For example, the group trying to increase water conservation wanted residents to be aware of their water use so they needed to discuss how water use would be monitored (e.g., Will residents really check a small water gauge? How often do residents need feedback?).
In addition to the written assignment, groups gave short informal presentations in a “speed dating” fashion where one group explained their problem and proposed solution to another group in approximately 5 minutes with 2 minutes for questions; groups then switched roles (the group who first presented their project listened to the other group’s presentation). This format allowed students to practice communicating about their project multiple times and helped students refine how they presented their arguments (or highlighted where additional work was needed!).
After incorporating the feedback from the “speed dating” presentation and their written proposals, each group formally presented their project to the class in an 8 - 10 minute presentation.
Each group then submitted a final paper that included (1) a statement about the topic/problem at hand, (2) a discussion about relevant social psychological concepts used to address the topic/problem, and (3) a proposal on how to solve the topic/problem.
Knowing that some students would likely put in little effort whereas others would put in a lot, each group member evaluated every member’s contribution to the semester-long project, including their own, on a scale ranging from 0% to 110%. We let students rate contributions up to 110% in order to account for students who went above and beyond to carry the group. We had students include a short rationale for their rating. Students’ final grades on all group assignments were adjusted based on the average contribution rating.
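For readers who want to automate the grading side of this, here is a minimal sketch of one way the adjustment could be computed. The function name and the exact scaling rule (multiplying the group grade by the average of all ratings, self-rating included) are our illustration, not a prescribed formula from the assignment:

```python
def adjusted_grade(group_grade, ratings):
    """Scale one member's group-assignment grade by the average
    contribution rating they received (0.0 = 0%, 1.10 = 110%),
    where `ratings` includes peer ratings and the self-rating."""
    avg_contribution = sum(ratings) / len(ratings)
    return group_grade * avg_contribution

# A member who earned 90 on the group paper and whose ratings
# average 100% keeps the full grade:
print(adjusted_grade(90, [1.0, 1.0, 1.0, 1.0]))  # 90.0

# Ratings averaging 80% reduce that grade proportionally, while a
# member rated above 100% can earn a small bonus:
print(adjusted_grade(90, [0.8, 0.8, 0.8, 0.8]))
print(adjusted_grade(90, [1.1, 1.1, 1.0, 1.1]))
```

One design choice worth noting: capping ratings at 110% bounds the possible bonus, so a single generous peer cannot inflate a grade dramatically.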
5. We found the glitches so you don’t have to!
Some students have a really hard time narrowing in on a suitable topic, and time can be lost if you’re not careful. The topic statement assignment (#3 above) helped get groups on the right track early.
Next time we do this project, we plan to give students a hypothetical budget (e.g., $5,000 - $10,000) to help them think more specifically about how they could address the problem.
Social loafing remains a problem. Next time we do the project we might have students rate each other’s contribution for each assignment so that it is a more salient reminder that students need to be consistently contributing to their group.
Students seemed to put more effort into their projects when they knew someone from outside our classroom would be listening to their presentation (i.e., someone from the university/community who had a real interest in the specific topic). Even though inviting guests took some effort and coordination, it seemed to be worthwhile for students and guests alike.
6. Final thoughts: A project worth continuing
Most students seemed more engaged with the course material and were excited to see practical ways to use the information they were learning about. After one of the small group discussion days when we gave quite a bit of feedback and challenged students to dig deeper into ways to use course material in their projects, one student commented, “I really like this class because we actually have to think. It’s fun! I wish more classes were like this.” We also saw projects extend beyond the classroom. For example, several projects were presented at a local conference, while another extended into a film class and became the subject of a documentary.
Several groups selected problems that were relevant to their anticipated careers. For example, one group of pre-medical students all volunteered as EMTs because they were concerned about the high rates of PTSD among emergency responders. Another group of pre-medical students addressed the issue of parents not vaccinating their children. Both of these groups reported that they learned how difficult it can be to change people’s behaviors and attitudes. They even commented that it was helpful to read the literature and learn how to use what is already available to solve problems.
Most of the projects were successful in meeting the objectives of integrating course material to address a real-world problem. However, some groups stayed at a surface level and recycled solutions that have already been used rather than developing more creative or novel solutions. Given that this is a 200-level course composed of students across all four years (but primarily freshmen and sophomores), it was not surprising that some projects were stronger than others.
We consider this project a success! In addition to students being more engaged in Social Psychology, students interacted with members of the university and local community and learned about ways they could continue their projects. This project could easily be adjusted and tailored to other courses (e.g., Health Psychology, Cognitive Psychology, Developmental Psychology) to help students engage more with the course material. We hope you’ll consider using it with your own students.
Melisa Barden is an Associate Professor of Psychology at Walsh University in Canton, OH. Her recent research focus has been on the teaching of psychology including the use of “clickers” in the classroom. In her free time, she enjoys watching her 4-month-old develop and spending time with her friends and family.
Jennifer Knack is an Assistant Professor of Psychology at Clarkson University in Potsdam, NY. She studies the antecedents and mental and physical health outcomes of being bullied. She has two cats -- Nimbus and Albus Weasley -- and enjoys hiking and going to the beach.
Heilman, M., Collins-Thompson, K., Callan, J., Eskenazi, M., Juffs, A., & Wilson, L. (2010). Personalization of reading passages improves vocabulary acquisition. International Journal of Artificial Intelligence in Education, 20, 73-98. doi: 10.3233/JAI-2010-0003
Walkington, C.A. (2013). Using adaptive learning technologies to personalize instruction to student interests: The impact of relevant contexts on performance and learning outcomes. Journal of Educational Psychology, 105, 932-945. doi: 10.1037/a0031882