Noba Blog

Statistics as Psychology: Harry Potter and the Sorcerer’s Stats

February 8, 2017

By Thomas P. Tibbett

In my five-year tenure as a graduate student, I’ve asked many students what originally drew them to psychology. To date, none have enthusiastically responded, “Statistics!” If there are students who gush about hand-calculating F-values, they must be attending other schools. More commonly, I get students who say they have to take statistics (and hate it!). All manner of groans, squirming, and eye-rolling follow. They matter-of-factly inform me that they got into this field for the people, not the numbers. They seem to believe that they “don’t need to know this to be successful in psychology!”

Sadly, they do. That said, I don’t blame them for their complaints. Very few students sign up for a psychology major with the end goal of a quantitative career. Usually, they want either to help people or to understand them (or perhaps even to explore themselves).


Some studies have sought to understand why students come in hating psychology-statistics courses. Researchers often arrive at the explanation that people range greatly in math experience; other experts note greater anxiety as a function of low interest in statistics. Finally, it is possible that beliefs about math ability may play a role; social psychology research suggests women might be anxious about confirming stereotypes (here, being ‘bad at math’)—keeping them from performing at their best. These factors and more can lead students to have a distaste for their required stats course.

In my experience, complaints fall into two general types: “stats class is not fun” and “stats class is not useful to me.” It’s challenging to address these grievances, largely because of the way courses are designed. Even the labs, which should be more hands-on and engaging, are a source of frustration or boredom for many students.

Now... where to put you?

Harry Potter and the Sorcerer’s Stats

How can you make lab something that a student would look forward to each week? A little Harry Potter goes a long way. The smaller nature of lab allows me to sort people into small groups, so I bring in a Sorting Hat on the first day. Students pick from the hat to determine their House, and they’re seated near their Housemates. In case you are not familiar with Harry Potter, there are four Houses into which students are sorted—often, members of these Houses have common characteristics. I use the story’s frame to my advantage as an instructor. Students will explore a psychological construct relevant to their House for the duration of lab.

The Data Hallows

In the first week, Houses receive a short survey. They can add to it (with approval) and then collect data from around the university: about 45-50 participants per House. Since the research is for educational purposes and will not be published, our IRB does not require a protocol, but you should check with yours.

All surveys should contain the following items: age, gender, grade point average, and any other scales you’d like to use as predictors. However, each House administers a survey that has unique items—5 items each from a validated scale involving their topic above. Hufflepuff might have a few items from the Big Five. Ravenclaw might have IQ-relevant questions. You could use whatever items you would like, but keep in mind that students are more interested in significant effects!
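
To make the survey layout concrete, here is a minimal sketch of one House’s data file, written in Python with pandas rather than SPSS and using entirely made-up column names and values; it simply shows the shared demographic items followed by that House’s five scale items.

```python
import pandas as pd

# Hypothetical layout for one House's data file: the demographic items every
# House shares, followed by that House's five scale items (here, made-up
# conscientiousness items for Hufflepuff).
shared_columns = ["age", "gender", "gpa"]
hufflepuff_items = [f"conscientiousness_{i}" for i in range(1, 6)]

# Two made-up participants, just to show the structure of the file.
hufflepuff_data = pd.DataFrame(
    [
        {"age": 19, "gender": "F", "gpa": 3.4, **{item: 4 for item in hufflepuff_items}},
        {"age": 21, "gender": "M", "gpa": 3.1, **{item: 3 for item in hufflepuff_items}},
    ],
    columns=shared_columns + hufflepuff_items,
)
print(hufflepuff_data)
```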

They have a week to collect these data, and then they return to the next lab to enter them into SPSS together. According to course evaluations, students love this part of the course—they often single out the data collection. Their data aren’t invented like the exercises in the textbook. They’re real! Learning statistics is suddenly no longer a hypothetical exercise; students can take ownership and carry out their first study with the course’s guidance. From day one, they’ve gone out into the world and collected new knowledge. And every lesson can unlock a new piece of the puzzle—a veritable alohomora.

This wizarding frame and their House dataset follow them throughout the course. Here are a few specific approaches I’ve used with these critical elements to keep students interested.

Quizzing Whizbees

A nice thematic touch for giving quizzes is to allow 20 minutes to do something basic in SPSS (or your platform of choice). If you can make the quiz House-relevant, that’s even better. Remember, you do not have to change the data set for each House—just change the wording of the quiz question. For example, one of our quizzes involved correlation. For Ravenclaw House, the two variables were standardized test scores and number of books in the home—playing on Ravenclaw’s thirst for knowledge. Meanwhile, Hufflepuff students had conscientiousness and loyalty: two qualities the House values. The data sets, though, were numerically identical, so the quiz was equal in difficulty for both Houses. 
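
To see why the relabeled quizzes are equally difficult, here is a small sketch, in Python rather than SPSS and with made-up numbers, showing that the same values under different House-flavored variable names produce exactly the same correlation.

```python
import pandas as pd

# The same made-up values, labeled differently for each House.
values_x = [2, 4, 5, 7, 9, 10]
values_y = [1, 3, 4, 6, 8, 8]

ravenclaw = pd.DataFrame({"test_scores": values_x, "books_at_home": values_y})
hufflepuff = pd.DataFrame({"conscientiousness": values_x, "loyalty": values_y})

# Pearson correlations are identical because only the labels differ.
r_ravenclaw = ravenclaw["test_scores"].corr(ravenclaw["books_at_home"])
r_hufflepuff = hufflepuff["conscientiousness"].corr(hufflepuff["loyalty"])
print(r_ravenclaw == r_hufflepuff)  # True
```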

Flourishes and Blots

I stress to my students that much of science flies or flops in the writing, and that I’ll be taking them through all the steps of the process. Writing assignments need not be long (2-3 pages), but they should be somewhat APA-structured: Introduction, Method, Results, and Discussion. Each paper tackles one question about the data they collected at the semester’s start: for example, what the typical participant is like, whether there are gender differences, or whether their House’s variable relates to GPA.

I give them an opportunity in lab to peer-edit rough drafts—I lead these sessions to teach them what to look for. As in our own publication process, the students rely initially on their peers to catch glaring issues or limitations in the paper. It’s a friendly environment in which to introduce them to peer review. But I still remind them that the final draft heads to the editor—me! At the end of the semester, each student should have three good papers to work with.

The House Cup

Writing these papers becomes meaningful because, at the end of the semester, some professors are gracious enough to hold a mock conference in lecture. In the last weeks of lab, I teach students how to construct a poster from their paper(s) and present it—I encourage students to have fun with it. Each House chooses its favorite analyses. Professors can award points, extra credit, or prizes. I’ve found that students get very excited about their posters. I’m not sure I’ve gotten all the glitter off from last semester.

What’s interesting about using lecture time is that all five labs come together. Maybe Slytherin groups from all labs will have the same results. In that case, the professor can talk about reliability. If results differ, the professor can talk about difficulty in replication, something very topical to our science.

Fantastic Stats and Where to Find Them

With this structure, students conduct science, making the numbers about logic and truth rather than calculations. Statistics becomes a tool in their skillset, not an obstacle. By the end of class, students begin to learn about psychology from their peers—not passively from professors or assigned readings. They walk through each stage of our research process, from data collection to presentation. And many of them find that it is, well, kind of fun!

Rather than dreading statistics and avoiding it like the Dark Mark, they’ll begin to understand that statistics are a psychologist’s “most inexhaustible source of magic.” Maybe it won’t convert them all to statistics wizardry, but they’ll remember the course for years after.

Bio

Tom Tibbett is a PhD candidate in Social and Personality Psychology at Texas A&M University, graduating with an Advanced Statistics certification. He loves his alma mater, the College of William and Mary, where he graduated with a Bachelor’s degree in Psychology and English. His interests include teaching, number-crunching, puppies, and wizards. He can be contacted at [email protected].

Special Thanks

To my colleagues teaching PSYC 203, especially Heather Lench: you’ve been an inspiration and given me the foundation for these ideas. To Kaileigh Byrne: after meeting you, I don’t think I could avoid thinking about Harry Potter if I tried.

For more information

Conners, F. A., McCown, S. M., & Roskos-Ewoldsen, B. (1998). Unique challenges in teaching undergraduate statistics. Teaching of Psychology, 25, 40-42.

Hudak, M. A., & Anderson, D. E. (1990). Formal operations and learning styles predict success in statistics and computer science courses. Teaching of Psychology, 17, 231–234.

Sciutto, M. J. (1995). Student-centered methods for decreasing anxiety and increasing interest level in undergraduate statistics courses. Journal of Instructional Psychology, 22, 277–280.

Steele, J., James, J. B., & Barnett, R. C. (2002). Learning in a man's world: Examining the perceptions of undergraduate women in male-dominated academic areas. Psychology of Women Quarterly, 26, 46–50.

Tibbett, T. P. (2017, January). Making statistics psychology: Engaging students with relevant applications. Presented at the National Institute on the Teaching of Psychology (NITOP) Annual Meeting.

Wilson, S. G. (2013). The flipped class: A method to address the challenges of an undergraduate statistics course. Teaching of Psychology, 10, 1-7.

Establishing and Maintaining Boundaries: When Classroom Limits Are Tested

January 30, 2017

By Jason S. Spiegelman

"I get paid to teach them physics. It's not my job to care about their personal lives."

This was what I heard from a colleague during a professional development conference. The session emphasized understanding the personal needs of our students, and during an experiential demonstration, this is what a professor of physics had to say. To say I was astounded would be an understatement. Flabbergasted? Getting closer. Disgusted? Bingo! How can a professional educator be so obtuse? We are not babysitters, but to suggest that we need have no interest or investment in the personal struggles of our students was so antithetical to everything I thought my job was about that it left me wondering how this man was still employed. (Hint: tenure isn't always a good thing!)

On the other side of the coin, however, we have the student who does feel that their teacher is their own personal sounding board. You probably know that student if you've been teaching for any length of time. She always stays after class to talk about how today's lesson really hits home for her. He always wants to discuss a family member who he "thinks has those exact symptoms." They want you to diagnose them, to second-guess their doctor, to fix their friend, or to tell them if they are on the right prescription. These situations call into question the professional boundaries that must be established and maintained with students, and, if we are not prepared, they can leave us fumbling for a way to tell the student that they have crossed the line.

At the 2016 National Institute on the Teaching of Psychology (NITOP), I presented a Participant Idea Exchange that was meant to focus on the over-disclosing student. I had envisioned an interactive discussion about the student who monopolizes class time, often to the chagrin of other students. The idea was born from a section of abnormal psychology I had taught that included "that student." She was very nice, but did not seem to understand that self-identifying a history of at least one diagnosis in every – EVERY – DSM category had gone beyond useful personal revelation and into the realm of "now the class is your own personal group therapy." But the attendees of this PIE also raised the issue of students who do not respect boundaries during office hours, turning that time into a personal psychotherapy session. Too often, such a student is unreceptive to subtle cues and finely spun hints that the conversation is inappropriate. In point of fact, they may be equally dismissive of obvious, direct statements that their needs are more appropriately addressed in a different office.

How do we balance appropriate caring and concern with professionally necessary distance? [Image: CAFNR, https://goo.gl/21dPOy, CC BY-NC 2.0, https://goo.gl/uk4xos]

What are some strategies for dealing with these delicate situations? How do we balance appropriate caring and concern with professionally necessary distance? Where is the demarcation line between trying to help and becoming a de facto psychotherapist? How do we support without offending? The ultimate outcome of this PIE was that there appears to be no singular technique. Some of the suggestions that came forth were as follows:

IN THE CLASS

Preparation

It all begins with the messages that are given in class and in the course syllabus. A clear statement of what class time and office hours are for is essential, but it may also be necessary to establish boundaries just as clearly.

Being Proactive

Dealing with these issues when they first arise, though sometimes uncomfortable, is essential. As we all know, behaviors that are repeated can become entrenched. Thereafter, they become harder to disrupt. So sparing the student a bit of discomfort early can actually set them up for more embarrassment or awkwardness later. Addressing a student in private and making a clear statement of appropriate in-class boundaries is well-advised. Including a plan for how this can be accomplished is also helpful. In the semester following this PIE, I had another student who presented in a similar way. We spoke privately and agreed on a very subtle facial cue from me (an extended eye blink in her direction) that would be a code for "It's time for you to pull back." This cue was only needed twice, and her behavior improved within two weeks.

IN THE OFFICE

Office "Geometry"

Keeping the office door open prevents creating a private space where inappropriate disclosures are likely to happen. When a student asks if they can close the door, you may respond by saying, "I prefer to keep it open." There is still some privacy, but not the intimate space that resembles a therapeutic setting. In an era where faculty members must also be cognizant of the appearance of impropriety with students, this is doubly important.

Operating Within Your Expertise

The risk of bruising a student's feelings must sometimes be met head on. Statements like, "I'm sorry but this time is really reserved for classroom issues," "perhaps we can walk together to the counseling center" (if your campus has one), or "these are issues that are not really appropriate for us to be discussing" might be met with some displeasure. At the same time, they provide an unequivocal message to the student about what topics you are and are not comfortable discussing.

Don't Apologize

I strongly advise that a professor avoid using phrases like, "I'm sorry, but…" before setting a boundary. It sends a mixed message that you may, in fact, be receptive to a topic even if your words are suggesting otherwise. Students with boundary issues are likely to receive a mixed message but to attend primarily to the part of that message that serves their immediate needs.

Kind, but Firm

Be prepared for pushback, and be willing to re-assert the boundary. Students may not be intentionally pushing you into an uncomfortable place, and may need to hear where the line is more than once, in different words.

This list is certainly far from exhaustive, and is open to interpretation based on one's personal style, theory of teaching, and desire to attend to the students' academic and personal needs. Demonstrating sensitivity to our students' personal challenges will only enhance their engagement and ultimately their success in the classroom. But for the student who demonstrates diffuse (or absent) boundaries, the academic professional is well-advised to have some appropriate interventions at the ready.

Bio

Jason S. Spiegelman is an Associate Professor of Psychology at The Community College of Baltimore County in Baltimore, MD, where he lives with his wife and three sons. He teaches a variety of courses, including introductory, abnormal, social, and developmental psychology among others. He is an expert in the preparation and revision of psychology textbook supplements, having worked on such projects for over 150 textbooks over the years. He also serves as an advisor and contributor to The Noba Project. 

Instructor Self-Care: How to Get (Much) More Out of Your Positive Experiences

January 11, 2017

By Robert Biswas-Diener

@biswasdiener

The end of term is a stressful time. There are countless papers to mark. There are students eager to challenge the wording of each item on the final. Deadlines to submit grades. During the crush of work that comes at the close of the academic season a few instructors have been known to complain. I am one of them. I have made jokes about doing my grading at a local bar and I have dished on my students’ failings. I have to admit, however, that complaining about my work and about my students has done little to boost my quality of life.

It is ironic that I so easily fall prey to the temptation to kvetch because I am a subjective well-being researcher. That is, I study happiness and know a fair amount about which behaviors and thinking styles are most likely to pay happiness dividends. In this post, I would like to share with you just one of the many practical suggestions that have emerged from studies on happiness. I do so now—at the beginning of the academic term—so that you have the mental resources to hear the message and use this information to manage stress months from now at the end of the term.

The tip I would like to share is deceptively simple: it is savoring. Simply put, savoring is the act of mentally extending a pleasant moment. Instead of gobbling down a chocolate, for example, you can stretch your pleasure by taking your time, letting it melt in your mouth a bit, and appreciating the flavor. The same holds true for your teaching. Instead of focusing on every late student or every bungled PowerPoint slide, you can start cataloging everything that goes right.

Interestingly, experts in savoring suggest that this phenomenon takes a unique form depending on whether you are focusing on yourself or on others. When the focus is internal we call that form of savoring “basking” or “luxuriating.” These are your proudest moments. They include the e-mails you save from grateful students or that minor brag about how you really helped guide someone at this morning’s office hours. When the focus is external, on the other hand, we call that “thanksgiving” or “marveling.” Examples include appreciating a particularly astute comment in class or being awed by an exceptional piece of student writing.

Taking the time to catalog pleasant moments and to share and remember them has been shown to boost happiness (Bryant & Veroff, 2007). This line of positive thinking is the counterpart to so-called “dampening” strategies. That’s right, we all engage in ways of thinking and behaving that drive our happiness right into the ground. Two of the most common dampening strategies are: 1) “fault finding” (spending time noticing and complaining about all your students do wrong and how difficult your job is and how little you get paid and…. Well, you get the idea); and 2) “negative mental time travel” (remembering all the unpleasant moments such as how you made a mistake on the syllabus or how the class was so confused when you tried to explain misattribution of arousal or how you rescheduled to meet a student at office hours and then he didn’t even show up!).

Don't short change yourself on happiness. Take the time to acknowledge and savor the little successes that you have throughout each term. [Image: petukhov.anton, https://goo.gl/e3bZJB, CC BY 2.0, https://goo.gl/sZ7V7x]

Right now, fortunately, you have a clean mental slate. The term is just beginning and you have countless opportunities ahead of you to log many positive and pleasant experiences. I am not suggesting that you pretend life is exclusively pleasant or that you ignore problems. Not at all. Instead, I am recommending that you take the time—even during the stressful moments at the end of the term, when self-care is the first thing to fall by the wayside—to appreciate the many wonderful moments, the awesome teaching, and the student contributions that are really why we are all instructors in the first place.

Bio

Dr. Robert Biswas-Diener is the senior editor of the Noba Project and author of more than 50 publications on happiness and other positive topics. His latest book is The Upside of Your Dark Side.

Want Your Students to Learn More? Test Them in Groups!

January 9, 2017

By Tabitha Kirkland, University of Washington & Deepti Karkhanis, Bellevue College

We teach with the hope that our students are learning, and we test to find out whether or not they have. At least, this is the traditional approach. We certainly give our students active opportunities to learn during class sessions, but we wondered: wouldn’t it be great if we allowed students to continue learning throughout the class? Traditionally, exams assess rather than create learning. And in introductory classes, these exams are better at evaluating recall and recognition than at evaluating, or even promoting, deeper levels of understanding. We suggest that learning can occur throughout the course, even during a test. And if we really believe that deep learning is more important than memorization and regurgitation, we should be willing to reconceptualize the way we approach testing.

So, we’d like to share our adventures in group testing.

How group testing works

We did a series of little experiments at a large community college in Washington, where all classes are for first- and second-year students. We tried several variations on this basic theme, but what all of them had in common was that psychology students had the opportunity to review and discuss their multiple-choice exams with peers during class before submitting them for a grade.

In the first round, we gave students the same multiple-choice exam twice: once alone, and then immediately afterward in assigned small groups of 3-5 students. We had students complete individual response forms during both portions, so they were still responsible for their own scores and were not required to agree with the group. We then averaged their individual and group scores to calculate their exam grade. The results? Almost 20% improvement from individual to group exams -- with some students improving by as much as 40%! (Karkhanis & Kirkland Turowski, 2015)

Figure 1. General Psych students in Spring 2015 taking a test in pairs.

Importantly, this improvement in learning did not seem to be due to one high-performing student dominating the group conversation. We noticed that almost everyone seemed to be contributing about equally during group exams, and students’ feedback lined up with these observations: in a follow-up survey, they reported that almost everyone contributed equally to discussion.

Scores also improved across the grade distribution: even the highest-performing individual students benefited from the group conversation (improved scores from individual to group exams), suggesting these overall improvements were not simply due to the lower-performing students catching up.

Variations

We tried mixing up who was in the groups. It didn’t seem to matter whether the groups were carefully pre-sorted to distribute members across the grade spectrum, or assigned randomly, or even chosen by the students themselves. It seems that what matters for performance is simply the group format, not the specific makeup of the group membership. (Group size probably does matter, however -- smaller groups give more opportunity for individual contribution.)

We also checked to see if there were reliable differences based on class topic (e.g., general psychology vs. lifespan psychology) or instructor or even instructional quarter. Nope. So this suggests that this approach would be appropriate for a variety of psychology classes.

One of us also tried this neat variation: group quizzes followed by individual exams. The rationale was that if students are learning more in the group atmosphere, then that learning should show when they are tested on those concepts later. She put students in randomly assigned pairs for weekly quizzes, then compared their individual exam performance with another class (same instructor) that had taken its weekly quizzes alone. Students in the paired-quiz class performed an average of 10% better on the individual exams!

Why do group exams work so well?

We asked our students what worked for them about these exams.

First, our students enjoyed the opportunity to discuss each answer choice with one another. They found the process of talking, debating, and bouncing ideas off each other to be useful, and they were able to see different ways of reasoning through these conversations. Students reported that they learned more through teaching others and being taught, and sharing these answers boosted their confidence in their own knowledge. Groups used different ways of arriving at a conclusion: some voted and went with the majority, others discussed until a consensus was reached, and others chose their own answers after discussion (remember, they had individual response forms). Regardless, almost everyone enjoyed the group exam format, which can positively affect the entire class experience (Stearns, 1996).

Second, group testing relieved the stress and anxiety that evaluative situations often cause. Our class sessions tend to be interactive, experiential, and collaborative, and we try to build community to encourage a positive learning environment. But when it comes time to test students, that environment changes. Tests can create high levels of anxiety for many students, yet we know that the best performance under challenging circumstances comes from moderate levels of arousal (Yerkes & Dodson, 1908; Broadhurst, 1959). How do we manage this traditionally? We just tell our students not to be anxious and hope it works out. Building on notions of context- and state-dependent memory, this means we are actually creating incongruent contexts for our students to perform in, while still hoping that they will be able to succeed. We teach this science to our students, so we should really be implementing practices based on it. Some of our students explicitly discussed the stress relief provided by the group-testing environment and how it boosted their performance and recall. Group testing, then, not only mitigates test anxiety, it also makes in-class testing a more positive educational experience (just like the regular class sessions) (Ioannou & Artino, 2010).

Figure 2. Benefits of classroom collaboration. Image source: https://collaborativegrouplearning.com/2015/06/04/...

A third reason group exams work: they give students an opportunity to explicitly review their answers and reasoning. If you’re not yet on board with testing in groups, you might just have students take the exam individually, then get them into groups to discuss the exam afterward and figure out why their answers were correct or incorrect together. This would be a more traditional approach -- collaboration during review -- in which one major benefit is simply having students look at the exam again instead of the more common practice of tossing it in their bags and never looking at it again. Of course, we’d like to think that some of the benefits of the group format would carry over to this also, like students learning from one another. But in this format, learning would occur after the fact, so this might work best if you plan to give a comprehensive final or something else cumulative to reward that effort.

Social loafing

Some of you must be wondering about social loafing and individual accountability. We do think it’s important to include an individual component of the grade so as to encourage students to prepare for exams with the same rigor. One way to address this is to implement some minimum cut-off grade that students must earn individually in order to be eligible for group benefits. For example, one of us set a minimum of 70%. Students who scored lower than this on the individual exam did not have their group score count toward their grade. Another strategy is lower-stakes group quizzing followed by higher-stakes individual testing (described in “Variations” above). 
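
Here is a minimal sketch of how these grading rules could be combined; the function and numbers are hypothetical, but the logic follows the scheme described above: average the individual and group scores, and let the group score count only when the individual score clears the minimum cut-off.

```python
def final_exam_score(individual: float, group: float, cutoff: float = 70.0) -> float:
    """Combine individual and group exam scores under the scheme described above."""
    if individual < cutoff:
        # Below the cut-off, the group score does not count toward the grade.
        return individual
    # Otherwise, average the individual and group scores.
    return (individual + group) / 2

# Made-up examples: 80% alone and 92% with the group averages to 86%,
# while 60% alone stays at 60% because it misses the 70% cut-off.
print(final_exam_score(80, 92))  # 86.0
print(final_exam_score(60, 95))  # 60
```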

Limitations

We recognize that our suggestions have some possible limitations:

  • First, we tried all this stuff at a community college, where competition is limited and stakes are lower, and class sizes are fairly small (30-40 students). We think the possibility of group dynamics encouraging collaboration over competition would definitely benefit students at more competitive schools, provided the grading scale allowed for this. And we think it would probably be feasible to do in moderately large classes (e.g., 100 students), provided there is additional instructional support staff. One of us has actually just moved to a large university, and plans to try this out in a larger class, so we will keep you posted on how well this works!
  • We also did this with multiple-choice exams, on which performance can be more easily quantified. Of course, exams were rigorous -- using a variety of difficulty levels and skewing toward conceptual/application questions -- but we have not tested this out with short-answer or essay exams. We’d imagine that you would have to vary your approach if this is more your exam style, like designating a single group scribe (thus sacrificing a bit of independence) or having meticulous rubrics.

Takeaways

  1. Group exams are good for performance. Students performed significantly better on exams taken in small groups relative to exams taken individually (Bloom, 2009; Karkhanis & Kirkland Turowski, 2015). This improvement in performance was due to a number of factors, including the opportunity to think carefully about the concepts being tested and the ability to teach and learn from one’s peers.
  2. More importantly, group exams are good for learning. Students reported that the group-learning atmosphere helped to relieve stress, increase confidence, and solidify their understanding of course material. And students who completed frequent group quizzes scored higher on midterm and final exams, suggesting that they learned more during those collaborative quizzes than students taking them alone.

We’d love to study this further, including how these effects might change when we take into account question difficulty and level of abstraction. But we’re pretty clear on one thing -- if your class size permits it, you should definitely consider implementing group exams.

Feel free to contact us if you’d like more suggestions or resources for implementing this in your classes: [email protected] or [email protected].

Bios

Tabitha Kirkland is a lecturer in psychology at the University of Washington. She received her B.A. from the University of California, San Diego, and her M.A. and Ph.D. from The Ohio State University. Tabitha lives in Seattle with her family and enjoys traveling, outdoor adventures, and a strong cup of coffee.

Deepti Karkhanis is an Associate Professor and Department Chair of Psychology at Bellevue College. She received her Bachelor’s and Master’s degree from Delhi University in India, and her Ph.D. in Applied Developmental Psychology from George Mason University in Fairfax, VA. She is a developmentalist whose teaching interests include lifespan psychology, adolescent and youth psychology, cross-cultural psychology and positive psychology. Dr. Karkhanis explores a variety of pedagogical topics such as collaborative testing, student-teacher rapport, positive psychology in classroom curriculum, and teacher training on social justice and educational equity.

References

Bloom, D. (2009). Collaborative test taking: Benefits for learning and retention. College Teaching, 57(4), 216-220.

Broadhurst, P. L. (1959). The interaction of task difficulty and motivation: The Yerkes-Dodson Law revived. Acta Psychologica, 16, 321-338.

Ioannou, A., & Artino, A. R., Jr. (2010). Learn more, stress less: Exploring the benefits of collaborative assessment. College Student Journal, 44(1), 189-199.

Karkhanis, D. G., & Kirkland Turowski, T. (2015, May). Group exams improve student learning. Psychology Teacher Network, 25(2), 8-10. http://www.apa.org/ed/precollege/ptn/2015/05/may-ptn.pdf

Stearns, S. A. (1996). Collaborative exams as learning tools. College Teaching, 44(3), 111-112.

Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18(5), 459-482.

But What Is the Learning Objective?

December 6, 2016

By Guy A. Boysen

Imagine a hotshot graduate student who fancies himself quite the professor-in-waiting. Given the opportunity to independently instruct a section of Introductory Psychology, he selects the perfect book, plans lectures, practices the lectures, practices the lectures, and then practices some more. What he produces is, in some cases, fairly impressive. For example, during the lesson on motivation, he uses the slide below to lecture on Maslow’s hierarchy of needs, giving creative examples of people at each stage, and engaging students with some discussion questions. Clear? Professional? Engaging? Check, check, and check.

“But what is the learning objective?” Despite the apparent quality of the lesson, if the graduate student could not answer this question, it would be some seriously bad pedagogical news.

To illustrate why that would be such bad news, take a closer look at the graduate student’s lecture slide and reconsider how you would study it if you were preparing for a test.

Would you memorize the stages? Would you test your ability to describe stages? Would you memorize the teacher’s examples or produce your own? Would you form an opinion about the theory’s validity? Now, switching to the teacher’s perspective, which one of those topics would be on the test? Without a learning objective, all options are equally right and wrong – hence, the very bad pedagogical news.

Based on recent work sponsored by the Society for the Teaching of Psychology, being a model teacher is, at least partially, defined by the use of learning objectives (An Evidence-based Guide to College and University Teaching; Richmond, Boysen, & Gurung, 2016). After reading this post, you should know how to write a learning objective and be able to explain why they are essential to model teaching.

Defining Learning Objectives

Learning objectives are specific, behavioral definitions of the knowledge, skills, or attitudes that students are supposed to take away from a learning experience. Unlike learning goals, which are often so broad as to apply to an entire curriculum, learning objectives narrowly define what students should be able to do with regard to just one, specific educational experience. A course might have only a handful of broad learning goals – they are typically listed on the syllabus – but the number of learning objectives corresponds to the individual lessons that occur over the entire course. Yes, that means you may have 100 or more learning objectives for a course!

Writing 100 or more objectives may seem like an insurmountable amount of work, but it is as easy as ABC: Audience, Behavior, and Condition (Boysen, 2012).

Audience (A): Who will be learning? This is the easy part for most college teachers because it only has to be written once for each course.

  • Students in [fill in course name] will…

Behavior (B): What will be learned? Here is where the hard work occurs. Teachers must decide what they want students to be able to do after each lesson and put it into terms that can be assessed. Bloom’s taxonomy is an essential tool for writing learning objectives because it defines the levels of cognitive complexity of the learning and can be associated with action verbs that are perfect for plugging into learning objective statements (see Figure).

For example:

  • Students in Introductory Psychology (A) will differentiate classical and operant conditioning (B);
  • Students in Statistics (A) will calculate t tests (B); and
  • Students in Biology (A) will construct pedigrees (B).

Condition (C): In what situation will the behavior occur? Conditions help determine what information or tools students will use when they engage in the behavior. They are not always necessary but can add a much-needed level of specificity; a short sketch after the examples below puts the whole A-B-C template together.

For example, continuing with the objectives outlined above:

  • Students in Introductory Psychology (A) will differentiate classical and operant conditioning (B) when given real-world examples (C);
  • Students in Statistics (A) will calculate t tests (B) using SPSS (C); and
  • Students in Biology (A) will construct pedigrees (B) using human genetic information (C).
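
Putting the pieces together, here is a tiny, purely illustrative sketch (the helper function is hypothetical, not from the guide) that plugs the Audience, Behavior, and optional Condition components into the template above.

```python
def learning_objective(audience: str, behavior: str, condition: str = "") -> str:
    """Assemble an A-B-C learning objective from its components."""
    objective = f"Students in {audience} will {behavior}"
    if condition:
        # The Condition component is optional, as noted above.
        objective += f" {condition}"
    return objective + "."

# The Statistics example from above:
print(learning_objective("Statistics", "calculate t tests", "using SPSS"))
# -> Students in Statistics will calculate t tests using SPSS.
```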


Why Learning Objectives Matter

Writing learning objectives is time-consuming, but it is a worthwhile investment in model teaching. For the last few years, I have been part of a team effort to define model teaching (Richmond et al., 2016). We define model teaching as the possession of a set of evidence-based characteristics related to pedagogical training, instructional methods, course content, assessment, syllabus construction, and student evaluations. These characteristics are not the ineffable stuff that separates renowned master teachers from the rest of us mere mortals (Buskist, Sikorski, Buckley, & Saville, 2002). Rather, they are a list of nuts-and-bolts fundamentals that anyone can develop and incorporate into their pedagogy.

Why did we include the use of learning objectives as a characteristic of model teaching? Here are just a few of their advantages, with the best saved for last.

Selection of teaching methods. Pop quiz: Why did you choose the specific pedagogical approach you took in the last class that you taught? If this was a difficult question to answer, learning objectives can help. Classes should be designed backward: first decide what students will learn, then match the teaching method to the learning objective.

Intentional selection of evaluation methods. Imagine that the learning objective for the graduate student’s lesson on Maslow was for students to devise improvements to the theory. What should the test question for this objective look like? Going back to Bloom’s taxonomy, the cognitive skill represented in a learning objective helps determine its evaluation. If the goal is memorization of facts, multiple choice might suffice; evaluation of those facts, however – a high-level cognitive ability – would likely require a detailed written response.

Useful evaluations of learning. Have you ever tried to teach someone to shoot free throws while blindfolded? Me neither, because accurate feedback is needed for learning to occur. Both you and the learner need to know if efforts are hitting the mark. Teachers who have a list of learning objectives can directly evaluate achievement of objectives and then provide detailed feedback. For example, if the objective is to summarize all of Maslow’s stages and a student can only summarize half, then the focus of future teaching and learning efforts is abundantly clear.

Increased learning. The 1970s gave us more than classic rock; they also gave us classic educational research showing that students learn more when they have specific learning objectives (Duell, 1974; Kaplan & Simmons, 1974; Rothkopf & Kaplan, 1972). I told you I saved the best for last!

How would you answer the question?

How would the hotshot graduate student have answered the question “What is your learning objective?” In fact, he couldn’t have answered it. I know because it was me. Although I had enthusiasm and dedication at that point in my career, model teaching requires so much more. After many years of development, I can answer that important pedagogical question before I walk into any classroom – I hope you can as well!

Bio

Guy A. Boysen is an Associate Professor of Psychology at McKendree University. He received his Bachelor’s degree from St. John’s University in Collegeville, MN and his PhD from Iowa State University in Ames, IA. He is a generalist whose teaching emphasizes clinical topics and the mentorship of student research. Dr. Boysen has studied a wide variety of pedagogical topics, including student teaching evaluations, bias in the classroom, and teacher training.

https://www.mckendree.edu/directory/guy-boysen.php

References

Boysen, G. A. (2012). A guide to writing learning objectives for teachers of psychology. Society for the Teaching of Psychology Office of Teaching Resources in Psychology Online. Retrieved from http://teachpsych.org/otrp/resources/index.php?category=Outcomes

Buskist, W., Sikorski, J., Buckley, T., & Saville, B. K. (2002). Elements of master teaching. In S. F. Davis & W. Buskist (Eds.), The teaching of psychology: Essays in honor of Wilbert J. McKeachie and Charles L. Brewer (pp. 27–39). Mahwah, NJ: Lawrence Erlbaum.

Duell, O. P. (1974). Effect of type of objective, level of test questions, and the judged importance of tested materials upon posttest performance. Journal of Educational Psychology, 66, 225-323.

Kaplan, R., & Simmons, F. G. (1974). Effects of instructional objectives used as orienting stimuli or as summary/review upon prose learning. Journal of Educational Psychology, 66, 614-622.

Richmond, A. S., Boysen, G. A., & Gurung, R. A. R. (2016). An evidence-based guide to college and university teaching: Developing the model teacher. Routledge.

Rothkopf, E. Z., & Kaplan, R. (1972). Exploration of the effect of density and specificity of instructional objectives on learning from text. Journal of Educational Psychology, 63, 295-302.

Even a "Done-in-a-Day" Service Activity Can Make a Difference

December 1, 2016

By: Sadie Leder-Elder

@sadieleder

A few years back I taught my first service-learning class. From that experience I learned two things. The first was that incorporating service into my teaching left my students and me enriched both educationally and personally. The second was that teaching a service-learning class can be very difficult.

During the period of our collaboration, my service-learning partners experienced a great deal of turnover in leadership. This led to frustration as students’ experiences were often discrepant and at times their service efforts did not align with the material we were discussing in class. All in all, the benefits outweighed the costs, but it wasn’t long before I started thinking, “There has to be an easier way.”

My Done-in-a-Day Activity

The desire to incorporate service in a more manageable fashion led to the development of a number of possibilities, including a one-day service activity in my Introduction to Psychology course. When covering the material on altruism and pro-social behaviors in the social psychology chapter, I ask students to select and participate in a hands-on demonstration of helping.

Past projects have included donating holiday gifts through the Salvation Army’s Angel Tree program, sending care-packages to victims of Hurricane Sandy, collecting school supplies for a local elementary school, creating coloring books for the Children’s Miracle Network, making blankets for local hospitals through Project Linus, and sending over a thousand thank you cards and letters of support to US troops stationed around the globe.

Each semester I observed my students’ interest and engagement in the activity and noted how it also appeared to connect them with the course. It wasn’t long until I began to tout my efforts as a success. Unfortunately (but in hindsight, perhaps quite fortunately), my bravado was diminished when a colleague asked the million dollar question, “Do you have any data to show that this activity is actually benefiting your students?” At the time I did not.

Does It Work?

Since then, I have collected Scholarship of Teaching and Learning (SoTL) data on the impact of my Introduction to Psychology Helping Behavior Activity. I’m thrilled to say that this project, which requires merely one class period, can yield a number of the benefits associated with a semester-long service learning course. For instance, following participation in this activity students reported more ethical awareness and increased participation in volunteer activities. Students also reported higher intrinsic motivation for the class and greater endorsement of the belief that the course contributed to their overall development.

If you are like me and love the idea of teaching students through the incorporation of service, but are not able to devote an entire semester to a service learning partnership, you are in luck. Instructors may be able to harness some of the important benefits affiliated with experiential and service-based instruction by employing a much more manageable Done-in-a-Day Activity. Although I have yet to collect data on the sustainability of the benefits associated with my Done-in-a-Day Activity, this work suggests that it may be easier for instructors to encourage a sense of social responsibility and greater ethical awareness in their students than previously believed.

Suggestions for Implementation

The beauty of this project is its simplicity. If you can spare one day of class, then you have the potential to substantially enhance not only your students’ academic growth but also their personal growth and development. Although I have used this Done-in-a-Day Activity in my Introduction to Psychology course, I feel confident that it can be applied to a number of classes. For those interested in trying it out, here are a few suggestions:

  • Allow students to select the service activity, but provide guidance about what is realistic for one class period.
  • Help students explicitly make connections between the class material and the service activity.
  • When possible, partner with an established organization so your efforts are part of something sustainable.
  • Know that even this activity will be more work than you imagine, but it’s worth it!

In order to make my Done-in-a-Day Activity a success, I start by building excitement for it from the very first day of class. I then have several days where students can nominate potential class activities, and periodically we have brief discussions about the feasibility of and interest level in the suggested projects. Finally, we pick the activity and I work to explicitly tie it to the course material, both before and after participation. Where possible, I even try to bring in speakers that help students understand the impact of their efforts. Although students are quite literally done with this activity in a day, the impact that participation has on them appears to be extensive. In past semesters, I have even had students begin working with the organizations that we’ve partnered with, sometimes for a number of months or years.

Bio:

Sadie Leder-Elder received her Ph.D. in Social/Personality Psychology from the State University of New York at Buffalo. She is an Assistant Professor of Psychology at High Point University in High Point, North Carolina. Her research and teaching are focused on the study of close relationships. Sadie is a celebrated educator, having received numerous teaching awards, including being named the national recipient of the Society for the Teaching of Psychology’s Wilbert J. McKeachie Teaching Excellence Award in 2010 and the Jane S. Halonen Teaching Excellence Award in 2014. 

References

Leder-Elder, S. (2016, January). Teaching psychology through social action: Even a done-in-a-day activity can make a difference. Poster presented at the annual meeting for the National Institute on the Teaching of Psychology, St. Pete Beach, FL.

The “True” Opinion Paper

November 10, 2016

By Sara Branch

Last semester I taught a course on human sexuality.

I love teaching this class. I talk with my students about issues related to gender identity, sexual orientation, sexual behavior, and more. As I was prepping my course, I wanted to revise my standard “opinion paper” assignment. In the traditional version of the paper, students are assigned a controversial topic (e.g., marriage equality). In it, they are expected to put forth an opinion on the topic and then justify their opinion using empirical literature. But in classes that address complex social issues, people’s opinions and attitudes are not based exclusively on empirical evidence.

The problem with the standard opinion paper

The problem with an opinion paper that emphasizes only empirical evidence is that it implicitly assumes that:

1) students will be convinced to adopt the attitude most strongly supported by evidence

2) students who aren’t convinced by empirical evidence have somehow failed at a critical element of the course.

With the standard opinion paper guidelines, I was concerned about students feeling attacked for their beliefs, or about creating an assignment whose expectations some students were literally unable to satisfy. When students leave my class, I want them to own their attitudes about these complex issues and understand why they hold the attitudes that they do. And I recognize that some students' opinions may be grounded in factors that cannot be empirically supported (e.g., religion), and thus they simply can’t use empirical research to support their arguments.

[Image: uoeducation, https://goo.gl/YpN4l1, CC-BY-NC 2.0, https://goo.gl/qOP7mj / cropped and thought bubble added]

Getting the perspective of instructors

In an effort to improve the traditional approach to this type of assignment, I appealed to my colleagues for their thoughts. Specifically, I said that the goals of the assignment were:

1) to get students to form an opinion about issues relevant to the class

2) to challenge them to express and argue for their opinions with evidence (i.e., research), while being sensitive to opinions that may not have empirical support behind them.

My colleagues were as gracious as always in providing their valuable insights. However, it was striking that the responses I received consistently advocated for students to support their opinions with empirical evidence because psychology is, after all, a science.

Here’s the thing: although psychology courses are grounded in science they also deal with important and complex real-life issues. And like most of us, our students’ opinions are not grounded solely in empirical evidence, nor will empirical evidence be the only factor that persuades them. So I shifted my goal for the course. Instead of asking my students to identify the opinion with the most empirical evidence, I asked them to give me their own opinion, and to reflect critically on the existing empirical evidence (even if it did not support their opinion) as well as any other factors that influenced their beliefs.

This approach is directly in line with the research on attitudes. We know that attitudes are based not only on thoughtful cognitive evaluations of evidence, but also on our affective responses to an object, on the function an attitude serves, and on our past experiences with the object, as well as others (Fabrigar & Wegener, 2010). If our students’ attitudes ultimately contradict what is supported by evidence, don’t we want them to have intentionally and thoughtfully considered all the issues involved and consciously decided that in spite of what the evidence says, they are willing to give greater weight to other factors? This might seem contradictory to the goals of higher education where we advocate for rational, critical thought. But if we want to challenge students to expand or critique their beliefs, we have to first let them identify those beliefs.

The Assignment

In the end, I gave students a list of controversial topics. Their task was to write a paper that reflected their authentic personal opinion. They had to include a clear, nuanced opinion on the issue, review empirical research on the topic, and present their arguments for their opinion (with relevant evidence, whatever that may be). If the empirical research contradicted their opinion, they had to explain why they were discounting it. They were told that they would not be graded on their opinion, even if it was not supported by empirical evidence. Instead, their grade reflected the thoughtfulness and complexity of their opinion and their reflection on the factors that underlie it.

In the final course evaluation, a few students spontaneously reflected on the opinion papers. They talked about how the papers allowed them to think about these topics more deeply than they might have otherwise, and recognize the complexity of the issues in ways they hadn’t before. And importantly, multiple students stated that they felt free to express their opinions, without judgment.

Final Reflections

Attitudes are important because they (can) predict behavior (see Glasman & Albarracin, 2016 for a review). And they are complex because they are influenced by myriad factors (Fabrigar & Wegener, 2010). If our goal is to have students think critically about their worldviews, we need to stop limiting acceptable opinions to only those supported empirically. Our students will hold attitudes that contradict empirical evidence. And when we limit discussion and reflection to only the empirical, we create an environment where students may not be personally empowered by their learning. By asking students to identify their attitudes and reflect on why they hold those attitudes, without fear of judgment or a poor grade, we are asking them to take responsibility for their attitudes and their behavior.

Bio

Sara Branch received her Ph.D. in Social Psychology from Purdue University. She is an Assistant Professor of Personality Psychology at Hobart and William Smith Colleges. Her research focuses on the intersection of social and cognitive psychology as an approach to scholarship of teaching and learning.

You mean I have to actually APPLY what I’m learning?!

October 19, 2016

By Jennifer M. Knack and Melisa A. Barden

The Application Challenge Project is meant to inspire students to take psychology outside the classroom and apply it to real-world issues.

Missed Connections Lead to a Challenge

We’ve all been there. Staring out into the sea of blank, tired student faces. We tell ourselves it’s the time of day (Too early! Need an after lunch nap!) or the students (Senioritis! Non-major!). It couldn’t possibly be the material, right? I mean, who doesn’t love PSYCHOLOGY! Especially SOCIAL psychology! This material is so applicable and interesting that there must be another reason for their lack of attention. Although we may think that just talking about the material will cause fireworks and lightbulbs in our students’ heads, sometimes we need to acknowledge that our teaching style may need to be adjusted in order to facilitate said fireworks. How can we make the material we cover more relevant to our students and help them connect it to the real world?

This question led us to a brainstorming session while walking on the beach in Florida during a break at the National Institute on the Teaching of Psychology (NITOP) conference. Since we both teach Social Psychology, we started sharing ideas to get our social psychology students more involved and invested. We talked about the long history of social psychologists conducting research to understand and address societal problems. We thought it would be great if we could bring this important tradition into our classrooms. As our conversations continued, they evolved into an assignment we call the Application Challenge Project, in which students work in groups to identify a real-life problem and then use social psychological concepts to develop a solution.

Two of the most essential elements of the project are a high degree of autonomy for the students and the requirement to apply social psych in meaningful ways to the world outside our classroom. When students can choose a topic that interests them, they become more invested and engage in deeper learning (e.g., Heilman et al., 2010; Walkington, 2013). Then, by challenging students to come up with solutions, we help them realize the real-world implications and how the course material can truly make a difference.

We have used the Application Challenge Project for two semesters of Social Psychology now, and in the opinions of both instructors and students, it has been a clear success. So successful, in fact, that we are inspired to share the idea with you, our colleagues. We’d like to offer key steps and suggestions for implementing this project in your classroom.

1. The dreaded ‘C’ word: Getting students to care!

Our students are at an idealistic age, so when given an opportunity to do something good in the world they’ll usually jump at the chance.

  • We leverage this on the first day of class by showing a series of attention-getting videos addressing social problems (traffic deaths, hygiene, public health, etc.). Then we discuss the purpose of each and the ways they were successful in delivering their messages (catchy songs, emotion, humor). Viewing these videos puts students in the right frame of mind to think about social change.
  • Then on the second day we task them with identifying other current issues that social psychologists might be interested in addressing. We encourage them to think about local issues as well as broader issues in the state, country, and world. The more personally relevant the issues are, the better!

With ideas starting to flow in their heads, it’s time to get them into groups and on to the real work of the project.

2. A little bit of logistics goes a long way

  • Dividing students into groups can be a challenging task. We decided on 4-6 students per group so the division of labor would be appropriate but groups would not be so big that students would have trouble coordinating their efforts.
  • Although students tend to prefer to pick partners themselves, we assigned them to groups to avoid a number of typical issues (shyness, pressure to work with friends, etc.). If you want to automate the random assignment, a minimal sketch follows this list.
  • If this sounds like too much work for you, then feel free to allow your students to pick their own groups.
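
For those who like to see the bookkeeping, here is a minimal sketch of one way to randomize group assignment within size bounds; the roster, group-size limits, and seed below are illustrative assumptions, not part of our actual procedure.

```python
import random

def assign_groups(students, min_size=4, max_size=6, seed=None):
    """Randomly split a class roster into groups of min_size to max_size students."""
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)

    # Use as few groups as possible while keeping every group within the size bounds.
    n = len(roster)
    n_groups = -(-n // max_size)  # ceiling division
    if n < n_groups * min_size:
        raise ValueError("Class cannot be split into groups within this size range.")

    # Deal students out round-robin so group sizes differ by at most one.
    groups = [[] for _ in range(n_groups)]
    for i, student in enumerate(roster):
        groups[i % n_groups].append(student)
    return groups

# Hypothetical 23-student roster split into groups of 4-6.
roster = [f"Student {i}" for i in range(1, 24)]
for number, members in enumerate(assign_groups(roster, seed=1), start=1):
    print(f"Group {number}: {', '.join(members)}")
```

Fixing the seed simply makes the assignment reproducible in case students ask how their group was formed.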

3. Finding a real-world problem: So many to choose from! [Where do we even start?!]

  • We found that students typically generated ideas of problems that are quite large and multi-faceted (e.g., improve people’s health) and needed guidance in focusing their topic to something manageable within a semester (e.g., persuading parents to vaccinate their children). So we created a topic statement assignment.
  • As a group, students submitted a brief one to two paragraph topic/problem statement that explained the importance of the problem and described its major tenets/components. The problem statement helped students come to an agreement on what they were going to address and also gave us a chance to help students make specific enough choices.

Here are some examples of the questions they chose to investigate:

  • How can California residents reduce their water consumption during the drought?
  • How can police brutality be reduced?
  • How can the stigma associated with mental health problems be reduced? 
  • Are there ways to reduce students’ stress during finals week?
  • How can universities reduce binge drinking among first-year students?

4. The nuts and bolts of the process and the final reveal!

  • With topics decided and approved, each individual group member then conducted a literature search and found two relevant empirical peer-reviewed research articles that supported the group’s topic. In an attempt to reduce social loafing, each group member wrote a short summary (i.e., one to two paragraphs) of each article and how it was relevant to the group’s problem. We made a point to tell students that group members should coordinate to make sure they were not reporting on the same empirical articles.
  • Each group submitted a short proposed solution for their topic/problem. We told students we wanted to see that they had thought critically about the problem and potential obstacles to rolling out their solution. For example, the group trying to increase water conservation wanted residents to be aware of their water use so they needed to discuss how water use would be monitored (e.g., Will residents really check a small water gauge? How often do residents need feedback?).
  • In addition to the written assignment, groups gave short informal presentations in a “speed dating” fashion where one group explained their problem and proposed solution to another group in approximately 5 minutes with 2 minutes for questions; groups then switched roles (the group who first presented their project listened to the other group’s presentation). This format allowed students to practice communicating about their project multiple times and helped students refine how they presented their arguments (or highlighted where additional work was needed!).
  • After incorporating the feedback from the “speed dating” presentation and their written proposals, each group formally presented their project to the class in an 8 - 10 minute presentation.
  • Each group then submitted a final paper that included (1) a statement about the topic/problem at hand, (2) a discussion about relevant social psychological concepts used to address the topic/problem, and (3) a proposal on how to solve the topic/problem.
  • Knowing that some students would be likely to put in little effort whereas other students would likely put in a lot of effort, each group member rated every other member’s contribution to the semester-long project, as well as their own, on a scale from 0% to 110%. We let students rate contributions up to 110% in order to account for students who went above and beyond to carry the group, and we had students include a short rationale for their ratings. Students’ final grades on all group assignments were adjusted based on the average contribution rating (one way to compute such an adjustment is sketched below).
Having student groups go through a "speed dating" type of presentation format really helped them improve how they discussed their topics and led to better final presentations. [Image: UniPID, https://goo.gl/NhVa3z]
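
For instructors who like to see the arithmetic, here is a minimal sketch of one possible contribution-based adjustment; the scaling rule, the function and variable names, and the example ratings are illustrative assumptions rather than our exact grading formula.

```python
def adjusted_grade(group_grade, contribution_ratings):
    """Scale a group assignment grade by a student's average contribution rating.

    Ratings are percentages from 0 to 110 (110 = went above and beyond).
    This is one plausible adjustment rule, not necessarily the authors' exact formula.
    """
    average = sum(contribution_ratings) / len(contribution_ratings)
    return round(group_grade * average / 100, 1)

# Hypothetical example: the group earned 85% on an assignment, and one member's
# contribution was rated 100, 90, 110, and 95 by the other group members.
print(adjusted_grade(85, [100, 90, 110, 95]))  # about 83.9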

5. We found the glitches so you don’t have to!

  • Some students have a really hard time narrowing down to a suitable topic, and time can be lost if you’re not careful. The topic statement assignment (#3 above) helped get groups on the right track early.
  • Next time we do this project, we plan to give students a hypothetical budget (e.g., $5,000 - $10,000) to help them think more specifically about how they could address the problem.
  • Social loafing remains a problem. Next time we do the project we might have students rate each other’s contribution for each assignment so that it is a more salient reminder that students need to be consistently contributing to their group.
  • Students seemed to put more effort into their projects when they knew someone from outside our classroom would be listening to their presentation (i.e., someone from the university/community who had a real interest in the specific topic). Even though inviting guests took some effort and coordination, it seemed to be worthwhile for students and guests alike.

6. Final thoughts: A project worth continuing

Most students seemed more engaged with the course material and were excited to see practical ways to use the information they were learning about. After one of the small group discussion days when we gave quite a bit of feedback and challenged students to dig deeper into ways to use course material in their projects, one student commented, “I really like this class because we actually have to think. It’s fun! I wish more classes were like this.” We also saw projects extend beyond the classroom. For example, several projects were presented at a local conference, while another extended into a film class and became the subject of a documentary.

Several groups selected problems that were relevant to their anticipated careers. For example, one group of pre-medical students who all volunteered as EMTs chose their topic because they were concerned about the high rates of PTSD among emergency responders. Another group of pre-medical students addressed the issue of parents not vaccinating their children. Both of these groups reported that they learned how difficult it can be to change people’s behaviors and attitudes. They even commented that it was helpful to read the literature and learn how to use what is already available to solve problems.

Most of the projects were successful in meeting the objectives of integrating course material to address a real-world problem. However, some groups stayed at a surface level and recycled solutions that have already been used rather than developing more creative or novel solutions. Given that this is a 200-level course composed of students across all four years (but primarily freshmen and sophomores), it was not surprising that some projects were stronger than others.

We consider this project a success! In addition to students being more engaged in Social Psychology, students interacted with members of the university and local community and learned about ways they could continue their projects. This project could easily be adjusted and tailored to other courses (e.g., Health Psychology, Cognitive Psychology, Developmental Psychology) to help students engage more with the course material. We hope you’ll consider using it with your own students.

Bios

Melisa Barden is an Associate Professor of Psychology at Walsh University in Canton, OH. Her recent research focus has been on the teaching of psychology including the use of “clickers” in the classroom. In her free time, she enjoys watching her 4-month-old develop and spending time with her friends and family.

Jennifer Knack is an Assistant Professor of Psychology at Clarkson University in Potsdam, NY. She studies the antecedents and mental and physical health outcomes of being bullied. She has two cats -- Nimbus and Albus Weasley -- and enjoys hiking and going to the beach.

References

Heilman, M., Collins-Thompson, K., Callan, J., Eskenazi, M., Juffs, A., & Wilson, L. (2010). Personalization of reading passages improves vocabulary acquisition. International Journal of Artificial Intelligence in Education, 20, 73-98. doi: 10.3233/JAI-2010-0003

Walkington, C.A. (2013). Using adaptive learning technologies to personalize instruction to student interests: The impact of relevant contexts on performance and learning outcomes. Journal of Educational Psychology, 105, 932-945. doi: 10.1037/a0031882 

Are Your Students Paying Attention?

July 25, 2016

By Jeff Wammes

On a quick read of some educational research, you might be left with the feeling that university students don’t have much of an attention span. Recent reports that about 60% of online students abandon video lectures in under 10 minutes (1), and countless studies showing that student attentiveness fades quickly into mind wandering (2,3), have not helped debunk this impression. In video-recorded lectures, staged live lectures, and even selected authentic live lectures, mind wandering tends to increase quickly over time. So we were saddled with a critical disconnect: on one hand, there was plenty of research showing that mind wandering, or inattention, increases relentlessly over time within lectures. On the other hand, we had our own anecdotal observations that while students obviously (and unfortunately for instructors) had a base rate of inattention, it didn’t seem to increase all that dramatically over time. Thankfully, we hadn’t really seen an increase in that glazed-over look that every instructor or speaker dreads. Yes, the telltale signs of mind wandering were present, but no, they did not seem to be noticeably or dramatically increasing, despite what the in-lab research seems to say. We wanted to explore mind wandering in a real, live lecture setting to resolve this discrepancy.

Our Studies

When students are actually enrolled in a course, the outcome has very real consequences for their futures. Studying attention during staged lectures in the lab might lead to quite different results than studying the real thing in the wild. Some researchers have looked at the interplay between attention and retention in actual lectures (2,4,5), but few have looked at how this behavior changes over time, or on a scale larger than one lecture, or a few lectures. So, we designed a few long-term experiments, conducted in actual live lectures over entire semesters. Instructors often use clicker remotes to ask quiz questions during lectures, both to measure attendance and to check understanding. We simply piggy-backed on this technology and asked participants to tell us, using a quick button press, whether or not they were mind wandering, whether they were doing so intentionally, or the degree to which they were mind wandering just before they were asked. Our motivations were mainly to observe and describe the way students’ minds wander in lectures. Here is what we found (6,7):
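
To make the probe data concrete, here is a minimal sketch of how such clicker responses could be tallied once collected; the response codes and the example data are hypothetical stand-ins (chosen to roughly mirror the pattern described below), not the actual output of our clicker system.

```python
from collections import Counter

# Hypothetical responses to thought probes during one lecture:
#   "on_task"     = paying attention just before the probe
#   "mw_intent"   = mind wandering intentionally
#   "mw_unintent" = mind wandering unintentionally
responses = ["on_task", "mw_intent", "on_task", "on_task", "mw_unintent",
             "on_task", "mw_intent", "on_task", "on_task", "mw_intent",
             "on_task", "on_task"]

counts = Counter(responses)
n = len(responses)
mind_wandering = counts["mw_intent"] + counts["mw_unintent"]

print(f"Mind wandering on {mind_wandering / n:.0%} of probes")
print(f"  intentional:   {counts['mw_intent'] / n:.0%} of probes")
print(f"  unintentional: {counts['mw_unintent'] / n:.0%} of probes")
```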

While we might not be able to prevent mind wandering, we can work to understand it better and how it impacts student learning. [Image: Tadeej, https://www.flickr.com/photos/tadeeej/3228729514/, CC BY-NC-SA 2.0]

Rates of Unintentional Mind Wandering Are Relatively Low

Mind wandering is usually described as a failure of attention that occurs even though people are trying their best to pay attention. But recent research shows that people often choose to mind wander, that is, mind wander intentionally, during boring tasks. Surprisingly, in our lecture study, students were mind wandering about a third of the time, but more than half of this mind wandering was intentional. This is a very different picture from the previously conceptualized student who helplessly succumbs to inattention. Instead, it seems that students are purposely tuning out. This might be an issue of motivation. Our research has repeatedly shown that motivation is related to mind wandering, especially of the intentional sort. One way of combating this intentional mind wandering might be incentivizing student attention. This could be accomplished quite easily by making in-class quiz scores contribute directly to students’ final grades. In any case, the large proportion of intentional episodes of mind wandering tells us that, in reducing mind wandering during lectures, a fruitful target for intervention might be the intrinsic motivation of the student to retain information in class.

Mind Wandering Does Not Increase Over Time!

Unlike most research exploring attention during lectures, we found essentially no evidence that mind wandering increased over time within a lecture, in any of our three studies. The rates of intentional or unintentional mind wandering did not increase, and neither did students’ depth of mind wandering (the degree to which they were mind wandering). Of course, this does not mean that no students increased in mind wandering over time, but on the whole, rates and depth of mind wandering remained surprisingly stable.

However, mind wandering did increase over the course of the term or semester. People have a tendency to mind wander about current concerns or issues in their lives. With deadlines and exams steadily stacking up, people are likely dwelling more on those concerns, leading to a greater incidence of mind wandering as the term marches on.

Effects on Academic Performance Do Occur, But Are Not Catastrophic!

At the end of each class in our study, we quizzed students about information taken from the lecture. We mainly asked questions about slides that the professor spoke about right before, and right after mind wandering responses were given. This allowed us to get a really good sense of the immediate consequences of students’ zone outs. We were also able to get a sense of the long-term consequences by looking at students’ final grades in the course.

The immediate consequences of mind wandering were clear! In both studies where this information was available, people did worse on quiz questions based on material they mind wandered through, and their scores were lower the more deeply they were mind wandering. Also, the more people reported mind wandering overall, the worse they did on quizzes overall. Mind wandering was associated with poorer final grades, but the impact was not that large. To put this in perspective, a 10% increase in mind wandering rate was, on average, associated with only roughly a 2.3% decrease in final grade.
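
As a back-of-the-envelope illustration, and assuming a roughly linear relationship, those figures correspond to a slope of about -0.23 grade percentage points per percentage point of mind wandering; the sketch below simply applies that slope to a few hypothetical increases.

```python
# Rough effect-size illustration, assuming a linear relationship:
# a 10% increase in mind wandering rate ~ a 2.3% decrease in final grade.
SLOPE = -2.3 / 10  # grade percentage points per percentage point of mind wandering

def expected_grade_change(extra_mind_wandering_pct):
    """Expected change in final grade for a given increase in mind wandering rate."""
    return SLOPE * extra_mind_wandering_pct

for extra in (5, 10, 20):
    print(f"+{extra}% mind wandering -> {expected_grade_change(extra):+.1f}% final grade")
```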

The Differences between Live and Video Recorded Lectures Are Important!

We showed a new group of students a video-recording of one of the lectures. So, this new group viewed the exact same lecture, and told us about their mind wandering at the exact same times as the in-class group. The only difference was that the new group watched the video lecture in the lab, while the original group viewed the lecture live.

The results were striking! We were surprised to see that the new group, the people who viewed the video lecture, did show an increase in mind wandering over time, but the group who viewed it live in class did not. Even more interesting is that half of the people in the new group were students who were currently taking the class that the lecture was taken from (in a subsequent term), and the other half had never been enrolled. There was essentially no difference in the trend in mind wandering between these groups. This tells us two things. The first is that research on mind wandering in the lab might not so readily translate to real-life situations, so we need to be careful in drawing general conclusions from these studies. The second is that it might be much more difficult for students to maintain their attention when they are not physically present in a classroom. These things, especially the latter, are very important for instructors to consider when using online or video-lecture content.

What’s Next?

We have our work cut out for us in uncovering the reasons for our findings, but some studies are underway. These look both at the role of motivation in dictating mind wandering and at the importance of instructor-student interaction in how mind wandering rates change. We are currently investigating mind wandering during lectures using a small desktop app that independently monitors students’ attention by showing them a pop-up response window during class, and we are also video-recording some of these lectures. We can then compile students’ responses and associate them with their class-by-class alertness, sleep, and motivation, as well as with instructors’ movement, hand-waving, and speech inflections. It is our hope that we can use this to determine what people on both sides of the projection screen (instructor and student) can do to minimize inattention in the classroom, hopefully improving learning and retention along the way.

Bio

Jeff Wammes received his B.A. from Western University in London, Ontario, and is currently completing his PhD at the University of Waterloo. Jeff's research focuses very generally on the intersection between attention and memory. This includes exploring how mind wandering influences performance in academic settings, as well as looking at the detrimental effects of dual-tasking on memory, and how these effects can be alleviated using encoding strategies.

Thanks to those who collaborated on the work (alphabetical):

  • Nigel Bosch
  • Pierre Boucher
  • Allan Cheyne
  • Caitlin Mills
  • Paul Seli
  • Daniel Smilek

1. Kim, J., Guo, P. J., Seaton, D. T., Mitros, P., Gajos, K. Z., & Miller, R. C. (2014, March). Understanding in-video dropouts and interaction peaks in online lecture videos. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 31-40). ACM.

2. Young, M., Robinson, S., & Alberts, P. (2009). Students pay attention!: Combating the vigilance decrement to improve learning during lectures. Active Learning in Higher Education, 10, 41-55.

3. Risko, E., Anderson, N., Sarwal, A., Engelhardt, M., & Kingstone, A. (2012). Everyday attention: Variation in mind wandering and memory in a lecture. Applied Cognitive Psychology, 26, 234-242.

4. Lindquist, S. I., & McLean, J. P. (2011). Daydreaming and its correlates in an educational environment. Learning and Individual Differences, 21(2), 158-167.

5. Varao-Sousa, T. L., & Kingstone, A. (2015). Memory for lectures: How lecture format impacts the learning experience. PLoS ONE, 10(11).

6. Wammes, J. D., Boucher, P. O., Seli, P., Cheyne, J. A., & Smilek, D. (2016). Mind wandering during lectures I: Changes in rates across an entire semester. Scholarship of Teaching and Learning in Psychology, 2(1), 13.

7. Wammes, J. D., Seli, P., Cheyne, J. A., Boucher, P. O., & Smilek, D. (2016). Mind wandering during lectures II: Relation to academic performance. Scholarship of Teaching and Learning in Psychology, 2(1), 33.

How do you deal with disruptive students? Consider learning contracts.

July 12, 2016

By Louise Chim

It may look like a simple backpack to some, but instructors recognize this as one of the greatest noise-makers ever invented. [Image: TaylorB90, flickr.com/photos/61033722@N08/7827616674, CC BY-NC 2.0, creativecommons.org/licenses/by-nc/2.0/]

In my first year as a professor, I was tasked with teaching introductory psychology. I felt a lot of pressure to make this an enjoyable course for my students because taking this course as a freshman was a life-altering experience for me. It changed the trajectory of my undergraduate major and led me to pursue a Ph.D. in psychology. However, one thing that I struggled with during this first year was the constant chatter in the classroom and the noise from 300 students packing up 5 minutes before the end of class. What disappointed me the most about these “student incivilities” (Nilson & Jackson, 2004) was hearing students come up after class and tell me they were really enjoying the course but had a hard time paying attention because their classmates talked throughout the entire class period.

My fellow introductory psychology colleague, Dr. Martin Smith, and I used student learning contracts the following semester as a way to address incivilities. Student learning contracts can take on different forms. For example, they can be used to set expectations about the completion of activities outside of the classroom, class or office hours attendance, or to set goals for self-directed learning (e.g., Barlow, 1974; Frank & Scharff, 2013; Chan & Wai-tong, 2000). In this post, I focus on using student learning contracts to set expectations for behavior in the classroom.

How to create learning contracts

The basic premise of a learning contract is to create a set of guidelines for students and instructors to adhere to. Rather than preparing a contract in advance for students to sign, we work with our students to create a set of guidelines. Our approach is similar to one described by Nilson and Jackson (2004) as a “student-generated code-of-conduct contract.” Here are the steps we used to implement student learning contracts in our classrooms:

1.  Create a student contract template. Ours has three sections: (1) inappropriate behaviors, (2) appropriate behaviors, and (3) how to decrease inappropriate and increase appropriate behaviors.

2.  Discuss. Organize students into small groups (2-4 students) and ask them to discuss what behaviors bother them or impede their learning in the classroom and what behaviors help facilitate their learning in the classroom.

3.  Decide on the content of the contract. Ask the class to share what they discussed in their smaller groups, write their suggestions on the board, and decide which behaviors should be included in the contract. In our experience, there is generally agreement on the appropriate and inappropriate behaviors (from both the students and instructors), so we decide informally what goes into the contract. However, you could also use a personal response system (e.g., iClickers) to have students vote on which items to include in the contract.

4.  Complete and sign the contract. Each student fills out the contract with the agreed upon bullet points, signs it, and the instructor collects the contracts. In our experience, we have not had students refuse to sign the contract. However, if this is a concern, see Nilson and Jackson (2004) for a suggestion on how to deal with this.

5.  Post the completed contract on the course website. Make the contract easily accessible for students to review.

Why implement student-generated learning contracts?

While a syllabus outlines the expectations instructors have for their students, many students fail to read it (see Raymark & Connor-Greene, 2002 for how to administer a syllabus quiz). Students might also not want to adhere to a set of rules imposed on them (Nilson & Jackson, 2004). By using a student-generated learning contract, students are directly involved in the process of creating classroom guidelines.

There are several reasons why student-generated contracts can help create a positive classroom environment:

  • Discussing and generating ideas with their peers in small groups can enhance cognitive elaboration of appropriate and inappropriate classroom behaviors (Cooper & Robinson, 2000).
  • Students may realize that they are not anonymous in class and that their behaviors impede learning in others.
  • Students realize that the things that bother them in class also bother other students.
  • Students have control over the content of the contract, thereby increasing their motivation to adhere to and enforce it (Barlow, 1974; Frank & Scharff, 2013).
  • It may allow for deeper processing of the content of the contract.

I have outlined reasons why student learning contracts may be beneficial, but they may not be necessary for all types of classes. As with any teaching tool, it really depends on the context. For example, while I use student contracts in my introductory level psychology course, I have not used them in my upper-level courses. As I mentioned previously, my introductory psychology course has over 300 students and the majority of them are freshmen. In contrast, my upper-level courses are much smaller (60-70 students) and are composed of juniors and seniors with more experience in the college environment. Previous research suggests that students feel more anonymous and engage in more disruptive classroom behaviors in larger classes (Alberts, Hazen, & Theobald, 2010; Carbone, 1999; Elder, Seaton, & Swinney, 2010). Therefore, it could be especially beneficial in these larger classes to spend time creating a contract about classroom behaviors and expectations. Moreover, in classes that are predominantly freshmen, using student contracts can help establish expectations about appropriate behavior that carry into future courses.

I hope that sharing one of my challenges as a first-year instructor, and what my colleague and I did to address it with learning contracts, helps you decide whether you want to implement student-generated learning contracts in your class and gives you the tools to create your own.

Bio

Louise Chim received her Ph.D. in psychology from Stanford University where she studied how cultural context shapes the affective states people want to feel. She is currently an Assistant Teaching Professor in psychology at the University of Victoria in Victoria, B.C., Canada where she teaches introduction to psychology, statistical methods, and cultural psychology.

References

Alberts, H.C., Hazen, H.D., & Theobald, R.B. (2010). Classroom incivilities: The challenge of interactions between college students and instructors in the US. Journal of Geography in Higher Education, 34, 439-462. doi:10.1080/03098260903502679

Barlow, R.M. (1974). An experiment with learning contracts. The Journal of Higher Education, 45, 441-449.

Carbone, E. (1999). Students behaving badly in large classes. New Directions for Teaching and Learning, 77, 35-43. doi: 10.1002/tl.7704

Chan, S.W., & Wai-tong, C. (2000). Implementing contract learning in a clinical context: Report on a study. Journal of Advanced Nursing, 31, 298-305. doi: 10.1046/j.1365-2648.2000.01297.x

Cooper, J. & Robinson, P. (2000). The argument for making large classes seem small. New Directions for Teaching and Learning, 81, 5-16. doi: 10.1002/tl.8101

Elder, B., Seaton, L.P. & Swinney, L.S. (2010). Lost in a crowd: Anonymity and incivility in the accounting classroom. The Accounting Educators’ Journal, 20, 91-107.

Frank, T., & Scharff, L.F.V. (2013). Learning contracts in undergraduate courses: Impacts on student behaviors and academic performance. Journal of the Scholarship of Teaching and Learning, 13, 36-53.

Nilson, L.B., & Jackson, N.S. (2004, June). Combating classroom misconduct (incivility) with bill of rights. Paper presented at the 4th Conference of the International Consortium for Educational Development, Ottawa, Ontario. Retrieved from http://www.umfk.edu/pdfs/facultystaff/combatingmis...

Raymark, P.H., & Connor-Greene, P.A. (2002). The syllabus quiz. Teaching of Psychology, 29, 286-288. doi: 10.1207/S15328023TOP2904_05