Every now and then, people trot out a scientific study from 2011 called “Story spoilers don’t spoil stories” (the Good Spoilers Paper, for short), which claimed that spoilers generally improved readers’ enjoyment of stories. This study got lots of media attention. Unfortunately, it’s probably wrong.
This is actually the tip of an iceberg. Scientific journals are flooded with studies whose conclusions or results are wrong, and for many reasons. Some involve misbehavior, such as data falsification, researcher bias, and “p-hacking”1. But most false results don’t arise from misbehavior, and I don’t think the spoilers paper was biased or botched. To understand why this study was probably wrong, we need to get out our sonar and unveil the shape of that iceberg.
As researchers have been pointing out since 2005, there’s a hidden structural problem that most researchers ignore: the question of “how likely is your hypothesis?” Because nothing is absolute and certain in our messy world, mainstream statistics are designed to admit a small rate of false positives2. In fields like psychology, findings are considered publishable if your data would have a ≤5% chance of occurring by luck alone, in a world where the effect you’re looking for doesn’t exist. “If I get heads five times in a row, that’s enough data to conclude that the coin is weighted.” But here’s the rub: what if you had a hundred coins, and only one of them is weighted? A fair coin lands heads five times in a row about 3% of the time (1 in 32), so roughly 3 of the 99 fair coins will pass your threshold, in addition to the 1 weighted coin. You’ve now judged 4 coins to be weighted, and 75% of those judgments are wrong.
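If you’d rather see the arithmetic run than take it on faith, here’s a minimal simulation of the hundred-coin scenario. The coin counts, and the assumption that the weighted coin always lands heads, are just the example’s implicit setup:

```python
import random

# Sanity check of the hundred-coin example above. Assumptions: 99 fair
# coins plus 1 weighted coin that (as the example implicitly assumes)
# always lands heads. A coin is judged "weighted" if it comes up heads
# five times in a row.
TRIALS = 100_000

false_positives = 0
for _ in range(TRIALS):
    for _ in range(99):  # the fair coins
        if all(random.random() < 0.5 for _ in range(5)):
            false_positives += 1

true_positives = TRIALS  # the weighted coin passes every run by assumption
total = false_positives + true_positives

print(f"average 'weighted' verdicts per run: {total / TRIALS:.2f}")
print(f"fraction of those verdicts that are wrong: {false_positives / total:.0%}")
# Prints roughly 4.09 verdicts per run, of which ~76% are wrong,
# close to the "4 coins, 75% wrong" figure in the text.
```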
Issues in scientific culture compound this, particularly publication bias: only the most exciting and novel results get into high-prestige, high-visibility journals. Think about this: “exciting and novel” means “unexpected and unlikely.” In coin terms, the more surprising the hypothesis, the fewer weighted coins there are in the pile, and the larger the share of “weighted” verdicts that are flukes. As a result, higher-prestige, mass-media-worthy research is especially likely to be wrong.
The Good Spoilers Paper was published in 2011 in the journal Psychological Science, one of the top journals in psychology. But a large 2015 replication effort found that few studies from the big-name psychology literature held up. It re-ran studies published in the year 2008, so it didn’t replicate the Good Spoilers Paper directly. Still, of the social-psychology papers in Psychological Science, only 29% (7/24) of the experiments produced the same conclusions when re-run!3 In other words, a sample of very similar research confirmed the results of fewer than a third of its studies.
Between publication bias and the myriad ways to get a false positive, if a study has counter-intuitive results and it appeared in a high-profile psychology journal in the last ~decade, it’s probably wrong.4 Remember: “counter-intuitive” means “there is a lot of counter-evidence.”5 If most of the evidence points one way, and a little bit of evidence points the other way, the outlier is probably a statistical fluke.
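For the curious, the 2005 argument alluded to above can be captured with a standard back-of-the-envelope calculation: treat the fraction of tested hypotheses that are actually true as a prior, and compute the chance that a statistically significant result reflects a real effect. A sketch follows; the 0.8 power figure is a conventional target, not a number from the spoilers literature:

```python
def prob_real(prior: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Chance that a statistically significant finding reflects a true effect.

    prior: fraction of tested hypotheses that are actually true
    alpha: false-positive rate (the 5% threshold from the text)
    power: chance of detecting an effect that really exists
           (0.8 is a conventional target; many studies fall short of it)
    """
    true_pos = power * prior          # real effects, correctly detected
    false_pos = alpha * (1 - prior)   # null effects that pass anyway
    return true_pos / (true_pos + false_pos)

# The lower the prior (i.e., the more counter-intuitive the hypothesis),
# the less a significant result means:
for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:.0%}: P(real | significant) = {prob_real(prior):.0%}")
# prior 50%: P(real | significant) = 94%
# prior 10%: P(real | significant) = 64%
# prior 1%: P(real | significant) = 14%
```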
Moreover, other researchers have tried to extend the findings of the Good Spoilers Paper, using more fine-grained measures of enjoyment. Lo and behold, they found that unspoiled stories were more fun, suspenseful, moving, and enjoyable.
There may even be specific ways that spoilers hurt enjoyment which the original experiment would’ve missed. But others have explained that well already. My little soapbox is here to tell you not to believe the research that says spoilers increase enjoyment, because science is messy.
1. Analyzing the data many different ways, and reporting only the analysis that produces the outcome you want.
2. Finding an effect that isn’t there; the opposite of a “miss,” where you fail to notice a true effect.
3. There’s nothing special about social psych or Psychological Science here; the 2015 replication study only had a 36% replication rate overall.
4. Thankfully, the field has recognized this problem, and is trying to adopt new statistical standards that are less susceptible to false positives. Also, psychology isn’t the only culprit; it’s just the one I know the most about.
5. And boy howdy is the Good Spoilers Paper counter-intuitive. Do you know many people who consistently enjoy spoilers?