Stories published by The Babylon Bee were among the most shared factually inaccurate content in almost every survey of this research. https://t.co/x96rPCl1w9
— snopes.com (@snopes) August 16, 2019
In this story, they claim that their monomania is a) necessary and b) supported by research.
Satirical articles like those found on The Babylon Bee frequently showed up in our survey. In fact, stories published by The Bee were among the most shared factually inaccurate content in almost every survey we conducted. On one survey, The Babylon Bee had articles relating to five different falsehoods.
For each claim, we asked people to tell us whether it was true or false and how confident they were in their belief. Then we computed the proportion of Democrats and of Republicans who described these statements as “definitely true.”
If we zero in on The Babylon Bee, a few patterns stand out.
Members of both parties failed to recognize that The Babylon Bee is satire, but Republicans were considerably more likely to do so. Of the 23 falsehoods that came from The Bee, eight were confidently believed by at least 15% of Republican respondents. One of the most widely believed falsehoods was based on a series of made-up quotes attributed to Rep. Ilhan Omar. A satirical article that suggested that Sen. Bernie Sanders had criticized the billionaire who paid off Morehouse College graduates’ student debt was another falsehood that Republicans fell for.
Our surveys also featured nine falsehoods that emerged from The Onion. Here, Democrats were more often fooled, though they weren’t quite as credulous. Nonetheless, almost 1 in 8 Democrats was certain that White House counselor Kellyanne Conway had questioned the value of the rule of law.
Their logic is that because some people believe satire, there is a moral duty on the part of the scolding-douchebag community to label it. And, of course, Republicans are dumb. And they claim to have evaluated the propensity of members of different political parties to be deceived. Because Republicans are dumb.
This revelation, which appears at the website The Conversation, actually raises a lot more questions than it answers.
It is also notable that two of the professors involved in this research are funded by Facebook. [An aside here. It is a curiosity, and a conceit, of the incestuous government-academic world that you are able to remove a conflict of interest simply by declaring a conflict of interest. So you can take money from a major corporation and then sit on an advisory board that regulates that corporation, and you have no conflict of interest so long as you declare it. Fundamentally, I think this is corrupt and the worst sort of Martian logic, because ultimately the academic is going to be put in the position of choosing whether or not to assist their paymaster. There is nothing about academia that gives one hope that it is the province of righteous philosophers who would disadvantage themselves for the sake of the commonweal…rather the opposite.]
Here are some problems with the article:
Issue 1) the charts all have "get the data links" that… Don't link to data. Now maybe you're just dumb and didn't realize your blog software did this. But in the Era of open science a site sponsored by a ton of universities should know better.
— tsrblke (@tsrblke) August 17, 2019
Finally (for now) the lack of a descriptive methodology makes me question this entire project. It's not clear exactly what they showed people as part of their survey. I don't see anything even close to resembling a control variable, there's no demographic data.
— tsrblke (@tsrblke) August 17, 2019
This is, as described functionally an unvalidated web survey. That's… Not science. And one more thing. Who funded it? Your conflict disclosures imply NSF, but I'm doubtful because they aren't explicit. Did snopes fund this research?
— tsrblke (@tsrblke) August 17, 2019
This is how they describe the methodology:
Our study on misinformation and social media lasted six months. Every two weeks, we identified 10 of the most shared fake political stories on social media, which included satirical stories. Others were fake news reports meant to deliberately mislead readers.
We then asked a representative group of over 800 Americans to tell us if they believed claims based on those trending stories. By the end of the study, we had measured respondents’ beliefs about 120 widely shared falsehoods.
But that obfuscates what really happened:
There's no way to assess these responses without knowing the sample size and population characteristics. I've never seen a data link like that before.
— Molly Ratty (@molratty) August 16, 2019
Indeed, the tabular data linked from the tables records a total of 259 responses to a total of 10 questions. This hints that they ran multiple surveys with minuscule samples. For instance, let’s look at the data from their testing of Onion headlines:
Here the sample size in the GOP cells ranges between 5 and 9. Sorry, but five respondents do not constitute a survey.
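To see just how little a cell of five respondents can tell you, consider the confidence interval around such an estimate. The numbers below are hypothetical, not taken from the Snopes data, but the arithmetic is standard: with n=5, even a textbook 95% Wilson score interval spans most of the possible range.

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion.

    The Wilson interval behaves better than the naive normal
    approximation for very small samples like these.
    """
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (center - margin, center + margin)

# Hypothetical cell: 2 of 5 Republican respondents rate a satirical
# headline "definitely true" -- a 40% point estimate.
lo, hi = wilson_interval(2, 5)
print(f"point estimate 40%, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

With five respondents the interval runs from roughly 12% to 77%, which is to say the data are consistent with almost any conclusion you like. That is why a five-person cell cannot support a claim about what "Republicans" believe.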
Having done sample surveys professionally, my first question is which Institutional Review Board (IRB) okayed this research. Under 45 CFR 46, all research conducted on human subjects needs a research plan approved by the IRB of each participating institution, in this case Ohio State University. This is the basic rule:
Q. Is my survey project really human subjects research?
A. Most surveys do meet the federal definition of research. In defining human subjects research activities, two separate determinations must be made. The first determination is whether or not the activity can be considered research. If the answer is “yes,” investigators must follow up with a second determination: Does the research involve human subjects? Both determinations must be made using the definitions of the terms “research” and “human subjects” in 45 CFR 46.102 (a-j):
Research: “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge. Activities which meet this definition constitute research, whether or not they are conducted or supported under a program which is considered research for other purposes.” Investigators unsure of whether an activity constitutes human research should contact their IRB.
Human subjects: “living individual(s) about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information.” Activities in which a researcher collects private, identifiable information about third parties would meet the definition of “human subjects.”
Examples of activities that do not meet these definitions would be (1) ad hoc evaluations of a workshop or symposium (not a systematic investigation), (2) sample surveys on employee satisfaction within one company if the goal is to identify areas for improvement within that company (not designed to contribute to generalizable knowledge) or (3) an analysis of 1880 census records (not information about living individuals). At some institutions, surveys conducted by students for a class project that will not produce a thesis or scholarly publication may be considered to be non-research; check with your institution to learn local practice with regard to student survey projects.
The fact that they are cross-referencing personal information (some means of contact, whether IP address or phone number, plus party affiliation) with something personal (you’re too dumb to identify satire even when you are told it is satire) would require that this survey be conducted with university oversight.
Why is this important? Because, based on what was shared with the public, the research seems horrifically slipshod, and an IRB would have had to approve the research plan. There is no way an IRB (again, speaking from my five years as an IRB member) would have approved the design we are seeing. Either it was not IRB-approved, or Snopes is holding back data in advance of publication. That could be so, but it is rare for a research team to tease its findings without announcing that the detailed study is scheduled for publication. We simply don’t know, in this case.
A third answer is, of course, that this entire survey is simply bullsh** and was either manufactured out of whole cloth or carried out with no research plan (or no effective oversight of the execution of the plan) and on a shoestring budget. For instance, in the Excel table I inserted above, the small sample sizes in both the GOP and Democrat cells strongly imply that these are not 105 individuals but rather no more than 23 people asked on five different occasions.
Note that there is an implication that Ohio State and the National Science Foundation are involved, but there is never any flat-out statement that this is the case. This kind of caginess is very unusual in a profession where advertising grant awards is critical to winning more grant awards.
Maybe I’m wrong. Maybe there are gigabytes of data and a platoon of PhD statisticians powering this survey. Maybe the results are already in press at a refereed, high impact journal (high impact being a very relative term in social science journals) and no one thought fit to mention those facts. But based on what we’ve been shown, the very best that can be hoped for is that this survey is the result of monumental incompetence.
Regardless of how this came to pass, the reason Snopes is after The Babylon Bee is easy to see. They are trying to build a case that will justify Facebook (follow the money here) deplatforming that website. That’s been obvious for a while, and that is why The Babylon Bee is gearing up for a legal battle.