Researchers want the public to test themselves: https://yourmist.streamlit.app/. Marking each of 20 headlines as real or fake gives the user a set of scores and a “resilience” ranking that compares them to the wider U.S. population. It takes less than two minutes to complete.

The paper

Edit: the article might be misrepresenting the study and its findings, so it’s worth checking the paper itself. (See @realChem’s comment in the thread.)

realChem@beehaw.org
1 point

Hey all, thanks for reporting this to bring some extra attention to it. I’m going to leave this article up, as it is not exactly misinformation or anything otherwise antithetical to being shared on this community, but I do want to note that there are four different sources here:

  • There’s the original study, which designed the misinformation susceptibility test; the arXiv link was already provided, but in case anyone would like a look, the study was indeed peer reviewed and published (as open access) in the journal Behavior Research Methods. As with all science, when reading the paper it’s important to recognize exactly what the authors were trying to do, taking into account that they’re likely using field-specific jargon. I’m not a researcher in the social sciences, so I’m unqualified to have too strong an opinion, but from what I can tell they did achieve what they set out to do with this study. There are likely valid critiques to be made here, but as has already been pointed out in our comments, many aspects of this test were thought out and deliberately chosen, e.g. the choice to use only headlines in the test (as opposed to headlines along with sources or pictures). One important thing to note about this study is that it is currently only validated in the US. The researchers themselves have made it clear in the paper that results based on the current set of questions likely cannot be compared between countries.

  • There’s the survey hosted on streamlit. This is being run by several authors of the original paper, but it is unclear exactly what they’re going to do with the data. The survey makes reference to the published paper, so the data from this survey doesn’t seem to have been used in constructing the original paper (and indeed the original paper discusses several different versions of the test as well as a longitudinal study of participants). Again, taken for what it is, I think it’s fine. In fact, I think the availability of this survey is why this has generated so much discussion and (warranted) skepticism: being able to take the test yourself gives a feel for what is and isn’t actually being measured. I consider this a pretty good piece of science communication / outreach, if nothing else.

  • There is the poll by YouGov. This is separate from the original study. The researchers seem to be aware of it, but as far as I can tell they weren’t directly involved in running the poll, analyzing the data, or writing the article about it. This is not inherently a bad poll, but I do think it’s worth noting that it is not a peer reviewed study; we have little visibility into how they conducted their data analysis, for one thing. From what I can tell, without knowing how they actually did their analysis, the data here looks fine, but (this not being a scientific paper) some of the text surrounding the data is a bit misleading. EDIT: Actually it looks like they’ve shared their full dataset, including how they broke categories down for analysis; it’s available here. Seeing this doesn’t much change my overall impression of the survey, other than to agree with Panteleimon that the demographic representation here is not very well balanced, especially once you start taking the intersections of multiple categories. Doing that, some of their data points are going to have much lower statistical significance than others. My main concern remains that some of the text surrounding the data is misleading. For example, in one spot they write, “Older adults perform better than younger adults when it comes to the Misinformation Susceptibility Test,” which (if their data and analysis can be believed) is true. However, nearby they write, “Younger Americans are less skilled than older adults at identifying real from fake news,” which is a different claim and, as far as I can tell, isn’t well supported by their data. To see the difference, note that when identifying real vs fake news a reader has more to go on than just a headline. The MIST doesn’t test the ability to incorporate all of that context; that’s just not what it was designed to do.

  • Finally, there’s the linked phys.org article. This is the part that seems most objectionable to me. The headline is misleading in the same way I just discussed, and the text of the article does a bad job of making it clear that the YouGov poll is different from the original study. The distinction is mentioned in one paragraph, but the rest of the article blends quotes from the researchers with YouGov polling results, strongly implying that the YouGov poll was run by these researchers (again, it wasn’t). It’s a bit unfortunate that this is what was linked here, since I think it’s the least useful of these four sources, but it’s also not surprising, since this kind of pop-sci reporting will always be much more visible than the research it’s based on. (And to be clear, I could have easily linked this article myself; I probably wouldn’t have even noticed the conflation of different sources if this hadn’t generated so many comments and even a report. Just a good reminder to keep our skeptic hats on when we’re dealing with secondary sources.)

Finally, I’d just like to say I’m pretty impressed by the level of skepticism, critical thinking, and analysis you all have already done in the comments. I think that this indicates a pretty healthy relationship to science communication. (If anything, folks are maybe erring a bit on the side of too skeptical, but I blame the phys.org article for that, since it mixed all the sources together.)

1 point

Throwing phys.org into my “not necessarily reliable sources” list. Sorry about this; I’ll be more careful in the future.

I added “Misleading” to the title.

22 points

Got 20/20, was rewarded with the message “You’re more resilient to misinformation than 100% of the US population!”, and looked for the Fake button, because as a member of the US population that is a mathematical impossibility.

2 points

Rounding error :)

1 point

At that scale, 100% has a margin of error in the millions, or tens of millions, depending on interpretation.

2 points

For most purposes, a 5% margin of error is considered acceptable. A quick web search puts the US population at roughly 336M, so as long as no more than ~17M people are informed, you can still claim that 100% are misinformed.

(But then, if you actually know how many people are informed, your acceptable margin of error shrinks considerably.)
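A minimal sketch of that back-of-the-envelope arithmetic, assuming the ~336M population estimate and the 5% tolerance mentioned above (the raw percentile value is purely hypothetical, just to show how rounding produces the displayed “100%”):

```python
# Back-of-the-envelope only: how a displayed "100%" can coexist with
# millions of exceptions once a 5% margin of error (or simple rounding) is allowed.
us_population = 336_000_000      # rough web-search estimate cited above
margin_of_error = 0.05           # the "acceptable" 5% mentioned above

allowed_exceptions = us_population * margin_of_error
print(f"Up to {allowed_exceptions / 1e6:.1f} million people could be 'informed' "
      f"while the '100% misinformed' claim stays within a 5% margin.")

# Same idea for the score screen: a high but imperfect percentile
# (hypothetical value) rounds up to the "100%" the app displays.
raw_percentile = 99.7
print(f"Displayed resilience ranking: {round(raw_percentile)}%")
```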

1 point

Apparently I’m more resilient to misinformation than 100% of the UK population, but I’m not from the UK; I had to lie on the form because they didn’t have my country. Turns out the real fake news was me.

11 points

May I be honest? The study is awful. It has two big methodological flaws that completely undermine its outcome.

The first one is the absence of either an “I don’t know” answer or a sliding scale for how sure you are of your answer. In large part, misinformation is a result of a lack of scepticism - that is, a failure to say “I don’t know this”. And odds are that you’re more likely to spread claims you’re sure about, be they misinformation or actual information.

The second flaw is over-reliance on geographically relevant information. Compare for example the following three questions:

  1. Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality
  2. International Relations Experts and US Public Agree: America Is Less Respected Globally
  3. Attitudes Toward EU Are Largely Positive, Both Within Europe and Outside It

Someone living in Morocco, the USA, or the EU is far less likely to be misinformed about #1, #2, or #3 respectively than someone living elsewhere. And more than that: due to the first methodological flaw, the study doesn’t handle the difference between “misinformed” (someone who gets it wrong) and “uninformed” (someone who doesn’t believe anything in this regard).

(For me, living in none of those three: the questions regarding the EU are a bit easier to know about, but the other two? Might as well toss a coin.)

4 points

You are totally right. It mostly tests whether you are up to date on the current news stories in the “correct” part of the world.

What makes this worse is that “Government Officials Have Manipulated Stock Prices to Hide Scandals” is classified as a fake news headline. That might be true in the US, but where I live exactly this happened. Or at least they tried and failed. Someone working for a big state pension fund was gambling with the fund’s money, and when she lost a lot of it, she tried manipulation to win the money back, which failed.

The right way to discern fake news from real news (apart from maybe really obvious examples) is to read the article, check the sources and compare with other sources.

In 2013 a headline like “Putin about to start a decade-long war in Europe that will cause a world-wide financial crisis” would have been a ridiculous clickbait fake news headline.

Same with “The whole continent is not allowed to leave their homes for months due to Chinese virus” in 2019.

Or “CIA is spying on all internet users” in 2008.

And yet these things happened.

Because what makes something fake news is not whether it’s outlandish that it could happen; it’s fake news because it hasn’t happened.

7 points

I feel like a lot of people are missing the point when it comes to the MIST. I just very briefly skimmed the paper.

Misinformation susceptibility is being vulnerable to information that is incorrect

  • @ach@feddit.de @GataZapata@kbin.social It seems that the authors are looking to create a standardised measure of “misinformation susceptibility” that other researchers can employ in their studies so that those studies are comparable (the authors say that the ad-hoc measures employed by other studies are not).
  • @lvxferre@lemmy.ml the reason a binary scale was chosen over a Likert-type scale is that:
    1. It’s less ambiguous to participants
    2. It’s easier for researchers to implement in their studies
    3. The results produced are of a similar ‘quality’ to those of the Likert-scale version
  • If the test doesn’t include pictures, a source name, and a lede sentence, yet produces similar results to a test which does, then the simpler test is superior (think about the participants here). The MIST shows high concurrent validity with existing measures, and the paper states a high level of predictive validity (although I’d have to read deeper to talk about the specifics).

It’s funny how the post about a misinformation test was riddled with misinformation because no one bothered to read the paper before letting their mouth run. Now, I don’t doubt that your brilliant minds can overrule a measure produced with years of research and hundreds of participants off the top of your heads, but even if what I’ve said is contradicted by a deeper analysis of the paper, shouldn’t the paper itself be the baseline?

2 points

Thanks for this. I’ll freely admit I’m an idiot and didn’t feel smart enough to understand the paper (see username). Clarification is very welcome.

I added the link to the paper to the body of the post.

2 points

Not saying you’re wrong at all, but I just did the test and it’s kinda funny that the title of this article would certainly have been one of the “fake news” examples.

Obviously the study shows that the test is useful (as you pointed out quite well!), but it’s ironic that the type of “bait” that they want people to recognize as fake news was used as the title of the article for the paper.

(Also, not saying the authors knew about or approved the article title or anything)

0 points

Thank you for this!

I have to say though, it’s really interesting to see the reactions here given the paper’s findings. In the study, while people got better at spotting fake news after the game/test, they got worse at identifying real news, and became more distrustful of news overall. I feel like that’s on display here, with people (somewhat correctly) mistrusting the misleading article, but also (somewhat incorrectly) mistrusting the research behind it.

2 points

That’s a very interesting anecdote, now that you say it

6 points

As a terminally online millennial, I was scared for a second, but I did okay on the test. Then again, I’m 40 and barely even qualify as ‘millennial’, and not at all as ‘young’.

I found the language of the questions was glaringly obvious. What do you think?

2 points

I thought that without any article text it was very difficult. Is the government trying to increase acceptance of genetic modification? Well, if the goal is to improve agricultural output, they probably should be; I would expect that there is a pamphlet or website somewhere that helpfully explains that genetically modified crops are safe to consume and can be made more productive, more resistant to pests or drought, or enhanced in other beneficial ways. That should count, right?

But I marked it as false because based on the tone I assumed the associated article would actually be about modifying babies or something. I scored 20/20, which means I was rewarded for making assumptions about the article without reading it, which is not a good method for determining the truth of an article in real life!

Basically in order to achieve a good score, I stopped thinking about the information itself and started thinking about why the specific headlines were chosen for the test, which means they probably aren’t measuring what they believe they’re measuring.

Edit: Thinking again, maybe the skill of guessing why researchers chose specific headlines is related to the skill of guessing which actual headlines are intentionally misleading. Guessing if the original writer/editor of a headline is trying to trick me is only one step removed from guessing why a researcher would include it in their survey. Apparently this test produces similar results to versions that have ledes, images, bylines, etc., which is interesting. Also in IRL scenarios, when I’m uncertain about an article I go looking for more info, which I didn’t do for this test because their suggestion that “it only takes 2 minutes” seemed to imply a rule against using other sites. I would be interested to see how much accuracy improves when participants are encouraged to take their time and try to find the correct answers.

1 point

I found the language of the questions was glaringly obvious. What do you think?

It’s potentially on purpose, to exploit the fact that fake news often has a certain “discursive pattern”.

1 point

I’m on the tail end of the millennial generation and I scored 19/20. I think I realised what the tells were and went off those. I ended up classifying a real story as fake, so I leaned slightly more to the sceptical side.
