This has been a common theme here as I have campaigned against cherry-picking research papers, relying on confirmation bias, and putting blind faith in peer review as a guarantee of research quality.
In short, I have pleaded with readers to approach published research critically and intelligently.
The article The 10 stuff-ups we all make when interpreting research, from The Conversation, gives some specific advice on how to do this. It is well worth keeping in mind the next time you set out to scan the literature for the current state of scientific knowledge on a subject that interests you.
UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Understanding what’s being said in any new research can be challenging and there are some common mistakes that people make.
Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?
Well, maybe you’re smart and didn’t make any mistakes – but more likely you’re like most humans and accidentally made one of these 10 stuff-ups.
1. Wait! That’s just one study!
You wouldn’t judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.
If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.
The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.
People who blindly accepted Andrew Wakefield’s (now retracted) study – when all the other evidence was to the contrary – fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.
2. Significant doesn’t mean important
Some effects might well be statistically significant, but so tiny as to be useless in practice.
Associations (like correlations) are great for falling foul of this, especially when studies have huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.
One example can be seen in a study of 22,000 people that found a significant (p < 0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the effect was minuscule.
The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1%. At this effect size – and considering the possible costs associated with taking aspirin – it is dubious whether taking it is worthwhile at all.
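To make the large-sample point concrete, here is a minimal sketch of a two-proportion z-test. The counts are purely illustrative (chosen only to match the "less than 1%" difference described above, not the actual trial data): with roughly 11,000 people per group, an absolute difference under one percentage point still comes out as highly "significant".

```python
# Two-proportion z-test with hypothetical counts, illustrating how
# a tiny absolute difference becomes highly significant once n is huge.
import math

events_a, n_a = 104, 11037   # treatment group: events / participants (illustrative)
events_b, n_b = 189, 11034   # control group (illustrative)

p_a, p_b = events_a / n_a, events_b / n_b
abs_diff = p_b - p_a                      # absolute risk difference

pooled = (events_a + events_b) / (n_a + n_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = abs_diff / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

print(f"absolute difference: {abs_diff:.2%}")  # well under 1 percentage point
print(f"z = {z:.2f}, p = {p_value:.1e}")       # "significant" anyway
```

The point is not the exact p-value but the pattern: with samples this large, even tiny differences clear the significance bar, while the practical importance of a sub-1% risk difference is a separate question entirely.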
3. And effect size doesn’t mean useful
We might have a treatment that lowers our risk of a condition by 50%. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002%), then reducing that might be a little pointless.
We can flip this around and use what is called the Number Needed to Treat (NNT).
Under normal conditions, if two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.
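The arithmetic above can be written out directly. This is a sketch of the standard NNT calculation (NNT = 1 / absolute risk reduction), using the hypothetical figures from the text: a lifetime risk of 2 in 100,000, halved by the treatment.

```python
# Number Needed to Treat (NNT) sketch using the figures from the text.
baseline_risk = 2 / 100_000          # 2 in 100,000 get the condition untreated
treated_risk = baseline_risk * 0.5   # a 50% relative risk reduction
arr = baseline_risk - treated_risk   # absolute risk reduction: 1 in 100,000
nnt = 1 / arr                        # people who must be treated per case avoided

print(f"NNT = {nnt:,.0f}")
```

An NNT of 100,000 makes the pointlessness vivid: an entire small city's worth of people takes the treatment so that one person avoids the condition.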
4. Are you judging the extremes by the majority?
Biology and medical research are great for reminding us that not all trends are linear.
We all know that people with very high salt intakes have a greater risk of cardiovascular disease than people with a moderate salt intake.
But hey – people with a very low salt intake may have a high risk of cardiovascular disease too.
The graph is U shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.
5. Did you maybe even want to find that effect?
Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.
There are numerous examples of this confirmation bias, but studies such as this reveal how disturbing the effect can be.
In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.
6. Were you tricked by sciencey snake oil?
You won’t be surprised to hear that sciencey-sounding stuff is seductive. Hey, even the advertisers like to use our words!
But this is a real effect that clouds our ability to interpret research.
In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!
7. Qualities aren’t quantities and quantities aren’t qualities
For some reason, numbers feel more objective than adjective-laden descriptions of things. Numbers seem rational, words seem irrational. But sometimes numbers can confuse an issue.
For example, we know people don’t enjoy waiting in long queues at the bank. If we want to find out how to improve this, we might be tempted to measure waiting periods and then strive to reduce that time.
But in reality you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.
If you asked people to describe how waiting made them feel, you might discover it’s less about how long it takes, and more about how uncomfortable they are.
8. Models by definition are not perfect representations of reality
A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.
But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.
While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their subatomic particles work.
This doesn’t mean people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.
9. Context matters
The US president Harry Truman once whinged that his economists would give advice, but then immediately contradict it with an “on the other hand” qualification.
Individual scientists – and scientific disciplines – might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.
To ponder this we can look at bike helmet laws. It’s hard to deny that if someone has a bike accident and hits their head, they’ll be better off if they’re wearing a helmet.
But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.
Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and now helmet use may in fact be negatively impacting overall public health.
Valid, reliable research can find that helmet laws are both good and bad for health.
10. And just because it’s peer reviewed that doesn’t make it right
Peer review is held up as a gold standard in science (and other) research at the highest levels.
But even if we assume that the reviewers made no mistakes, that there were no biases in the publication policies, and that there was no outright deceit, an article appearing in a peer-reviewed publication just means that the research is ready to be put before the community of relevant experts for challenging, testing and refining.
It does not mean it’s perfect, complete or correct. Peer review is the beginning of a study’s active public life, not the culmination.
And finally …
Research is a human endeavour and as such is subject to all the wonders and horrors of any human endeavour.
Just like in any other aspect of our lives, in the end, we have to make our own decisions. And sorry, appropriate use even of the world’s best study does not relieve us of this wonderful and terrible responsibility.
There will always be ambiguities that we have to wade through, so like any other human domain, do the best you can on your own, but if you get stuck, get some guidance directly from, or at least originally via, useful experts.
This article is part of a series on Understanding Research.
Why research beats anecdote in our search for knowledge
Clearing up confusion between correlation and causation
Where’s the proof in science? There is none
Positives in negative results: when finding ‘nothing’ means something
The risks of blowing your own trumpet too soon on research
How to find the knowns and unknowns in any research
How myths and tabloids feed on anomalies in science
Ken, there is also the problem of dissemination bias.
And in that MMR autism study, though they had data from Black boys, they left it out because it was harder to get birth certificates for them.
Black people would find it harder to get vitamin D.
And I note in the study on mental health that Gluckman asked for a few years ago that NZ is very high, if not the highest, for suicide. We don’t put vitamin D in our milk like the US does, though it may become voluntary soon.
Going by this it seems we do need to check up:
Don’t fall into the trap of rejecting a review because the reviewer may have no status, or may be biased, like a parent who has lost a child. Listen to them and think it out.
Stop putting words in my mouth, Soundhill. I have not suggested “rejecting reviewing because the reviewer may have no status” at all. Where the hell did you get that from?
In fact I am suggesting readers approach the literature critically and intelligently whatever the status of authors or reviewers.
Ken, that video is a sort of review of some NZ research direction. I thought readers may want to reject it because of the status of the presentation.
Sorry, I did not write quite clearly, but also an author working on the MMR/ADHD question you refer to had a child with the trouble, which may have been a reason his study was rejected by reviewers.
Research may beat anecdote, but it often starts with one. http://www.scientificamerican.com/article/statins-may-affect-memory/?WT.mc_id=SA_HLTH_20150407