Tag Archives: scientific literature

Industry-funded translation can introduce bias in selection of studies for scientific review

Image credit: Assessing and addressing bias in systematic reviews

The Fluoride Action Network (FAN) has, over the last decade, paid for the translation of a large number of Chinese-language scientific papers linking high dietary fluoride intake to IQ deficits in children. They have, of course, selected papers to fit their own ideologically-motivated bias. This is perfectly understandable for an activist group. But has this caused a bias in the available English-language sources on this topic? And does this mean recent scientific reviews of the subject unintentionally suffer from selection bias?

I hadn’t considered this possibility before, but it is an issue raised in the recent US National Academies of Sciences (NAS) peer review of the US National Toxicology Program’s (NTP) review of possible neurotoxic effects of fluoride (see Another embarrassment for anti-fluoride campaigners as neurotoxic claim found not to be justified).

Use of FAN sources introduces biased study selection

The NAS peer reviewers are harshly critical of the NTP draft review. A central concern was the way the NTP evaluated the literature on the subject. The NAS peer reviewers say on page 3 of their report:

“The committee had substantive concerns regarding NTP’s evaluation of the human evidence as noted below. The strategy used for the literature search indicated that NTP used FAN as a source to identify relevant literature. The process by which FAN identified and selected studies is unclear, and that uncertainty raises the question of whether the process could have led to a biased selection of studies. Such a concern raises the need for a formal evaluation of any potential bias that might have been introduced into the literature-search process.”

OK, I am not impressed that the NTP used FAN as a source. FAN is hardly a reliable source and its “study tracker” certainly does not pick up anywhere near the full literature available (see Cherry-picking and ring-fencing the scientific literature). But, at first thought, I imagined that the FAN source simply produced a subset of anything that is picked up using a more reliable source like PubMed to do literature searches.

Injection of study bias into English-language scientific literature

But the NAS peer reviewers raise an important problem with reliance on FAN as a source and its effect on the available English-language scientific literature. On page 24 of their report they say:

“. . the process by which FAN identified and selected studies is not clear. FAN identified a number of studies published in Chinese language journals—some of which are not in PubMed or other commonly used databases—and translated them into English. That process might have led to a biased selection of studies and raises the question of whether it is possible that there are a number of other articles in the Chinese literature that FAN did not translate and about which NTP is unaware. NTP should evaluate the potential for any bias that it might have introduced into the literature search process. Possible ways of doing so could include conducting its own searches of the Chinese or other non–English-language literature and conducting subgroup analyses of study quality and results based on the resource used to identify the study (for example, PubMed vs non-PubMed articles). As an initial step in such evaluations, NTP should consider providing empirical information on the pathway by which each of the references was identified. That information would also improve understanding of the sources that NTP used for evidence integration and the conclusions drawn in the monograph.”

In a nutshell, FAN arranged and paid for translation of quite a large number of Chinese papers on this issue (fluoride intake and child IQ deficits). Naturally, they have selected papers supporting their political cause (the abolition of community water fluoridation) and ignored papers which they could not use to that end. It is therefore likely they have introduced into English-language scientific literature a biased selection of Chinese papers because FAN effectively “republished” the translated papers in the journal “Fluoride” – a well-known repository of anti-fluoride material.

Maybe I was wrong to assume anything from FAN would simply be a subset of what is available through more respectable searching sources. But, according to the peer reviewers, some of the translated papers may be picked up when FAN is used as a source of studies but not when PubMed or similar respected sources are used. A warning, though – many of the FAN-promoted translated studies have only been partly translated, maybe only the abstract is available. This is not sufficient for a proper scientific review (see Beware of scientific paper abstracts – read the full text to avoid being fooled).

I am not saying this bias introduction into the English-language scientific literature was intentional, but it is a likely end-result of their actions. Importantly, it is also a likely end-result of funding from big money sources (the “natural”/alternative health industry which funds FAN and similar anti-fluoride and anti-vaccination groups – see Big business funding of anti-science propaganda on health).

So, is this a way that big industry can inject their bias into the available scientific literature? A way to ensure that reviewers will, perhaps unintentionally, carry this industry bias into their own summaries of scientific findings?

Reviewers should make a critical assessment of studies

The FAN-promoted Chinese studies really do not contribute to any rational discussion of issues with CWF because they were all conducted in areas of endemic fluorosis. Ironically, they often compare child IQ in villages where fluoride intake is high with that in villages where fluoride intake is low. It is the low-fluoride villages that are relevant to areas with CWF because their drinking water fluoride concentrations are comparable.

In reality, these Chinese studies could be used to support the idea that CWF is harmless – although that rests on the assumption, inherent in the studies, that low fluoride intake is safe.

So, perhaps the bias introduced to the literature by translation of the FAN-promoted studies really is of no consequence to the evaluation of CWF. However, consideration of reviews like the recent one by Grandjean (2019) indicates there is a tendency to simply extrapolate from high concentration studies to make unwarranted conclusions about CWF. In this case, the tendency is understandable as Grandjean is well known for his opposition to CWF and is often used by FAN to make press statements raising doubts about this health policy (see Special pleading by Philippe Grandjean on fluoride, Some fluoride-IQ researchers seem to be taking in each other’s laundry, and Fluoridation not associated with ADHD – a myth put to rest).

This was also a problem with the draft NTP review, which produced the (unwarranted) conclusion “that fluoride is presumed to be a cognitive neurodevelopmental hazard to humans.” The draft did actually mention that the conclusion “is based primarily on higher levels of fluoride exposure (i.e., >1.5 ppm in drinking water)” and that “effects on cognitive neurodevelopment are inconsistent, and therefore unclear” for “studies with exposures in ranges typically found in the water distribution systems in the United States (i.e., approximately 0.03 to 1.5 ppm according to NHANES data).” But, of course, it is the unwarranted conclusion that gets promoted.

Conclusions

Reviewers need to be aware of this and other ways activist groups and big business can inject bias into the scientific literature.

This problem underlines the responsibility reviewers have to recognise all the ways that bias can enter their selection of studies. It also means they should make every effort to include negative studies (those not supporting the effect they may personally prefer) as well as positive ones. They also need to consider all the findings, positive and negative, reported in the individual studies they review.

In cases like the FAN-promoted Chinese studies, there is an obligation to at least note the possibility of bias introduced by activist- and industry-funded translations. Better still, reviewers should undertake their own independent searches for all studies on the subject and arrange for translations where necessary.

Above all, reviewers should critically consider the quality of the studies they include in their reviews and not simply rely on their own confirmation bias.

Similar articles

Making sense of scientific research

This has been a common theme here as I have campaigned against cherry-picking research papers, relying on confirmation bias and putting blind faith in peer-review as a guarantee of research quality.

In short I have pleaded for readers to approach published research critically and intelligently.

The article The 10 stuff-ups we all make when interpreting research from The Conversation gives some specific advice on how to do this. Well worth keeping in mind when you next set out to scan the literature to find the current state of scientific knowledge on a subject that interests you.


UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Understanding what’s being said in any new research can be challenging and there are some common mistakes that people make.

Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?

Well maybe you’re smart and didn’t make any mistakes – but more likely you’re like most humans and accidentally made one of these 10 stuff-ups.

1. Wait! That’s just one study!

You wouldn’t judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.

If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.

The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.

People who blindly accepted Andrew Wakefield’s (now retracted) study – when all the other evidence was to the contrary – fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.

2. Significant doesn’t mean important

Some effects might well be statistically significant, but so tiny as to be useless in practice.

You know what they say about statistics? Flickr/Frits Ahlefeldt-Laurvig, CC BY-ND

Associations (like correlations) are prone to this, especially when studies have huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.

One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.

The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1%. At this effect size – and considering the possible costs associated with taking aspirin – it is dubious whether it is worth taking at all.
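The gap between statistical significance and practical importance can be sketched numerically. The figures below are purely illustrative (not taken from the study mentioned above): with roughly 22,000 participants, an absolute risk difference of well under 1% can still produce a vanishingly small p-value.

```python
import math

# Illustrative numbers only: ~22,000 participants split evenly into
# two arms, with heart attacks counted in each arm.
n_per_arm = 11_000
attacks_placebo = 189   # ~1.7% of the placebo arm (hypothetical)
attacks_aspirin = 104   # ~0.9% of the aspirin arm (hypothetical)

p1 = attacks_placebo / n_per_arm
p2 = attacks_aspirin / n_per_arm
abs_diff = p1 - p2      # absolute risk difference: well under 1%

# Two-proportion z-test using the pooled proportion
pooled = (attacks_placebo + attacks_aspirin) / (2 * n_per_arm)
se = math.sqrt(pooled * (1 - pooled) * (2 / n_per_arm))
z = abs_diff / se
p_value = math.erfc(z / math.sqrt(2))  # two-sided p-value

print(f"absolute risk difference: {abs_diff:.4f}")  # 0.0077
print(f"p-value: {p_value:.2g}")  # tiny, despite the small difference
```

The point: sample size drives the p-value down, but it is the absolute difference (and the costs of treatment) that determines whether the effect matters in practice.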

3. And effect size doesn’t mean useful

We might have a treatment that lowers our risk of a condition by 50%. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002%), then reducing that might be a little pointless.

We can flip this around and use what is called the Number Needed to Treat (NNT).

If, in normal conditions, two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.
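The arithmetic behind that example is simple enough to sketch directly: NNT is just the reciprocal of the absolute risk reduction.

```python
# Illustrative figures from the text: a treatment that halves a
# lifetime risk of 2 in 100,000.
baseline_risk = 2 / 100_000          # untreated lifetime risk
relative_risk_reduction = 0.5        # treatment halves the risk

absolute_risk_reduction = baseline_risk * relative_risk_reduction
nnt = 1 / absolute_risk_reduction    # people treated per case prevented

print(round(nnt))  # 100000
```

A 50% relative reduction sounds impressive, but at this baseline risk you would have to treat 100,000 people to prevent a single case.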

4. Are you judging the extremes by the majority?

Biology and medical research are great for reminding us that not all trends are linear.

We all know that people with very high salt intakes have a greater risk of cardio-vascular disease than people with a moderate salt intake.

Too much or too little salt – which is worse? Flickr/JD Hancock, CC BY

But hey – people with a very low salt intake may also have a high risk of cardio-vascular disease too.

The graph is U shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.

5. Did you maybe even want to find that effect?

Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.

There are numerous examples of this confirmation bias, but studies such as this one reveal how disturbing the effect can be.

In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.

6. Were you tricked by sciencey snake oil?

A classic – The Turbo Encabulator.

You won’t be surprised to hear that sciencey-sounding stuff is seductive. Hey, even the advertisers like to use our words!

But this is a real effect that clouds our ability to interpret research.

In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!

7. Qualities aren’t quantities and quantities aren’t qualities

For some reason, numbers feel more objective than adjective-laden descriptions of things. Numbers seem rational, words seem irrational. But sometimes numbers can confuse an issue.

For example, we know people don’t enjoy waiting in long queues at the bank. If we want to find out how to improve this, we could be tempted to measure waiting periods and then strive to reduce them.

But in reality you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.

If you asked people to describe how waiting made them feel, you might discover it’s less about how long it takes, and more about how uncomfortable they are.

8. Models by definition are not perfect representations of reality

A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.

But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.

While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their sub-elements work.

This doesn’t mean people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.

9. Context matters

The US president Harry Truman once whinged about his economists giving advice and then immediately contradicting it with an “on the other hand” qualification.

Individual scientists – and scientific disciplines – might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.

To ponder this we can look at bike helmet laws. It’s hard to deny that if someone has a bike accident and hits their head, they’ll be better off if they’re wearing a helmet.

Do bike helmet laws stop some people from taking up cycling? Flickr/Petar, CC BY-NC

But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.

Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and now helmet use may in fact be negatively impacting overall public health.

Valid, reliable research can find that helmet laws are both good and bad for health.

10. And just because it’s peer reviewed that doesn’t make it right

Peer review is held up as a gold standard in science (and other) research at the highest levels.

But even if we assume that the reviewers made no mistakes or that there were no biases in the publication policies (or that there wasn’t any straight out deceit), an article appearing in a peer reviewed publication just means that the research is ready to be put out to the community of relevant experts for challenging, testing, and refining.

It does not mean it’s perfect, complete or correct. Peer review is the beginning of a study’s active public life, not the culmination.

And finally …

Research is a human endeavour and as such is subject to all the wonders and horrors of any human endeavour.

Just like in any other aspect of our lives, in the end, we have to make our own decisions. And sorry, appropriate use even of the world’s best study does not relieve us of this wonderful and terrible responsibility.

There will always be ambiguities that we have to wade through, so like any other human domain, do the best you can on your own, but if you get stuck, get some guidance directly from, or at least originally via, useful experts.


This article is part of a series on Understanding Research.

Further reading:
Why research beats anecdote in our search for knowledge
Clearing up confusion between correlation and causation
Where’s the proof in science? There is none
Positives in negative results: when finding ‘nothing’ means something
The risks of blowing your own trumpet too soon on research
How to find the knowns and unknowns in any research
How myths and tabloids feed on anomalies in science

Approaching scientific literature sensibly


We all suffer more or less from confirmation bias – it is just human.  So it’s natural for people to be selective, and to indulge in some cherry-picking and biased interpretation, when quoting scientific literature to support an idea they promote.


In the scientific community, peer review and the continual submission of ideas to scrutiny by colleagues help keep this under control. But it can really get out of hand when political activists use the literature to support their claims.

I have got used to anti-fluoride commenters on social media simply citing a paper or even providing a bare link, without comment, as if this somehow makes their claims irrefutable. Perhaps, in truth, they have not even read the paper they cite, or understood it, so do not feel confident discussing it.

But one tactic is particularly lazy – and stupid: simply giving a Google Scholar search as proof. Lately I have been presented with links to such searches to argue that fluoridation is toxic – just a search for “fluoride toxicity.”

This is what that search produces – 234,000 hits:

Fluoride toxicity – 234,000 results


Sounds good to the uninitiated, I guess. It does seem to produce a large number. But does that mean anything?

What about searching for water toxicity? This produces over 2 million hits. Are we to assume from this that water is toxic – seemingly 10 times more toxic than fluoride?

Water toxicity – 2,190,000 results


Yes, I know some social media do not offer much space for commenting but that should not be an excuse for such silly citations.
