
Experts complain to funding body about quality of fluoride-IQ research

Science should never be protected from critical and rational discussion. Funding bodies should also be aware of problems in the research they fund. Image credit: The value of experience in criticizing research.

The scientific community was generally critical of the recent Canadian maternal fluoride exposure – child IQ research (see expert reaction to study looking at maternal exposure to fluoride and IQ in children). But this has now taken a more serious turn. Thirty academics and professional experts from health and dental institutions in the US, Canada, UK, Ireland, and Australia have formally complained to the US National Institute of Environmental Health Sciences (NIEHS) about the study.

This is highly important as the NIEHS is the funding body for this research. If it takes seriously the criticisms of the research’s poor quality and bias, these study authors could well lose their funding.

I have covered professional criticism of this study in previous articles (and included some of my own critical comments).

Here is the letter to the NIEHS – readers can download and read it for themselves. I urge you to do this as there may well be a lot of misrepresentation circulating in the near future if anti-fluoride activists launch a campaign to discredit it.

Release of data and methodology requested

The letter requests the NIEHS:

“formally ask the Green authors to release the HIPAA-compliant, Research Identifiable File (RIF) data sets from their study, as well as a complete explanation of their methods and the computer program/codes used in their data management and analysis.”

This request is motivated by the fact that several of the study authors “have declined to respond affirmatively to requests from other researchers for access to the data and analytical methods they used.”

I know that study authors have gone even further – for example, asking that a university department pressure one of their research students to remove social media discussion of the study. Unfortunately, the student did remove his posts – but I can understand the power of institutional pressure.

I think such limiting of critical post-publication discussion is ethically unscientific as it inhibits true peer review. It’s made worse in this situation as the journal has a policy of restricting the publication of critiques of papers to a four-week window after publication. The journal editor did refer to “the implications of this study” being “debated in the public arena” – but it appears that the authors are not exactly keen on that either.

Large range of problems with the Canadian study

The letter lists a number of problems with the Canadian study. These include:

  1. Focusing on a subgroup analysis amid “noisy data”
  2. Modeling and variable anomalies
  3. Lacking data on relevant factors
  4. Omitting crucial findings
  5. Using invalid measures to determine individual exposures
  6. Defining the final study group
  7. Assessing the impact of fluoride exposure
  8. Reporting anomalies
  9. Internal inconsistency of outcomes
  10. Overlooking research that conflicts with the authors’ conclusions

I urge readers who are interested in any of these issues to refer to the letter for details. The letter includes a list of 30 references relevant to these problems and to criticisms of the study by other professionals.

Scientific politics

In Politics of science – making a silk purse out of a sow’s ear I raised the problems presented by scientific politics where poor studies are often promoted by journals, institutions, and authors. Maybe that is to be expected – science is a human activity and therefore subject to human problems like ambition and self-promotion.

Billboards like this misrepresent the Canadian research. But self-promotion and ambition of researchers and authors provide “authoritative” statements that activists use for such fake advertising.

However, in this case, scientific ambition and self-promotion have led to apparently “authoritative” statements by professionals that have been used to feed the scaremongering of anti-fluoride activists. These professionals may argue that they are careful to qualify their statements, but in the end they must bear a lot of responsibility for the sort of completely misleading and false advertising activists have been promoting – advertising that has serious consequences because of its scaremongering.

Scaremongering and scientific integrity

The letter also raises the problem of scaremongering in its final paragraph:

“. . . the Green article could generate unjustified fear that undermines evidence-based clinical and public health practices. So much is at stake. Hundreds of millions of people around the globe—from Brazil to Australia—live in homes that receive fluoridated drinking water. Hundreds of millions of people use toothpaste or other products with fluoride. Many millions of children receive topical fluoride treatments in clinical or other settings. Tooth decay remains one of the most common chronic diseases for children and teens, and fluoride is a crucial weapon against this disease. Decay prevalence could increase if a journal article unnecessarily frightens people to avoid water, toothpaste or other products fortified with fluoride.”

This letter by 30 high-ranking professionals is extremely important. The concerns it raises are very relevant to scientific integrity and hence to scientific credibility. I hope that the NIEHS and similar bodies will take on board their responsibility to ensure the work they fund is credible, expert, scientifically authentic and as free as possible from personal scientific ambitions and biases.


Fake weight-loss study example of wider problem


Another interesting article in The Conversation – Trolling our confirmation bias: one bite and we’re easily sucked in by Will Grant. It underlines a point I have often made – that the sensible reader must approach the scientific literature intelligently and critically.
Grant describes a “scientific” prank which fooled many news outlets that reported the “scientific finding” – and, therefore, many of their readers.

“Last week science journalist John Bohannon revealed that the whole study was an elaborate prank, a piece of terrible science he and documentary film makers Peter Onneken and Diana Löbl – with general practitioner Gunter Frank and financial analyst Alex Droste-Haars – had set up to reveal the corruption at the heart of the “diet research-media complex”.”

The first trick

This was more than just planting a fictitious “science” story:

“To begin the study they recruited a tiny sample of 15 people willing to go on a diet for three weeks. They divided the sample into three groups: one followed a low carbohydrate diet; another followed that diet but also got a 42 gram bar of chocolate every day; and finally the control group were asked to make no changes to their regular diet.

Throughout the experiment the researchers measured the participants in 18 different ways, including their weight, cholesterol, sodium, blood protein levels, their sleep quality and their general well being.”

So – that was the first trick. “Measuring such a tiny sample in so many ways means you’re almost bound to find something vaguely reportable.” As Bohannon explained:

“Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out — the headline could have been that chocolate improves sleep or lowers blood pressure — but we knew our chances of getting at least one “statistically significant” result were pretty good.”
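Bohannon’s lottery-ticket analogy is simply the multiple comparisons problem. As a rough back-of-the-envelope sketch (assuming, purely for illustration, 18 independent outcomes each tested at the conventional 0.05 significance threshold – the study’s measures were of course not independent, so this is only an approximation), the chance of at least one spurious “significant” result is already around 60 per cent:

    alpha = 0.05    # conventional significance threshold for each test
    n_tests = 18    # number of outcome measures in the prank study

    # Chance that at least one test comes up "significant" by luck alone
    p_at_least_one = 1 - (1 - alpha) ** n_tests
    print(f"At least one false positive: {p_at_least_one:.0%}")   # about 60%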

Publication

Now to get credibility they needed to publish in a scientific journal:

“But again, Bohannon chose the path that led away from truth, picking a journal from his extensive list of open access academic journals (more on this below). Although the journal, (International Archives of Medicine), looks somewhat like a real academic journal, there was no peer review. It was accepted within 24 hours, and published two weeks later.”

Now for the publicity

Bohannon then whipped up a press release to bait the media:

“The key, Bohannon stated, was to “exploit journalists’ incredible laziness” – to write the press release so that reporters had the story laid out on a plate for them, as it were. As he later wrote, he “felt a queazy mixture of pride and disgust as our lure zinged out into the world”. And a great many swallowed it whole.

Headlines around the world screamed Has the world gone coco? Eating chocolate can help you LOSE weight, Need a ‘sweeter’ way to lose weight? Eat chocolates! and, perhaps more boringly, Study: Chocolate helps weight loss.”

We should be concerned at the way the news media and reporters handle such matters:

“None did the due diligence — such as looking at the journal, looking for details about the number of study participants, or even looking for the institute Bohannon claimed to work for (which exists only as a website) — that was necessary to find out if the study was legitimate.”

This criticism, unfortunately, applies to almost anything in our news media. It really is a matter of “reader beware.”

Grant summarises the process that leads to such devious “science” stories in the media:

  • we’ve got researchers around the world who have taken to heart the dictum that the quantity of research outputs is more important than the quality
  • we’ve got journal publishers at the high quality end that care about media impact more than facts
  • we’ve got journal publishers at the no-quality end who exploit the desperation of researchers by offering the semblance of publication for a modest sum
  • we’ve got media outlets pushing their journalists ever harder to fill our eyeballs with clickbaity and sharebaity content, regardless of truth
  • and we’ve got us: simple creatures prone to click, read and share the things that appeal to our already existing biases and baser selves.

Problem wider than the diet industry

Bohannon gives his prank as an example of a “diet research-media complex . . . that’s almost rotten to the core.” I agree readers should be far more sceptical of such diet-related science stories. But the problem is far wider than that industry. I think it is particularly relevant to any area where people are ideologically motivated, or where their feelings of inadequacy or danger can be manipulated.

Take, for example, the anti-fluoride movement. I have given many examples on this blog of science being misrepresented, or of poor quality science being published and promoted by this movement. There are examples of anti-fluoride scientists doing poor quality research – often relying on “statistical fairy tales” – of shonky journals being used to get poor quality work published, and of such work making its way through inadequate journal peer-review processes.

These anti-fluoride researchers, and their allied activist groups, commonly use press releases to promote their shonky findings. Social media like Facebook and Twitter are roped in to spread the message even more widely.

There is also a link with big business interests – in this case an active anti-fluoride “natural” health business-research-media complex.

So readers beware – there are people, businesses and ideological interests out there attempting to fool you. And they are not averse to using shonky or false science, biased press releases and lazy journalists to do this.

See also: A rough guide to spotting bad science from Compound Interest.



Spotting Bad Science

Compound Interest has produced another great infographic in their series. This one helps us to spot bad science.

You can download the current version as a PDF here.

It is worth thinking about each of the suggested 12 criteria.

I particularly liked that it advises one to carefully evaluate scientific papers even when they are published in a reputable journal. A good journal and peer review are no guarantee that a paper is faultless or that its findings can be accepted without proper consideration.

Dishonesty of intelligent design “research”

In my recent post Creationists prefer numerology to real scientific research I discussed the “research” approach used by those few scientists who are proponents of intelligent design. And I concluded:

“they ignore the normal honest research approach. They never advance a structured hypothesis, one that is consistent with intelligent design. They therefore never submit such hypothesis to any testing or validation.”


Michael Behe is Professor of Biological Sciences at Lehigh University in Pennsylvania. He works as a senior fellow with the Discovery Institute’s Center for Science & Culture.

Recently I noticed another blatant example of this lack of scientific honesty – the refusal to propose and test their own hypotheses of intelligent design. It’s a quote that seems to be going around the religious apologist blogs at the moment. For example, have a look at True Paradigm: Monday quote, The Big Bad Wolf, Theism and the Foundations of Intelligent Design – Page 13, or Still Speculating After All These Years at Contra Celsum.

It’s a quote from Michael J. Behe‘s book Darwin’s Black Box: The Biochemical Challenge to Evolution – this is the short form.

“The overwhelming appearance of design strongly affects the burden of proof: in the presence of manifest design, the onus of proof is on the one who denies the plain evidence of his eyes.”

Michael J. Behe, Darwin’s Black Box: The Biochemical Challenge to Evolution p 265.

Notice the problem?

Behe is asserting that he has no need to produce any evidence, outline a structured hypothesis, or do anything to test or validate his claim.

He simply has to make an assertion – based on nothing more than his claim of an “overwhelming appearance” (to him). Then it is up to those with different hypotheses to do all the work: to test his assertion (please note – a vague assertion, not a structured hypothesis) and prove him wrong.

Or else he declares his assertion correct by default!
