
We should be very careful about naively accepting claims made by the mainstream media – but this is also true of scientific claims. We should approach them intelligently and critically, and not merely accept them on faith.
I cringe every time I read an advocate of science asserting they “believe in science.” Yes, I know they may be responding to an assertion made by supporters of religion or pseudoscience. But “belief” is the wrong word because it implies trust based on faith, and that is not the way science works.
Sure, those asserting this may argue that they have this belief because science is based on evidence, not faith. But that is still a cop-out, because evidence can be used to draw conclusions or make claims that are still not true. In any case, published evidence may be weak, misleading, or poorly interpreted.
Here is an example of this dilemma taken from the Vox article “Hyped-up science erodes trust. Here’s how researchers can fight back.”
The figure is based on data published in Schoenfeld, J. D., & Ioannidis, J. P. A. (2013). Is everything we eat associated with cancer? A systematic cookbook review. American Journal of Clinical Nutrition, 97(1), 127–134.
It is easy to cite a scientific article, for example, as evidence that wine protects one from cancer. Or one showing that it, in fact, causes cancer. Unfortunately, the scientific literature is full of such studies with contradictory conclusions, usually based on real data and statistical analyses that show a significant relationship. But if it is so easy to find studies which can be claimed as evidence of opposite effects, what good is a “belief” in science? All that simple “belief” does is provide a scientific source for one’s own beliefs, an exercise in confirmation bias.
This figure should be a warning to approach published findings in fields like nutritional epidemiology and environmental epidemiology critically and intelligently. One should not simply take them as factual – we should not “believe” them merely because they are published in scientific journals.
Schoenfeld & Ioannidis (2013) say of the studies they investigated that:
“the large majority of these studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence.”
They discuss problems such as the “pressure to publish,” the undervaluing or non-reporting of negative results, and “biases in the design, execution and reporting of studies” because nutritional ingredients “viewed as ‘unhealthy’ may be demonized.”
The authors warn that:
“studies that narrowly meet criteria for statistical significance may represent spurious results, especially when there is large flexibility in analyses, selection of contrasts, and reporting.”
And:
“When results are overinterpreted, the emerging literature can skew perspectives and potentially obfuscate other truly significant findings.”
They warn that these sorts of problems may be:
“especially problematic in areas such as cancer epidemiology, where randomized trials may be exceedingly difficult and expensive to conduct; therefore, more reliance is placed on observational studies, but with a considerable risk of trusting false-positive results.”
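Their point about “large flexibility in analyses” can be made concrete with a simple simulation (a minimal sketch of my own, not taken from the paper – the number of ingredients, analyses, and subjects are all arbitrary assumptions). If an author testing purely null data is free to choose among, say, 20 plausible-looking analyses per ingredient, roughly two-thirds of ingredients will yield at least one nominally “significant” association at p < 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_ingredients = 50   # hypothetical ingredients, none truly associated with cancer
n_analyses = 20      # flexible analysis choices per ingredient (subgroups, models, endpoints)
n_subjects = 200     # participants per group in each simulated comparison
alpha = 0.05

flagged = 0
for _ in range(n_ingredients):
    # Both groups are drawn from the same distribution, so every
    # null hypothesis here is true by construction.
    p_values = [
        stats.ttest_ind(rng.normal(0, 1, n_subjects),
                        rng.normal(0, 1, n_subjects)).pvalue
        for _ in range(n_analyses)
    ]
    # An author free to report the most "interesting" analysis picks the smallest p.
    if min(p_values) < alpha:
        flagged += 1

print(f"Ingredients with at least one 'significant' result: {flagged}/{n_ingredients}")
# Expected: about 1 - (1 - alpha)**n_analyses, i.e. ~64% of ingredients,
# despite there being zero real effects in the simulated data.
```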
These warnings are very relevant when considering recent scientific studies claiming a link between community water fluoridation and cognitive deficits. These studies are heavily promoted by anti-fluoridation activists and, more importantly for scientific readers, by the authors of the studies themselves and their institutions. I have discussed specific problems with the results from the Till group, and their promotion by the authors, in previous posts.
The merging of pseudoscience with science
We seem to make an issue of countering pseudoscience with science, but in the process we are often oblivious to the fact that the two tend to merge – even for professional scientists. After all, we are human, and we all have our own biases to confirm and our jobs to advance.
This is the black-and-white contrast of science with pseudoscience promoted by Skeptics. It’s worth comparing it with the reality of the scientific world.
Do scientists always follow the evidence? Don’t they sometimes start with the conclusion and look for evidence to support it – even clutching at the straws of weak evidence (statistically weak relationships in environmental epidemiological studies which are promoted as proof of harmful effects)?
Oh, for the ideal scientist who embraces criticism. Sure, they are out there, but so many refuse to accept criticism, “circle the wagons” and end up unfairly and emotively attacking their critics. I describe one example in When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics.
Are claims always conservative and tentative? Not when scientists have a career or an institution to promote – and institutions, with their press releases, are a big part of this problem of overpromotion. Unfortunately, in environmental epidemiology some scientists will take weak research results, argue that they prove a cause, and then push for regulation by policymakers. I discuss this in Science is often wrong – be critical, specifically the case of weak scientific data from Till’s research group being used to promote regulatory actions that confirm their biases.
Unfortunately, scientists with biases to confirm find it quite easy to ignore or downgrade the evidence which doesn’t fit. They may even work to prevent publication of countering evidence (see for example Fluoridation not associated with ADHD – a myth put to rest).
Conclusion
I could go on, taking each point in order. But, in reality, I think such absolute claims about science are just not realistic. The scientific world is not that perfect.
In the end, the intelligent scientific reader must approach even the published literature very critically if they are to truly sift the wheat from the chaff.