Category Archives: science

An open letter to Paul Connett and the anti-fluoride movement

Paul Connett and Vyvyan Howard have, through the local Fluoride Free New Zealand activist group, published an open letter addressed to NZ scientists and educators (see An Open Letter To NZ Scientists And Educators). It is strange to encourage scientific exchanges through press releases but if they are seriously interested in an exchange of informed scientific opinion on the research they mention I am all for it.

In fact, I renew my offer to Paul Connett for a new exchange on the new relevant research along the lines of the highly successful scientific exchange we had in 2013/2014, summarised in Connett & Perrott (2014) The Fluoride Debate.

Connett and Howard say they felt “let down” by the reception they received in their 2018 visit. But they should realise this sort of ridicule is inevitable when a supposedly scientific message is promoted by activist fringe groups with known funding by big business (in this case the “natural”/alternative health industry). The science should be treated more respectably and discussed in a proper scientific forum or via a proper scientific exchange rather than political style activist meetings.

It is this sort of respectable, informed and open scientific exchange I am offering to Paul Connett and Vyvyan Howard.

Connett and Howard argue that there has recently been “a dramatic change in the quality of these [fluoride] studies.” I agree that new research occurs all the time and there is plenty of scope for upgrading the scientific exchange we had in 2013/2014 to cover that new research. Consideration of the new research requires the objective, critical and intelligent consideration scientists are well used to, and this is not helped by activist propaganda meetings. So I encourage Connett and Howard to accept my offer. After all, if they are confident in their own analysis of this research what do they have to lose?

Inaccuracies in “open letter”

One can see an “Open letter” as displaying a willingness to enter into a proper scientific exchange. However, Connett and Howard’s open letter includes inaccuracies and misinformation on the new research, which simply demonstrates that a one-sided presentation cannot present the research findings properly.

For example, they misrepresent the 2014 New Zealand fluoridation review of Eason et al (2014), Health effects of water fluoridation: A review of the scientific evidence. They even mistake the authors (not Gluckman & Skegg as they claim) and misrepresent the small mistake made in the summary, which was later corrected. That attitude does not bode well for the proper consideration of the research.

Connett and Howard concentrate on new research relating child IQ to fluoride intake but they completely ignore the fact that all the research comparing IQ in fluoridated and unfluoridated areas shows absolutely no effect. I have summarised the results for the three papers involved in this table.

Instead, they concentrate on a few extremely weak relationships reported in a few papers. But even here they get this wrong – for example, they say there is “a loss of about 4 IQ points in offspring for a range of 1 mg/liter of fluoride in mother’s urine.” The paper they refer to (Green et al 2019) actually found no statistically significant relationship between child IQ and maternal urinary fluoride for all children considered. The relationship Connett and Howard mention was actually for male children (there was no relationship for female children or for all children) and was very weak. These sorts of weak relationships are commonly found in epidemiological research and are usually meaningless. In this case, Connett and Howard have simply cherry-picked one value and misrepresented it as applying to all children.
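The danger of such subgroup-only findings is easy to demonstrate with a little simulation. The Python sketch below (assuming numpy and scipy are installed; the variables are purely illustrative and not data from any of the studies discussed) generates completely random “exposure” and “outcome” data and counts how often one of two arbitrary subgroups shows a “significant” correlation even though the overall correlation is not significant:

```python
import numpy as np
from scipy import stats

# Sketch: how often does pure noise produce a "significant" subgroup
# correlation while the overall correlation is not significant?
# All data are random; any "effect" found is a false positive.
rng = np.random.default_rng(0)

n, sims = 400, 2000
subgroup_only = 0
for _ in range(sims):
    exposure = rng.normal(size=n)        # hypothetical exposure measure
    outcome = rng.normal(size=n)         # no true relationship, by construction
    group = rng.integers(0, 2, size=n)   # two arbitrary subgroups

    _, p_all = stats.pearsonr(exposure, outcome)
    _, p_g0 = stats.pearsonr(exposure[group == 0], outcome[group == 0])
    _, p_g1 = stats.pearsonr(exposure[group == 1], outcome[group == 1])

    if p_all >= 0.05 and min(p_g0, p_g1) < 0.05:
        subgroup_only += 1

print(f"Subgroup-only 'effects' in {100 * subgroup_only / sims:.1f}% of simulations")
```

A noticeable fraction of the simulated null datasets show exactly the pattern being promoted: no overall effect, but a “significant” result in one subgroup.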

Both the Green et al (2019) and Till et al (2020) papers Connett and Howard refer to suffer from selecting a few weak statistically significant relationships and ignoring the larger number of non-significant relationships they found in the data they investigated. Connett and Howard also completely ignored the new studies that don’t fit their claims. For example, that of Santa-Marina et al (2019), Fluorinated water consumption in pregnancy and neuropsychological development of children at 14 months and 4 years of age, which showed an opposite, positive relationship of child IQ with maternal urinary fluoride. Similarly, they ignored the large Swedish study of Aggeborn & Öhman (2020), The Effects of Fluoride in the Drinking Water, showing no effect of fluoride on IQ but positive effects on oral health and employment prospects in later life.

In conclusion, I reiterate that genuine open scientific exchanges do not take place via press release and activist meetings. However, the fact that Connett and Howard have issued an “Open Letter” could be interpreted as inviting others to participate in a proper exchange. I endorse that concept and offer Connett and Howard space for a free and open exchange on the new research at this blog site.

Similar articles

 

Censorship: Thinking you are right – even if you’re wrong

I posted this video four years ago – but repost it now because the message is still very valid (see Are you really right? and Warriors, scouts, Trump’s election and your news media). In fact, it’s more valid today than it was four years ago. Things have got worse – far worse than I expected they would.

To recap – Julia describes the two different mindsets required in fighting a war:

  • The  “Warrior mindset” – emotively based and fixated on success. Not interested in stopping to think about the real facts or rationally analyse the situation.
  • The “Scout mindset” – objective and rational, ready to consider the facts (in fact, searching them out) and logically consider possibilities.

Unfortunately the “warrior mindset” – emotively based and not considering the facts or rationally analysing the situation – may be required in war but now seems to be the standard approach in politics (maybe even in science for some people). The “scout mindset” is unfortunately rare – and actually disapproved of when it occurs. Things have got worse in that respect.

There is another unfortunate dimension to this. Just as people have become convinced that they themselves are the fountains of truth, and their opponents fountains of untruth, there is now the drive to censor. Many people are arguing that people they disagree with should be denied access to the media, especially social media. And they jump on the media when their opponents are given space.

Hell, the media itself is encouraging this. Nowadays we have silicon valley corporations, which control social media, determining who should have a voice. And some people who should know better are applauding them for this. Applauding selfishly because they want to see their discussion opponents denied a voice. This is not just cowardly but extremely short-sighted of them – what will they do when they, in turn, are denied a voice by these very same corporations?

Whatever happened to the old adage – “I Disapprove of What You Say, But I Will Defend to the Death Your Right to Say It?” I grew up thinking this was a widely approved ideal but now find the people who I had considered “right-minded,” “free thinkers,” and “liberals” have almost completely abandoned it. They seem to be the first in line to impose censorship and the last to complain when censorship happens.

The folly of censorship

Censorship, the attempt to close down a rational discussion, hardly gives the impression that the supporters of censorship have truth on their side. If anything it suggests they do not have the arguments and this, in effect, hands victory to the ones censored.

It is short-sighted and cowardly and does not close down the discussion – it simply moves it elsewhere. And usually to a forum in which the instigators or supporters of the censorship have little input. 

Censorship usually hands the moral high ground to the ones being censored – and cheaply as they have gained this without even having their ideas (rational or irrational) tested in meaningful discussion.

There is a lot of truth in the old saying that sunlight is the best disinfectant – and refusal to allow this only encourages bad ideas to proliferate unchecked.

That is not to say all the ideas being censored are “bad” – many plainly aren’t. But will those who censor or support censorship (or indulge in self-censorship –  another common problem with people I used to respect) ever know? In the end, the censors and their supporters simply end up living in their own silo or bubble, seemingly oblivious to many of the ideas circulating in society or the world. Ideas they could perhaps have learned something from. Even if the ideas one wishes to censor are based on misinformation or misunderstanding the exercise of debating them can teach things to both sides. Censorship, and especially self-censorship, prevents self-development.

Arbitrary or ideologically motivated censorship?

Often censorship by social media appears arbitrary, perhaps driven by algorithms. For example, Twitter recently blocked the official account of the Russian Arms Control delegation in Vienna, engaged in negotiations on the Open Skies Treaty and other important issues. It was later reinstated – without explanation or apology – but followers were lost. Arbitrary or not, should such an important body, involved in negotiations critical for the whole planet, be censored?

Are those algorithms innocent or objective? These days we see social media like Twitter and Facebook employing staff with political backgrounds. Even people who have previously worked in, or still work in, bodies like the Atlantic Council which is connected with NATO. And the revolving door by which ex-politicians and intelligence staff get employed in the mainstream media is an open secret. These people openly describe information coming from “the other side” as “disinformation,” “fake news,” or “state-supported propaganda” so have no scruples about censoring it or otherwise working to discredit it (for example labelling news media as state-controlled – but only for the “other side” – eg RT, but not the BBC or Voice of America).

Ben Nimmo, a member of the Atlantic Council and well known for his aggressive political views, recently announced his move to Facebook.

This biased approach to information or social discussion is strongly driven by an “official” narrative. A narrative promoted by military blocs, their governments and their political leaders. But even unaffiliated persons approaching social discussion can be, and usually are, driven by a narrative. A narrative that they often strongly and emotionally adhere to. Julia Galef’s “warrior mindset.” That is only human. It’s a pity, but probably few participants in social discussions get beyond this mindset and adopt the far more useful (for obtaining objective truth) “scout mindset” which is objective and rational, ready to consider the facts (in fact, searching them out) and logically consider possibilities.

But let’s face it. If you support censorship, or even instigate censorship in social media you control, how likely is it that you can get beyond the “warrior mindset?” The “scout mindset” by necessity requires open consideration of views and facts you may initially disagree with. Censorship, especially self-censorship, prevents that. It prevents personal growth.

Similar articles

 

Embarrassing knock-back of second draft review of possible cognitive health effects of fluoride

We have come to expect exaggeration of scientific findings in media reports and institutional press releases. But it can also be a problem in original scientific publications where findings are reported in an unqualified or exaggerated way. Image Credit: Curbing exaggerated reporting

This is rather embarrassing for a US group attempting to get the science right about possible toxic effects of fluoride. It’s also embarrassing for the anti-fluoride activists who have “jumped the gun” and been citing the group’s draft review as if it was reliable when it is not.

The US National Academies of Sciences, Engineering, and Medicine (NAS) have released their peer-review of the revised US National Toxicity Program (NTP) draft on possible neurodevelopmental effects of fluoride (see Review of the Revised NTP Monograph on the Systematic Review of Fluoride Exposure and Neurodevelopmental and Cognitive Health Effects).

This is the second attempt by the NTP reviewers to get acceptance of their draft and it has now been knocked back by the NAS peer reviewers for a second time.

Diplomatic but damning peer-review

Of course, the NAS peer reviewers use diplomatic language but the peer review is quite damning. It criticises the NTP for ignoring some of the important recommendations in the first peer review. One quite critical omission was the failure to respond to the request that the NTP explain how the monograph can be used (or not) to inform water fluoridation concentrations. The second NAS peer review firmly states that the NTP:

“should make it clear that the monograph cannot be used to draw any conclusions regarding low fluoride exposure concentrations, including those typically associated with drinking-water fluoridation.”

And:

“Given the substantial concern regarding health implications of various fluoride exposures, comments or inferences that are not based on rigorous analyses should be avoided.”

It seems to me there is some internal politics involved and some of the NTP authors may be promoting their own, possibly anti-fluoride, agenda. Certainly, the revised NTP draft monograph continues to obfuscate this issue. It continues to state that “fluoride is presumed to be a cognitive neurodevelopmental hazard to humans” – a clause which anti-fluoride campaigners consistently quote out of context. Yes, it does state that this is based on findings demonstrating “that higher fluoride exposure (e.g., >1.5 mg/L in drinking water) is associated with lower IQ and other cognitive effects in children.” But this is separated from the other fact that the findings on cognitive neurodevelopment for “exposures in ranges typically found in drinking water in the United States (0.7 mg/L for optimally fluoridated community water systems)” are “inconsistent, and therefore unclear.”

Monograph exaggerates by enabling unfair cherry-picking

So, you see the problem. The draft NTP monograph correctly refers to IQ and other cognitive effects in children exposed to excessive levels of fluoride. The draft also correctly refers to that lack of evidence for such effects at lower fluoride exposure levels typical of community water fluoridation. But in different places in the document.

This enables activist cherry-picking to support an anti-fluoride agenda and that is a fault of the document itself. It should clearly state that the monograph should not be used to draw any conclusion at these low exposure levels. This is strongly expressed in the peer-reviewers’ comments.

I find the blanket “presumed to be a hazard for humans” quite misleading. For example, no one says that calcium is “presumed to be a cardiovascular hazard to humans.” Or that selenium is “presumed to be a cardiovascular or neurological hazard to humans.” Or what about magnesium – would you accept that it is a “presumed neurological hazard to humans?” Would you accept that iron is a “presumed cardiovascular, cancer, kidney or erectile dysfunction hazard to humans?” Yet all those problems have been reported for humans at high intake levels of these elements.

No, we sensibly accept that various elements and microelements have beneficial, or even essential, effects in humans at reasonable intake levels. Then we sensibly warn that these same elements can be harmful at excessive intake. To proclaim that any of these elements is “presumed” to be hazardous – without clearly saying “at excessive intake levels” – is simply distorting or exaggerating the data.

What does “presumed” mean?

A lot of readers find the use of “presumed” strange. But its meaning is related to the levels of evidence found by reviewers.

No, don’t believe those anti-fluoride activists who falsely claim that “presumed” is the highest level of evidence and that the finding should be treated as factual. They are simply wrong.

Some idea of the word’s use is presented in this diagram from the NTP revised draft monograph.

So “presumed” means that the evidence for the effect is moderate. That the effect is not factual or known. But as further evidence comes in the ranking of fluoride as a hazard may increase, or decline.

As the monograph bases this “presumed” rating solely on evidence from areas of endemic fluorosis where fluoride intake levels are high it is correct to avoid stating the effects as factual. For example, consider these images from areas of endemic fluorosis in China (taken from a slide presentation by Xiang 2014):

Clearly, people in these areas suffer a range of health effects related to the high fluoride intake. The cognitive effects like IQ loss from these areas could result from these other health effects, not directly from fluoride (although excessive fluoride intake leads to the health effects).

So we can “presume” that fluoride (in areas of endemic fluorosis where fluoride intake is excessive) is a “cognitive neurodevelopmental hazard for humans” but we cannot factually state that the neurodevelopmental effects are directly caused by fluoride. That would require further scientific work to elucidate the specific mechanisms involved in creating that effect.

Similar articles

The promotion of weak statistical relationships in science

Image credit: Correlation, Causation, and Their Impact on AB Testing

Correlation is never evidence for causation – but, unfortunately, many scientific articles imply that it is. While paying lip service to the correlation-causation mantra, some (possibly many) authors end up arguing that their data is evidence for an effect based solely on the correlations they observe. This is one of the reasons for the replication crisis in science where contradictory results are being reported. Results which cannot be replicated by other workers (see I don’t “believe” in science – and neither should you).

Career prospects, institutional pressure and the need for public recognition will encourage scientists to publish poor quality work that they then use to claim they have found an effect. The problem is that the public, the news media and even many scientists simply do not properly scrutinise the published papers. In most cases they don’t have the specific skills required for this.

There is nothing wrong with doing statistical analyses and producing correlations. However such correlations should be used to suggest future more meaningful and better-designed research like randomised controlled trials (see Smith & Ebrahim 2002, Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers). They should never be used as “proof” for an effect, let alone argue that the correlation is evidence to support regulations and advise policymakers.

Hunting for correlations

However, researchers will continue to publish correlations and make great claims for them because they face powerful incentives to promote even unreliable research results. Scientific culture and institutional pressures provide expectations demanding academic researchers produce publishable results. This pressure is so great they will often clutch at straws to produce correlations even when the initial statistical analysis produces none. They will end up “torturing the data.”

These days epidemiological researchers use large databases and powerful statistical software in their search for correlations. Unfortunately, this leads to data mining which, by suitable selection of variables, makes the discovery of statistically significant correlations easy. The data mining approach also means that the often-cited p-values are meaningless. P-values measure the probability that the relationship occurs by chance and are often cited as evidence of the “robustness” of the correlations. But that probability is much greater when researchers resort to checking a range of variables, and that isn’t reflected properly in the p-values.
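This inflation is easy to demonstrate. The following Python sketch (numpy and scipy assumed; the data are purely random, so every “significant” correlation it finds is a false positive) checks all pairs among twenty random variables:

```python
import numpy as np
from scipy import stats

# Sketch: with enough variable pairs, "significant" correlations
# appear in pure noise. Every hit here is a false positive.
rng = np.random.default_rng(0)

n_obs, n_vars = 50, 20
data = rng.normal(size=(n_obs, n_vars))  # purely random data

hits = 0
tests = 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        if p < 0.05:
            hits += 1

print(f"{hits} 'significant' correlations out of {tests} tests")
```

With 190 pairwise tests at the conventional 0.05 threshold, roughly ten “significant” correlations are expected from chance alone, which is exactly what a per-test p-value fails to convey.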

Where data mining occurs, even to a limited extent, researchers are simply attempting to make a silk purse out of a sow’s ear when they support their correlations merely by citing a p-value < 0.05, because these values are meaningless in such cases. The fact that so many of these authors often ignore more meaningful results from their statistical analyses (like R-squared values, which indicate the extent to which the correlation “explains” the variation in their data) underlines their deceptive approach.

Poor statistical relationships

Consider these correlations below – two data sets are taken from a published paper – the other four use random data provided by Jim Jones in his book Regression Analysis: An Intuitive Guide.

You can probably guess which correlations were from real data (J and M) because there are so many more data points. All of these correlations have low p-values – but of course, those selected from random data sets resulted from data mining and the p-values are therefore meaningless because they are just a few of the many checked. Remember, a p-value < 0.05 means that the probability of a chance effect is one in twenty, and more than twenty variable pairs were checked in this random dataset.

The other two correlations are taken from Bashash et al (2017). They do not give details of how many other variables were checked in the dataset used but it is inevitable that some degree of data mining occurred. So, again, the low p-values are probably meaningless.

J provides the correlation of General Cognitive Index (GCI) scores in children at age 4 years with maternal prenatal urinary fluoride and M provides the correlation of children’s IQ at age 6–12 y with maternal prenatal urinary fluoride. The paper has been heavily promoted by anti-fluoride scientists and activists. None of the promoters have made a critical, objective, analysis of the correlations reported. Paul Connett, director of the Fluoride Action Network, was merely supporting his anti-fluoride activist bias when he uncritically described the correlations as “robust.” They just aren’t.

There is a very high degree of scattering in both these correlations, and the R-squared values indicate they cannot explain any more than about 3 or 4% of the variance in the data. Hardly something to hang one’s hat on, or to be used to argue that policymakers should introduce new regulations controlling community water fluoridation or ban it altogether.
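The distinction between a low p-value and a meaningful effect is worth making concrete. This Python sketch (numpy and scipy assumed; the numbers are invented for illustration, not taken from any of the papers discussed) fits a weak, noisy relationship in a large sample:

```python
import numpy as np
from scipy import stats

# Sketch: a weak relationship explaining only a few percent of the
# variance can still produce a very low p-value in a large sample.
rng = np.random.default_rng(1)

n = 1000
x = rng.normal(size=n)              # hypothetical exposure measure
y = 0.2 * x + rng.normal(size=n)    # weak true slope, large scatter

res = stats.linregress(x, y)
print(f"p-value = {res.pvalue:.1e}, R-squared = {res.rvalue ** 2:.3f}")
```

The p-value is tiny while the R-squared stays down around a few percent – statistically “significant,” yet explaining almost none of the variation, much like the correlations above.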

In an effort to make their correlations look better these authors imposed confidence intervals on the graphs (see below). This Xkcd cartoon on curve fitting gives a cynical take on that. The grey areas in the graphs may impress some people but they do not hide the wide scatter of the data points. The confidence intervals refer to estimates of the regression coefficient but when it comes to using the correlations to predict likely effects one must use the prediction intervals which are very large (see Paul Connett’s misrepresentation of maternal F exposure study debunked). In fact, the estimated slopes in these graphs are meaningless when it comes to predictions.

Correlations reported by Bashash et al (2017). The regressions explain very little of the variance in the data and cannot be used to make meaningful predictions.

In critiquing the Bashash et al (2017) paper I must concede that at least they made their data available – the data points in the two figures. While they did not provide full or proper results from their statistical analysis (for example they didn’t cite the R-squared values) the data does at least make it possible for other researchers to check their conclusions.

Unfortunately, many authors simply cite p-values and possible confidence intervals for the estimate of the regression coefficient without providing any data or images. This is frustrating for the intelligent scientific reader attempting to critically evaluate their claims.

Conclusions

We should never forget that correlations, no matter how impressive, do not mean causation. It is very poor science to suggest they do.

Nevertheless, many researchers resort to correlations they have managed to glean from databases, usually resorting to some extent of data mining, to claim they have found an effect and to get published. The drive to publish means that even very poor correlations get promoted and are used by ideologically or career-minded scientists, and by activists, to attempt to convince policymakers of their cause.

Image credit: Xkcd – Correlation

Remember, correlations are never evidence of causation.

Similar articles

Can we trust science?

Image credit: Museum collections as research data

Studies based simply on statistically significant relationships found by mining data from large databases are a big problem in the scientific literature. Problematic because data mining, or worse data dredging, easily produces relationships that are statistically significant but meaningless. And problematic because authors wishing to confirm their biases and promote their hypotheses conveniently forget the warning that correlation is not evidence for causation and go on to promote their relationships as proof of effects. Often they seem to be successful in convincing regulators and policymakers that these relationships should result in regulations. Then there are the activists who don’t need convincing but will willingly and tiresomely promote these studies if they confirm their agendas.

Even random data can provide statistically significant relationships

The graphs below show the fallacy of relying only on statistically significant relationships as proof of an effect. They show linear regression results for a number of data sets. One data set is taken from a published paper – the rest use random data provided by Jim Jones in his book Regression Analysis: An Intuitive Guide.

All these regressions look “respectable.” They have low p-values (less than the conventional 0.05 limit) and the R-squared values indicate they “explain” a large fraction of the data – up to 49%. But the regressions are completely meaningless for at least 7 of the 8 data sets because the data were randomly generated and have no relevance to real physical measurements.
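Even those high R-squared values are unsurprising once selection is involved. The Python sketch below (numpy and scipy assumed; all data random) dredges many small random datasets and keeps only the “best” regression:

```python
import numpy as np
from scipy import stats

# Sketch: dredging many small random data sets and keeping the "best"
# regression produces impressively high R-squared values from pure noise.
rng = np.random.default_rng(4)

best_r2 = 0.0
for _ in range(200):                 # 200 candidate variable pairs
    x = rng.normal(size=10)          # only 10 points each
    y = rng.normal(size=10)
    res = stats.linregress(x, y)
    best_r2 = max(best_r2, res.rvalue ** 2)

print(f"best R-squared from pure noise: {best_r2:.2f}")
```

With small samples and enough candidate pairs, the selected regression routinely “explains” a third or more of the variance despite there being nothing to explain.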

This should be a warning that correlations reported in scientific papers may be quite meaningless.

Can you guess which of the graphs is based on real data? It is actually graph E – published by members of a North American group currently publishing data which they claim shows community water fluoridation reduces child IQ. This was from one of their first papers where they claimed childhood ADHD was linked to fluoridation (see Malin, A. J., & Till, C. 2015. Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association).

The group used this paper to obtain funding for subsequent research. They obviously promoted this paper as showing real effects – and so have the anti-fluoride activists around the world, including the Fluoride Action Network (FAN) and its director Paul Connett.

But the claims made for this paper, and its promotion, are scientifically flawed:

  1. Correlation does not mean causation. Such relationships in larger datasets often occur by chance – hell they even occur with random data as the figure above shows.
  2. Yes, the authors argue there is a biologically plausible mechanism to “explain” their association. But that is merely cherry-picking to confirm a bias and there are other biologically plausible mechanisms they did not consider which would say there should not be an effect. The unfortunate problem with these sorts of arguments is that they are used to justify their findings as “proof” of an effect. To violate the warning that correlation is not causation.
  3. There is the problem of correcting for confounders or other risk-modifying factors. While acknowledging the need for future studies considering other confounders, the authors considered their choice of socio-economic factors sufficient, and their peer reviewers limited their suggestion of other confounders to lead. However, when geographic factors were included in a later analysis of the data the reported relationship disappeared.

Confounders often not properly considered

Smith & Ebrahim (2002) discuss this problem in an article – Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers. The title itself indicates how the poor use of statistics and unwarranted promotion of statistical analyses can be used to advance scientific careers and promote bad science in the public media.

These authors say:

“it is seldom recognised how poorly the standard statistical techniques “control” for confounding, given the limited range of confounders measured in many studies and the inevitable substantial degree of measurement error in assessing the potential confounders.”

This could be a problem even for studies where a range of confounders are included in the analyses. But Malin & Till (2015) considered the barest minimum of confounders and didn’t include ones which would be considered important to ADHD prevalence. In particular, they ignored geographic factors and these were shown to be important in another study using the same dataset. Huber et al (2015) reported a statistically significant relationship of ADHD prevalence with elevation. These relationships are shown in this figure.

Of course, this is merely another statistically significant relationship – not proof of a real effect and no more justified than the one reported by Malin and Till (2015). But it does show an important confounder that Malin & Till should have included in their statistical analysis.

I did my own statistical analysis using the data sets of Malin & Till (2015) and Huber et al (2015) and showed (Perrott 2018) that inclusion of geographic factors left no statistically significant relationship of ADHD prevalence with fluoridation as suggested by Malin & Till (2015). Their study was flawed and it should never have been used to justify funding for future research on the effect of fluoridation. Nor should it have been used by activists promoting an anti-fluoridation agenda.
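The logic of that re-analysis can be illustrated with simulated data. In the Python sketch below (numpy and statsmodels assumed; all numbers are invented, not the Malin & Till data), the outcome depends only on elevation, yet a naive regression “finds” an apparent fluoridation effect that shrinks away once elevation is included:

```python
import numpy as np
import statsmodels.api as sm

# Sketch: an apparent "fluoridation effect" induced entirely by an
# omitted confounder (elevation). All numbers invented for illustration.
rng = np.random.default_rng(3)

n = 500
elevation = rng.normal(size=n)
# by construction, fluoridation coverage is lower at higher elevation
fluoridation = -0.7 * elevation + rng.normal(scale=0.5, size=n)
# the outcome depends only on elevation, not on fluoridation
prevalence = -0.5 * elevation + rng.normal(size=n)

naive = sm.OLS(prevalence, sm.add_constant(fluoridation)).fit()
adjusted = sm.OLS(prevalence,
                  sm.add_constant(np.column_stack([fluoridation,
                                                   elevation]))).fit()

print(f"naive slope = {naive.params[1]:.2f} (p = {naive.pvalues[1]:.1e})")
print(f"adjusted slope = {adjusted.params[1]:.2f} (p = {adjusted.pvalues[1]:.2f})")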

But, then again, derivation of a statistically significant relationship by Malin & Till (2015) did get them published in the journal Environmental Health which, incidentally, has sympathetic reviewers (see Some fluoride-IQ researchers seem to be taking in each other’s laundry) and an anti-fluoridation Chief Editor – Philippe Grandjean (see Special pleading by Philippe Grandjean on fluoride). It also enabled the promotion of their research via institutional press releases, newspaper articles and the continual activity of anti-fluoridation activists. Perhaps some would argue this was a good career move!

Conclusion

OK, the faults of the Malin & Till (2015) study have been revealed – even though Perrott (2018) is studiously ignored by the anti-fluoride North American group which has continued to publish similar statistically significant relationships between measures of fluoride uptake and measures of ADHD or IQ.

But there are many published papers – peer-reviewed papers – which suffer from the same faults and get similar levels of promotion. They are rarely subject to proper post-publication peer-review or scientific critique. But their authors get career advancement and scientific recognition out of their publication. And the relationships are promoted as evidence for real effects in the public media.

No wonder members of the public are so often confused by the contradictory reporting, the health scares of the week, they are exposed to.

No wonder many people feel they can’t trust science.

Similar articles

I don’t “believe” in science – and neither should you

We should be very careful about naively accepting claims made by the mainstream media – but this is also true of scientific claims. We should approach them intelligently and critically and not merely accept them on faith.

I cringe every time I read an advocate of science asserting they “believe in science.” Yes, I know they may be responding to an assertion made by supporters of religion or pseudoscience. But “belief” is the wrong word because it implies trust based on faith and that is not the way science works.

Sure, those asserting this may argue that they hold this belief because science is based on evidence, not faith. But that is still a cop-out because evidence can be used to draw conclusions or make claims that are still not true. Moreover, published evidence may be weak, misleading or poorly interpreted.

Here is an example of this dilemma taken from the Vox article Hyped-up science erodes trust. Here’s how researchers can fight back.

The figure is based on data published in Schoenfeld, J. D., & Ioannidis, J. P. A. (2013). Is everything we eat associated with cancer? A systematic cookbook review. American Journal of Clinical Nutrition, 97(1), 127–134.

It is easy to cite a scientific article as evidence that wine protects one from cancer – or that it, in fact, causes cancer. Unfortunately, the scientific literature is full of such studies with contradictory conclusions, usually based on real data and statistical analyses which show a significant relationship. But if it is so easy to find studies which can be claimed as evidence of opposite effects, what good is a “belief” in science? All that simple “belief” does is provide a scientific source for one’s own beliefs – an exercise in confirmation bias.

This figure should be a warning to approach published findings in fields like nutritional epidemiology and environmental epidemiology critically and intelligently. One should not simply take them as factual – we should not “believe” them simply because they are published in scientific journals.

Schoenfeld, & Ioannidis (2013) say of the studies they investigated that:

“the large majority of these studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence.”

They discuss problems such as the “pressure to publish,” the undervaluing or non-reporting of negative results, and “biases in the design, execution and reporting of studies” because nutritional ingredients “viewed as “unhealthy” may be demonized.”

The authors warn that:

“studies that narrowly meet criteria for statistical significance may represent spurious results, especially when there is large flexibility in analyses, selection of contrasts, and reporting.”

And:

“When results are overinterpreted, the emerging literature can skew perspectives and potentially obfuscate other truly significant findings.”

They warn that these sorts of problems may be:

“especially problematic in areas such as cancer epidemiology, where randomized trials may be exceedingly difficult and expensive to conduct; therefore, more reliance is placed on observational studies, but with a considerable risk of trusting false-positive”

These comments are very relevant to recent scientific studies claiming a link between community water fluoridation and cognitive deficits – studies that are heavily promoted by anti-fluoridation activists and, more importantly for scientific readers, by the authors of these studies themselves and their institutions. I have discussed specific problems in previous posts about the results from the Till group and their promotion by the authors.

The merging of pseudoscience with science

We seem to make an issue of countering pseudoscience with science but in the process are often oblivious to the fact that these two tend to merge – even for professional scientists. After all, we are human and all have our own biases to confirm and our jobs to advance.

This is a black and white contrast of science with pseudoscience promoted by Skeptics. It’s worth comparing this with the reality of the scientific world.

Do scientists always follow the evidence? Don’t they sometimes start with the conclusion and look for evidence to support it – even clutching at the straws of weak evidence (statistically weak relationships in environmental epidemiological studies which are promoted as proof of harmful effects)?

Oh for the ideal scientist who embraces criticism. Sure, they are out there but so many refuse to accept criticism, “circle the wagons” and end up unfairly and emotively attacking their critics. I describe one example in When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics.

Are claims always conservative and tentative? Especially when scientists have a career or institution to promote. And institutions, with their press releases, are a big part of this problem of overpromotion. Unfortunately, in environmental epidemiology, some scientists will take weak research results to argue that they prove a cause and then request regulation by policymakers. Specifically, there is the case of weak scientific data from Till’s research group being used to promote regulatory actions that confirm their biases.

Unfortunately, scientists with biases to confirm find it quite easy to ignore or downgrade the evidence which doesn’t fit. They may even work to prevent publication of countering evidence (see for example Fluoridation not associated with ADHD – a myth put to rest).

Conclusion

I could go on taking each point in order. But, in reality, I think such absolute claims about science are just not realistic. The scientific world is not that perfect.

In the end, the intelligent scientific reader must approach even the published literature very critically if they are to truly sift the wheat from the chaff.

Similar articles

Science is often wrong – be critical

Activists, and unfortunately many scientists, use published scientific reports like a drunk uses a lamppost – more for support than illumination

Uncritical use of science to support a preconceived position is widespread – and it really gets up my nose. I have no respect for the person, often an activist, who uncritically cites a scientific report. Often they will cite a report which they have read only the abstract of – or not even that. Sometimes commenters will support their claims by producing “scientific evidence” which are simply lists of citations obtained from PubMed or Google Scholar.

[Yes, readers will recognise this is a common behaviour with anti-fluoride activists]

Unfortunately, this problem is not restricted to activists. Too often I read scientific papers with discussions where authors have simply cited studies that support, or they interpret as supporting, their own preconceived ideas or hypotheses. Compounding this scientific “sin” is the habit of some authors who completely refuse to cite, or even discuss, studies producing evidence that doesn’t fit their scientific prejudices.

Publication does not magically make scientific findings or ideas “true” – far from it. The serious reader of scientific literature must constantly remember that the chances are high that published conclusions or findings are false. John Ioannidis makes this point in his article Why most published research findings are false. Ioannidis concentrates on the poor use, or misuse, of statistics. This is a constant problem in scientific writing – and it underlines the fact that even scientists will consciously or unconsciously manipulate their data to confirm their biases. They are using statistical analysis the way a drunk uses a lamppost – for support rather than illumination.
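Ioannidis’s core argument is essentially arithmetic. With some illustrative assumed numbers – say 10% of tested hypotheses in a field are actually true, studies have 80% statistical power, and the usual significance threshold of 0.05 is used – the chance that a “significant” finding reflects a real effect works out like this:

```python
# Positive predictive value of a "significant" finding (Ioannidis-style arithmetic).
# The prior, power and alpha below are illustrative assumptions, not field-specific estimates.
prior, power, alpha = 0.10, 0.80, 0.05

true_positives = prior * power           # true hypotheses correctly detected: 0.08
false_positives = (1 - prior) * alpha    # false hypotheses wrongly "detected": 0.045
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2f}")  # 0.64: over a third of "significant" findings are false
```

And that is before any allowance for bias, flexible analyses or selective reporting, all of which push the false-positive share higher.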

Poor studies often used to fool policymakers

These problems are often not easily understood by scientists themselves, but the situation is much worse for policymakers. They are not trained in science and do not have the scientific or statistical experience required for a proper critical analysis of claims made to them by activists. Yet they are often called on to make decisions which rely on the acceptance, or rejection, of scientific claims (or claims about the science).

An example of this is a draft (not peer-reviewed) paper by Grandjean et al  – A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children.

These authors have an anti-fluoride activist position and are campaigning against community water fluoridation (CWF). Their paper uses their own studies, which report very weak (and mostly non-significant) statistical relationships of child IQ with fluoride intake, as “proof” of causation sufficiently strong to advocate for regulatory guidelines. Unsurprisingly, their recommended guidelines are very low – much lower than those common with CWF.

Sadly, their sciencey-sounding advocacy may convince some policymakers. It is important that policymakers be exposed to a critical analysis of these studies and their arguments. The authors will obviously not do this – they are selling their own biases. I hope that any regulator or policymaker required to make decisions on these recommendations has the sense to call for an independent, objective and critical analysis of the paper’s claims.

[Note: The purpose of the medRxiv preprints of non-peer-reviewed articles is to enable and invite discussion and comments that will help in revising the article. I submitted comments on the draft article over a month ago (Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”) and have had no response from the authors.  This lack of response to constructive critiques is, unfortunately, common for this group. I guess one can only comment that scientists are human.]

Observational studies – exploratory fishing expeditions

A big problem with published science today is that many studies are nothing more than exploratory observational studies using existing databases which, by their nature, cannot be used to derive causes. Yet they can easily be used to derive statistically significant links or relationships. These can be used to write scientific papers, but they are simply not evidence of causes.

Properly designed studies, with proper controls and randomised populations properly representing different groups, may provide reasonable evidence of causal relationships – but most reported studies are not like this. Most observational studies use existing databases with non-random populations where selection and confounding with other factors is a huge problem. Authors are often silent about selection problems and may claim to control for important confounding factors, but it is impossible to include all confounders. The databases used may not include data for relevant confounders and authors themselves may not properly select all relevant confounders for inclusion.

This sort of situation makes some degree of data mining likely. This occurs when a number of different variables and measures of outcomes are considered in the search for statistically significant relationships. Jim Frost illustrated the problems with this sort of approach: using a set of completely fictitious random data he was able to obtain a statistically significant relationship with very low p-values and an R-squared value indicating 61% of the variance was explained (see Jim Frost – Regression Analysis: An Intuitive Guide).
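A few lines of simulation (my own sketch in the spirit of Frost’s demonstration, not his actual example) show how easily pure noise yields “significant” relationships once enough candidate variables are screened:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_candidates = 50, 100
outcome = rng.normal(size=n)                      # a random "outcome"
predictors = rng.normal(size=(n_candidates, n))   # 100 candidate predictors: pure noise

# Screen every candidate against the outcome, as a data-mining exercise would
pvals = np.array([stats.pearsonr(x, outcome)[1] for x in predictors])
print(f"smallest p-value over {n_candidates} noise variables: {pvals.min():.4f}")
print(f"'significant' (p < 0.05) noise variables: {(pvals < 0.05).sum()}")
```

With 100 candidates at a 0.05 threshold, a handful of spuriously “significant” correlations is the expected result of chance alone, and reporting only the winners makes noise look like a finding.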

That is the problem with observational studies where some degree of data mining is often involved. It is possible to find relationships which look good, have low p-values and relatively high R-squared values, but are entirely meaningless. They represent nothing.

So readers and users of science should beware. The findings they are given may be completely false, contradictory, or at least meaningless in quantitative terms (as is the case with the relationships produced by the Grandjean et al 2020 group discussed above).

A recent scientific article provides a practical example of this problem. Different authors used the same surgical database but produced completely opposite findings (see Childers et al (2020), Same Data, Opposite Results? A Call to Improve Surgical Database Research). By itself, each study may have looked convincing. Both used the same large database from the same year. Over 10,000 samples were used in both cases and both studies were published in the same journal within a few months. However, the inclusion and exclusion criteria used were different. Large numbers of possible covariates were considered, but these differed. Similarly, different outcome measures were used.

Readers interested in the details can read the original studies or a Skeptical Scalpel blog article, Dangerous pitfalls of database research. However, Childers et al (2020) describe how the number of this sort of observational study “has exploded over the past decade.” As they say:

“The reasons for this growth are clear: these sources are easily accessible, can be imported into statistical programs within minutes, and offer opportunities to answer a diverse breadth of questions.”

However:

“With increased use of database research, greater caution must be exercised in terms of how it is performed and documented.”

“. . . because the data are observational, they may be prone to bias from selection or confounding.”

Problems for policymakers and regulators

Given that many scientists do not have the statistical expertise to properly assess published scientific findings, it is understandable for policymakers or regulators to be at a loss unless they have proper expert advice. However, it is important that policymakers obtain objective, critical advice and not simply rely on advocates who may well have scientific degrees. Qualifications by themselves are not evidence of objectivity and, undoubtedly, we often face situations where scientists become advocates for a cause.

I think policymakers should consciously seek out a range of scientific expert advice, recognising that not all scientists are objective. Given the nature of current observational research, its use of existing databases, and the ease with which researchers can obtain statistically significant relationships, I also think policymakers should consciously seek the input of statisticians when they seek help in interpreting the science.

Surely they owe this to the people they represent.

Similar articles

Even studies from endemic fluorosis areas show fluoride is not harmful at levels used in fluoridation

Most of the claims made by anti-fluoride propagandists are simply wrong. Image source: Fluoridation and the ‘sciency’ facts of critics

Anti-fluoride propagandists continually cite studies from areas of endemic fluorosis in their arguments against community water fluoridation (CWF). But if they critically looked at the data in those papers they might get a shock. Invariably the published data, even from areas of endemic fluorosis, shows fluoride is safe at the concentrations relevant to CWF.

I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.

In this article, I discuss the studies in FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which report relationships between child IQ and fluoride exposure in areas of endemic fluorosis. There are eleven such studies in the FAN list but only six of them provide sufficient data to enable independent statistical analysis.

While those six studies do show a statistically significant (p<0.05) negative relationship of IQ with fluoride intake, those results are not relevant to CWF because the fluoride exposure levels are much higher than ever occur with CWF.

However, it is possible to investigate if the relationships are significant at lower concentrations more relevant to CWF. I have done this with these six studies and illustrate the result obtained with these graphs below using the data extracted from Xiang et al (2003). (This study is often used by anti-fluoride campaigners).

The red data points in the figures below are for lower concentrations of urinary F or creatinine adjusted urinary F. The range for the red points is still quite a bit larger than urinary F levels measured for children in areas where CWF is used. However, we can see that the relationships at these lower ranges are not statistically significant (results from regression analyses cited in figures).
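The approach can be sketched with fictitious data (an assumed threshold effect, not Xiang’s actual numbers): a regression over the full exposure range is strongly significant, while the same regression restricted to the low range relevant to CWF is not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
urinary_f = rng.uniform(0, 5, size=n)  # mg/L, fictitious exposure range

# IQ deficit only above a 2 mg/L threshold (an illustrative assumption)
iq = 100 - 5 * np.clip(urinary_f - 2, 0, None) + rng.normal(scale=10, size=n)

full = stats.linregress(urinary_f, iq)                          # full exposure range
mask = urinary_f < 1                                            # low range, nearer CWF levels
low = stats.linregress(urinary_f[mask], iq[mask])

print(f"full range: slope = {full.slope:.2f}, p = {full.pvalue:.2g}")
print(f"< 1 mg/L:   slope = {low.slope:.2f}, p = {low.pvalue:.2g}")
```

The whole-range regression is dominated by the high-exposure data, so its “significant” slope says nothing about whether any effect exists at the low concentrations relevant to CWF – that has to be tested on the low-range data directly.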

 

This was also the case with the other studies from FAN’s list which provided sufficient data for regression analyses. I summarise the results obtained for five of these studies in the figure below.

This shows that none of the studies found statistically significant relationships with fluoride exposure at the low fluoride concentrations relevant to CWF. The situation is basically the same for the sixth study, Mustafa et al (2018), which reports average school subject performance for a range of subjects for children in Khartoum state, Sudan. However, it is hard to know what the safe limit for fluoride exposure is in that climate (for climatic reasons the upper permissible F level in drinking water is set at 0.33 ppm for Khartoum state) and the sample numbers are low. Interested readers should consult my report – Analysis of FAN’s 65 brain-fluoride studies.

Conclusion

Anti-fluoride campaigners often cite FAN’s list (“FLUORIDE & IQ: THE 65 STUDIES”) in their attempts to argue that fluoridation is bad for the child’s brain. But in this series of articles, Anti-fluoride 65 brain-fluoride studies not evidence against fluoridation, I have shown that their arguments are false.

In Child IQ in countries with endemic fluorosis imply fluoridation is safe I showed that, while lower IQ and other health problems may occur where fluoride exposure is very high in areas of endemic fluorosis, the reports themselves implicitly assume that the low fluoride exposure in the “low fluoride” areas is safe. It is the data from these areas, not the “high fluoride” areas, that are relevant to CWF. So, despite the heavy use of these articles by FAN and anti-fluoride activists, these studies do not prove what they claim. If anything, these studies show CWF is safe.

In this article, I considered a few of these studies which included data relevant to low fluoride exposure. When the low fluoride exposure data (relevant to CWF) from these studies were statistically analysed, none of them showed significant relationships of child IQ to fluoride exposure. That confirms the implicit assumption in these studies that there is no negative effect of fluoride exposure on child IQ at these low levels.

Finally, in Canadian studies confirm findings of Broadbent et al (2015) – fluoridation has no effect on child IQ I summarise results from the only three studies which compare IQ for children living in fluoridated and unfluoridated areas. These studies were made in New Zealand and Canada and the results were the same: no statistically significant differences in child IQ were found.

However, the authors of the Canadian studies ignored this result and instead used questionable statistical methods to search for possible relationships between fluoride exposure and child IQ. Most of the relationships they report were not statistically significant but, nevertheless, they and their supporters have simply ignored this and concentrated on the few statistically significant relationships.

Anti-fluoride activists currently rely strongly on these studies and heavily promote them. I will discuss these few studies further in my next article.

Similar articles

 

 

Canadian studies confirm findings of Broadbent et al (2015) – fluoridation has no effect on child IQ

Readers may remember the scathing reaction of anti-fluoride campaigners to the paper of Broadbent et al (2015). This was the first paper to compare child and adult IQ levels for people living in fluoridated and unfluoridated areas.

The anti-fluoride campaigners were extremely rude in their reaction – accusing the authors of fraud and claiming the paper was “fatally flawed.” Interestingly, several scientists known for their anti-fluoride bias also launched attacks – but more respectably, as letters to the editor of the journal. For example, see the articles by Osmunson et al (2016), Grandjean (2015), and Menkes et al (2014).

And why? Simply because Broadbent et al (2015) showed there was no difference in the IQ of people living in fluoridated and unfluoridated areas – and that the studies from areas of endemic fluorosis used by anti-fluoride activists to argue against CWF were just not relevant (see Child IQ in countries with endemic fluorosis imply fluoridation is safe).

But isn’t it strange? Two more recent papers (Green et al 2019 & Till et al 2020) have effectively repeated the work of Broadbent et al (2015). They found the same result – no difference in IQ of children living in fluoridated and unfluoridated areas. And simply no reaction, no condemnation from anti-fluoride activists or the anti-fluoride scientists.

No condemnation because these anti-fluoride critics promote these papers for other reasons. But this underlines how biased the critics of the Broadbent et al (2015) paper were.

I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.

In this article, I discuss the studies in FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which compare child IQ in “fluoridated” and “unfluoridated” areas of Canada. There are only two such studies – but I include that of Broadbent et al (2015) (which FAN’s list ignores) for completeness. All three studies found no difference in the IQ of children living in fluoridated and unfluoridated areas.

Comparing IQ of children in fluoridated and unfluoridated areas

The table below summarises the results reported by all three studies – Broadbent et al (2015), Green et al (2019), and Till et al (2020).

Table 1: Results from studies comparing IQ of children and adults from fluoridated and unfluoridated areas

Notes:
Data from Green et al (2019) for children whose mothers lived in fluoridated or unfluoridated areas during pregnancy.
Data from Till et al (2020) for children either breastfed or formula-fed as babies while living in fluoridated or unfluoridated areas.

There is absolutely no difference in IQ due to fluoridation. Remember, the standard deviations of the values in the table are about 13 to 16 IQ points.

I have presented all the results from these papers graphically below. FSIQ is the normal IQ measurement. VIQ (Verbal IQ) and PIQ (Performance IQ) are subsets of FSIQ.

The only statistically significant differences between fluoridated and unfluoridated areas were for VIQ of breastfed babies (VIQ higher for fluoridated areas) and PIQ of formula-fed babies (PIQ lower for fluoridated areas).

Anti-fluoride campaigners (and biased scientists like Grandjean) love the Green et al (2019) and Till et al (2020) papers because they reported (very weak) negative relationships of some child cognitive measures with fluoride intake (I discuss this in separate articles). This is largely a result of the statistical methods used – particularly resorting to several different cognitive measures and measures of fluoride exposure, as well as the separation of results according to gender. It reminds me of the old saying that one can always get the results one requires by torturing the data hard enough.
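The “torture” is easy to quantify. If a study examines, say, three cognitive measures, three exposure measures and two sex subgroups – 18 tests at the usual 0.05 threshold (an illustrative count I have assumed, not the papers’ exact analysis) – the chance of at least one spurious “significant” result is already over 60%, even when nothing is going on:

```python
# Family-wise false-positive rate when many outcome/exposure/subgroup combinations are tested.
# Illustrative assumption: 3 cognitive measures x 3 exposure measures x 2 sex subgroups,
# all tests independent and all null (no real effect anywhere).
alpha = 0.05
n_tests = 3 * 3 * 2  # 18 tests
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"chance of >=1 spurious 'significant' result: {p_at_least_one:.2f}")  # 0.60
```

Real tests on the same data are correlated, so the exact figure differs, but the point stands: with enough slices, some “significant” relationship is the expected outcome of pure chance.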

I will return to the statistical problems of these and similar papers in a separate article.

Misrepresentation by anti-fluoride activists

Anti-fluoride campaigners have latched on to the two Canadian studies – often making claims that simply are not supported. But always ignoring the data shown above.

For example – this propaganda poster from FAN promoting the Green et al (2019) study.

This completely misrepresents the results of the study. No difference was found in the IQs of children from fluoridated and unfluoridated areas. These people completely ignore that result while placing unwarranted faith in the weak relationships reported elsewhere in that paper. (In fact, Green et al (2019) found a weak significant relationship only for boys – the relationships for all children and for girls were not significant. See my articles about this statistical torture).

And this FAN propaganda poster promoting the Till et al (2020) study.

Again – completely wrong. There was no difference in IQ of formula-fed babies in fluoridated and unfluoridated areas (see Table 1 above). Even worse, FAN is misrepresenting the statistical relationships reported in this paper, as there was no statistically significant relationship between child IQ and fluoride exposure for formula-fed or breastfed babies once the influence of outliers and/or confounders was considered.

Misrepresentation by anti-fluoride scientists

It is understandable, I guess, that the authors of the two Canadian papers make a lot of the poor statistical relationships they reported and ignore the fact that they did not see any effect of fluoridation. Perhaps they can be excused some bias due to professional ambition. But this underlines why sensible readers should always critically and intelligently read the papers in this controversial area. One should never rely on the public relations claims of authors and their institutes. But it is sad to see how scientific biases and ambitions can lead scientists to support the claims of political activists or, worse, to attack honest scientists who do post-publication peer review of the studies (see for example When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics).

I am also very critical of scientific supporters of these studies who have their own anti-fluoride motivations. Philippe Grandjean, for example, was one of the authors very critical of the Broadbent et al (2015) paper yet completely ignored the fact that the Green et al (2019) and Till et al (2020) papers report exactly the same result – no effect of fluoridation on child IQ. Grandjean often makes public comments supporting the claims of anti-fluoride campaigners like FAN. He also behaved in a scientifically unethical way when he refused to allow my critique of the flawed paper by Malin & Till (2015) to be published in Environmental Health – the journal of which he is chief editor (see Fluoridation not associated with ADHD – a myth put to rest).

I am repeating myself, but it is a matter of “reader beware.” Readers should not simply rely on the scientific “standing” of authors, who are only human and suffer from the same biases as others. They should read these papers for themselves and make up their own minds about what the data actually say.

Similar articles

Child IQ in countries with endemic fluorosis imply fluoridation is safe

Anti-fluoride activists love to point out that people living in endemic fluorosis areas in countries like China suffer all sorts of health problems, including lower IQ. But studies of these areas show no lowering of IQ in the low fluoride areas relevant to community water fluoridation.

I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.

In this article, I discuss the studies in FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which compare child IQ in areas of “low” and “high” fluoride in countries like China, Mexico, Iran, Egypt, and India where fluorosis is endemic. In fact, all these studies either assume or provide evidence that fluoride at the concentrations used for CWF is harmless.

IQ differences for “high” and “low” fluoride areas

FAN was really dredging through very poor research to find these studies. In fact, FAN had to go to the trouble of translating many of these studies because they were obscure and not available in English.

Of their 65 studies, 17 do not provide data for fluoride intake or for drinking water fluoride concentrations. Instead, they simply describe the “high” areas as endemic fluorosis areas or areas where people suffer severe dental or skeletal fluorosis. Several of the studies used “control” groups from areas of “slight” fluorosis or dental fluorosis in contrast to skeletal fluorosis.

Another 29 studies did provide water fluoride concentrations for the “low” fluoride and “high” fluoride areas. This data is useful as it enables us to consider how relevant the results are to CWF. I have summarised the data in Figure 1.

The take-home message from Figure 1 is that, while these 29 studies do show a decrease in child IQ in areas of “high” fluoride, those areas are not relevant to CWF. In fact, the only relevance to CWF is in the areas of “low” fluoride, where there is the implicit assumption that child IQ is not affected. We can also assume this is the case for the 17 studies which do not provide details of fluoride exposure.

Figure 1: Comparison of water fluoride levels in “high” and “low” fluoride areas of 29 of the FAN studies and in areas where CWF is used.

So these 46 studies heavily promoted by FAN over recent years do not show any harm from CWF – in fact, all these studies implicitly assume there is no negative effect on child IQ at the “low” fluoride levels studied – and these are the areas most relevant to CWF. A simple consideration of the health problems faced by people living in areas of endemic fluorosis should have made it obvious that the data for high fluoride areas is simply not relevant. Consider these figures from Das et al (2016) – one never sees people like this in areas where CWF is used:

Dental fluorosis case found in the study area (age: 12, sex: male). Das et al (2016)

Skeletal fluorosis case found in the study area (age: 17, sex: male). Das et al (2016)

FAN is simply silly to suggest these studies, and especially the results for the “high” fluoride areas, are at all relevant to CWF.

Mind you, Paul Connett, FAN Director, likes to draw attention to one of these studies where he claims the “high” fluoride area has a drinking-water fluoride concentration of 0.81 mg/L, which is similar to that for CWF. He is simply dredging the data (and ignoring all the other studies he cites) to make this claim. The study he refers to was made in an area of iodine deficiency and is extremely weak – simply one and a half pages in a Chinese newsletter. Have a look for yourself – Lin et al (1991).

In a future article, I will discuss the studies in FAN’s list which compare IQ for children from fluoridated and unfluoridated areas.

Similar articles