Tag Archives: fluoridation

Hip fractures in the elderly and fluoride – contradictory evidence

Room for cherry-picking to confirm a bias. Separate Swedish studies report that fluoride can either prevent or promote the risk of hip fracture in the elderly. Image credit: Are hip fracture patients treated quickly enough?

Anti-fluoride activists are promoting a recent study linking fluoride intake and bone fractures. No surprise there. But they are cherry-picking a single study to support their agenda, and scientifically literate people should see the wider picture and not ignore other studies which, on the whole, convey a different story. This issue illustrates the problem of epidemiological studies producing variable results and shows why people should avoid cherry-picking and look at the full range of studies in a field.

Here I consider just two studies on fluoride intake and bone fracture which produced different conclusions. Both studies involved people from Sweden where the natural fluoride levels in drinking water vary across the country.

Drinking water fluoride may protect against hip fractures

First a study from 2013:

Näsman, P., Ekstrand, J., Granath, F., Ekbom, A., & Fored, C. M. (2013). Estimated drinking water fluoride exposure and risk of hip fracture: A cohort study. Journal of Dental Research, 92(11), 1029–1034.

The main findings are illustrated in the figure showing the calculated Hazard Ratios for people of different ages living in areas of Sweden with “very low” (less than 0.3 mg/L), “low” (0.3 – 0.69 mg/L), “medium” (0.7 – 1.49 mg/L) or “high” (greater than 1.5 mg/L) fluoride levels in the drinking water. The Hazard Ratio in the figure below is a measure of the number of hip fractures at these levels compared with the number of hip fractures at the “very low” fluoride concentration. The bars represent the 95% confidence intervals. The Hazard Ratio for the “very low” group is 1.0, and Hazard Ratios statistically significantly different from 1 (no effect) are coloured red.

Considering all people, there is no statistically significant increase in the number of hip fractures at any level of water fluoride concentration compared with the “very low” levels. The number of hip fractures experienced by people in the two lower age groups (less than 70 years and 70 – 80 years) was significantly lower at higher water fluoride concentrations than at the “very low” concentrations. The authors say:

this “suggests a protective effect of fluoride among the younger (age younger than 80 years): however, the majority of fractures occurred above the age of 80 years (median age at time of fracture, 82.0).”

So this study suggests that the fluoride in Swedish drinking water does not encourage bone fractures and may actually protect against them in the lower age groups.

Fluoride may encourage hip fractures

Now a study from 2021 – the one anti-fluoride activists are promoting (for obvious reasons):

Helte, E., Vargas, C. D., Kippler, M., Wolk, A., Michaëlsson, K., & Åkesson, A. (2021). Fluoride in Drinking Water, Diet, and Urine in Relation to Bone Mineral Density and Fracture Incidence in Postmenopausal Women. Environmental Health Perspectives, 129(April).

Unlike Näsman et al (2013), which used drinking water fluoride concentrations as a measure of fluoride exposure, Helte et al (2021) used urinary fluoride and estimated dietary fluoride intake as measures of fluoride exposure. The Hazard Ratios were calculated from the number of hip fractures in the Tertile 2 groups (0.88 – 1.30 mg/g urinary fluoride or 1.74 – 2.41 mg/day dietary fluoride intake) and Tertile 3 groups (1.30 – 116.51 mg/g urinary fluoride or 2.41 – 11.16 mg/day dietary fluoride intake) compared with hip fractures in the Tertile 1 groups (0.14 – 0.88 mg/g urinary fluoride or 0.26 – 1.74 mg/day dietary fluoride intake).

Note: The urinary fluoride units of mg/g represent mg of urinary F/g urinary creatinine. Creatinine levels were used to correct the spot values for dilution.
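For readers unfamiliar with this correction, it is just a division. A minimal sketch in Python – the numbers are made up for illustration and are not values from either study:

# Creatinine adjustment of a spot urinary fluoride measurement.
# Illustrative values only - not data from Näsman et al or Helte et al.
def creatinine_adjusted_f(urinary_f_mg_per_l, creatinine_g_per_l):
    """Return urinary fluoride in mg F per g creatinine."""
    return urinary_f_mg_per_l / creatinine_g_per_l

# The same fluoride concentration in dilute and concentrated urine
# gives different adjusted values:
print(creatinine_adjusted_f(1.0, 0.8))  # 1.25 mg/g (dilute sample)
print(creatinine_adjusted_f(1.0, 1.6))  # 0.625 mg/g (concentrated sample)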

The Hazard Ratios that are statistically significantly different from 1 (no effect) are coloured red in the figure below.

A bit complicated, I know, but what the figure shows is no statistically significant increase in hip fracture numbers for the Tertile 2 groups compared with the lower fluoride intake Tertile 1 groups, but a significant increase in fracture numbers for the Tertile 3 groups – except for the women in the dietary fluoride group exposed to constant water fluoride concentrations since 1982.

Helte et al (2021) also considered other types of fracture. There were no statistically significant increases in fractures in the upper tertiles for either the “all fractures” or the “major osteoporotic fractures” classes.

So, a bit of a mixed bag but this paper is currently being promoted by anti-fluoride activists as evidence of a harmful result from community water fluoridation (CWF).

Critically assessing the evidence for bone fractures

It is easy to see why supporters of CWF may cite Näsman et al (2013) as evidence for lack of harm and opponents may cite Helte et al (2021) as evidence of harm from CWF. But neither approach is really scientific. The methodological differences and choice of factors considered can easily explain variable results. One should critically and rationally assess both of these papers, together with the many other papers reporting similar studies, before coming to any conclusion.

On balance, the published studies probably support the findings of Näsman et al (2013) and not Helte et al (2021). In fact, a systematic review and meta-analysis published in 2015 concluded that chronic exposure to fluoride in drinking water was not associated with a significant increase in hip fracture risk. The citation for this review is:

Yin, X.-H., Huang, G.-L., Lin, D.-R., Wan, C.-C., Wang, Y.-D., Song, J.-K., & Xu, P. (2015). Exposure to Fluoride in Drinking Water and Hip Fracture Risk: A Meta-Analysis of Observational Studies. PLOS ONE, 10(5), e0126488. 

It’s worth reproducing one of the figures from that review because it illustrates how epidemiological studies may, individually, support a claim of harm but when considered as a whole these studies do not support the claim. The figure below shows the range of Hazard Ratios obtained by a number of studies.

The lesson here is to be very careful of claims made on the basis of single cherry-picked studies. Especially when those making the claim have a bias they wish to confirm. Every claim should be critically and rationally considered using all the available studies.


An open letter to Paul Connett and the anti-fluoride movement

Paul Connett and Vyvyan Howard have, through the local Fluoride Free New Zealand activist group, published an open letter addressed to NZ scientists and educators (see An Open Letter To NZ Scientists And Educators). It is strange to encourage scientific exchanges through press releases but if they are seriously interested in an exchange of informed scientific opinion on the research they mention I am all for it.

In fact, I renew my offer to Paul Connett for a new exchange on the new relevant research along the lines of the highly successful scientific exchange we had in 2013/2014, summarised in Connett & Perrott (2014) The Fluoride Debate.

Connett and Howard say they felt “let down” by the reception they received on their 2018 visit. But they should realise this sort of ridicule is inevitable when a supposedly scientific message is promoted by activist fringe groups with known funding from big business (in this case the “natural”/alternative health industry). The science should be treated more respectfully and discussed in a proper scientific forum, or via a proper scientific exchange, rather than at political-style activist meetings.

It is this sort of respectful, informed and open scientific exchange I am offering to Paul Connett and Vyvyan Howard.

Connett and Howard argue that there has recently been “a dramatic change in the quality of these [fluoride] studies.” I agree that new research occurs all the time and there is plenty of scope for upgrading the scientific exchange we had in 2013/2014 to cover that new research. Consideration of the new research requires the objective, critical and intelligent consideration scientists are well used to, and this is not helped by activist propaganda meetings. So I encourage Connett and Howard to accept my offer. After all, if they are confident in their own analysis of this research what do they have to lose?

Inaccuracies in “open letter”

One can see an “Open Letter” as displaying a willingness to enter into a proper scientific exchange. However, Connett and Howard’s open letter includes inaccuracies and misinformation on the new research, which simply demonstrates that a one-sided presentation cannot present the research findings properly.

For example, they misrepresent the 2014 New Zealand fluoridation review of Eason et al (2014), Health effects of water fluoridation: A review of the scientific evidence. Even to the extent of mistaking the authors (the review was not authored by Gluckman & Skegg as they claim) and misrepresenting the small mistake made in the summary, which was later corrected. That attitude does not bode well for the proper consideration of the research.

Connett and Howard concentrate on new research relating child IQ to fluoride intake but completely ignore the fact that all the research comparing IQ in fluoridated and unfluoridated areas shows absolutely no effect. I have summarised the results for the three papers involved in this table.

Instead, they concentrated on a few extremely weak relationships reported in a few papers. But even here they get this wrong – for example, they say there is “a loss of about 4 IQ points in offspring for a range of 1 mg/liter of fluoride in mother’s urine.” The paper they refer to (Green et al 2019) actually found no statistically significant relationship between child IQ and maternal urinary fluoride when all children were considered. The relationship Connett and Howard mention was actually for male children only (there was no relationship for female children or for all children) and was very weak. These sorts of weak relationships are commonly found in epidemiological research and are usually meaningless. In this case, Connett and Howard have simply cherry-picked one value and misrepresented it as applying to all children.

Both the Green et al (2019) and Till et al (2020) papers Connett and Howard refer to suffer from selecting a few weak statistically significant relationships and ignoring the larger number of non-significant relationships found in the data they investigated. Connett and Howard also completely ignored the new studies that don’t fit their claims. For example, that of Santa-Marina et al (2019), Fluorinated water consumption in pregnancy and neuropsychological development of children at 14 months and 4 years of age, which showed an opposite, positive relationship of child IQ with maternal urinary fluoride. Similarly, they ignored the large Swedish study of Aggeborn & Öhman (2020), The Effects of Fluoride in the Drinking Water, showing no effect of fluoride on IQ but positive effects on oral health and employment prospects in later life.

In conclusion, I reiterate that genuine open scientific exchanges do not take place via press release and activist meetings. However, the fact that Connett and Howard have issued an “Open Letter” could be interpreted as inviting others to participate in a proper exchange. I endorse that concept and offer Connett and Howard space for a free and open exchange on the new research at this blog site.


Data dredging, p-hacking and motivated discussion in anti-fluoride paper

Image credit: Quick Data Lessons: Data Dredging

Oh dear – another scientific paper claiming evidence of toxic effects from fluoridation. But a critical look at the paper shows evidence of p-hacking, data dredging and motivated reasoning to derive their conclusions. And it was published in a journal shown to be friendly to such poor science.

The paper is:

Cunningham, J. E. A., Mccague, H., Malin, A. J., Flora, D., & Till, C. (2021). Fluoride exposure and duration and quality of sleep in a Canadian population-based sample. Environmental Health, 1–10.

Data dredging

This study used data from a Canadian database – the Canadian Health Measures Survey. Databases with large numbers of variables tempt researchers to dredge for data or relationships which confirm their biases. Despite the loss of statistical meaning inherent in this approach, data dredging, or data mining, is quite common in epidemiological studies.

Cunningham et al (2021) looked for relationships using two separate measures of fluoride exposure and four different measures of possible sleep disturbance. They found a “statistically significant” (p<0.05) relationship between lower sleep duration and water fluoride, but no relationships for higher sleep duration, trouble sleeping or daytime sleepiness with either water fluoride or urinary fluoride. Their results for the logistic regression analysis are summarised in this figure (error bars crossing an Odds Ratio value of 1.0 indicate that the relationship is not statistically significant at p<0.05).

Of the 8 relationships investigated only 1 was statistically significant.
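Incidentally, the “error bars crossing 1” rule in the figure note is just the confidence-interval version of the significance test. A minimal sketch in Python using synthetic data (not the CHMS data used by Cunningham et al):

# An odds ratio is statistically significant at p<0.05 only if its
# 95% CI excludes 1. Synthetic stand-in data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=500)            # stand-in exposure measure
y = rng.binomial(1, 0.3, size=500)  # stand-in binary sleep outcome

model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
or_est = np.exp(model.params[1])
ci_low, ci_high = np.exp(model.conf_int()[1])  # 95% CI on the odds ratio

significant = not (ci_low <= 1.0 <= ci_high)
print(f"OR = {or_est:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), significant: {significant}")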

p-hacking

I discussed the problem of p-hacking in Statistical manipulation to get publishable results.

With a large dataset, one can inevitably find relationships that satisfy the p<0.05 criterion – because this p-value is meaningless when multiple relationships are considered. One can even find such “statistically significant relationships” when random datasets are investigated (see Science is often wrong – be critical, I don’t “believe” in science – and neither should you, The promotion of weak statistical relationships in science and Can we trust science). Once multiple relationships are investigated, the chance of finding accidental relationships is much greater than the 1 in 20 signified by the p<0.05 value.

So, one of the 8 relationships above satisfied the p<0.05 criterion when considered alone. But as part of multiple investigations, the chance of finding such a relationship by chance is much greater than 1 in 20.
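The arithmetic is simple enough to check. Assuming, for illustration, that the 8 tests are independent:

# Chance of at least one "significant" result from 8 independent tests
# at p<0.05 when there is no real effect anywhere:
p_any = 1 - (1 - 0.05) ** 8
print(round(p_any, 2))  # ~0.34 - about 1 in 3, not 1 in 20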

Motivated reasoning

This paper smacks of motivated reasoning. The authors obviously have a commitment to the concept that fluoride causes problems with the pineal gland and drag up anything they can find in the literature to support this – without critically assessing the quality of the cited work or even mentioning that the cited studies were made at much higher fluoride concentrations in non-human animals. In effect, they are attempting to convert very weak results, obtained by data dredging and p-hacking, into a fact. They are attempting to make a silk purse out of a sow’s ear.

This research group is not new to this game. I commented on this in my critique of another sleep disorder paper from the group.

Many of the same researchers are listed as authors on both papers – yet Cunningham et al (2021) cite the previous paper as if it were an independent study. They say “As far as we are aware, this is only the second human study investigating the effects of fluoride exposure on sleep outcomes,” which is simply disingenuous considering the involvement of the same researchers in both papers.

Both these papers were also published in the same journal – Environmental Health – a pay-to-publish journal that is known to be friendly to anti-fluoride researchers and to use very sympathetic peer reviewers. The chief editor, Philippe Grandjean, is well known for his opposition to fluoridation. I commented on his refusal to consider a paper of mine that critiqued an anti-fluoride paper published in his journal (see Fluoridation not associated with ADHD – a myth put to rest).

Conclusion

Yet another very weak study, published in an anti-fluoride friendly pay-to-publish journal with poor peer review. Despite the weaknesses due to data dredging, p-hacking and motivated reasoning, anti-fluoride activists will cite the single “statistically significant” result as gospel and ignore the 7 relationships that are not significant. As for inadequate consideration of confounders or other risk-modifying factors, this study ignores completely the fact that city size and geographic factors have a strong effect on both sleep patterns and water fluoride concentrations (see Perrott 2018). Such inadequate consideration of confounders is another common problem in epidemiological studies.

Oh, well, we are not a rational species. More a rationalising one. And in such areas motivated rationalisation and confirmation bias are rife.


Embarrassing knock-back of second draft review of possible cognitive health effects of fluoride

We have come to expect exaggeration of scientific findings in media reports and institutional press releases. But it can also be a problem in original scientific publications, where findings are reported in an unqualified or exaggerated way. Image Credit: Curbing exaggerated reporting

This is rather embarrassing for a US group attempting to get the science right about possible toxic effects of fluoride. It’s also embarrassing for the anti-fluoride activists who have “jumped the gun” and been citing the group’s draft review as if it was reliable when it is not.

The US National Academies of Sciences, Engineering, and Medicine (NAS) have released their peer-review of the revised US National Toxicity Program (NTP) draft on possible neurodevelopmental effects of fluoride (see Review of the Revised NTP Monograph on the Systematic Review of Fluoride Exposure and Neurodevelopmental and Cognitive Health Effects).

This is the second attempt by the NTP reviewers to get acceptance of their draft and it has now been knocked back by the NAS peer reviewers for a second time.

Diplomatic but damning peer-review

Of course, the NAS peer reviewers use diplomatic language but the peer review is quite damning. It criticises the NTP for ignoring some of the important recommendations in the first peer review. One quite critical omission was the lack of response to the request that the NTP explain how the monograph can be used (or not) to inform water fluoridation concentrations. The second NAS peer review firmly states that the NTP:

“should make it clear that the monograph cannot be used to draw any conclusions regarding low fluoride exposure concentrations, including those typically associated with drinking-water fluoridation.”

And:

“Given the substantial concern regarding health implications of various fluoride exposures, comments or inferences that are not based on rigorous analyses should be avoided.”

It seems to me there is some internal politics involved and some of the NTP authors may be promoting their own, possibly anti-fluoride, agenda. Certainly, the revised NTP draft monograph continues to obfuscate this issue. It continues to state that “fluoride is presumed to be a cognitive neurodevelopmental hazard to humans” – a clause which anti-fluoride campaigners consistently quote out of context. Yes, it does state that this is based on findings demonstrating “that higher fluoride exposure (e.g., >1.5 mg/L in drinking water) is associated with lower IQ and other cognitive effects in children.” But this is separated from the other fact that the findings on cognitive neurodevelopment for “exposures in ranges typically found in drinking water in the United States (0.7 mg/L for optimally fluoridated community water systems)” are “inconsistent, and therefore unclear.”

Monograph exaggerates by enabling unfair cherry-picking

So, you see the problem. The draft NTP monograph correctly refers to IQ and other cognitive effects in children exposed to excessive levels of fluoride. The draft also correctly refers to the lack of evidence for such effects at lower fluoride exposure levels typical of community water fluoridation. But in different places in the document.

This enables activist cherry-picking to support an anti-fluoride agenda, and that is a fault of the document itself. It should clearly state that the monograph cannot be used to draw any conclusion at these low exposure levels. This is strongly expressed in the peer reviewers’ comments.

I find the blanket “presumed to be a hazard for humans” quite misleading. For example, no one says that calcium is “presumed to be a cardiovascular hazard to humans.” Or that selenium is “presumed to be a cardiovascular or neurological hazard to humans.” Or what about magnesium – would you accept that it is a “presumed neurological hazard to humans?” Would you accept that iron is a “presumed cardiovascular, cancer, kidney or erectile dysfunction hazard to humans?” Yet all those problems have been reported for humans at high intake levels of these elements.

No, we sensibly accept that various elements and microelements have beneficial, or even essential, effects on humans at reasonable intake levels. Then we sensibly warn that these same elements can be harmful at excessive intake. To proclaim that any of these elements is “presumed” to be hazardous, without clearly saying at excessive intake levels, is simply distorting or exaggerating the data.

What does “presumed” mean?

A lot of readers find the use of “presumed” strange. But its meaning is related to the levels of evidence found by reviewers.

No, don’t believe those anti-fluoride activists who falsely claim that “presumed” is the highest level of evidence and that the finding should be treated as factual. They are simply wrong.

Some idea of the word’s use is presented in this diagram from the NTP revised draft monograph.

So “presumed” means that the evidence for the effect is moderate. That the effect is not factual or known. But as further evidence comes in, the ranking of fluoride as a hazard may increase or decline.

As the monograph bases this “presumed” rating solely on evidence from areas of endemic fluorosis, where fluoride intake levels are high, it is correct to avoid stating the effects as factual. For example, consider these images from areas of endemic fluorosis in China (taken from a slide presentation by Xiang 2014):

Clearly, people in these areas suffer a range of health effects related to the high fluoride intake. The cognitive effects like IQ loss from these areas could result from these other health effects, not directly from fluoride (although excessive fluoride intake leads to the health effects).

So we can “presume” that fluoride (in areas of endemic fluorosis where fluoride intake is excessive) is a “cognitive neurodevelopmental hazard for humans,” but we cannot factually state that the neurodevelopmental effects are directly caused by fluoride. That would require further scientific work to elucidate the specific mechanisms involved in creating that effect.


The promotion of weak statistical relationships in science

Image credit: Correlation, Causation, and Their Impact on AB Testing

Correlation is never evidence for causation – but, unfortunately, many scientific articles imply that it is. While paying lip service to the correlation-causation mantra, some (possibly many) authors end up arguing that their data are evidence for an effect based solely on the correlations they observe. This is one of the reasons for the replication crisis in science, where contradictory results are reported and results cannot be replicated by other workers (see I don’t “believe” in science – and neither should you).

Career prospects, institutional pressure and the need for public recognition all encourage scientists to publish poor quality work that they then use to claim they have found an effect. The problem is that the public, the news media and even many scientists simply do not properly scrutinise the published papers. In most cases they don’t have the specific skills required for this.

There is nothing wrong with doing statistical analyses and producing correlations. However, such correlations should be used to suggest future, more meaningful and better-designed research like randomised controlled trials (see Smith & Ebrahim 2002, Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers). They should never be used as “proof” of an effect, let alone used to argue that the correlation is evidence to support regulations and advise policymakers.

Hunting for correlations

However, researchers will continue to publish correlations and make great claims for them because they face powerful incentives to promote even unreliable research results. Scientific culture and institutional pressures demand that academic researchers produce publishable results. This pressure is so great they will often clutch at straws to produce correlations even when the initial statistical analysis produces none. They will end up “torturing the data.”

These days epidemiological researchers use large databases and powerful statistical software in their search for correlations. Unfortunately, this leads to data mining which, by suitable selection of variables, makes the discovery of statistically significant correlations easy. The data mining approach also means that the often-cited p-values are meaningless. P-values estimate the probability that a relationship occurs by chance and are often cited as evidence of the “robustness” of the correlations. But that probability is much greater when researchers resort to checking a range of variables, and this isn’t properly reflected in the p-values.

Where data mining occurs, even to a limited extent, researchers are simply attempting to make a silk purse out of a sow’s ear when they support their correlations merely by citing a p-value < 0.05, because these values are meaningless in such cases. The fact that so many of these authors ignore more meaningful results from their statistical analyses (like R-squared values, which indicate the extent to which the correlation “explains” the variation in their data) underlines their deceptive approach.
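The point about R-squared is easy to demonstrate. A minimal sketch with synthetic data: with enough data points a very weak relationship can give an impressive p-value while explaining almost none of the variance.

# A low p-value says nothing about explanatory power.
# Synthetic data for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)  # weak true effect buried in noise

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"p = {fit.pvalues[1]:.2g}, R-squared = {fit.rsquared:.3f}")
# Typically p < 0.001 while R-squared is only ~0.04: "significant"
# but explaining almost none of the variance.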

Poor statistical relationships

Consider the correlations below – two data sets are taken from a published paper; the other four use random data provided by Jim Frost in his book Regression Analysis: An Intuitive Guide.

You can probably guess which correlations were from real data (J and M) because there are so many more data points. All of these correlations have low p-values – but, of course, those selected from random data sets resulted from data mining and the p-values are therefore meaningless because they are just a few of the many checked. Remember, a p-value < 0.05 means that the probability of a chance effect is one in twenty, and more than twenty variable pairs were checked in this random dataset.

The other two correlations are taken from Bashash et al (2017). They do not give details of how many other variables were checked in the dataset used but it is inevitable that some degree of data mining occurred. So, again, the low p-values are probably meaningless.

J provides the correlation of General Cognitive Index (GCI) scores in children at age 4 years with maternal prenatal urinary fluoride and M provides the correlation of children’s IQ at age 6–12 y with maternal prenatal urinary fluoride. The paper has been heavily promoted by anti-fluoride scientists and activists. None of the promoters have made a critical, objective, analysis of the correlations reported. Paul Connett, director of the Fluoride Action Network, was merely supporting his anti-fluoride activist bias when he uncritically described the correlations as “robust.” They just aren’t.

There is a very high degree of scattering in both these correlations, and the R-squared values indicate they cannot explain any more than about 3 or 4% of the variance in the data. Hardly something to hang one’s hat on, or to be used to argue that policymakers should introduce new regulations controlling community water fluoridation or ban it altogether.

In an effort to make their correlations look better these authors imposed confidence intervals on the graphs (see below). This Xkcd cartoon on curve fitting gives a cynical take on that. The grey areas in the graphs may impress some people but they do not hide the wide scatter of the data points. The confidence intervals refer to estimates of the regression coefficient, but when it comes to using the correlations to predict likely effects one must use the prediction intervals, which are very large (see Paul Connett’s misrepresentation of maternal F exposure study debunked). In fact, the estimated slopes in these graphs are meaningless when it comes to predictions.

Correlations reported by Bashash et al (2017). The regressions explain very little of the variance in the data and cannot be used to make meaningful predictions.
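For those who want to see the difference between the two kinds of interval, here is a minimal sketch using statsmodels and synthetic data (not the Bashash et al measurements):

# Confidence interval of the fitted mean vs prediction interval for
# an individual outcome. Synthetic stand-in data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 3, 200)                   # e.g. a urinary F range
y = -2 * x + rng.normal(scale=10, size=200)  # weak signal, large scatter

fit = sm.OLS(y, sm.add_constant(x)).fit()
frame = fit.get_prediction(sm.add_constant(x)).summary_frame(alpha=0.05)

print("mean CI width:           ", (frame["mean_ci_upper"] - frame["mean_ci_lower"]).mean())
print("prediction interval width:", (frame["obs_ci_upper"] - frame["obs_ci_lower"]).mean())
# The prediction interval is several times wider than the shaded
# confidence band shown in such figures.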

In critiquing the Bashash et al (2017) paper I must concede that at least they made their data available – the data points in the two figures. While they did not provide full or proper results from their statistical analysis (for example they didn’t cite the R-squared values) the data does at least make it possible for other researchers to check their conclusions.

Unfortunately, many authors simply cite p-values and possible confidence intervals for the estimate of the regression coefficient without providing any data or images. This is frustrating for the intelligent scientific reader attempting to critically evaluate their claims.

Conclusions

We should never forget that correlations, no matter how impressive, do not mean causation. It is very poor science to suggest they do.

Nevertheless, many researchers resort to correlations they have managed to glean from databases, usually with some degree of data mining, to claim they have found an effect and to get published. The drive to publish means that even very poor correlations get promoted and are used by ideologically or career-minded scientists, and by activists, to attempt to convince policymakers of their cause.

Image credit: Xkcd – Correlation

Remember, correlations are never evidence of causation.


Can we trust science?

Image credit: Museum collections as research data

Studies based simply on statistically significant relationships found by mining data from large databases are a big problem in the scientific literature. Problematic because data mining, or worse, data dredging, easily produces relationships that are statistically significant but meaningless. And problematic because authors wishing to confirm their biases and promote their hypotheses conveniently forget the warning that correlation is not evidence for causation and go on to promote their relationships as proof of effects. Often they seem to be successful in convincing regulators and policymakers that these relationships should result in regulations. Then there are the activists who don’t need convincing but will willingly, and tiresomely, promote these studies if they confirm their agendas.

Even random data can provide statistically significant relationships

The graphs below show the fallacy of relying only on statistically significant relationships as proof of an effect. They show linear regression results for a number of data sets. One data set is taken from a published paper – the rest use random data provided by Jim Frost in his book Regression Analysis: An Intuitive Guide.

All these regressions look “respectable.” They have low p-values (less than the conventional 0.05 limit) and the R-squared values indicate they “explain” a large fraction of the variance – up to 49%. But the regressions are completely meaningless for at least 7 of the 8 data sets because the data were randomly generated and have no relevance to real physical measurements.

This should be a warning that correlations reported in scientific papers may be quite meaningless.
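Anyone with a statistics package can reproduce this sort of thing. A minimal sketch: generate purely random “predictors”, keep the one with the smallest p-value, and it will usually clear p<0.05 with a respectable-looking R-squared.

# Data mining manufactures "significant" regressions from pure noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, n_candidates = 15, 40  # small sample, many variables to try
y = rng.normal(size=n)

best = min(
    (sm.OLS(y, sm.add_constant(rng.normal(size=n))).fit() for _ in range(n_candidates)),
    key=lambda fit: fit.pvalues[1],
)
print(f"best of {n_candidates}: p = {best.pvalues[1]:.3f}, R-squared = {best.rsquared:.2f}")
# With 40 tries the best p-value is usually well under 0.05, and with
# only 15 points the R-squared can look impressive - yet it is all noise.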

Can you guess which of the graphs is based on real data? It is actually the graph E – published by members of a North American group currently publishing data which they claim shows community water fluoridation reduces child IQ. This was from one of their first papers where they claimed childhood ADHD was linked to fluoridation (see Malin, A. J., & Till, C. 2015. Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association).

The group used this paper to obtain funding for subsequent research. They obviously promoted this paper as showing real effects – and so have the anti-fluoride activists around the world, including the Fluoride Action Network (FAN) and its director Paul Connett.

But the claims made for this paper, and its promotion, are scientifically flawed:

  1. Correlation does not mean causation. Such relationships in large datasets often occur by chance – hell, they even occur with random data as the figure above shows.
  2. Yes, the authors argue there is a biologically plausible mechanism to “explain” their association. But that is merely cherry-picking to confirm a bias, and there are other biologically plausible mechanisms they did not consider which would say there should not be an effect. The unfortunate problem with these sorts of arguments is that they are used to justify the findings as “proof” of an effect, violating the warning that correlation is not causation.
  3. There is the problem of correcting for confounders or other risk-modifying factors. While acknowledging the need for future studies considering other confounders, the authors considered their choice of socio-economic factors sufficient, and their peer reviewers limited their suggestion of other confounders to lead. However, when geographic factors were included in a later analysis of the data the reported relationship disappeared.

Confounders often not properly considered

Smith & Ebrahim (2002) discuss this problem in an article – Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers. The title itself indicates how the poor use of statistics and unwarranted promotion of statistical analyses can be used to advance scientific careers and promote bad science in the public media.

These authors say:

“it is seldom recognised how poorly the standard statistical techniques “control” for confounding, given the limited range of confounders measured in many studies and the inevitable substantial degree of measurement error in assessing the potential confounders.”

This could be a problem even for studies where a range of confounders is included in the analyses. But Malin & Till (2015) considered the barest minimum of confounders and didn’t include ones which would be considered important to ADHD prevalence. In particular, they ignored geographic factors, and these were shown to be important in another study using the same dataset. Huber et al (2015) reported a statistically significant relationship of ADHD prevalence with elevation. These relationships are shown in this figure.

Of course, this is merely another statistically significant relationship – not proof of a real effect and no more justified than the one reported by Malin and Till (2015). But it does show an important confounder that Malin & Till should have included in their statistical analysis.

I did my own statistical analysis using the data sets of Malin & Till (2015) and Huber et al (2015) and showed (Perrott 2018) that inclusion of geographic factors left no statistically significant relationship of ADHD prevalence with fluoridation as suggested by Malin & Till (2015). Their study was flawed and it should never have been used to justify funding for future research on the effect of fluoridation. Nor should it have been used by activists promoting an anti-fluoridation agenda.
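The logic of that re-analysis is easy to illustrate. A minimal sketch with synthetic data (not the Malin & Till dataset): an omitted variable drives the outcome and happens to correlate with exposure, so the exposure looks “significant” until the confounder is included.

# Omitted-variable confounding: "elevation" drives the outcome and
# correlates with "fluoridation". Synthetic data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
elevation = rng.normal(size=n)
fluoridated = (elevation + rng.normal(size=n) < 0).astype(float)  # lowland areas more fluoridated
adhd = -0.5 * elevation + rng.normal(size=n)                      # outcome driven by elevation only

naive = sm.OLS(adhd, sm.add_constant(fluoridated)).fit()
adjusted = sm.OLS(adhd, sm.add_constant(np.column_stack([fluoridated, elevation]))).fit()

print(f"fluoridation p, unadjusted: {naive.pvalues[1]:.3g}")   # typically "significant"
print(f"fluoridation p, adjusted:   {adjusted.pvalues[1]:.3g}")  # typically not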

But, then again, derivation of a statistically significant relationship by Malin & Till (2015) did get them published in the journal Environmental Health which, incidentally, has sympathetic reviewers (see Some fluoride-IQ researchers seem to be taking in each other’s laundry) and an anti-fluoridation chief editor – Philippe Grandjean (see Special pleading by Philippe Grandjean on fluoride). It also enabled the promotion of their research via institutional press releases, newspaper articles and the continual activity of anti-fluoridation activists. Perhaps some would argue this was a good career move!

Conclusion

OK, the faults of the Malin & Till (2015) study have been revealed – even though Perrott (2018) is studiously ignored by the anti-fluoride North American group, which has continued to publish similar statistically significant relationships between measures of fluoride uptake and measures of ADHD or IQ.

But there are many published papers – peer-reviewed papers – which suffer from the same faults and get similar levels of promotion. They are rarely subject to proper post-publication peer-review or scientific critique. But their authors get career advancement and scientific recognition out of their publication. And the relationships are promoted as evidence for real effects in the public media.

No wonder members of the public are so often confused by the contradictory reporting, the health scares of the week, they are exposed to.

No wonder many people feel they can’t trust science.


I don’t “believe” in science – and neither should you

We should be very careful about naively accepting claims made by the mainstream media – but this is also true of scientific claims. We should approach them intelligently and critically, and not merely accept them on faith.

I cringe every time I read an advocate of science asserting they “believe in science.” Yes, I know they may be responding to an assertion made by supporters of religion or pseudoscience. But “belief” is the wrong word because it implies trust based on faith and that is not the way science works.

Sure, those asserting this may argue that they have this belief because science is based on evidence, not faith. But that is still a copout because evidence can be used to draw conclusions or make claims that are still not true. Anyway, published evidence may be weak, misleading or poorly interpreted.

Here is an example of this dilemma taken from the Vox article Hyped-up science erodes trust. Here’s how researchers can fight back.

The figure is based on data published in Schoenfeld, J. D., & Ioannidis, J. P. A. (2013). Is everything we eat associated with cancer? A systematic cookbook review. American Journal of Clinical Nutrition, 97(1), 127–134.

It is easy to cite a scientific article, for example, as evidence that wine protects one from cancer. Or that it, in fact, causes cancer. Unfortunately, the scientific literature is full of such studies with contradictory conclusions, usually based on real data and statistical analyses which show a significant relationship. But if it is easy to find such studies which can be claimed as evidence of opposite effects, what good is a “belief” in science? All that simple “belief” does is provide a scientific source for one’s own beliefs – an exercise in confirmation bias.

This figure should be a warning to approach published findings in fields like nutritional epidemiology and environmental epidemiology critically and intelligently. One should simply not take them as factual – we should not “believe” in them simply because they are published in scientific journals.

Schoenfeld, & Ioannidis (2013) say of the studies they investigated that:

“the large majority of these studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence.”

They discuss problems such as the “pressure to publish,” the undervaluation or non-reporting of negative results, and “biases in the design, execution and reporting of studies” because nutritional ingredients viewed as “unhealthy” may be demonized.

The authors warn that:

“studies that narrowly meet criteria for statistical significance may represent spurious results, especially when there is large flexibility in analyses, selection of contrasts, and reporting.”

And:

“When results are overinterpreted, the emerging literature can skew perspectives and potentially obfuscate other truly significant findings.”

They warn that these sorts of problems may be:

“especially problematic in areas such as cancer epidemiology, where randomized trials may be exceedingly difficult and expensive to conduct; therefore, more reliance is placed on observational studies, but with a considerable risk of trusting false-positive”

These comments are very relevant to the consideration of recent scientific studies claiming a link between community water fluoridation and cognitive deficits. Studies that are heavily promoted by anti-fluoridation activists and, more importantly for scientific readers, by the authors of these studies themselves and their institutions. I have discussed specific problems in previous posts about the results from the Till group and their promotion by the authors.

The merging of pseudoscience with science

We make an issue of countering pseudoscience with science but in the process are often oblivious to the fact that the two tend to merge – even for professional scientists. After all, we are human and all have our own biases to confirm and our jobs to advance.

This is a black and white contrast of science with pseudoscience promoted by Skeptics. It’s worth comparing this with the reality of the scientific world.

Do scientists always follow the evidence? Don’t they sometimes start with the conclusion and look for evidence to support it – even clutching at the straws of weak evidence (statistically weak relationships in environmental epidemiological studies which are promoted as proof of harmful effects)?

Oh for the ideal scientist who embraces criticism. Sure, they are out there but so many refuse to accept criticism, “circle the wagons” and end up unfairly and emotively attacking their critics. I describe one example in When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics.

Are claims always conservative and tentative? Especially when scientists have a career or institution to promote. And institutions, with their press releases, are a big part of this problem of overpromotion. Unfortunately, in environmental epidemiology, some scientists will take weak research results to argue that they prove a cause and then request regulation by policymakers. Specifically, there is the case of weak scientific data from Till’s research group being used to promote regulatory actions that confirm their biases.

Unfortunately, scientists with biases to confirm find it quite easy to ignore or downgrade the evidence which doesn’t fit. They may even work to prevent publication of countering evidence (see for example Fluoridation not associated with ADHD – a myth put to rest).

Conclusion

I could go on taking each point in order. But, in reality, I think such absolute claims about science are just not realistic. The scientific world is not that perfect.

In the end, the intelligent scientific reader must approach even the published literature very critically if they are to truly sift the wheat from the chaff.


Science is often wrong – be critical

Activists, and unfortunately many scientists, use published scientific reports like a drunk uses a lamppost – more for support than illumination

Uncritical use of science to support a preconceived position is widespread – and it really gets up my nose. I have no respect for the person, often an activist, who uncritically cites a scientific report. Often they will cite a report which they have read only the abstract of – or not even that. Sometimes commenters will support their claims by producing “scientific evidence” which are simply lists of citations obtained from PubMed or Google Scholar.

[Yes, readers will recognise this is a common behaviour with anti-fluoride activists]

Unfortunately, this problem is not restricted to activists. Too often I read scientific papers with discussions where authors have simply cited studies that support, or they interpret as supporting, their own preconceived ideas or hypotheses. Compounding this scientific “sin” is the habit of some authors who completely refuse to cite, or even discuss, studies producing evidence that doesn’t fit their scientific prejudices.

Publication does not magically make scientific findings or ideas “true” – far from it. The serious reader of scientific literature must constantly remember that the chances are high that published conclusions or findings are false. John Ioannidis makes this point in his article Why most published research findings are false. Ioannidis concentrates on the poor use, or misuse, of statistics. This is a constant problem in scientific writing – and it certainly underlines the fact that even scientists will consciously or unconsciously manipulate their data to confirm their biases. They are using statistical analysis the way a drunk uses a lamppost – for support rather than illumination.

Poor studies often used to fool policymakers

These problems are often not easily understood by scientists themselves, but the situation is much worse for policymakers. They are not trained in science and don’t have the scientific or statistical experience required for a proper critical analysis of claims made to them by activists. Yet they are often called on to make decisions which rely on the acceptance, or rejection, of scientific claims (or claims about the science).

An example of this is a draft (not peer-reviewed) paper by Grandjean et al – A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children.

These authors have an anti-fluoride activist position and are campaigning against community water fluoridation (CWF). Their paper uses their own studies, which report very poor and rare statistical relationships of child IQ with fluoride intake, as “proof” of causation sufficiently strong to advocate for regulatory guidelines. Unsurprisingly, their recommended guidelines are very low – much lower than those common with CWF.

Sadly, their sciencey-sounding advocacy may convince some policymakers. It is important that policymakers be exposed to a critical analysis of these studies and their arguments. The authors will obviously not do this – they are selling their own biases. I hope that any regulator or policymaker required to make decisions on these recommendations has the sense to call for an independent, objective and critical analysis of the paper’s claims.

[Note: The purpose of the medRxiv preprints of non-peer-reviewed articles is to enable and invite discussion and comments that will help in revising the article. I submitted comments on the draft article over a month ago (Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”) and have had no response from the authors.  This lack of response to constructive critiques is, unfortunately, common for this group. I guess one can only comment that scientists are human.]

Observational studies – exploratory fishing expeditions

A big problem with published science today is that many studies are nothing more than observational exploratory studies using existing databases which, by their nature, cannot be used to derive causes. Yet they can easily be used to derive statistically significant links or relationships. These can be used to write scientific papers but they are simply not evidence of causes.

Properly designed studies, with proper controls and randomised populations properly representing different groups, may provide reasonable evidence of causal relationships – but most reported studies are not like this. Most observational studies use existing databases with non-random populations where selection and confounding with other factors are huge problems. Authors are often silent about selection problems and may claim to control for important confounding factors, but it is impossible to include all confounders. The databases used may not include data for relevant confounders, and authors themselves may not properly select all relevant confounders for inclusion.

This sort of situation makes some degree of data mining likely. This occurs when a number of different variables and measures of outcomes are considered in the search for statistically significant relationships. Jim Frost illustrated the problems with this sort of approach: using a set of completely fictitious random data he was able to obtain a statistically significant relationship with very low p-values and R-squared values showing the explanation of 61% of the variance (see Jim Frost – Regression Analysis: An Intuitive Guide).

That is the problem with observational studies where some degree of data mining is often involved. It is possible to find relationships which look good, have low p-values and relatively high R-squared values, but are entirely meaningless. They represent nothing.
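A minimal sketch of that kind of demonstration (an illustration in the spirit of Frost’s example, not his exact data): greedily select the best few of a pool of random “predictors” and the final R-squared looks substantial even though every variable is pure noise.

# Forward stepwise selection on pure noise inflates R-squared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, pool_size, n_keep = 30, 100, 5
y = rng.normal(size=n)
pool = rng.normal(size=(n, pool_size))

chosen = []
for _ in range(n_keep):  # greedily add whichever variable most improves the fit
    best = max(
        (j for j in range(pool_size) if j not in chosen),
        key=lambda j: sm.OLS(y, sm.add_constant(pool[:, chosen + [j]])).fit().rsquared,
    )
    chosen.append(best)

final = sm.OLS(y, sm.add_constant(pool[:, chosen])).fit()
print(f"R-squared from {n_keep} 'selected' noise variables: {final.rsquared:.2f}")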

So readers and users of science should beware. The findings they are given may be completely false, contradictory, or at least meaningless in quantitative terms (as is the case with the relationships produced by the Grandjean et al (2020) group discussed above).

A recent scientific article provides a practical example of this problem. Different authors used the same surgical database but produced completely opposite findings (see Childers et al (2020), Same Data, Opposite Results?: A Call to Improve Surgical Database Research). By themselves, each study may have looked convincing. Both used the same large database from the same year. Over 10,000 samples were used in both cases and both studies were published in the same journal within a few months. However, the inclusion and exclusion criteria used were different. Large numbers of possible covariates were considered, but these differed. Similarly, different outcome measures were used.

Readers interested in the details can read the original study or a Skeptical Scalpel blog article, Dangerous pitfalls of database research. However, Childers et al (2020) describe how the number of these sorts of observational studies “has exploded over the past decade.” As they say:

“The reasons for this growth are clear: these sources are easily accessible, can be imported into statistical programs within minutes, and offer opportunities to answer a diverse breadth of questions.”

However:

“With increased use of database research, greater caution must be exercised in terms of how it is performed and documented.”

“. . . because the data are observational, they may be prone to bias from selection or confounding.”

Problems for policymakers and regulators

Given that many scientists do not have the statistical expertise to properly assess published scientific findings it is understandable for policymakers or regulators to be at a loss unless they have proper expert advice. However, it is important that policymakers obtain objective, critical advice and not simply rely on the advocates who may well have scientific degrees. Qualifications by themselves are not evidence of objectivity and, undoubtedly, we often do face situations where scientists become advocates for a cause.

I think policymakers should consciously seek out a range of scientific expert advice, recognising that not all scientists are objective. Given the nature of current observational research, its use of existing databases and the ease with which researchers can obtain statistically significant relationships I also think policymakers should consciously seek the input of statisticians when they seek help in interpreting the science.

Surely they owe this to the people they represent.


Even studies from endemic fluorosis areas show fluoride is not harmful at levels used in fluoridation

Most of the claims made by anti-fluoride propagandists are simply wrong. Image source: Fluoridation and the ‘sciency’ facts of critics

Anti-fluoride propagandists continually cite studies from areas of endemic fluorosis in their arguments against community water fluoridation (CWF). But if they critically looked at the data in those papers they might get a shock. Invariably the published data, even from areas of endemic fluorosis, shows fluoride is safe at the concentrations relevant to CWF.

I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.

In this article, I discuss the studies in FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) which report relationships between child IQ and fluoride exposure in areas of endemic fluorosis. There are eleven such studies in the FAN list but only six of them provide sufficient data to enable independent statistical analysis.

While those six studies do show a statistically significant (p<0.05) negative relationship of IQ with fluoride intake, those results are not relevant to CWF because the fluoride exposure levels are much higher than ever occur with CWF.

However, it is possible to investigate whether the relationships are significant at the lower concentrations more relevant to CWF. I have done this with these six studies and illustrate the results obtained with the graphs below, using data extracted from Xiang et al (2003). (This study is often used by anti-fluoride campaigners.)

The red data points in the figures below are for lower concentrations of urinary F or creatinine-adjusted urinary F. The range for the red points is still quite a bit larger than the urinary F levels measured for children in areas where CWF is used. However, we can see that the relationships in these lower ranges are not statistically significant (results from the regression analyses are cited in the figures).
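For those interested, the approach is straightforward to sketch: fit the regression over the full range, then refit using only the values below a cut-off relevant to CWF. Synthetic stand-in data here – the real analysis used values extracted from Xiang et al (2003).

# Full-range vs restricted-range regression. Synthetic data in which
# IQ only declines above 2 mg/L - illustration, not the Xiang data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
urinary_f = rng.uniform(0.2, 5.0, 300)  # mg/L, endemic-fluorosis range
iq = 105 - 2.0 * np.clip(urinary_f - 2.0, 0, None) + rng.normal(scale=10, size=300)

def slope_p(mask):
    fit = sm.OLS(iq[mask], sm.add_constant(urinary_f[mask])).fit()
    return fit.params[1], fit.pvalues[1]

print("full range:   slope %.2f, p = %.3g" % slope_p(urinary_f > 0))
print("below 2 mg/L: slope %.2f, p = %.3g" % slope_p(urinary_f < 2.0))
# The full-range fit is "significant"; restricted to the lower range the
# slope is near zero and non-significant - the pattern reported above.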

 

This was also the case with the other studies from FAN’s list which provided sufficient data for regression analyses. I summarise the results obtained for five of these studies in the figure below.

This shows that none of the studies found statistically significant relationships with fluoride exposure at the low fluoride concentrations relevant to CWF. The situation is basically the same for the sixth study, Mustafa et al (2018), which reports average school performance for a range of subjects for children in Khartoum state, Sudan. However, it is hard to know what the safe limit for fluoride exposure is in that climate (for climatic reasons the upper permissible F level in drinking water is set at 0.33 ppm for Khartoum state) and the sample numbers are low. Interested readers should consult my report – Analysis of FAN’s 65 brain-fluoride studies.

Conclusion

Anti-fluoride campaigners often cite FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) in their attempts to argue that fluoridation is bad for the child’s brain. But in this series of articles, Anti-fluoride 65 brain-fluoride studies not evidence against fluoridation, I have shown that their arguments are false.

In Child IQ in countries with endemic fluorosis imply fluoridation is safe I showed that while IQ and other health problems may occur where fluoride exposure is very high in areas of endemic fluorosis, the reports themselves implicitly assume that the low fluoride exposure in the “low fluoride” areas is safe. It is the data from these areas, not the “high fluoride” areas, that are relevant to CWF. So despite the heavy use of these articles by FAN and anti-fluoride activists, these studies do not prove what they claim. If anything, these studies show CWF is safe.

In this article, I considered a few of these studies which included data relevant to low fluoride exposure. When the low fluoride exposure data (relevant to CWF) from these studies were statistically analysed, none showed significant relationships of child IQ to fluoride exposure. That confirms the implicit assumption in these studies that there is no negative effect of fluoride exposure on child IQ at these low levels.

Finally, in Canadian studies confirm findings of Broadbent et al (2015) – fluoridation has no effect on child IQ I summarise results from the only three studies comparing the IQ of children living in fluoridated and unfluoridated areas. These studies were made in New Zealand and Canada and the results were the same: no statistically significant differences in child IQ were found.

However, the authors of the Canadian studies ignored this result and instead used questionable statistical methods to search for possible relationships between fluoride exposure and child IQ. Most of the relationships they report were not statistically significant but, nevertheless, they and their supporters have simply ignored this and concentrated on the few statistically significant relationships.

Anti-fluoride activists currently rely strongly on these studies and heavily promote them. I will discuss these few studies further in my next article.


Canadian studies confirm findings of Broadbent et al (2015) – fluoridation has no effect on child IQ

Readers may remember the scathing reaction of anti-fluoride campaigners to the paper of Broadbent et al (2015). This was the first paper to compare child and adult IQ levels for people living in fluoridated and unfluoridated areas.

The anti-fluoride campaigners were extremely rude in their reaction – accusing the authors of fraud and claiming the paper was “fatally flawed.” Interestingly, several scientists known for their anti-fluoride bias also launched attacks – but more respectably, as letters to the editor of the journal. For example, see the letters by Osmunson et al (2016), Grandjean (2015), and Menkes et al (2014).

And why? Simply because Broadbent et al (2015) showed there was no difference in the IQ of people living in fluoridated and unfluoridated areas – and that the studies from areas of endemic fluorosis used by anti-fluoride activists to argue against CWF were simply not relevant (see Child IQ in countries with endemic fluorosis imply fluoridation is safe).

But isn’t it strange? Two more recent papers (Green et al 2019 & Till et al 2020) have effectively repeated the work of Broadbent et al (2015). They found the same result – no difference in the IQ of children living in fluoridated and unfluoridated areas. Yet there was simply no reaction, no condemnation, from anti-fluoride activists or the anti-fluoride scientists.

No condemnation because these anti-fluoride critics promote these papers for other reasons. But this underlines how biased the critics of the Broadbent et al (2015) paper were.

I have completed a detailed analysis of all the 65 studies the Fluoride Action Network (FAN) lists as evidence that community water fluoridation (CWF) is harmful to child IQ. The full analysis is available for download as the document Analysis of FAN’s 65 brain-fluoride studies.

In this article, I discuss the studies in FAN’s list (see “FLUORIDE & IQ: THE 65 STUDIES”) that compare child IQ in “fluoridated” and “unfluoridated” areas of Canada. There are only two such studies, but I include that of Broadbent et al (2015) (which FAN’s list ignores) for completeness. All three studies found no difference in the IQ of children living in fluoridated and unfluoridated areas.

Comparing IQ of children in fluoridated and unfluoridated areas

The table below summarises the results reported by all three studies – Broadbent et al (2015), Green et al (2019), and Till et al (2020).

Table 1: Results from studies comparing IQ of children and adults from fluoridated and unfluoridated areas

Notes:
Data from Green et al (2019) for children whose mothers lived in fluoridated or unfluoridated areas during pregnancy.
Data from Till et al (2020) for children either breastfed or formula-fed as babies while living in fluoridated or unfluoridated areas.

There is absolutely no difference in IQ due to fluoridation. Remember, the standard deviations of the values in the table are about 13 to 16 IQ points, so the tiny mean differences are negligible.
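To put that in perspective, here is a minimal sketch of the standardised effect size (Cohen’s d) for a hypothetical mean difference; the means are invented and the 14.5-point standard deviation is simply the mid-point of the 13 to 16 range quoted above:

```python
# A minimal sketch, with invented means, of how small a fluoridation
# "effect" would be relative to the spread of IQ scores.
mean_fluoridated, mean_unfluoridated = 100.4, 100.0  # hypothetical group means
pooled_sd = 14.5                                     # assumed pooled standard deviation

cohens_d = (mean_fluoridated - mean_unfluoridated) / pooled_sd
print(f"Cohen's d = {cohens_d:.3f}")  # about 0.03 - far below the 0.2 'small effect' convention
```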

I have presented all the results from these papers graphically below. FSIQ (Full Scale IQ) is the standard IQ measure; VIQ (Verbal IQ) and PIQ (Performance IQ) are its components.

The only statistically significant differences between fluoridated and unfluoridated areas were for VIQ of breastfed babies (VIQ higher for fluoridated areas) and PIQ of formula-fed babies (PIQ lower for fluoridated areas).

Anti-fluoride campaigners (and biased scientists like Grandjean) love the Green et al (2019) and Till et al (2020) papers because they reported (very weak) negative relationships of some child cognitive measures with fluoride intake (I discuss this in separate articles). This is largely a result of the statistical methods used – particularly resorting to several different cognitive measures and measures of fluoride exposure, as well as separating results by sex. It reminds me of the old saying that one can always get the results one requires by torturing the data hard enough.
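The problem is easy to demonstrate with a simulation. The sketch below (the 24-test breakdown is my illustrative assumption, not the papers’ actual analysis plan) shows how purely random data, with no true effect at all, still throw up “significant” results when enough combinations are tested:

```python
# A minimal simulation of the 'data torture' problem: with no true effect,
# testing many outcome x subgroup x exposure combinations almost guarantees
# some 'significant' p-values at the 0.05 level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_children, n_tests = 400, 24  # e.g. 4 cognitive measures x 3 exposures x 2 sexes

hits = 0
for _ in range(n_tests):
    exposure = rng.normal(size=n_children)  # pure noise - no real relationship
    outcome = rng.normal(size=n_children)
    if stats.linregress(exposure, outcome).pvalue < 0.05:
        hits += 1

print(f"{hits} of {n_tests} null tests came out 'significant'")
# Expected false positives: n_tests * 0.05, i.e. about 1-2 here
```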

I will return to the statistical problems of these and similar papers in a separate article.

Misrepresentation by anti-fluoride activists

Anti-fluoride campaigners have latched on to the two Canadian studies – often making claims that simply are not supported. But they always ignore the data shown above.

For example – this propaganda poster from FAN promoting the Green et al (2019) study.

This completely misrepresents the results of the study. No difference was found in the IQs of children from fluoridated and unfluoridated areas. These people completely ignore that result while placing unwarranted faith in the weak relationships reported elsewhere in that paper. (In fact, Green et al (2019) found a weak significant relationship only for boys – the relationships for all children and for girls were not significant. See my articles about this statistical torture).

And this FAN propaganda poster promoting the Till et al (2020) study.

Again – completely wrong. There was no difference in the IQ of formula-fed babies in fluoridated and unfluoridated areas (see Table 1 above). Even worse, FAN is misrepresenting the statistical relationships reported in this paper: there was no statistically significant relationship between child IQ and fluoride exposure for formula-fed or breastfed babies once the influence of outliers and/or confounders was considered.
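This sort of outlier sensitivity is easy to illustrate. Here is a minimal sketch using simulated data (not the Till et al values) showing how a handful of extreme points can manufacture a “significant” slope that disappears once they are removed:

```python
# A minimal sketch of how a few outliers can create a 'significant'
# regression that vanishes when they are removed. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)        # fluoride exposure, no true effect
y = 100 + rng.normal(0, 13, 100)  # IQ scores

# Add a handful of high-exposure, low-IQ outliers
x_out = np.append(x, [2.5, 2.7, 3.0])
y_out = np.append(y, [70, 68, 72])

with_out = stats.linregress(x_out, y_out)
without = stats.linregress(x, y)
print(f"With outliers:    p={with_out.pvalue:.4f}")  # typically < 0.05
print(f"Without outliers: p={without.pvalue:.4f}")   # typically not significant
```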

Misrepresentation by anti-fluoride scientists

It is understandable, I guess, that the authors of the two Canadian papers make a lot of the weak statistical relationships they reported while ignoring the fact that they did not see any effect of fluoridation. Perhaps they can be excused some bias due to professional ambition. But this underlines why sensible readers should always read the papers in this controversial area critically and intelligently. One should never rely on the public relations claims of authors and their institutes. It is sad to see how scientific biases and ambitions can lead scientists to support the claims of political activists or, worse, to attack honest scientists who do post-publication peer review of the studies (see, for example, When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics).

I am also very critical of scientific supporters of these studies who have their own anti-fluoride motivations. Philippe Grandjean, for example, was one of the authors very critical of the Broadbent et al (2015) paper, yet he completely ignored the fact that the Green et al (2019) and Till et al (2020) papers report exactly the same result – no effect of fluoridation on child IQ. Grandjean often makes public comments supporting the claims of anti-fluoride campaigners like FAN. He also behaved in a scientifically unethical way when he refused to allow my critique of the flawed paper by Malin & Till (2015) to be published in Environmental Health – the journal of which he is chief editor (see Fluoridation not associated with ADHD – a myth put to rest).

I am repeating myself, but it is a matter of “reader beware.” Readers should not simply rely on the scientific “standing” of authors, who are only human and suffer from the same biases as everyone else. They should read these papers for themselves and make up their own minds about what the data actually say.
