Tag Archives: Connett

Anti-fluoridationist Paul Connett misrepresents NZ data

Slide 110 from Paul Connett’s presentation prepared for his planned meeting at Parliament Buildings last February

Here is another post in my series critiquing a PowerPoint presentation by Paul Connett – a leading US anti-fluoridation activist.

Paul prepared this for a meeting in New Zealand Parliament buildings last February. Although only three MPs turned up, his presentation is important as it summarises almost all the arguments used by anti-fluoridation activists.

Connett claims NZ data shows fluoridation ineffective

Connett argues that the evidence that community water fluoridation (CWF) is effective in reducing tooth decay is weak. He covers this in slides 96-110, but in this post I will deal only with the New Zealand evidence he uses (slides 108-110). Paul’s presentation can be downloaded for those wishing to look at it in detail – see Prof Paul Connett Power Point Presentation to Parliament 22nd Feb 2018.

The total New Zealand evidence Connett presents for this is a graphic obtained from his NZ offsiders, Fluoride Free NZ (FFNZ):

We know how unreliable FFNZ is as a source and the data is obviously cherry-picked. But what is the truth? What do the NZ School Dental statistics really say about the oral health of children in NZ?

I have covered this before – FFNZ misrepresentation of the MoH data is an annual event occurring each time the Ministry of Health adds its annual summary of the data to their web pages.

For a change, here is a breakdown and discussion of the 2016 data prepared by Environmental Health Indicators NZ in association with Massey University:

“Children in fluoridated areas generally have better oral health”

“Children living in communities with fluoridated drinking-water generally had better oral health than children living in non-fluoridated communities.

In 2016, around 60 percent of 5-year-olds were caries-free in their primary teeth. Rates were similar in fluoridated communities (60 percent) and non-fluoridated communities (60 percent) (Figure 1).

More Māori and Pacific Island 5-year-olds were caries-free in fluoridated communities than in non-fluoridated communities in 2016. The largest difference can be seen for Māori children.

5-year-olds had on average 1.8 decayed, missing or filled primary teeth in 2016. Children living in fluoridated communities had less decayed, missing or filled teeth than children living in non-fluoridated communities (Figure 2).

This difference is particular large for Māori children. 5-year old Māori children had on average 2.5 decayed, missing or filled teeth in fluoridated communities compared to 3.3 decayed, missing or filled teeth in non-fluoridated communities in 2016.”

I am unable to embed the Environmental Health Indicators NZ graphs, but they are essentially the same as those I presented in my article Anti-fluoridationists misrepresent New Zealand dental data – an annual event, so I reproduce that section of the article below:


What does the new data really say?

Let’s look at a summary of the data – for 5-year-olds and Year 8 children – and for the different ethnic groups listed – Māori, Pacific Island and “other” (mainly Pakeha and Asian). You can download the spreadsheets containing the data from the MoH web page – Age 5 and Year 8 oral health data from the Community Oral Health Service. We will look at the percentage of these children that are free from caries, as well as the mean decayed, missing and filled teeth (dmft and DMFT) for each group.

5-year-olds

Notice the FFNZ cherry-picking? Yes, the “Total” figures show very little difference, but if they had dared to look at the different ethnic groups their argument would not have looked so great. Fluoridation appears to be associated with an improvement of dental health from about 6% (for “Other”) to 23% (for Māori).

Year 8 children

You can see why FFNZ chose the 5-year-olds instead of Year 8 children. Even the misleading data for the “Total” group suggests an almost 20% improvement of dental health in fluoridated areas. Fluoridation appears to be associated with an improvement of dental health from about 18% (for “Other”) to 30% (for Māori).


What’s the problem with the 2009 Oral Health Survey?

Anti-fluoride activists love to hate this survey because it concluded:

“Overall, children and adults living in fluoridated areas had significantly lower lifetime experience of dental decay (ie, lower dmft/DMFT) than those in non-fluoridated areas. There was a very low overall prevalence of moderate fluorosis (about 2%; no severe fluorosis was found), and no significant difference in the prevalence of moderate fluorosis (or any of the milder forms of fluorosis) between fluoridated and non-fluoridated areas.

“These findings support international evidence that water fluoridation has oral health benefits for both adults and children. In addition, these findings should provide reassurance that moderate fluorosis is very rare in New Zealand, and that the prevalence of any level of fluorosis was not significantly different for people living in fluoridated and non-fluoridated areas.”

Yes, it covers only the period up to 2008 and it would be good to get more recent high-quality data from a similar study.

But Connett’s accusation of “cherry-picked data” is simply wrong – and dishonest. In fact, scientific principles were used to obtain a representative sample for the survey – recognising that oral health is strongly influenced by ethnic, regional and fluoridation differences.

The methods used are explained in 22 pages of the report – MoH (2010). Our Oral Health: Key findings of the 2009 New Zealand Oral Health Survey.

In contrast, the annual School Dental Data is simply a record of overall findings. There is no attempt to standardise diagnostic and reporting methods to the standard of the Oral Health Survey or scientific studies.

But, of course, it provides a lot of data which can be cherry-picked to support a specific argument or confirm a bias. FFNZ and Paul Connett have ignored all the known ethnic, social and regional differences in their cherry-picking. Consequently, their reported “findings” do not have credibility.

Conclusion

I think it is somewhat disrespectful of Paul Connett to include such a shonky bit of misrepresentation in a presentation prepared for members of parliament. It is also disrespectful in that he relies on his scientific qualifications – his Ph.D. – to give “respectability” to a scientific argument which is so easily shown to be false.

Surely our members of parliament deserve something better than this.

Although, even with members of parliament, I guess the old adage “reader (or listener) beware” applies. Sensible MPs will not accept such assurances at face value and will seek out advice on such matters from their officials and experts.

I guess we should feel pretty confident that most of our MPs are sensible in this respect. The fact they did not turn up to a meeting to hear someone well known for misrepresenting the science is telling – and this despite the fact that anti-fluoride activists were exerting strong pressure on MPs to attend.

Politicians have experienced, and learned from, the excessive lobbying, pressuring and untruthful submissions that come with being targeted by anti-science activist groups like FFNZ. They know this is exactly why local councils wanted central government to take over fluoridation decisions.

I suspect our parliamentary politicians are a little more mature than our local body politicians and now  treat such organised campaigns like water off a duck’s back.


Anti-fluoridationists’ rejection of IQ studies in fluoridated areas

US anti-fluoride activist Paul Connett claims studies cannot detect an IQ effect from fluoridated water because total fluoride intake is the real problem – but still campaigns against community water fluoridation. Image credit: MSoF “Activist Spouts Nonsense – The Evidence Supports Fluoridation”

This is another article in my critique of the presentation Paul Connett prepared to present to a meeting at Parliament in February.

I deal with his coverage of the studies of IQ effects where community water fluoridation (CWF) is used. There are now actually three such studies (Broadbent et al. 2015, Barberio et al. 2017  and  Aggeborn & Öhman 2016), but Connett pretends there is only one – the Broadbent et al. (2015) New Zealand study.

Maybe because it was the first one to provide evidence challenging his extrapolation of results from the fluoride/IQ studies in areas of endemic fluorosis (see The 52 IQ studies used by anti-fluoride campaigners) to areas where CWF is used. It is also the study which seems to have attracted the most hostility from anti-fluoride campaigners.

So here I will just be sticking with his criticism of the New Zealand study Broadbent et al (2015):

Slide 76 from Paul Connett’s presentation prepared for his February meeting at Parliament Buildings

Broadbent’s findings do not “negate all other human studies”

Paul allows emotion to get the better of him as no one is suggesting this at all. The studies Connett refers to are all from areas of endemic fluorosis (see  The 52 IQ studies used by anti-fluoride campaigners), not from areas of CWF.

Broadbent et al (2015) simply concluded that their “findings do not support the assertion that fluoride in the context of CWF programmes is neurotoxic.”  That is a modest statement and Broadbent et al. (2015) simply do not draw any conclusions about the studies Connett relies on. But, of course, Connett is upset because this and similar studies just do not support his attempt to extrapolate results from areas of endemic fluorosis to areas of CWF.

The health problems suffered by people in areas of endemic fluorosis are real and it is right they should be studied and attempts made to alleviate them. But this has absolutely nothing to do with CWF.

“Fatally flawed” charge is itself fatally flawed

Again, Paul has allowed emotions to get the upper hand. It is possible, and necessary, to critique published papers – but critiques should be evidence-based and realistic. Paul’s “fatally flawed” charge (slides 77 & 78) simply displays how much this paper has put his nose out of joint.

But let’s look at the specific “flaws” Paul (and other critics associated with the Fluoride Action Network) claim.

The two villages mindset: Paul alleges that the Broadbent et al (2015) study “essentially compared two groups.” He is stuck in the mindset of most of his 52 studies from areas of endemic fluorosis (see Fluoride & IQ: The 52 Studies): simply comparing the IQ levels of children in a village suffering endemic fluorosis with those of children in a village not suffering endemic fluorosis. This simple approach can identify statistically significant differences between the villages but provides little information on causes. For example, most of these studies used drinking water fluoride as a parameter, but there could be a whole range of other causes related to the health problems of fluorosis.

Professor Richie Poulton, current Director of the Dunedin Multidisciplinary Health and Development Research Unit

In contrast, Broadbent et al. (2015) used “General Linear models to assess the association between CWF and IQ in childhood and adulthood, after adjusting for potential confounders.” The statistical analysis involved accounting for a range of possible risk-modifying factors besides CWF. This was possible because the study was part of the Dunedin Multidisciplinary Health and Development Study – a highly reputable, long-running cohort study of 1037 people born in 1972/1973 with information covering many areas.
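
To make concrete what “adjusting for potential confounders” means in practice, here is a minimal sketch – my own illustration, with synthetic data and placeholder covariate names, not the study’s actual model or variable list. The exposure and the covariates are fitted together in a single linear model, so the coefficient for CWF is estimated with the other factors held constant.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (NOT the Dunedin study's data); covariate names are placeholders.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "iq": rng.normal(100, 15, n),
    "cwf": rng.integers(0, 2, n),               # lived in a fluoridated area (0/1)
    "fluoride_tablets": rng.integers(0, 2, n),  # took fluoride tablets (0/1)
    "toothpaste": rng.integers(0, 2, n),        # used fluoride toothpaste (0/1)
    "ses": rng.normal(0, 1, n),                 # stand-in socioeconomic measure
})

# Fitting exposure and covariates together: the "cwf" coefficient is then the
# association with IQ after adjusting for the other factors in the model.
fit = smf.ols("iq ~ cwf + fluoride_tablets + toothpaste + ses", data=df).fit()
print(fit.summary())   # each coefficient is reported with its 95% CI and p-value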

The fluoride tablets argument: Connett and other critics always raise this issue – the fact that “In New Zealand during the 1970s, when the study children were young, F supplements were often prescribed to those living in unfluoridated areas.” Often they will go further to claim that all the children in the unfluoridated area of this study were receiving fluoride tablets – something they have no way of knowing.

But the fact remains that fluoride tablets were included in the statistical analysis. No statistically significant effect was seen for them.  Overlap of use of fluoride tablets with residence in fluoridated or unfluoridated areas will have occurred and their influence would be reflected in the results found. Presumably, the effect would be to increase the confidence intervals. As the critics, Menkes et al. (2014), say “comparing groups with overlapping exposure thus compromises the study’s statistical power to determine the single effect of CWF.”  I agree. But this does not negate the findings which are reported with the appropriate confidence intervals (see below).

The point is that the simplistic argument that effects of fluoride tablets were ignored is just not correct. Their effect is reflected in the results obtained.

Potential confounders: Many poor-quality studies have ignored possible confounders, or considered only a few. This is a general problem with this sort of study – and even when attempts are made to include all the confounders the researchers consider important, a critic can always claim there may be others – especially if they do not like the results. Claims of failing to consider confounders can often be simply the last resort of armchair critics.

In this case, there is no actual reported association to be confounded (unlike my identification of this problem with the Malin & Till 2015 ADHD study – see Perrott 2017). However, Osmunson et al. (2016) specifically raised possibilities of confounding by lead, manganese, mother’s IQ and rural vs urban residence. Menkes et al. (2014) also raised the rural vs urban issue, as well as a possible effect from breastfeeding reducing fluoride intake by children in fluoridated areas. In their response, Broadbent et al (2015b & 2016) reported that a check showed no significant effect of lead or distance from the city centre and pointed out that manganese levels were too low to have an effect. Broadbent et al (2015b) also reported that no significant breastfeeding-fluoride interaction occurred.

Numbers involved: Connett claims the study was fatally flawed because “it had very few controls: 991 lived in the fluoridated area, and only 99 in non-fluoridated” (Slide 77). But those numbers simply reflect the makeup of the long-running Dunedin study itself – they weren’t chosen by Broadbent and his co-workers. That is the real world and is hardly a “fatal flaw.”

The 95% confidence intervals

Yes, statisticians always love to work with large numbers, but in the real world we take what we have. Smaller numbers mean less statistical confidence in the result – but given that Broadbent et al (2015) provide the results together with confidence intervals, it is silly to describe this as fatally flawed. These were the results given in the paper for the parameter estimates of the factors of interest:

Factor | Parameter estimate | 95% confidence interval | p-value
Area of residence | -0.01 | -3.22 to 3.20 | .996
Fluoride toothpaste use | 0.70 | -1.03 to 2.43 | .428
Fluoride tablets | 1.55 | -0.38 to 3.49 | .116
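
As a quick sanity check – my own back-of-the-envelope calculation, nothing from the paper – the reported p-values can be roughly recovered from each estimate and its 95% confidence interval using a normal approximation. An interval that straddles zero, as all three above do, corresponds to p > 0.05, i.e. no statistically significant effect.

from scipy import stats

def implied_p(estimate, ci_low, ci_high):
    se = (ci_high - ci_low) / (2 * 1.96)        # a 95% CI spans roughly 3.92 standard errors
    z = estimate / se
    return 2 * (1 - stats.norm.cdf(abs(z)))     # two-sided p-value

# "Area of residence" row above: estimate -0.01, 95% CI -3.22 to 3.20
print(round(implied_p(-0.01, -3.22, 3.20), 3))  # about 0.995, consistent with the reported .996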

Connett did not refer to the confidence intervals reported by Broadbent et al (2015). However, Grandjean and Choi (2015) did describe them as “wide” – probably because they were attempting to excuse the extrapolation of “fluoride as a potential neurotoxic hazard” from areas of endemic fluorosis to CWF.

The argument over confidence intervals can amount to straw clutching – a “yes but” argument which says “the effect is still there but is small and your study was not large enough to find it.” That argument can be never-ending, but it is worth noting that Aggeborn & Öhman (2016) made a similar comment about wide confidence intervals for all fluoride/IQ studies, including that of Broadbent et al. (2015). Aggeborn & Öhman (2016) had a very large sample (almost 82,000 people were involved in the cognitive ability comparisons) and reported confidence intervals of -0.18 to 1.03 IQ points (compared with -3.22 to 3.20 IQ points reported by Broadbent et al 2015). Based on this they commented, “we are confident to claim that we have estimated a zero-effect on cognitive ability.”

The “yes but” argument about confidence intervals may mean one is simply expressing faith in an effect so small as to be meaningless.
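
The sample-size point deserves to be made concrete. Other things being equal, the width of a confidence interval shrinks roughly in proportion to 1/√n, which is why a study with tens of thousands of participants can report a much narrower interval than one with only 99 people in its non-fluoridated group. A rough illustration follows – the standard deviation is an assumption chosen only to show the scaling, not a value from either study.

import math

def ci_half_width(sd, n):
    return 1.96 * sd / math.sqrt(n)   # half-width of a 95% CI for a mean

sd = 15.0   # a typical IQ standard deviation (assumption)
for n in (99, 991, 82000):
    print(n, round(ci_half_width(sd, n), 2))
# 99    -> about 2.95 IQ points
# 991   -> about 0.93
# 82000 -> about 0.10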

Total fluoride exposure should have been used: Connett says (slide 77) “Broadbent et al did not use the proper measure of fluoride exposure. They should have used total F exposure.  Instead, they used only exposure from fluoridated water.” Osmunson et al. (2016) make a similar point, claiming that the study should not have considered drinking water fluoride concentration but total fluoride intake. They go so far as to claim “the question is not whether CWF reduces IQ, but whether or not total fluoride intake reduces IQ.”

This smacks of goalpost moving – especially as the argument has specifically been about drinking water fluoride and most of the studies they rely on from areas of endemic fluorosis specifically used that parameter.

In their response to this criticism Broadbent et al (2016) calculated estimates for total daily fluoride intake and used them in their analysis which “resulted in no meaningful change of significance, effect size, or direction in our original findings.”

It’s interesting to note that Connett and his co-workers appear to miss completely the point about “wide” confidence intervals made by Grandjean and Choi (2015). Instead, they have elevated their argument to the claim that fluoride intake is almost the same in both fluoridated and unfluoridated areas so that any study will not be able to detect a difference in IQ. Essentially they are claiming that we are all going to suffer IQ deficits whether we live in fluoridated or unfluoridated areas.

This is the central argument of their paper – Hirzy et al (2016). However, the whole argument relies on their own estimates of dietary intakes – a clear example where motivated analysts will make the assumptions that fit and support their own arguments. This argument also fails to explain why the Dunedin study found lower tooth decay in fluoridated areas.

Last time I checked, the anti-fluoride campaigners, including Connett, were still focusing on CWF – fluoride in drinking water. One would think that if they really believed this criticism they would have given up that campaign and instead devoted their energies to total fluoride intake.

Conclusions

All studies have limitations and, of course, Broadbent et al. (2015) is no exception. However, the specific criticisms made by Connett and his fellow critics do not stand up to scrutiny. Most have been responded to and shown to be wrong – mind you, this does not stop these critics from continuing to repeat them and disregard the responses.

I believe the relatively wide confidence intervals could be a valid criticism – although it does suggest a critic who is arguing for very small effects. A critic who can always claim that the confidence intervals still do not exclude their very small effect – no matter how large the study is.

In effect, the narrow confidence intervals reported by Aggeborn & Öhman (2016) should put that argument to rest for any rational person.

References

Aggeborn, L., & Öhman, M. (2016). The Effects of Fluoride in the Drinking Water.

Barberio, A. M., Quiñonez, C., Hosein, F. S., & McLaren, L. (2017). Fluoride exposure and reported learning disability diagnosis among Canadian children: Implications for community water fluoridation. Can J Public Health, 108(3),

Broadbent, J. M., Thomson, W. M., Ramrakha, S., Moffitt, T. E., Zeng, J., Foster Page, L. A., & Poulton, R. (2015). Community Water Fluoridation and Intelligence: Prospective Study in New Zealand. American Journal of Public Health, 105(1), 72–76.

Broadbent, J. M., Thomson, W. M., Moffitt, T., & Poulton, R. (2015b). Health effects of water fluoridation: a response to the letter by Menkes et al. NZMJ, 128(1410), 73–74.

Broadbent, J. M., Thomson, W. M., Moffitt, T. E., & Poulton, R. (2016). BROADBENT ET AL. RESPOND. American Journal of Public Health, 106(2), 213–214. https://doi.org/10.2105/AJPH.2015.302918

Grandjean, P., Choi, A. (2015). Letter: Community Water Fluoridation and Intelligence. Am J Pub Health, 105(4).

Hirzy, J. W., Connett, P., Xiang, Q., Spittle, B. J., & Kennedy, D. C. (2016). Developmental neurotoxicity of fluoride: a quantitative risk analysis towards establishing a safe daily dose of fluoride for children. Fluoride, 49(December), 379–400.

Malin, A. J., & Till, C. (2015). Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association. Environmental Health, 14.

Menkes, D. B., Thiessen, K., & Williams, J. (2014). Health effects of water fluoridation — how “effectively settled” is the science? NZ Med J, 127(1407), 84–86.

Osmunson, B., Limeback, H., & Neurath, C. (2016). Study incapable of detecting IQ loss from fluoride. American Journal of Public Health, 106(2), 212–213.

Perrott, K. W. (2017). Fluoridation and attention deficit hyperactivity disorder – a critique of Malin and Till ( 2015 ). Br Dent J.


A conference paper on the maternal neonatal urinary fluoride/child IQ study has problems

Image credit: Do new mothers doing a Ph.D. get enough support?

The anti-fluoride movement has certainly mobilised over the neonatal maternal urinary fluoride study which reported an association with child IQ. They see it as the best thing since sliced bread and believe it should lead to the end of fluoridation worldwide.

They also seem to be putting all their eggs in this one basket and have started a campaign aimed at stopping pregnant women from drinking fluoridated water (See Warning to Pregnant Women: Do Not Drink Fluoridated Water).

So I was not surprised to see a newsletter this morning from the Fluoride Action Network reporting another output from this study – a conference paper (most likely a poster) presented at the  3rd Early Career Researchers Conference on Environmental Epidemiology. The meeting was in Freising, Germany, on 19-20 March 2018.

I had been aware of the poster for the last week so had expected FAN to gleefully jump on it and start promoting it in their campaigns.

Here is a link to the abstract:

Thomas, D., Sanchez, B., Peterson, K., Basu, N., Angeles Martinez-Mier, E., Mercado-Garcia, A., … Tellez-Rojo, M. M. (2018). Prenatal fluoride exposure and neurobehavior among children 1-3 years of age in Mexico. Occupational and Environmental Medicine, 75(Suppl 1), A10–A10.

It’s only an abstract and it may be some time before a formal paper is published, if at all. Posters do not get much in the way of peer review and are often not followed by formal papers. So I can’t say much about the poster at this stage – I never like to assess studies on the basis of abstracts alone.

But, in this case, I have Deena Thomas’s Ph.D. thesis which was the first place the work was reported. If you are interested you can access it from this link:

Thomas, D. B. (2014). Fluoride exposure during pregnancy and its effects on childhood neurobehavior: a study among mother-child pairs from Mexico City, Mexico. University of Michigan.

I will wait for a formal paper before properly critiquing the poster, but at the moment I find a big discrepancy between the Thesis conclusions and the conclusions presented in the poster abstract.

Thesis conclusions

In her work, Deena Thomas used the Mental Development Index (MDI) which is an appropriate way of determining neurobehavioral effects in young children.

She concluded in her thesis (page 37):

“Neither maternal urinary or plasma fluoride was associated with offspring MDI scores”

And (page 38):

“This analysis suggests that maternal intake of fluoride during pregnancy does not have a strong impact on offspring cognitive development in the first three years of life.”

And further (page 48):

“Maternal intake of fluoride during pregnancy does not have any measurable effects on cognition in early life.”

So – no association found of child MDI score with maternal neonatal urinary F concentrations.

Poster conclusions

But the poster tells a different story.

The abstract concluded:

“Our findings add to our team’s recently published report on prenatal fluoride and cognition at ages 4 and 6–12 years by suggesting that higher in utero exposure to F has an adverse impact on offspring cognitive development that can be detected earlier, in the first three years of life.”

So her conclusions reported in her thesis are exactly the opposite of the conclusions reported in her conference poster!

What the hell is going on?

The data

Obviously, I do not have access to the data and she does not provide it in her thesis. But from her descriptions of the data in her thesis and her poster perhaps we can draw some tentative conclusions.

The table below displays the data description, and a description of the best-fit line determined by statistical analysis, in her thesis and her poster.

Information on data | Thomas Ph.D. thesis | Conference abstract
Number of mother/child pairs | 431 | 401
Maternal urinary F range (mg/L) | 0.110 – 3.439 | 0.195 – 3.673
Mean maternal urinary F (mg/L) | 0.896 | 0.835
Model β* | -0.631 | -2.40
Model p-value | 0.391 – not significant | –
95% CI for β | – | -4.38 to -0.40

*β is the coefficient, or slope, of the best-fit line

Conclusions

Apparently at least 30 data pairs have been removed from her thesis data to produce the dataset used for her poster. Perhaps even some data pairs were added (the maximum urinary F value is higher in the smaller data set used for the poster).

This sort of change in the data selected for the statistical analysis could easily swing the conclusion from no effect to a statistically significant effect. So the reasons for the changes to the dataset are of special interest.

Paul Connett claims this poster “strengthens” the findings reported in the Bashash paper.  He adds:

“This finding adds strength to the rapidly accumulating evidence that a pregnant woman’s intake of fluoride similar to that from artificially fluoridated water can cause a large loss of IQ in the offspring.”

But this comes only by apparently ignoring the conflicting conclusions presented in Deena Thomas’s Ph.D. thesis. We are still left with the need to explain this conflict and why a significant section of the data was removed.

To be clear – I am not accusing Thomas et al. (2018) of fiddling the data to get the result they did. Just that, given the different conclusions in her thesis and the poster,  there is a responsibility to explain the changes made to the dataset.

From the limited information presented in the poster abstract, I would think the scatter in the data could be like that seen in the Bashash et al. (2017) paper. The coefficient of the best-fit line (β) is relatively small and, while the 95% CI indicates the fit is statistically significant, its closeness to zero suggests that it is a close thing.

However, let’s look forward to getting better information on this particular study either through correspondence or formal publication of a research paper.

Other articles on the Mexican study

Fluoride, pregnancy and the IQ of offspring,
Maternal urinary fluoride/IQ study – an update,
Anti-fluoridation campaigners often use statistical significance to confirm bias,
Paul Connett “updates” NZ MPs about fluoride?
Paul Connett’s misrepresentation of maternal F exposure study debunked,
Mary Byrne’s criticism is misplaced and avoids the real issues


The 52 IQ studies used by anti-fluoride campaigners

Slide number 30 from Paul Connett’s presentation prepared for a talk at NZ Parliament buildings in February 2018.

Continuing my critique of the presentation prepared by Paul Connett for his much-publicised meeting at Parliament Buildings in February. The meeting attracted only three MPs, but his presentation is useful as it presents all the arguments anti-fluoride campaigners rely on at the moment.

My previous articles on this presentation are Anti-fluoride activist commits “Death by PowerPoint” and Paul Connett’s misrepresentation of maternal F exposure study debunked.

In this article, I deal with the argument presented in the slide above. It is an argument repeated again and again by activists. Connett has posted a more detailed list of these studies, and his description of them, in Fluoride & IQ: The 52 Studies at the Fluoride Action Network website.

Studies in areas of endemic fluorosis

All 52 studies Connett refers to are from regions of endemic fluorosis in countries like India, China, Mexico and Iran, where dietary fluoride intake is above the recommended maximum level. People in these areas suffer a range of health problems, and studies show cognitive deficits as one of them. However, a quick survey of Google Scholar shows this concern is well down the list (see Endemic fluorosis and its health effects). Only 5% of the Google Scholar hits related to health effects of endemic fluorosis considered IQ effects.

People in high fluoride areas where fluorosis is endemic suffer a range of health problems. Credit: Xiang (2014)

In most, but not all, cases the major source of fluoride in the diet is drinking water with high fluoride levels (above the WHO recommended maximum of 1.5 mg/L). Paul Connett’s logic is simply to extrapolate to the low drinking water fluoride concentrations typical of community water fluoridation (CWF). However, we do not see the other health effects, like severe dental fluorosis, skeletal fluorosis, etc., where CWF is used.

His logic also ignores the possibility that cognitive deficits may result from other health problems common in areas of endemic fluorosis. Problems such as premature births and low birth weight, skeletal fluorosis or even the psychological effect of unsightly teeth due to severe dental fluorosis.

Comparing “high” fluoride villages with “low” fluoride villages

This approach is simplistic as it simply compares a population suffering fluorosis with another population that is not. Yes, the underlying problem is the high dietary intake (mainly from drinking water) in the high fluoride villages – but that does not prove fluoride in drinking water is the direct cause of a problem. The examples discussed above, e.g., low birth weights or premature births, could be the direct cause.

It is easy to show statistically significant differences in drinking water fluoride and a whole host of fluorosis-related diseases between two villages, but that, in itself, does not prove that drinking water fluoride is the direct cause. Nor does it justify extrapolating such results to the low-concentration situations typical of CWF.

Paul Connett’s logic ignores the fact that in most of these studies the “low” fluoride villages (which the studies were treating as the control or normal situations where IQ deficits did not occur) had drinking water fluoride concentrations like those used in CWF. It also ignores (or unjustly attempts to dismiss) studies which show no cognitive deficits related to CWF.

A low fluoride concentration study showing an IQ effect

After making a big thing about the large number of studies, and when challenged over the high fluoride concentrations involved, Connett normally goes into a “yes, but” mode and attempts to transfer the credibility of “large numbers” to the very few studies which report effects at low fluoride concentrations.

He usually makes a big thing of the study by Lin et al (1991):

Lin Fa-Fu, Aihaiti, Zhao Hong-Xin, Lin Jin, Jiang Ji-Yong, M. (1991). The relationship of a low-iodine and high-fluoride environment to subclinical cretinism in Xinjiang. Iodine Deficiency Disorder Newsletter, 24–25.

Connett claims this study shows a lower IQ when the drinking water F concentration was 0.88 ppm – but the areas studied suffered from iodine deficiency, which is itself related to cognitive deficits.

The study I reviewed recently by Bashash et al (2017) (see Paul Connett’s misrepresentation of maternal F exposure study debunked) is also on Connett’s list. He doesn’t mention, however, that while an association of child IQ with prenatal maternal urinary fluoride was reported, the paper also reported no observed association of child IQ with child urinary fluoride concentrations.

Studies not showing an effect

Connett lists 7 studies which showed no effect on IQ. One of these was the well-known Broadbent et al. (2014) study from New Zealand, which he, of course, proceeds to attempt to debunk in an irrational and not very truthful manner.

He does not mention the studies from Canada (Barberio et al. 2017 ) and Sweden (Aggeborn & Öhman 2016) which also show no effect of CWF on IQ.

The 6 other studies listed are all Chinese, and not translated. This is interesting because Connett’s Fluoride Action Network has invested money and time in translating obscure Chinese papers that could support their argument of harm. They obviously did not bother translating papers which did not confirm their bias.

Conclusion

So, Connett’s 52 studies are rather a waste of time. Based in areas of endemic fluorosis, their findings are not transferable to areas where CWF is used. The quality of most papers is low and, usually, the studies are simply a comparison of two villages – one where fluorosis is endemic and a “control” village where it isn’t but where drinking water fluoride concentrations are like those used in CWF.

Connett is simply not able to properly evaluate, or in some cases even consider, studies which show no effect of fluoride on IQ, or which were made in areas where CWF is used and show no effects.


Mary Byrne’s criticism is misplaced and avoids the real issues

Image credit: BuildGreatMinds.Com

First, thanks to Mary Byrne and FFNZ for this response (see Anti-fluoride group coordinator responds to my article). Hopefully, this will help encourage some good faith scientific discussion of the issues involved in my original article (Paul Connett’s misrepresentation of maternal F exposure study debunked). I am pleased to promote such scientific exchange.

I will deal with the issues Mary raised point by point. But first, let’s correct some misunderstandings. Mary claimed I am a “fluoride promoter” and had “sought to discredit the study via his blog posts and tweets.”

  1. I do not “promote fluoride.” My purpose on this issue has always been to expose the misinformation and distortion of the science surrounding community water fluoridation (CWF). I leave promotion of health policies to the health experts and authorities.
  2. I have not “sought to discredit the study.” The article Mary responded to was a critique of the misrepresentation of that study by Paul Connett – not an attack on the study itself. This might become clear in my discussion below of the study and how it was misrepresented.

The study

The paper we are discussing is:

Bashash, M., Thomas, D., Hu, H., Martinez-Mier, E. A., Sanchez, B. N., Basu, N., … Hernández-Avila, M. (2017). Prenatal Fluoride Exposure and Cognitive Outcomes in Children at 4 and 6–12 Years of Age in Mexico. Environmental Health Perspectives, 125(9).

Anti-fluoride activists have leaped on it to promote their cause – Paul Connett, for example, claimed it should lead to the end of community water fluoridation throughout the world! But this is not the way most researchers, including the paper’s authors, see the study. For example, Dr. Angeles Martinez-Mier, co-author and one of the leading researchers,  wrote this:

1. “As an individual, I am happy to go on the record to say that I continue to support water fluoridation”
2. “If I were pregnant today I would consume fluoridated water, and that if I lived in Mexico I would limit my salt intake.”
3.  “I am involved in this research because I am committed to contribute to the science to ensure fluoridation is safe for all.”

Was the reported association statistically significant?

Mary asserts:

“Perrott claims that the results were not statistically significant but his analysis is incorrect.”

That is just not true. I have never claimed their reported association was not statistically significant.

I extracted the data they presented in their Figures 2 and 3A and performed my own regression analysis on it. This confirmed that the associations were statistically significant (something I never questioned). The figures below, illustrating my analysis, were presented in a previous article (Maternal urinary fluoride/IQ study – an update). My results were close to those reported by Bashash et al. (2017).

For Fig. 2:

My comment was – “Yes, a “statistically significant” relationship (p = 0.002) but it explains only 3.3% of the variation in GCI (R-squared = 0.033).”

For Fig 3A:

My comment was – “Again, “statistically significant” (p = 0.006) but explaining only 3.6% of the variation in IQ (R-squared = 0.0357).”
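
For readers who want to see what such a check involves, here is a minimal sketch of the kind of univariate regression described above – run on synthetic stand-in data, since the real values have to be digitised from the paper’s figures. The point it illustrates is that a slope can come out “statistically significant” while the R-squared stays tiny.

import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data (NOT the study's data): a weak slope buried in large scatter.
rng = np.random.default_rng(0)
urinary_f = rng.uniform(0.2, 3.0, 300)                 # maternal urinary F, mg/L
iq = 100 - 2.5 * urinary_f + rng.normal(0, 13, 300)    # child GCI/IQ stand-in

fit = sm.OLS(iq, sm.add_constant(urinary_f)).fit()
print(fit.params[1])            # slope: IQ change per 1 mg/L urinary F
print(fit.pvalues[1])           # p-value for that slope
print(fit.rsquared)             # fraction of IQ variance explained (R-squared)
print(np.sqrt(fit.mse_resid))   # standard error of the estimate (scatter about the line)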

So I in no way disagreed with the study’s conclusions quoted by Mary that:

” higher prenatal fluoride exposure, in the general range of exposures reported for other general population samples of pregnant women and nonpregnant adults, was associated with lower scores on tests of cognitive function in the offspring at age 4 and 6–12 y.”

I agree completely with that conclusion as it is expressed. But what Mary, Paul Connett and all other anti-fluoride activists using this study ignore is the real relevance of this reported association. The fact that it explains only about 3% of the IQ variance. I discussed this in the section The small amount of variance explained in my article.

This is a key issue which should have been clear to any reader or objective attendee of Paul Connett’s meeting where the following slide was presented:

Just look at that scatter. It is clear that the best-fit line explains very little of it. And the 95% confidence interval for that line (the shaded area) does not represent the data as a whole. The comments on statistical significance and confidence intervals regarding the best-fit line do not apply to the data as a whole.

Finally, yes I did write (as Mary quotes) in my introductory summary that “the study has a high degree of uncertainty.” Perhaps I should have been more careful – but my article certainly makes clear that I am referring to the data as a whole – not to the best fit line that Connett and Mary concentrate on. The regression analyses indicate the uncertainty in that data by the low amount of IQ variance explained (the R squared values) and the standard error of the estimate (about 12.9 and 9.9 IQ points for Fig 2 and  Fig 3A respectively).

The elephant in the room – unexplained variance

Despite being glaringly obvious in the scatter, this is completely ignored by Mary, Paul Connett and other anti-fluoride activists using this study. Yet it is important for two reasons:

  • It brings into question the validity of the reported statistically significant association
  • It should not be ignored when attempting to apply these findings to other situations like CWF in New Zealand and the USA.

Paul Connett actually acknowledged (in a comment on his slides) that I was correct about the association explaining such a small amount of the variance but argued:

  • Other factors will be “essentially random with respect to F exposure,” and
  • The observed relationship will not be changed by the inclusion of these other factors.

I explained in my article Paul Connett’s misrepresentation of maternal F exposure study debunked how both these assumptions were wrong. In particular, using as one example the ADHD-fluoridation study I have discussed elsewhere (see Perrott, 2017). I hope Mary will refer to my article and discussion in her response to this post.

While ignoring the elephant in the room – the high degree of scatter – Mary and others have limited their consideration to the statistical significance and confidence intervals of the reported association – the association which, despite being statistically significant, explains only 3% of the variation (obvious from the slide above).

For example, Mary quotes from the abstract of the Bashash et al., (2017) paper:

“In multivariate models we found that an increase in maternal urine fluoride of 0.5mg/L (approximately the IQR) predicted 3.15 (95% CI: −5.42, −0.87) and 2.50 (95% CI −4.12, −0.59) lower offspring GCI and IQ scores, respectively.”

I certainly agree with this statement – but please note it refers only to the model they derived, not the data as a whole. Specifically, it applies to the best-fit lines shown in Fig 2 and Fig 3A as illustrated above. The figures in this quote relate to the coefficient, or slope, of the best fit line.

Recalculating from 0.5 mg/L to 1 mg/L, this simply says that 95% of the coefficient values, or slopes, of the best-fit lines resulting from different resampling should be in the range -10.84 to -1.74 GCI points (Fig 2) and -8.24 to -1.18 IQ points (Fig 3A).

[Note – these are close to the CIs produced in my regression analyses described above – an exact correspondence was not expected because digital extraction of data from an image is never perfect and a simple univariate model was used]

The cited CI figures relate only to the coefficient – not the data as a whole. And, yes, the low p-value indicates the chance of the coefficient, or slope, of the best-fit line being zero is extremely remote. The best fit line is highly significant, statistically. But it is wrong to say the same thing about its representation of the data as a whole.

This best-fit line explains only 3% of the variance in IQ – and a simple glance at the figures shows the cited confidence intervals for that line simply do not apply to the data as a whole.

The misrepresentation

That brings us back to the problem of misrepresentation. We should draw any conclusions about the relevance of the data in the Bashash et al. (2017) study from the data as a whole – not just from the small fraction of the IQ variance explained by the fitted line.

Paul Connett claimed:

“The effect size is very large (decrease by 5-6 IQ points per 1 mg/L increase in urine F) and is highly statistically significant.”

But this would only be true if the model used (the best-fit line) truly represented all the data. A simple glance at Fig 2 in the slide above shows that any prediction from data with such a large scatter is not going to be “highly statistically significant.” Instead of relying on the CIs for the coefficient or slope of the line, Connett should have paid attention to the standard error for estimates from the data as a whole, given in the regression statistics of the summary output. For Fig. 2, this is 12.9 IQ points. This would have produced an estimate of “5-6 ± 36 IQ points, which is not statistically significantly different to zero IQ points,” as I described in my article.

Confusion over confidence intervals

Statistical analyses can be very confusing, even (or especially) to the partially initiated. We should be aware of the specific data referred to when we cite confidence intervals (CIs).

For example, Mary refers to the CI values for the coefficients, or slopes, of the best fit lines.

Figs 2 and 3A in the Bashash et al., (2017) paper include confidence intervals (shaded areas) for the best fit lines (these take into account the CIs of the constants as well as the CIs of the coefficients). That confidence interval describes the region of 95% probability for where the best-fit line will be.

Neither of those confidence intervals applies to the data as a whole, as a simple glance at Figs 2 and 3A will show. In contrast, the “prediction interval” I referred to in my article does. It is based on the standard error of the estimate listed in the regression statistics. Dr. Gerard Verschuuren demonstrated this in this figure from his video presentation.
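
To make the distinction concrete, here is a hedged sketch – again with synthetic stand-in data, not the study’s – of how the two intervals come out of a standard OLS fit. The “mean” interval is the narrow confidence band around the fitted line; the “obs” interval is the much wider prediction interval for individual observations, and it is the one driven by the standard error of the estimate.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 3.0, 300)                 # exposure stand-in
y = 100 - 2.5 * x + rng.normal(0, 13, 300)     # outcome stand-in with large scatter

fit = sm.OLS(y, sm.add_constant(x)).fit()
pred = fit.get_prediction(sm.add_constant(np.array([0.5, 1.0, 1.5])))
frame = pred.summary_frame(alpha=0.05)

# mean_ci_lower/upper: 95% CI for the fitted line (narrow)
# obs_ci_lower/upper:  95% prediction interval for individual observations (wide)
print(frame[["mean", "mean_ci_lower", "mean_ci_upper", "obs_ci_lower", "obs_ci_upper"]])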

Mary is perfectly correct to claim “it is the average effect on the population that is of interest” – but that is only half the story as we are also interested in the likely accuracy of that prediction. The degree of scatter in the data is also relevant because it indicates how useful this average is to any prediction we make.

Given that the model described by Bashash et al. (2017) explained only 3% of the IQ variance, while the standard error of the estimate was relatively large, it is misleading to suggest any “effect size” predicted by that model would be “highly significant” – this ignores the true variability in the reported data. When this is considered, the effect size (and 95% CIs) is actually “5-6 ± 36 IQ points, which is not statistically significantly different to zero IQ points.”

Remaining issues

I will leave these for now as they belong more to a critique of the paper itself (all published papers can be critiqued) rather than the misrepresentation of the paper by Mary Byrne and Paul Connett. Mary can always raise them again if she wishes.

So, to conclude, Mary Byrne is correct to say that the model derived by Bashash et al., (2017) predicts that an increase of “fluoride level in urine of 1 mg/L could result in a loss of 5-6 IQ points” – on average. But she is wrong to say this prediction is relevant to New Zealand, or anywhere else, because when we consider the data as a whole that loss is “5-6 ± 36 IQ points.”

I look forward to Mary’s response.


Anti-fluoride activist commits “Death by PowerPoint”

We have all sat through boring, and counterproductive, PowerPoint presentations. Boring because the presenter breaks all the rules relevant to the preparation of visual displays. And counterproductive because, in the end, the audience does not remember any of the information the presenter attempts to convey.

David JP Phillips gives some relevant advice on PowerPoint preparation in the video above and similar advice is available online.  All this advice is very helpful for anyone preparing a presentation – although constant reminders of the points and frequent practice or experience are needed to take it on board. The PowerPoint programme seems to tempt even the best presenter to make fundamental mistakes which can reduce the effectiveness of their visual material.

Learning from bad examples

Examples of bad PowerPoint presentations are ubiquitous – but I urge readers to critically consider this recent example: the PowerPoint presentation the anti-fluoride campaigner Paul Connett prepared for his recent talk to a meeting in the NZ Parliament buildings. Fluoride Free NZ (FFNZ) has provided a link to Connett’s presentation – Prof Paul Connett Power Point Presentation to Parliament 22nd Feb 2018.

It has 155 slides for presentation, with another 24 slides held in reserve in case he had time. The sheer number of slides, let alone the extreme detail on individual slides, violates a basic presentation rule to start with.

Well, I say “prepared” but the recent Fluoride Free NZ newsletter describes it as “The Power Point presentation that Prof Connett showed” to the MPs meeting. I find that hard to believe as only three MPs turned up to the meeting. In such situations, a reasonable person gives up on a detailed presentation and resorts to having a chat with the people who did turn up.

An example of what not to do in a PowerPoint presentation – source  Prof Paul Connett Power Point Presentation to Parliament 22nd Feb 2018

I urge interested readers to download it and have a look. Critique it from the point of view of the advice given by David JP Phillips above. It really is a bad presentation and I don’t believe any objective person could have taken anything meaningful from it. Treat this as a learning exercise.

Mind you, these presentations are usually simply “singing to the choir” – presented to true believers. All indications are that the three MPs who attended that meeting can be described that way. Other MPs were probably well aware that Connett’s presentations given on his recent speaking tour had no relevance to their work – and probably most were aware of his bias and unreliability as a source of scientific information, anyway.

Second reading of fluoridation bill

Parliament will shortly undertake the second reading of the Health (Fluoridation of Drinking Water) Amendment Bill. It is currently 15th on the order paper.  This bill does not deal with the science of fluoridation – parliament wisely leaves that to the experts who can advise them when necessary. The bill simply concerns the procedure for decision-making – specifically suggesting transferring the decision from councils to District Health Boards. The Parliamentary Health Committee has already consulted widely on this – and FFNZ and Paul Connett have had every opportunity to present their views. In fact, Paul Connett and other opponents of fluoridation gerrymandered the system to get much longer presentation times than other submitters. I guess they have plenty of experience of making submissions and know all the tricks.

Note:

Here I am simply treating Paul Connett’s PowerPoint presentation as an example of how not to use PowerPoint. Later I will probably return to his presentation and deal with specific areas where he misrepresents the science.


New fluoride debate falters

Characters debate the “fluoride conspiracy” in Kubrick’s Dr Strangelove

What is it with these anti-fluoride campaigners – and particularly their leaders? They make a song and dance about having “science on their side.” They will heavily promote the latest research and papers if they can argue that they confirm their bias. And they will email politicians or make submissions to local bodies making scientific claims – often with citations and long lists of references.

But we simply can not get them to enter into a good faith scientific discussion of the sort I suggested in Do we need a new fluoride debate?

I thought this was going to happen. Bill Osmunson, the current Director of the Fluoride Action Network (FAN), had agreed and even produced an initial article for posting. But he has now pulled out and asked me not to post his article. Apparently, my critique of a recent paper by him and his colleagues from FAN (see Flaw and porkie in anti-fluoride report claiming a flaw in Canadian study) was the straw that broke the camel’s back as far as he was concerned.

Talk about tiptoeing around a discussion partner. How can one have a discussion with someone this sensitive?

Excuses, excuses!

This is the explanation he gives for his withdrawal from the planned exchange:

“I have second thoughts about a discussion with you.  Do not publish my comments.*

After reading your comments in response to Neurath, it became obvious that you have no interest in discovering the truth or protecting the public.  Nor do you have reasonable judgment to evaluate research.

You do have good mechanical skills, but not judgment.

You correctly take weaker arguments and point out they are weak.  But you do not comment or appreciate the main more powerful issues.  Your comments make it sound like there is no value because some points have lower value.  Only a person who carefully rereads McLaren and Neurath, and then your comments understands some of your points are valid and you have missed others which are powerful.

In addition, you use derogatory, unprofessional mocking terms to attack the person instead of the issues.  I’m not interested in being your porky or sparky or pimp.

You are unprofessional and are not worth the time.”

  • The “comments” Bill refers to are a 55-page pdf file he sent me as the first post in our exchange. We were discussing a shorter form more suitable for a blog post when he decided to back out.

Mind you, in a previous email he had acknowledged that his mates (presumably in FAN) were unhappy about him participating in this good-faith scientific exchange. He wrote:

“Several people have told me not to respond to you, because you are unprofessional with your statements and comments.  You attack the messenger instead of the message and you have such severe bias and faith in fluoride that you must have worked for the tobacco companies to learn your strident blind bias.  
OK, I gave you a try once before and found you to be violent with your personal attacks and lack of judgment.”
 Sounds like “excuses, excuses,” to me. Surely I am not such a horrible person? I asked Bill to identify anything in my exchange with Paul Connett (see The Fluoride Debate) where I had behaved in the way he charged. He couldn’t. And I challenge anyone else to identify such behaviour on my part in that exchange.

Bill Osmunson and his mates claim I behaved badly in this exchange with Paul Connett – but they refuse to give a single example

I can only conclude that the people at FAN are unable to provide good scientific arguments to support their case. They may well produce documents with lists of citations and references with “sciency” sounding claims. But they will not allow their claims to undergo the sort of critique normal in the scientific community.

Still – I am willing to be proven wrong. If Bill feels that he doesn’t have the scientific background for this sort of exchange, perhaps Chris Neurath, Harvey Limeback or one of the other authors from FAN of the article I critiqued in Flaw and porkie in anti-fluoride report claiming a flaw in Canadian study could take his place.

The offer is open.

Debunking a “classic” fluoride-IQ paper by leading anti-fluoride propagandists


Three of the paper’s authors – Quanyong Xiang (1st Left), Paul Connett (2nd Left) and Bill Hirzy (far right) – preparing to bother the EPA.

Anti-fluoride groups and “natural”/alternative health groups and websites are currently promoting a new paper by several leading anti-fluoride propagandists. For two reasons:

  1. It’s about fluoride and IQ. The anti-fluoride movement recently decided to give priority to this issue in an attempt to get recognition of possible cognitive deficits, rather than dental fluorosis,  as the main negative health effect of community water fluoridation. They want to use the shonky sort of risk analysis presented in this paper to argue that harmful effects occur at much lower concentrations than currently accepted scientifically. Anti-fluoride guru, Paul Connett, has confidently predicted that this tactic will cause the end of community water fluoridation very soon!
  2. The authors are anti-fluoride luminaries – often described (by anti-fluoride activists) as world experts on community water fluoridation and world-class scientists. However, the scientific publication record for most of them is sparse and this often self-declared expertise is not actually recognised in the scientific community.

This is the paper – it is available to download as a pdf:

Hirzy, J. W., Connett, P., Xiang, Q., Spittle, B. J., & Kennedy, D. C. (2016). Developmental neurotoxicity of fluoride: a quantitative risk analysis towards establishing a safe daily dose of fluoride for children. Fluoride, 49(December), 379–400.


Co-author Bruce Spittle – Chief Editor of Fluoride – the journal of the International Society for Fluoride Research

I have been expecting publication of this paper for some time – Paul Connett indicated he was writing it during our debate in 2013/2014. FAN newsletters have from time to time lamented the difficulty he and Bill Hirzy were having getting a journal to accept the paper. Connett felt reviewers’ feedback from these journals was biased. In the end, he has plumped for publication in Fluoride – which has a poor reputation because of its anti-fluoride bias and poor peer review. But, at last, Connett and Hirzy have got their paper published and we can do our own evaluation of it.

The authors are:


Co-author David C. Kennedy – past president of the International Academy of Oral Medicine and Toxicology – an alternative dentist’s group.

Bill Hirzy, Paul Connett and Bruce Spittle are involved with the Fluoride Action Network (FAN), a political activist group which receives financial backing from the “natural”/alternative health industry. Bruce Spittle is also the  Chief Editor of Fluoride – the journal of the International Society for Fluoride Research Inc. (ISFR). David Kennedy is a Past President of the International Academy of Oral Medicine and Toxicology which is opposed to community water fluoridation.

Quanyong Xiang is a Chinese researcher who has published a number of papers on endemic fluorosis in China. He participated in the 2014 FAN conference where he spoke on endemic fluorosis in China.


Much of the anti-fluoridation propaganda used by activists relies on studies done in areas of endemic fluorosis. Slide from a presentation by Q. Xiang to an anti-fluoride meeting organised by Paul Connett’s Fluoride Action Network in 2014.

Critique of the paper

I have submitted a critique of this paper to the journal involved. Publication obviously takes some time (and, of course, it may be rejected).

However, if you want to read a draft of my submitted critique you can download a copy from Researchgate – Critique of a risk analysis aimed at establishing a safe dose of fluoride for children.  I am always interested in feedback – even (or especially) negative feedback – and you can give that in the comments section here or at Researchgate.

(Please note – uploading a document to Researchgate does not mean publication. It is simply an online place where documents can be stored. I try to keep copies of my documents there – unpublished as well as published. It is very convenient).

In my critique I deal with the following issues:

The authors have not established that fluoride is a cause of the cognitive deficits reported. What is the point in doing this sort of risk analysis if you don’t actually show that drinking water F is the major cause of cognitive deficits? Such an analysis is meaningless – even dangerous, as it diverts attention away from the real causes we should be concerned about.

All the reports of cognitive deficits cited by the authors are from areas of endemic fluorosis where drinking water fluoride concentrations are higher than where community water fluoridation is used. There are a whole range of health problems associated with dental and skeletal fluorosis of the severity found in areas of endemic fluorosis. These authors are simply extrapolating data from endemic areas without any justification.

The only report of negative health effects they cite from an area of community water fluoridation relates to attention deficit hyperactivity disorder (ADHD) and that paper does not consider important confounders. When these are considered the paper’s conclusions are found to be wrong – see ADHD linked to elevation not fluoridation, and ADHD link to fluoridation claim undermined again.

The data used by Hirzy et al. (2016) are very poor. Although they claim that a single study from an area of endemic fluorosis shows a statistically significant correlation between IQ and drinking water fluoride, that claim is not supported by any statistical analysis.

The statistically significant correlation of IQ with urinary fluoride they cite from that study explains only a very small fraction of the variability in IQ values (about 3%), suggesting that fluoride is not the major, or maybe not even a significant, factor for IQ. It is very unlikely that the correlation between IQ and water F would be any better.
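To put that in perspective, here is a quick back-of-the-envelope check (a minimal sketch; the ~3% figure is the one mentioned above and is used purely for illustration) converting variance explained into a correlation coefficient:

```python
import math

# A correlation explaining ~3% of the variability corresponds to r = sqrt(0.03).
variance_explained = 0.03          # fraction of IQ variability explained (figure quoted above)
r = math.sqrt(variance_explained)  # implied Pearson correlation coefficient

print(f"implied correlation r ≈ {r:.2f}")                              # ≈ 0.17 – a weak correlation
print(f"variability left unexplained ≈ {1 - variance_explained:.0%}")  # ≈ 97%
```

In other words, roughly 97% of the variation in IQ must be down to factors other than urinary fluoride.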

Confounders like iodine, arsenic, lead, child age, parental income and parental education have not been properly considered – despite the claims made by Hirzy et al. (2016).

The authors base their analysis on manipulated data which disguises the poor relationship of IQ to water fluoride. I have discussed this further in Connett fiddles the data on fluoride, Connett & Hirzy do a shonky risk assessment for fluoride, and Connett misrepresents the fluoride and IQ data yet again.

Hirzy et al. (2016) devote a large part of their paper to critiquing Broadbent et al. (2014), which showed no evidence of fluoride causing a decrease in IQ using data from the Dunedin Multidisciplinary Health and Development Study. They obviously see it as a key obstacle to their analysis. Hirzy et al. (2016) argue that dietary fluoride intake differences between the fluoridated and unfluoridated areas were too small to show an IQ effect. However, they rely on a motivated and speculative estimate of dietary intakes for this argument, and they ignore the fact that the differences were large enough to show a beneficial effect of fluoride on oral health.

Conclusion

I conclude the authors did not provide sufficient evidence to warrant their calculation of a “safe dose.” They relied on manipulated data which disguised the poor relationship between drinking water fluoride and IQ. Their arguments for their “safe dose,” and against a major study showing no effect of community water fluoridation on IQ, are highly speculative and motivated.

Similar articles


Rejection of scientific studies in online discussions


Sometimes the online discussion of scientific issues looks like a citation battle. People take sides, battle lines are drawn and the struggle commences. Each side fires barrages of citations “proving” its own argument.

The battle progresses in real-time – the proffered citations are immediately rejected and alternatives offered. One would think the other side would take time out to actually read the offered citations – but no, they are usually quickly rejected as unreliable. I also get the impression that in many cases the side offering the citation has not bothered to read it either – usually relying on its use by an ally or its coverage in a friendly online magazine.

OK, it is natural to be lazy, but wouldn’t we all learn a lot more by actually reading the citations being thrown around? And doesn’t it discredit one’s position to reject a citation out of hand for unjustified reasons?

The Logic of Science recently posted an analysis of the bad reasons people use for rejecting citations – 12 bad reasons for rejecting scientific studies. It is well worth a read – we will recognise these 12 reasons and hopefully learn not to use them ourselves in future.

Here are the 12 bad reasons:

Bad reason #1: Galileo/Columbus

“When faced with results that they don’t like, many people will invoke Galileo or Columbus and claim that they defied the mainstream view and people thought that they were crazy, but they turned out to be right. . . [However] no one thought that Galileo was crazy. He presented facts and careful observations, not conspiracies and conjecture. He did not blindly reject the science of his day, rather he made meticulous observations and presented data that discredited the common views. That is not in any way shape or form the same as arrogantly and ignorantly rejecting a paper just because you disagree with it.”

Yes, the Galileo claim always comes across to me as very arrogant and crazy – yet it’s a common excuse. An Australian climate change denial group even incorporated Galileo into its title – poor old Galileo must be turning in his grave.

Bad reason #2: science has been wrong in the past

“[P]eople often make the broad claim that science shouldn’t be trusted because it has been wrong before. . . . . First, it is true that science has been wrong, but it has always been other scientists who have figured out that it was wrong. Further, it is logically invalid to blindly assume that it is wrong just because it has been wrong before.

Additionally, although there have been plenty of minor hypotheses which have been discredited, there have been very few core ideas that have been rejected in the past century. In other words, ideas which are supported by thousands of studies have rarely been rejected, and very few central ideas have been overthrown in recent decades.

Finally, attacking science by asserting that it has been wrong before is utterly absurd because science is inherently a process of modifying our understanding of the world. In other words, science is self correcting. This is one of it’s greatest strengths. . . . . It constantly replaces erroneous ideas as new evidence comes to light (the same can’t be said for anti-science views which rigidly cling to their positions no matter how much evidence opposes them). Therefore, the fact that science has been wrong is actually a good thing, because if there were no instances where we had discovered that a previous idea was wrong, that would mean that science hadn’t advanced.”

Scientific knowledge is always incomplete – with time it becomes more and more correct in its description of reality, but there is always room for improvement, for deepening of specific knowledge and refinement of theories.

It seems to me very crass to use this inherent property of good science against science itself.

Bad reason #3: it’s all about the money

Ironically, this excuse is commonly used by people allied with movements funded by big business who are campaigning against scientific findings they feel challenged by.

“This is probably the most common response to papers on climate change, vaccines, GMOs, etc., and it’s often simply untrue. The scientific community is massive, and there are thousands of independent scientists doing research. Further, all scientific publications require authors to declare any conflicts of interest, so you can actually check and see if a paper was paid for by a major company, and if you did that, you would find that many of the papers supporting GMOs, vaccines, etc. have no conflicts of interest. Anti-scientists, of course, have no interest in actually looking at the paper. They would rather just assume that it was paid off because that fits with their world-view.

. . . even if a paper does have a conflict of interest, that doesn’t give you carte blanche to ignore it. The fact that someone works for a pharmaceutical company, for example, does not automatically mean that they biased or falsified their data. If a paper has a conflict of interest, then you should certainly give it extra scrutiny, and you should be suspicious if it disagrees with other papers or has questionable statistics, but you cannot automatically assume that it is flawed.”

Wise words. We should always read scientific papers critically and intelligently – especially when there may be a conflict of interest. But it is neither critical nor intelligent to reject them out of hand in this way.

Bad reason #4: there are other results that I disagree with

“Someone will say, “I reject the science of X because science also says Y and I disagree with Y.” We can rephrase this as, “I reject science because I reject science.” I would not, for example, accept water fluoridation as evidence that it’s ok to reject the science of vaccines unless I had already rejected the science of fluoridation. In other words, you have to justify your rejection of the science of Y before you can use it as evidence that we shouldn’t trust the science of X. Further, even if you could demonstrate that the science of Y (in this example fluoridation) was wrong, that still would not in any way shape or form prove that the science of X (in this example vaccines) is wrong. In fact, this entire line of reasoning is just a special case of the logical fallacy known as guilt by association. If you are going to say that a scientific result is incorrect, you have to provide actual evidence that the specific result that you are talking about is incorrect.”

Yes, this tactic is a red-herring, often used as a diversionary device, and very lazy as it shows an unwillingness to consider properly the issue at hand.

Bad reason #5: gut feelings/parental instincts


” . . . .  show someone the scientific evidence for vaccines, and they respond with, “well as a parent only I know what is best for my child.” Similarly, when I show people the evidence for GMOs, they often respond with something like, “well I just have a gut feeling that manipulating genes is bad.” I do not give a flying crap about your instincts or gut feelings. The entire reason that we do science is because instincts and feelings are unreliable. When someone presents you with a carefully conducted, properly controlled study, you absolutely cannot reject it just because you have a gut feeling that it’s wrong. Doing that makes no sense whatsoever. It is the most blatant form of willful ignorance imaginable. Don’t get me wrong, intuition is a good thing, and gut feelings can certainly help you in many situations, but they are not an accurate way to determine scientific facts.”

Our feelings and instincts are very strong and will often divert our attempts at rational considerations. I think such factors are  often behind the rejection of scientific studies – even when this reason is not given. But:

“Gut feelings simply aren’t reliable. That’s why we do science.”

Bad reason #6: I’m entitled to my opinion/belief

“This is another very common response, and it is very similar to #5. Science deals with facts, not opinions or beliefs. When multiple scientific studies all agree that X is correct, it is no longer a matter of opinion. If you think that X is incorrect, that’s not your opinion, you’re just wrong. Think about the relationship between smoking and lung cancer again. What if someone said, “well everyone is entitled to their opinion, and my opinion is that it’s safe.” Do you see the problem? Scientists don’t have an opinion or belief that smoking is dangerous; rather, it is a scientific fact that it is dangerous, and if you think that it is safe, you are simply in denial. Similarly, you don’t get to have an “opinion” that the earth is young, or vaccines don’t work, or climate change isn’t true, or GMOs are dangerous, etc. All of those topics have been rigorously tested and the tests have yielded consistent results. It is a fact that we are changing the climate, a fact that vaccines work, a fact that the earth is old, etc. If you reject those, you are expressing willful ignorance, not an opinion or belief.”

Hear, hear!

Bad reason #7: I’ve done my research/an expert agrees with me

” . . . . . if your “research” disagrees with properly conducted, carefully controlled studies, then your research is wrong (or at the very least, must be rejected pending future data). There, it’s that simple. The only exception would be if your research is actually a large set of properly controlled studies which have directly refuted the study in question (e.g., if you have a meta-analysis vs. a single study, then, all else being equal, go with the meta-analysis). It’s also worth pointing out that having a few people with advanced degrees on your side does not justify your position (that’s a logically fallacy known as an appeal to authority). No matter what crackpot position you believe, you can find someone somewhere with an advanced degree who thinks you’re right.”

This appeal to authority is commonly used – nothing seems to offend an anti-fluoride campaigner more than to refer to their ideological leader, Paul Connett, without mentioning his degree or former university title! Such people also often show the converse – refusing to use titles when referring to the work of someone they disagree with.

Bad reason #8: scientific dogma

“This response basically states that all scientists are forced to follow the “dogma” of their fields, and anyone who dares to question that dogma is quickly ridiculed and silenced. . . . .  In short, that’s simply not how science works. Nothing makes a scientist happier than discovering that something that we thought was true is actually false. In fact, that is how you make a name for yourself in science. No one was ever considered a great scientist for simply agreeing with everything that we already knew. Rather, the great scientists are the ones who have shown that our current understanding is wrong and a different paradigm provides a better understanding of the universe. To be clear, if you are going to defeat a well established idea, you are going to have to have some very strong evidence.”

A related claim is that the “scientific establishment” prevents publication – often used to explain why many of the authorities relied on by people rejecting scientific studies do not have a credible publication record.

I would be the last person to deny that human jealousies and the defensiveness of peer-reviewers and scientific editors can be a problem at specific journals – but there are many alternative journals willing to accept papers.

But this does raise another issue to be wary of – there are some journals which have incredibly poor peer review and often accept papers because of the authors’ willingness to pay a publication fee. Publication in such journals should definitely be seen as a warning sign – but as in all other cases judgment should be based on a critical and sensible analysis of the paper  itself.

Bad reason #9: distrust of governments/media

” Many people, however, take it even a step further. On numerous occasions, I have shown someone a study which was not in anyway affiliated with a government agency, yet they still responded with a lengthy rant about corrupt governments or the media. The basic idea of their argument seems to boil down to, “the government/media agree with these results, therefore they must be false.” This line of reasoning is, however, clearly fallacious (in fact it’s a logical fallacy known as guilt by association). Governments and the media will lie to push their own agendas, I’m certainly not denying that, but that fact does not automatically mean that everything that they say is a lie. . . . . . . It’s fine to be skeptical of what you are told by the government/media. In fact it is a good thing, but when you are presented with scientific evidence, then it’s not a matter of trusting the government/media. Rather, it is a matter of whether or not you accept science. In other words, I don’t need to trust the government or media in order to accept the results of a carefully controlled study.”

A related reason is to imply that any scientific study is not independent because the researchers are paid. Rather silly, considering we all have to live and researchers are no different. These people will instead cite articles written by activists or journalists working for magazines financed by an industry like the “natural”/alternative health industry. Or claim that the financing of activist organisations is by “donation” so it doesn’t count.

Bad reason #10: it’s a conspiracy

“This one is very closely related to #8 and 9, but it takes things a step further. It proposes that there is a massive conspiracy and scientists are being paid by governments/big companies to falsify results. . . . . the scope of this conspiracy would be impossibly huge. The scientific community consists of millions of people from all over the world working out of thousands of universities, institutes, non-profits, corporations, agencies, etc. It includes people from countless religions, cultures, political ideologies, etc. There is no way that you could possibly get that many people to agree on a massive deception like this. Just think about what is being proposed here. Do you honestly think that nearly all of the world’s climate scientists have been bought off? . . . . . .  Do you honestly think that all of those different organizations (many of whom compete with each other and have different goals and purposes) have all managed to come together to make one unified conspiracy? That’s just nuts. The same problems exist for governments. . . . . Honestly ask yourself the following question: which is more plausible, that countless governments, companies, non-profits, etc. have all come together to create the world’s largest conspiracy and buy off virtually every scientist on the planet, or that the thousands of independent scientists who have devoted their lives to science are actually doing real research?”

Personally, I think resorting to this reason should be treated as an immediate admission that the commenter has lost the argument and disqualified themselves – like Godwin’s law for the first person to bring up Hitler or the Nazis.


Bad reason #11: anecdotes

“Anecdotes do not matter in science, because anecdotes don’t allow us to establish causation. Let me give an example. Suppose that someone takes treatment X and has a heart attack 5 minutes later. Can we conclude from that anecdote that treatment X causes heart attacks? NO! It is entirely possible that the heart attack was totally unrelated to the treatment and they just happened to coincide with one another. Indeed, I once heard a doctor describe a time where he was preparing to vaccinate a child, and while preparing the vaccine, the child began having a seizure (to be clear, he hadn’t vaccinate the child yet). He realized that if he had given the vaccine just 60 seconds earlier, it would have looked for all the world like the vaccine had caused the seizure when in fact the kid just happened to have a seizure at the same time that a vaccine was being administered.

. . . . it should be clear that anecdotes are worthless because they cannot establish causal relationships (in technical terms, using them to establish causation is a logical fallacy known as post hoc ergo propter hoc fallacies [i.e., A happened before B, therefore A caused B]). Properly controlled studies, however, do allow us to establish causation.”

Yet commenters fall back on anecdotes again and again – even after launching a citation attack, anecdotal evidence seems to carry far more weight for them than anything reported in scientific studies.

Bad reason #12: a scientific study found that most scientific studies are wrong

“This argument is fascinatingly ironic because it uses a scientific paper to say that we shouldn’t trust scientific papers, but let’s look closer because this argument actually has some merit. The paper being references is, “Why most published research findings are false” by John Ioannidis, and it is actually a very useful and informative work, but it often gets misused.”

The Ioannidis paper describes several reasons why individual papers may be wrong – issues like small sample sizes and publication bias in its many forms.

Researchers who regularly search the literature are aware of these problems, and of the advice to approach all papers critically and intelligently – even the ones which present results you find favourable.

But as the article points out:

“. . .all of that may sound very bleak, but it should not make you lose all confidence in the scientific process because of a very important component of scientific inquiry: replication. Ioannidis’s work applies mostly to single paper studies. . . . .  So, this paper shouldn’t make you question the safety of vaccines, the effects we are having on the climate, etc. It should, however, make you skeptical of the one or two anti-vaccine papers that you occasionally see, or the one paper supporting some “miracle cure,” or the occasional paper on homeopathy, acupuncture, etc. Those studies almost always have tiny sample sizes and countless other studies have failed to replicate their results. This is why it is so important to look at the entire body of literature not just a single study.”
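The value of replication is easy to illustrate with a toy calculation – a minimal sketch that assumes the studies are genuinely independent and uses a made-up error rate purely for illustration:

```python
# Toy illustration: if a single study has some chance of reporting a spurious
# result, the chance that several independent studies all reach the same wrong
# conclusion shrinks rapidly (assuming the studies are independent).
p_single_study_wrong = 0.3  # assumed error rate for a lone study – illustrative only

for n_studies in (1, 3, 5, 10):
    p_all_wrong = p_single_study_wrong ** n_studies
    print(f"{n_studies:2d} independent studies all wrong: ~{p_all_wrong:.6f}")
```

Real studies are never perfectly independent and shared biases can persist, but the general point stands – a consistent body of literature is far more trustworthy than any single paper.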

Conclusion

The Logic of Science article concludes:

” . . . no matter how you cut it, many of you wouldn’t be alive today if it wasn’t for science. Science clearly works and you need an extremely strong justification for rejecting scientific results.

To be fair, some scientists are corrupt and bad science does occasionally get published, but bad research tends to be identified and discredited by other researchers. In other words, there may be a high probability of a single paper being wrong, but when lots of different studies have all arrived at the same conclusion, you can be very confident in that conclusion. Perhaps most importantly, you cannot simply assume that a paper is bad just because you disagree with its results. You need to present actual evidence that it is flawed or biased before you can reject it.”

Good advice. When you enter a discussion you should actually read the citations you use – and insist that your discussion partner reads theirs. And read them critically and intelligently.

Similar articles

Alternative reality of anti-fluoride “science”

Paul Connett made many unsupported claims in his presentation against community water fluoridation (CWF) to Denver Water. Here I debunk a claim where he rejects most scientific studies on the cost-effectiveness of CWF.

Different grades of dental fluorosis

Connett asserted two things in his presentation:

  1. Previous research showing the cost-effectiveness of community water fluoridation (CWF) has been made obsolete by a single new paper.
  2. Something about this new paper (Ko & Thiessen, 2014) makes it more acceptable to him than previous research – and he implies his audience should accept it too.

Plenty of research shows CWF is cost-effective

Connett has cherry-picked just one paper, without saying why he prefers it, and by implication denigrated all other research results. And there are quite a few studies around.

Here are just a few that readers could consult:

Of course, the actual figures vary from study to study, and various figures are used by health authorities. But generally CWF is found to be cost-effective over a large spectrum of water treatment plant sizes and social situations.

Connett relies on a flawed study

Connett relies, without justification,  on a single cherry-picked study:

Ko, L., & Thiessen, K. M. (2014). A critique of recent economic evaluations of community water fluoridation. International Journal of Occupational and Environmental Health, 37(1), 91–120.

This is a very long paper which might impress the uninitiated. To give it credit, it does make lengthy critiques of previous studies on cost effectiveness. But it has a huge flaw – its treatment of the cost of dental fluorosis.

It rejects the warranted assumption, made by most studies, that the adverse effects of CWF on dental fluorosis are negligible. They say:

“It is inexplicable that neither Griffin et al. nor other similar studies mention dental fluorosis, defective enamel in permanent teeth due to childhood overexposure to fluoride. Community water fluoridation, in the absence of other fluoride sources, was expected to result in a prevalence of mild-to-very mild (cosmetic) dental fluorosis in about 10% of the population and almost no cases of moderate or severe dental fluorosis. However, in the 1999–2004 NHANES survey, 41% of U.S. children ages 12–15 years were found to have dental fluorosis, including 3.6% with moderate or severe fluorosis.”

Two problems with that statement:

  1. The prevalence of “cosmetic” dental fluorosis may be about 10%, but this cannot be attributed to CWF as non-fluoridated areas have a similar prevalence. For example, the recent Cochrane estimates show “cosmetic” dental fluorosis was about 12% in fluoridated areas but 10% in non-fluoridated areas (see Cochrane fluoridation review. III: Misleading section on dental fluorosis). This is a common, probably intentional, mistake made by anti-fluoride campaigners – attributing the whole prevalence to CWF and ignoring the prevalence in non-fluoridated areas (see the simple calculation after this list). This greatly exaggerates the small effect of CWF on the prevalence of “cosmetic” dental fluorosis – which in any case does not need treatment. “Cosmetic” dental fluorosis is often considered positively by children and parents.
  2. The small number of children with moderate and severe dental fluorosis (due to high natural fluoride levels, industrial contamination or excessive consumption of fluoridated toothpaste) is irrelevant, as CWF does not cause these forms. Their prevalence is not influenced by CWF.
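Here is that simple calculation – a minimal sketch using the approximate Cochrane percentages quoted above, purely to illustrate the attribution point:

```python
# Only the *difference* in prevalence between fluoridated and non-fluoridated
# areas can reasonably be attributed to CWF (approximate figures quoted above).
cosmetic_fluorosis_fluoridated = 0.12      # ~12% in fluoridated areas
cosmetic_fluorosis_non_fluoridated = 0.10  # ~10% in non-fluoridated areas

attributable_to_cwf = cosmetic_fluorosis_fluoridated - cosmetic_fluorosis_non_fluoridated
print(f"Prevalence attributable to CWF ≈ {attributable_to_cwf:.0%}")  # ~2%, not the whole 10–12%
```

Attributing the full 10–12% to CWF, as the anti-fluoride argument does, inflates the effect roughly five- or six-fold.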

So Ko and Thiessen (2014) produce a different cost analysis because:

“. . . the primary cost-benefit analysis used to support CWF in the U.S. assumes negligible adverse effects from CWF and omits the costs of treating dental fluorosis, of accidents and overfeeds, of occupational exposures to fluoride, of promoting CWF, and of avoiding fluoridated water.”

We could debate all the other factors, which they acknowledge have minimal effects, but they rely mainly on the dental expenses of treating dental fluorosis:

“Minimal correction of methodological problems in this primary analysis of CWF gives results showing substantially lower benefits than typically claimed. Accounting for the expense of treating dental fluorosis eliminates any remaining benefit.”

They managed to produce this big reduction in cost-effectiveness by estimating costs for treating children with moderate and severe dental fluorosis – finding:

“the lifetime cost of veneers for a child with moderate or severe fluorosis would be at least $4,434.”

And:

“For our calculations, we have assumed that 5% of children in fluoridated areas have moderate or severe fluorosis.”

See the trick?

They attribute all the moderate and severe forms of dental fluorosis to CWF – despite the fact that research shows these forms are not caused by CWF and their prevalence would be the same in non-fluoridated areas!

The authors’ major effect – which they rely on to reduce the estimated benefits of CWF – is not caused by CWF.
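To see how much weight that single assumption carries, here is a rough sketch of the arithmetic (the dollar figure and the 5% prevalence are the ones quoted from the paper above; the contested step is the attribution to CWF, not the multiplication):

```python
# The paper's cost loading: assumed prevalence of moderate/severe fluorosis
# multiplied by the estimated lifetime cost of veneers per affected child.
lifetime_veneer_cost = 4434   # $ per child with moderate or severe fluorosis (quoted above)
assumed_prevalence = 0.05     # the paper's assumption: 5% of children in fluoridated areas

cost_charged_to_cwf = lifetime_veneer_cost * assumed_prevalence
print(f"Cost loaded onto CWF: ~${cost_charged_to_cwf:.0f} per child")  # ~$222

# If CWF does not cause moderate or severe fluorosis, the CWF-attributable
# prevalence is effectively zero – and so is this cost.
cwf_attributable_prevalence = 0.0
print(f"Cost properly attributable to CWF: ${lifetime_veneer_cost * cwf_attributable_prevalence:.0f}")
```

That single, unjustified attribution is what wipes out the cost-effectiveness of CWF in their analysis.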

Connett is promoting an alternative “scientific” reality

The Ko & Thiessen (2014) paper is one of a list of papers anti-fluoridation propagandists have come to rely on in their claims that the science is opposed to CWF. In effect, this means they exclude, or downplay, the majority of research reports on the subject – treating them like the former Index Librorum Prohibitorum, or “Index of Forbidden Books,” an official list of books which Catholics were not permitted to read.

The Ko & Thiessen (2014) paper is firmly on the list of approved studies for the anti-fluoride faithful. A few others are Peckham & Awofeso (2014), Peckham et al. (2015), Sauerheber (2013) and, of course, Choi et al. (2012) and Grandjean & Landrigan (2014). You will see these papers cited and linked to in many anti-fluoride social media posts – as if they were gospel – while all other studies are ignored.

These papers make claims that contradict the findings of many other studies. They are all oriented towards an anti-fluoridation bias. And most of them are written by well-known anti-fluoride activists or scientists.

In effect, by considering and using studies from their own approved list and ignoring or denigrating studies that don’t fit their biases, they are operating in an alternative reality. A reality which may be more comfortable for them – but a reality which exposes their scientific weaknesses.

Lessons for Connett

I know Paul Connett is now a lost cause – he will continue to cite papers from his approved list and make these claims no matter how many times they are debunked. But, in the hope of perhaps helping others who are susceptible to his claims, here are some lessons from this exercise. Anti-fluoride activists who wish to support their claims by citing scientific studies should take these lessons on board.

Lesson 1: Make an intelligent assessment of all the relevant papers – don’t uncritically rely on just one.

Lesson 2: Don’t just accept the findings of each paper – interpret the results critically and intelligently. How else can one make a sensible choice of relevant research and draw the best conclusions?

Lesson 3: Beware of occupying an alternative reality where credence is given only to your own mates and everyone else is disparaged. That amounts to wearing blinkers and is a sure way of coming to incorrect conclusions. It also means your conclusions have a flimsy basis and you are easily exposed.

Lessons for everyone susceptible to confirmation bias.

Similar articles