Censorship: Thinking you are right – even if you’re wrong

I posted this video four years ago – but repost it now because the message is still very valid (see Are you really right? and Warriors, scouts, Trump’s election and your news media). In fact, it’s more valid today than it was four years ago. Things have got worse – far worse than I expected they would.

To recap – Julia Galef describes the two different mindsets required in fighting a war:

  • The “Warrior mindset” – emotively based and fixated on success. Not interested in stopping to think about the real facts or rationally analyse the situation.
  • The “Scout mindset” – objective and rational, ready to consider the facts (in fact, searching them out) and logically consider possibilities.

Unfortunately the “warrior mindset” – emotively based and not considering the facts or rationally analysing the situation – may be required in war but now seems to be the standard approach in politics (maybe even in science for some people). The “scout mindset” is unfortunately rare – and actually disapproved of when it occurs. Things have got worse in that respect.

There is another unfortunate dimension to this. Just as people have become convinced that they themselves are the fountains of truth, and their opponents fountains of untruth, there is now the drive to censor. Many people are arguing that people they disagree with should be denied access to the media, especially social media. And they jump on the media when their opponents are given space.

Hell, the media itself is encouraging this. Nowadays we have Silicon Valley corporations, which control social media, determining who should have a voice. And some people who should know better are applauding them for this. Applauding selfishly because they want to see their discussion opponents denied a voice. This is not just cowardly but extremely short-sighted of them – what will they do when they, in turn, are denied a voice by these very same corporations?

Whatever happened to the old adage – “I Disapprove of What You Say, But I Will Defend to the Death Your Right to Say It?” I grew up thinking this was a widely approved ideal but now find the people who I had considered “right-minded,” “free thinkers,” and “liberals” have almost completely abandoned it. They seem to be the first in line to impose censorship and the last to complain when censorship happens.

The folly of censorship

Censorship, the attempt to close down a rational discussion, hardly gives the impression that the supporters of censorship have truth on their side. If anything it suggests they do not have the arguments and this, in effect, hands victory to the ones censored.

It is short-sighted and cowardly and does not close down the discussion – it simply moves it elsewhere. And usually to a forum in which the instigators or supporters of the censorship have little input. 

Censorship usually hands the moral high ground to the ones being censored – and cheaply as they have gained this without even having their ideas (rational or irrational) tested in meaningful discussion.

There is a lot of truth in the old saying that sunlight is the best disinfectant – and refusal to allow this only encourages bad ideas to proliferate unchecked.

That is not to say all the ideas being censored are “bad” – many plainly aren’t. But will those who censor or support censorship (or indulge in self-censorship – another common problem with people I used to respect) ever know? In the end, the censors and their supporters simply end up living in their own silo or bubble, seemingly oblivious to many of the ideas circulating in society or the world. Ideas they could perhaps have learned something from. Even if the ideas one wishes to censor are based on misinformation or misunderstanding, the exercise of debating them can teach things to both sides. Censorship, and especially self-censorship, prevents self-development.

Arbitrary or ideologically motivated censorship?

Often censorship by social media appears arbitrary, perhaps driven by algorithms. For example, Twitter recently blocked the official account of the Russian Arms Control delegation in Vienna, which is engaged in negotiations on the Open Skies Treaty and other important issues. It was later reinstated – without explanation or apology – but followers were lost. Arbitrary or not, should such an important body, involved in negotiations critical for the whole planet, be censored?

Are those algorithms innocent or objective? These days we see social media companies like Twitter and Facebook employing staff with political backgrounds. Even people who have previously worked in, or still work in, bodies like the Atlantic Council, which is connected with NATO. And the revolving door by which ex-politicians and intelligence staff get employed in the mainstream media is an open secret. These people openly describe information coming from “the other side” as “disinformation,” “fake news,” or “state-supported propaganda” so have no scruples about censoring it or otherwise working to discredit it (for example, labelling news media as state-controlled – but only for the “other side” – e.g. RT, but not the BBC or Voice of America).

Ben Nimmo, a member of the Atlantic Council and well known for his aggressive political views, recently announced his move to Facebook.

This biased approach to information or social discussion is strongly driven by an “official” narrative. A narrative promoted by military blocs, their governments and their political leaders. But even unaffiliated persons approaching social discussion can be, and usually are, driven by a narrative. A narrative that they often strongly and emotionally adhere to. Julia Galef’s “warrior mindset.” That is only human. It’s a pity, but probably few participants in social discussions get beyond this mindset and adopt the far more useful (for obtaining objective truth) “scout mindset” – objective and rational, ready to consider the facts (in fact, searching them out) and logically consider possibilities.

But let’s face it. If you support censorship, or even instigate censorship in social media you control, how likely is it that you can get beyond the “warrior mindset?” The “scout mindset” by necessity requires open consideration of views and facts you may initially disagree with. Censorship, especially self-censorship, prevents that. It prevents personal growth.


Embarrassing knock-back of second draft review of possible cognitive health effects of fluoride

We have come to expect exaggeration of scientific findings in media reports and institutional press releases. But it can also be a problem in original scientific publications, where findings are reported in an unqualified or exaggerated way.

Image Credit: Curbing exaggerated reporting

This is rather embarrassing for a US group attempting to get the science right about possible toxic effects of fluoride. It’s also embarrassing for the anti-fluoride activists who have “jumped the gun” and been citing the group’s draft review as if it was reliable when it is not.

The US National Academies of Sciences, Engineering, and Medicine (NAS) have released their peer-review of the revised US National Toxicity Program (NTP) draft on possible neurodevelopmental effects of fluoride (see Review of the Revised NTP Monograph on the Systematic Review of Fluoride Exposure and Neurodevelopmental and Cognitive Health Effects).

This is the second attempt by the NTP reviewers to get acceptance of their draft and it has now been knocked back by the NAS peer reviewers for a second time.

Diplomatic but damning peer-review

Of course, the NAS peer reviewers use diplomatic language, but the peer review is quite damning. It criticises the NTP for ignoring some of the important recommendations in the first peer review. A quite critical one was the lack of response to the request that NTP explain how the monograph can be used (or not) to inform water fluoridation concentrations. The second NAS peer review firmly states that the NTP:

“should make it clear that the monograph cannot be used to draw any conclusions regarding low fluoride exposure concentrations, including those typically associated with drinking-water fluoridation.”

And:

“Given the substantial concern regarding health implications of various fluoride exposures, comments or inferences that are not based on rigorous analyses should be avoided.”

It seems to me there is some internal politics involved and some of the NTP authors may be promoting their own, possibly anti-fluoride, agenda. Certainly, the revised NTP draft monograph continues to obfuscate this issue. It continues to state that “fluoride is presumed to be a cognitive neurodevelopmental hazard to humans” – a clause which anti-fluoride campaigners consistently quote out of context. Yes, it does state that this is based on findings demonstrating “that higher fluoride exposure (e.g., >1.5 mg/L in drinking water) is associated with lower IQ and other cognitive effects in children.” But this is separated from the fact that the findings on cognitive neurodevelopment for “exposures in ranges typically found in drinking water in the United States (0.7 mg/L for optimally fluoridated community water systems)” are “inconsistent, and therefore unclear.”

Monograph exaggerates by enabling unfair cherry-picking

So, you see the problem. The draft NTP monograph correctly refers to IQ and other cognitive effects in children exposed to excessive levels of fluoride. The draft also correctly refers to the lack of evidence for such effects at the lower fluoride exposure levels typical of community water fluoridation. But in different places in the document.

This enables activist cherry-picking to support an anti-fluoride agenda, and that is a fault of the document itself. It should clearly state that the monograph must not be used to draw any conclusion at these low exposure levels. This is strongly expressed in the peer-reviewers’ comments.

I find the blanket “presumed to be a hazard for humans” quite misleading. For example, no one says that calcium is “presumed to be a cardiovascular hazard to humans.” Or that selenium is “presumed to be a cardiovascular or neurological hazard to humans.” Or what about magnesium – would you accept that it is a “presumed neurological hazard to humans?” Would you accept that iron is a “presumed cardiovascular, cancer, kidney or erectile dysfunction hazard to humans?” Yet all those problems have been reported for humans at high intake levels of these elements.

No, we sensibly accept that various elements and microelements have beneficial, or even essential, effects on humans at reasonable intake levels. Then we sensibly warn that these same elements can be harmful at excessive intake. To proclaim that any of these elements is “presumed” to be hazardous – without clearly saying this applies at excessive intake levels – is simply distorting or exaggerating the data.

What does “presumed” mean?

A lot of readers find the use of “presumed” strange. But its meaning is related to the levels of evidence found by reviewers.

No, don’t believe those anti-fluoride activists who falsely claim that “presumed” is the highest level of evidence and that the finding should be treated as factual. They are simply wrong.

Some idea of the word’s use is presented in this diagram from the NTP revised draft monograph.

So “presumed” means that the evidence for the effect is moderate – the effect is not established fact. But as further evidence comes in, the ranking of fluoride as a hazard may increase, or decline.

As the monograph bases this “presumed” rating solely on evidence from areas of endemic fluorosis, where fluoride intake levels are high, it is correct to avoid stating the effects as factual. For example, consider these images from areas of endemic fluorosis in China (taken from a slide presentation by Xiang 2014):

Clearly, people in these areas suffer a range of health effects related to the high fluoride intake. The cognitive effects like IQ loss from these areas could result from these other health effects, not directly from fluoride (although excessive fluoride intake leads to the health effects).

So we can “presume” that fluoride (in areas of endemic fluorosis where fluoride intake is excessive) is a “cognitive neurodevelopmental hazard for humans” but we cannot factually state that the neurodevelopmental effects are directly caused by fluoride. That would require further scientific work to elucidate the specific mechanisms involved in creating that effect.


The promotion of weak statistical relationships in science

Image credit: Correlation, Causation, and Their Impact on AB Testing

Correlation is never evidence for causation – but, unfortunately, many scientific articles imply that it is. While paying lip service to the correlation-causation mantra, some (possibly many) authors end up arguing that their data is evidence for an effect based solely on the correlations they observe. This is one of the reasons for the replication crisis in science where contradictory results are being reported. Results which cannot be replicated by other workers (see I don’t “believe” in science – and neither should you).

Career prospects, institutional pressure and the need for public recognition all encourage scientists to publish poor quality work which they then use to claim they have found an effect. The problem is that the public, the news media and even many scientists simply do not properly scrutinise the published papers. In most cases they don’t have the specific skills required for this.

There is nothing wrong with doing statistical analyses and producing correlations. However, such correlations should be used to suggest future, more meaningful and better-designed research like randomised controlled trials (see Smith & Ebrahim 2002, Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers). They should never be used as “proof” of an effect, let alone as evidence to support regulations or advise policymakers.

Hunting for correlations

However, researchers will continue to publish correlations and make great claims for them because they face powerful incentives to promote even unreliable research results. Scientific culture and institutional pressure demand that academic researchers produce publishable results. This pressure is so great that they will often clutch at straws to produce correlations even when the initial statistical analysis produces none. They will end up “torturing the data.”

These days epidemiological researchers use large databases and powerful statistical software in their search for correlations. Unfortunately, this leads to data mining which, by suitable selection of variables, makes the discovery of statistically significant correlations easy. The data mining approach also means that the often-cited p-values are meaningless. P-values measure the probability that a relationship occurs by chance and are often cited as evidence of the “robustness” of the correlations. But the probability of a chance result is much greater when researchers check a whole range of variables, and that is not properly reflected in the individual p-values.

Where data mining occurs, even to a limited extent, researchers are simply attempting to make a silk purse out of a sow’s ear when they support their correlations merely by citing a p-value < 0.05, because these values are meaningless in such cases. The fact that so many of these authors ignore more meaningful results from their statistical analyses (like R-squared values, which indicate the extent to which the correlation “explains” the variation in their data) underlines their deceptive approach.
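
To see how cheaply such “significant” correlations come, here is a minimal sketch (my own illustration using made-up random numbers, not data from any of the studies discussed). Ten unrelated random variables give 45 possible pairings, and testing them all reliably turns up a couple of “significant” correlations with tiny R-squared values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(size=(50, 10))  # 50 observations of 10 unrelated random variables

# Test every distinct pair of variables: 45 comparisons in all
for i in range(10):
    for j in range(i + 1, 10):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < 0.05:  # a "discovery" by the usual convention
            print(f"x{i} vs x{j}: r={r:+.2f}, p={p:.3f}, R-squared={r**2:.2f}")

# At the 0.05 level, roughly 1 in 20 of these 45 tests will come up
# "significant" by chance alone - pure noise dressed up as a finding.
```

Report only the survivors, stay silent about the forty-odd failed checks, and the p-values look impressive while meaning nothing.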

Poor statistical relationships

Consider these correlations below – two data sets are taken from a published paper, while the other four use random data provided by Jim Frost in his book Regression Analysis: An Intuitive Guide.

You can probably guess which correlations were from real data (J and M) because there are so many more data points. All of these correlations have low p-values – but, of course, those selected from random data sets resulted from data mining and the p-values are therefore meaningless because they are just a few of the many checked. Remember, a p-value < 0.05 means the probability of a chance effect is one in twenty – and more than twenty variable pairs were checked in this random dataset.
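
To spell out that arithmetic (my own calculation, not taken from the figure): if k independent checks are each made at the 0.05 level, the chance of at least one spurious “significant” result is

\[
P(\text{at least one false positive}) = 1 - (1 - 0.05)^k,
\qquad
1 - 0.95^{20} \approx 0.64
\]

so with twenty or more variable pairs checked, finding a spurious “significant” correlation is more likely than not.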

The other two correlations are taken from Bashash et al (2017). They do not give details of how many other variables were checked in the dataset used, but it is inevitable that some degree of data mining occurred. So, again, the low p-values are probably meaningless.

J provides the correlation of General Cognitive Index (GCI) scores in children at age 4 years with maternal prenatal urinary fluoride, and M provides the correlation of children’s IQ at age 6–12 y with maternal prenatal urinary fluoride. The paper has been heavily promoted by anti-fluoride scientists and activists. None of the promoters have made a critical, objective analysis of the correlations reported. Paul Connett, director of the Fluoride Action Network, was merely supporting his anti-fluoride activist bias when he uncritically described the correlations as “robust.” They just aren’t.

There is a very high degree of scattering in both these correlations, and the R-squared values indicate they cannot explain any more than about 3 or 4% of the variance in the data. Hardly something to hang one’s hat on, or to be used to argue that policymakers should introduce new regulations controlling community water fluoridation or ban it altogether.

In an effort to make their correlations look better, these authors imposed confidence intervals on the graphs (see below). This Xkcd cartoon on curve fitting gives a cynical take on that. The grey areas in the graphs may impress some people but they do not hide the wide scatter of the data points. The confidence intervals refer to estimates of the regression coefficient; but when it comes to using the correlations to predict likely effects one must use the prediction intervals, which are very large (see Paul Connett’s misrepresentation of maternal F exposure study debunked). In fact, the estimated slopes in these graphs are meaningless when it comes to predictions.

Correlations reported by Bashash et al (2017). The regressions explain very little of the variance in the data and cannot be used to make meaningful predictions.
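
The distinction between the two kinds of interval is easy to demonstrate. Below is a minimal sketch (with simulated data chosen to mimic a weak correlation with wide scatter, not the Bashash dataset itself) using Python’s statsmodels, whose get_prediction() returns both the confidence band for the fitted line and the much wider prediction interval for individual observations:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 0.2 * x + rng.normal(size=300)  # weak slope buried in scatter

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
frame = fit.get_prediction(X).summary_frame(alpha=0.05)

# mean_ci_*: confidence band for the fitted mean line (the narrow grey band)
# obs_ci_*:  prediction interval for an individual observation (much wider)
print(f"R-squared: {fit.rsquared:.3f}")
print(f"confidence band half-width:     {((frame.mean_ci_upper - frame.mean_ci_lower) / 2).mean():.2f}")
print(f"prediction interval half-width: {((frame.obs_ci_upper - frame.obs_ci_lower) / 2).mean():.2f}")
```

The narrow band says only that the average trend is estimated reasonably well. The wide prediction interval is what matters if you want to predict the outcome for an individual, and with an R-squared of a few per cent it spans nearly the full range of the data.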

In critiquing the Bashash et al (2017) paper I must concede that they at least made their data available – the data points in the two figures. While they did not provide full or proper results from their statistical analysis (for example, they didn’t cite the R-squared values), the data does at least make it possible for other researchers to check their conclusions.

Unfortunately, many authors simply cite p-values and possible confidence intervals for the estimate of the regression coefficient without providing any data or images. This is frustrating for the intelligent scientific reader attempting to critically evaluate their claims.

Conclusions

We should never forget that correlations, no matter how impressive, do not mean causation. It is very poor science to suggest they do.

Nevertheless, many researchers resort to correlations they have managed to glean from databases, usually with some extent of data mining, to claim they have found an effect and to get published. The drive to publish means that even very poor correlations get promoted and are used by ideologically or career-minded scientists, and by activists, to attempt to convince policymakers of their cause.

Image credit: Xkcd – Correlation

Remember, correlations are never evidence of causation.


Can we trust science?

Image credit: Museum collections as research data

Studies based simply on statistically significant relationships found by mining data from large databases are a big problem in the scientific literature. Problematic because data mining, or worse data dredging, easily produces relationships that are statistically significant but meaningless. And problematic because authors wishing to confirm their biases and promote their hypotheses conveniently forget the warning that correlation is not evidence for causation and go on to promote their relationships as proof of effects. Often they seem to be successful in convincing regulators and policymakers that their spurious relationships should result in regulations. Then there are the activists who don’t need convincing but will willingly, and tiresomely, promote these studies if they confirm their agendas.

Even random data can provide statistically significant relationships

The graphs below show the fallacy of relying only on statistically significant relationships as proof of an effect. They show linear regression results for a number of data sets. One data set is taken from a published paper – the rest use random data provided by Jim Frost in his book Regression Analysis: An Intuitive Guide.

All these regressions look “respectable.” They have low p-values (less than the conventional 0.05 limit) and the R-squared values indicate they “explain” a large fraction of the variance in the data – up to 49%. But the regressions are completely meaningless for at least 7 of the 8 data sets because the data were randomly generated and have no relevance to real physical measurements.
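
How easy is it to produce such “respectable” regressions from noise? A minimal sketch (my own illustration, not Frost’s actual data): fit regressions to many small sets of random numbers and keep the best-looking one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Fit 50 regressions to small sets of pure noise and keep the "best"
best = max(
    (stats.linregress(rng.normal(size=10), rng.normal(size=10)) for _ in range(50)),
    key=lambda res: abs(res.rvalue),
)
print(f"R-squared = {best.rvalue**2:.2f}, p = {best.pvalue:.4f}")
# With only 10 points per set, the best of 50 random fits typically
# "explains" around half the variance with p well below 0.05.
```

Small samples and a little selection are all it takes.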

This should be a warning that correlations reported in scientific papers may be quite meaningless.

Can you guess which of the graphs is based on real data? It is actually graph E – published by members of a North American group currently publishing data which they claim shows community water fluoridation reduces child IQ. This was from one of their first papers, where they claimed childhood ADHD was linked to fluoridation (see Malin, A. J., & Till, C. 2015. Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association).

The group used this paper to obtain funding for subsequent research. They obviously promoted this paper as showing real effects – and so have the anti-fluoride activists around the world, including the Fluoride Action Network (FAN) and its director Paul Connett.

But the claims made for this paper, and its promotion, are scientifically flawed:

  1. Correlation does not mean causation. Such relationships in large datasets often occur by chance – hell, they even occur with random data, as the figure above shows.
  2. Yes, the authors argue there is a biologically plausible mechanism to “explain” their association. But that is merely cherry-picking to confirm a bias, and there are other biologically plausible mechanisms they did not consider which would suggest there should not be an effect. The unfortunate problem with these sorts of arguments is that they are used to justify findings as “proof” of an effect. To violate the warning that correlation is not causation.
  3. There is the problem of correcting for confounders or other risk-modifying factors. While acknowledging the need for future studies considering other confounders, the authors considered their choice of socio-economic factors sufficient, and their peer reviewers limited their suggestion of other confounders to lead. However, when geographic factors were included in a later analysis of the data the reported relationship disappeared.

Confounders often not properly considered

Smith & Ebrahim (2002) discuss this problem in an article – Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers. The title itself indicates how the poor use of statistics and unwarranted promotion of statistical analyses can be used to advance scientific careers and promote bad science in the public media.

These authors say:

“it is seldom recognised how poorly the standard statistical techniques “control” for confounding, given the limited range of confounders measured in many studies and the inevitable substantial degree of measurement error in assessing the potential confounders.”

This could be a problem even for studies where a range of confounders is included in the analyses. But Malin & Till (2015) considered the barest minimum of confounders and didn’t include ones which would be considered important to ADHD prevalence. In particular, they ignored geographic factors, and these were shown to be important in another study using the same dataset. Huber et al (2015) reported a statistically significant relationship of ADHD prevalence with elevation. These relationships are shown in this figure.

Of course, this is merely another statistically significant relationship – not proof of a real effect and no more justified than the one reported by Malin and Till (2015). But it does show an important confounder that Malin & Till should have included in their statistical analysis.

I did my own statistical analysis using the data sets of Malin & Till (2015) and Huber et al (2015) and showed (Perrott 2018) that inclusion of geographic factors left no statistically significant relationship of ADHD prevalence with fluoridation as suggested by Malin & Till (2015). Their study was flawed and it should never have been used to justify funding for future research on the effect of fluoridation. Nor should it have been used by activists promoting an anti-fluoridation agenda.
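
The mechanism is easy to simulate. Here is a minimal sketch (with numbers invented for illustration, not the actual state data): ADHD prevalence depends only on elevation, fluoridation coverage happens to be higher in lowland areas, and a regression that omits elevation duly reports a “significant” fluoridation effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 51  # say, one observation per US state
elevation = rng.normal(size=n)
fluoridation = -0.7 * elevation + rng.normal(scale=0.7, size=n)  # lowland areas fluoridate more
adhd = -0.5 * elevation + rng.normal(scale=0.5, size=n)          # no fluoridation effect at all

naive = sm.OLS(adhd, sm.add_constant(fluoridation)).fit()
full = sm.OLS(adhd, sm.add_constant(np.column_stack([fluoridation, elevation]))).fit()

print(f"fluoridation alone:   coef = {naive.params[1]:+.2f}, p = {naive.pvalues[1]:.4f}")
print(f"with elevation added: coef = {full.params[1]:+.2f}, p = {full.pvalues[1]:.2f}")
# The naive model reports a spurious, statistically significant
# fluoridation "effect"; controlling for elevation makes it vanish.
```

This is exactly how an omitted geographic confounder can manufacture a fluoridation relationship out of nothing.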

But, then again, derivation of a statistically significant relationship by Malin & Till (2015) did get them published in the journal Environmental Health which, incidentally, has sympathetic reviewers (see Some fluoride-IQ researchers seem to be taking in each other’s laundry) and an anti-fluoridation Chief Editor – Philippe Grandjean (see Special pleading by Philippe Grandjean on fluoride). It also enabled the promotion of their research via institutional press releases, newspaper articles and the continual activity of anti-fluoridation activists. Perhaps some would argue this was a good career move!

Conclusion

OK, the faults of the Malin & Till (2015) study have been revealed – even though Perrott (2018) is studiously ignored by the anti-fluoride North American group, which has continued to publish similar statistically significant relationships between measures of fluoride uptake and measures of ADHD or IQ.

But there are many published papers – peer-reviewed papers – which suffer from the same faults and get similar levels of promotion. They are rarely subject to proper post-publication peer-review or scientific critique. But their authors get career advancement and scientific recognition out of their publication. And the relationships are promoted as evidence for real effects in the public media.

No wonder members of the public are so often confused by the contradictory reporting – the health scares of the week – they are exposed to.

No wonder many people feel they can’t trust science.


January ’21 – NZ blogs sitemeter ranking

Image credit: New Zealand Privacy Act 2020: standout features, fines and global comparisons

I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See  NZ Blog Rankings FAQ.

This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for January 2021. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks



I don’t “believe” in science – and neither should you

We should be very careful about naively accepting claims made by the mainstream media – but this is also true of scientific claims. We should approach them intelligently and critically, and not merely accept them on faith.

I cringe every time I read an advocate of science asserting they “believe in science.” Yes, I know they may be responding to an assertion made by supporters of religion or pseudoscience. But “belief” is the wrong word because it implies trust based on faith and that is not the way science works.

Sure, those asserting this may argue that they have this belief because science is based on evidence, not faith. But that is still a cop-out because evidence can be used to draw conclusions or make claims that are still not true. Anyway, published evidence may be weak, misleading or poorly interpreted.

Here is an example of this dilemma taken from the Vox article Hyped-up science erodes trust. Here’s how researchers can fight back.

The figure is based on data published in Schoenfeld, J. D., & Ioannidis, J. P. A. (2013). Is everything we eat associated with cancer? A systematic cookbook review. American Journal of Clinical Nutrition, 97(1), 127–134.

It is easy to cite a scientific article, for example, as evidence that wine protects one from cancer. Or that it, in fact, causes cancer. Unfortunately, the scientific literature is full of such studies with contradictory conclusions. Usually based on real data and statistical analyses which show a significant relationship. But if it is easy to find such studies which can be claimed as evidence of opposite effects, what good is a “belief” in science? All that simple “belief” does is provide a scientific source for one’s own beliefs – an exercise in confirmation bias.

This figure should be a warning to approach published findings in fields like nutritional epidemiology and environmental epidemiology critically and intelligently. One should simply not take them as factual – we should not “believe” in them simply because they are published in scientific journals.

Schoenfeld, & Ioannidis (2013) say of the studies they investigated that:

“the large majority of these studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence.”

They discuss problems such as the “pressure to publish,” undervaluation or non-reporting of negative results, and “biases in the design, execution and reporting of studies” because nutritional ingredients viewed as “unhealthy” may be demonized.

The authors warn that:

“studies that narrowly meet criteria for statistical significance may represent spurious results, especially when there is large flexibility in analyses, selection of contrasts, and reporting.”

And:

“When results are overinterpreted, the emerging literature can skew perspectives and potentially obfuscate other truly significant findings.”

They warn that these sorts of problems may be:

“especially problematic in areas such as cancer epidemiology, where randomized trials may be exceedingly difficult and expensive to conduct; therefore, more reliance is placed on observational studies, but with a considerable risk of trusting false-positive”

These comments are very relevant to the consideration of recent scientific studies claiming a link between community water fluoridation and cognitive deficits. Studies that are heavily promoted by anti-fluoridation activists and, more importantly for scientific readers, by the authors of these studies themselves and their institutions. I have discussed specific problems in previous posts about the results from the Till group and their promotion by the authors.

The merging of pseudoscience with science

We seem to make an issue of countering pseudoscience with science but in the process are often oblivious to the fact that the two tend to merge – even for professional scientists. After all, we are human and all have our own biases to confirm and our jobs to advance.

This is a black and white contrast of science with pseudoscience promoted by Skeptics. It’s worth comparing this with the reality of the scientific world.

Do scientists always follow the evidence? Don’t they sometimes start with the conclusion and look for evidence to support it – even clutching at the straws of weak evidence (statistically weak relationships in environmental epidemiological studies which are promoted as proof of harmful effects)?

Oh for the ideal scientist who embraces criticism. Sure, they are out there but so many refuse to accept criticism, “circle the wagons” and end up unfairly and emotively attacking their critics. I describe one example in When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics.

Are claims always conservative and tentative? Especially when scientists have a career or institution to promote. And institutions, with their press releases, are a big part of this problem of overpromotion. Unfortunately, in environmental epidemiology, some scientists will take weak research results to argue that they prove a cause and then request regulation by policymakers. A specific example is the case of weak scientific data from Till’s research group being used to promote regulatory actions that confirm their biases.

Unfortunately, scientists with biases to confirm find it quite easy to ignore or downgrade the evidence which doesn’t fit. They may even work to prevent publication of countering evidence (see for example Fluoridation not associated with ADHD – a myth put to rest).

Conclusion

I could go on taking each point in order. But, in reality, I think such absolute claims about science are just not realistic. The scientific world is not that perfect.

In the end, the intelligent scientific reader must approach even the published literature very critically if they are to truly sift the wheat from the chaff.


December ’20 – NZ blogs sitemeter ranking

I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See  NZ Blog Rankings FAQ.

This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for December 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks



Science is often wrong – be critical

Activists, and unfortunately many scientists, use published scientific reports like a drunk uses a lamppost – more for support than illumination

Uncritical use of science to support a preconceived position is widespread – and it really gets up my nose. I have no respect for the person, often an activist, who uncritically cites a scientific report. Often they will cite a report of which they have read only the abstract – or not even that. Sometimes commenters will support their claims by producing “scientific evidence” consisting simply of lists of citations obtained from PubMed or Google Scholar.

[Yes, readers will recognise this is a common behaviour with anti-fluoride activists]

Unfortunately, this problem is not restricted to activists. Too often I read scientific papers with discussions where authors have simply cited studies that support, or they interpret as supporting, their own preconceived ideas or hypotheses. Compounding this scientific “sin” is the habit of some authors who completely refuse to cite, or even discuss, studies producing evidence that doesn’t fit their scientific prejudices.

Publication does not magically make scientific findings or ideas “true” – far from it. The serious reader of scientific literature must constantly remember that the chances are high that published conclusions or findings are false. John Ioannidis makes this point in his article Why most published research findings are false. Ioannidis concentrates on the poor use, or misuse, of statistics. This is a constant problem in scientific writing – and it certainly underlines the fact that even scientists will consciously or unconsciously manipulate their data to confirm their biases. They are using statistical analysis the way a drunk uses a lamppost – for support rather than illumination.

Poor studies often used to fool policymakers

These problems are often not easily understood by scientists themselves, but the situation is much worse for policymakers. They are not trained in science and don’t have the scientific or statistical experience required for a proper critical analysis of claims made to them by activists. Yet they are often called on to make decisions which rely on the acceptance, or rejection, of scientific claims (or claims about the science).

An example of this is a draft (not peer-reviewed) paper by Grandjean et al  – A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children.

These authors have an anti-fluoride activist position and are campaigning against community water fluoridation (CWF). Their paper uses their own studies – which report weak, and only occasionally statistically significant, relationships of child IQ with fluoride intake – as “proof” of causation sufficiently strong to advocate for regulatory guidelines. Unsurprisingly, their recommended guidelines are very low – much lower than the exposures common with CWF.

Sadly, their sciencey-sounding advocacy may convince some policymakers. It is important that policymakers be exposed to a critical analysis of these studies and their arguments. The authors will obviously not do this – they are selling their own biases. I hope that any regulator or policymaker required to make decisions on these recommendations has the sense to call for an independent, objective and critical analysis of the paper’s claims.

[Note: The purpose of the medRxiv preprints of non-peer-reviewed articles is to enable and invite discussion and comments that will help in revising the article. I submitted comments on the draft article over a month ago (Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”) and have had no response from the authors.  This lack of response to constructive critiques is, unfortunately, common for this group. I guess one can only comment that scientists are human.]

Observational studies – exploratory fishing expeditions

A big problem with published science today is that many studies are nothing more than exploratory observational studies using existing databases which, by their nature, cannot be used to derive causes. Yet they can easily be used to derive statistically significant links or relationships. These can be used to write scientific papers but they are simply not evidence of causes.

Properly designed studies, with proper controls and randomised populations properly representing different groups, may provide reasonable evidence of causal relationships – but most reported studies are not like this. Most observational studies use existing databases with non-random populations where selection and confounding with other factors are huge problems. Authors are often silent about selection problems and may claim to control for important confounding factors, but it is impossible to include all confounders. The databases used may not include data for relevant confounders, and authors themselves may not properly select all relevant confounders for inclusion.

This sort of situation makes some degree of data mining likely. This occurs when a number of different variables and measures of outcomes are considered in the search for statistically significant relationships. Jim Frost illustrated the problems with this sort of approach. Using a set of completely fictitious random data he was able to obtain a statistically significant relationship with very low p-values and an R-squared value indicating the “explanation” of 61% of the variance (see Jim Frost – Regression Analysis: An Intuitive Guide).
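
In the same spirit, here is a minimal sketch of such data dredging (my own illustration, not Frost’s code): generate a pile of random candidate predictors, keep only the few that look best, and the final model “explains” a large share of the variance in an outcome that is pure noise.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n, k = 30, 100
y = rng.normal(size=n)                # the outcome: pure noise
candidates = rng.normal(size=(n, k))  # 100 unrelated candidate predictors

# "Dredge": keep the 5 predictors with the lowest univariate p-values
pvals = [stats.pearsonr(candidates[:, j], y)[1] for j in range(k)]
keep = np.argsort(pvals)[:5]

fit = sm.OLS(y, sm.add_constant(candidates[:, keep])).fit()
print(f"R-squared of the dredged model: {fit.rsquared:.2f}")
# Typically around half the variance "explained" by predictors that,
# by construction, have no relationship with the outcome at all.
```

The in-sample fit looks impressive precisely because the predictors were selected after peeking at the data.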

That is the problem with observational studies where some degree of data mining is often involved. It is possible to find relationships which look good, have low p-values and relatively high R-squared values, but are entirely meaningless. They represent nothing.

So readers and users of science should beware. The findings they are given may be completely false, contradictory, or at least meaningless in quantitative terms (as is the case with the relationships produced by the Grandjean et al 2020 group discussed above).

A recent scientific article provides a practical example of this problem. Different authors used the same surgical database but produced completely opposite findings (see Childers et al (2020), Same Data, Opposite Results?: A Call to Improve Surgical Database Research). By themselves, each study may have looked convincing. Both used the same large database from the same year. Over 10,000 samples were used in both cases and both studies were published in the same journal within a few months. However, the inclusion and exclusion criteria used were different. Large numbers of possible covariates were considered, but these differed between the studies. Similarly, different outcome measures were used.

Readers interested in the details can read the original study or a Sceptical Scalpel blog article, Dangerous pitfalls of database research. However, Childers et al (2020) describe how the number of these sorts of observational studies “has exploded over the past decade.” As they say:

“The reasons for this growth are clear: these sources are easily accessible, can be imported into statistical programs within minutes, and offer opportunities to answer a diverse breadth of questions.”

However:

“With increased use of database research, greater caution must be exercised in terms of how it is performed and documented.”

“. . . because the data are observational, they may be prone to bias from selection or confounding.”

Problems for policymakers and regulators

Given that many scientists do not have the statistical expertise to properly assess published scientific findings it is understandable for policymakers or regulators to be at a loss unless they have proper expert advice. However, it is important that policymakers obtain objective, critical advice and not simply rely on the advocates who may well have scientific degrees. Qualifications by themselves are not evidence of objectivity and, undoubtedly, we often do face situations where scientists become advocates for a cause.

I think policymakers should consciously seek out a range of scientific expert advice, recognising that not all scientists are objective. Given the nature of current observational research, its use of existing databases and the ease with which researchers can obtain statistically significant relationships I also think policymakers should consciously seek the input of statisticians when they seek help in interpreting the science.

Surely they owe this to the people they represent.


November ’20 – NZ blogs sitemeter ranking

Image credit: Blogging for business: Your questions answered

I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See  NZ Blog Rankings FAQ.

This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for November 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or would like help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks


Rank Blog Visits/month Page Views/month
1 The Daily Blog 64740 293747
2 Liturgy 21061 26380
3 13th Floor 19650 22831
4 Tikorangi: The Jury Garden 15987 17942
5 Sacraparental 14855 18613
6 Cycling in Christchurch 12379 13879
7 Creative Maths 11743 13740
8 Bill Bennett 9716 10281
9 Homepaddock 8368 9015
10 Free range statistics 6501 9716
11 SciBlogs 5669 14257
12 Lost in silver fern 4842 7850
13 Offsetting behaviour 4601 4077
14 Music of sound 4195 5299
15 Open Parachute 4007 4251
16 Woodleigh Nursery 2953 4654
17 No Minister 2852 3161
18 Jontynz 2765 3633
19 Nom Nom Panda 2722 2901
20 A communist at large 2537 2962
21 Reading the maps 1740 2107
22 The Meaning of Trees 1670 2823
23 Anglican down under 1531 1770
24 The Woolshed Wargamer 1460 2419
25 Talking Auckland 1447 1667
26 Sarah the Gardener 1324 1766
27 Look, Think, Make 1307 1499
28 Fields of Blood 1206 1426
29 The Global Couple 1098 1315
30 Muffin-Mum 1079 1140
31 TVHE 1032 1142
32 Hot Topic 975 1152
33 Rodney’s Aviation Ramblings 849 1075
34 Vomkrieg 814 1009
35 Quote Unquote 703 731
36 Aughts and Oughtisms 624 868
37 Off the couch 576 683
38 Home education Foundation 513 573
39 Pdubyah – a life just as ordinary 511 564
40 Stratford Aerodrome 483 553
41 Kiwi Cakes 409 524
42 Climate Justice Taranaki 406 474
43 Fun with Allergy Kids 401 494
44 Economics New Zealand 371 516
45 New Zealand Conservative 355 364
46 AmeriNZ 338 434
47 Mrs Cake 287 353
48 Perissodactyla 253 277
49 Sparrowhawk/Karearea 240 322
50 Tales from a Caffeinated Weka 239 277
51 Tauranga Blog 222 232
52 Aotearoa: A wider perspective 216 237
53 Communication, Church, Society 214 248
54 Media Sport and Other Rantings 200 204
55 Keeping Stock 199 199
56 Eye on the ICR 187 211
57 Social Media & the 2014 Election 180 194
58 Cambridge NZ 167 190
59 Undeniably Atheist 153 160
60 goNZo Freakpower Brains Trust 124 124
61 Room One @ Auroa School 122 186
62 Creative Voice# 118 181
62 AnneKcam 118 136
64 Misses Mac 112 135
65 Save our Schools NZ 108 120
66 Ideologically impure 99 105
67 Anne Free Spirit 98 129
68 Exile in New zealand 94 98
69 My thinks 93 98
70 The Catalyst 92 109
70 Room 5 @ Melville Intermediate School 92 106
72 Put ’em all on an island 86 109
73 Cut your hair 83 85
74 Taradale Blog# 82 83
75 Quietly in the backgroud 79 88
76 Utopia – you are standing in it 77 77
77 In the back of the net 76 76
78 John Macilree’s Weblog# 69 85
79 Right Reason 66 66
80 Software development and stuff 65 65
81 Mountains of Our Minds# 63 70
82 Dad4justice 60 64
83 Glennis’s Blog Page# 56 148
84 Samuel Dennis 47 48
85 Family integrity 46 47
86 Wokarella 43 54
87 Get Out Gertrude! 42 103
88 Nelsonian’s life 39 46
89 ElephaNZa 37 39
90 James McKerrow – Surveyor 1834-1919# 36 40
91 Unity Blog 33 40
91 bread and pomegranates 33 63
93 Glenview 9 31 32
94 Socialist Aotearoa 30 30
95 kiwiincanberra 28 28
96 ah! New Year’s Resolution 24 24
97 New Zealand Indian Fine Arts Society 23 24
97 University of Otago, Law Library Blog 23 28
97 Wellington Chic 23 23
100 The Official Ebenezer Teichelmann Blog# 21 22
101 The Well read Kitty 19 19
101 kiwi simplexity 19 19
101 Sharlene says 19 19
104 MartinIsti Blog 18 19
104 John Macilree’s Blog 18 18
104 Room 24, 2012 18 19
107 Carolyn’s blog 16 17
108 New Zealand female Firefighter calendar 14 14
108 Episto 14 14
110 Journey to a mini me 13 13
110 Helen Heath 13 13
112 Four seasons in one 11 11
113 High voltage learning during the Christchurch earthquakes 10 10
114 Money can buy me happiness 9 9
115 SmallTorque 8 8
116 Warrington Taylor# 7 8
117 Chris Jillet – Mountaineer# 6 6
117 Einstein Music Journal 6 6
119 sticK 4 4
120 The Little Waaagh! That Could 3 3
120 Albom Adventures 3 3
122 Bob McKerrow – Wayfarer 1 1
122 The IT Countrey Justice 1 1
122 Spratts 1 1
122 Blogger at Large 1 1
122 Lulastic 1 1

Hyping it up over fluoridation

In my time as a scientific researcher, honest scientists used to condemn colleagues who over-hyped their science. To our mind there should have been a special place in hell for scientists who misrepresented their findings or dishonestly described their significance.

That sort of self-promotional behaviour is probably understandable for reasons of ambition – or even the attempt to secure future funding. And these self-promoting scientists usually moved on rather more quickly into higher-paid administrative jobs. Not exactly to that special place in hell – but maybe their promotion away from active research reduced the damage their personal self-hyping could do to science in the long run (although I did often wonder about the damage they did to science with their administrative decisions).

A recent article (Hype isn’t just annoying, it’s harmful to science and innovation) got me thinking of this problem again – and to realise we are facing a classic case of this self-promotional over-hyping in recent science related to community water fluoridation (CWF).

Readers may pick up that I am referring to the behaviour of a North American research group which has been indulging in a wave of self-promotion – a promotion which involves misrepresenting their own findings and the significance of those findings. I have discussed the research findings of this group in a number of previous posts.

More recently they have produced a video promoting and misrepresenting the significance of their work. A video which is being gleefully used by anti-fluoride activists in their propaganda. (There has been a bit of a dance over this video, which has been roundly criticised scientifically and taken down or moved several times, so links often don’t work. But a recent appearance was on the New Zealand anti-fluoride Facebook page.)

Group members have also attacked, in a very unprofessional way, fellow scientists who have critiqued their work (see for example When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics). On social media, they have attempted to close down any critical discussion of their work – and in a similar manner, they purposely ignore, or even attempt to hide, studies that do not support their claims. (At the personal level, I have had a member of this group refuse to fulfil her prior undertaking to do a peer review of a draft paper of mine – presumably because, on reading it, she became aware that my paper discussed flaws in their work.)

In support of my contention that this group is over-hyping their findings, and unprofessionally using this misrepresentation to give support to anti-fluoride activists, I will briefly list below what their findings were.

No effect of CWF on child IQ

While claiming their findings support the claim that CWF is harmful to the brains of children, they actually refused to even discuss their own reported data which show this is not the case. In fact, the data in their two main papers (Green et al 2019 & Till et al 2020) show no effect of CWF on the IQ of children. This confirms the finding of Broadbent et al (2015) – the only other study comparing IQs of children from fluoridated and unfluoridated areas (see Canadian studies confirm findings of Broadbent et al (2015) – fluoridation has no effect on child IQ).

The table below lists their data together with that of Broadbent et al (2015).

A comparison of IQ for children and adults living in fluoridated and unfluoridated areas in countries where CWF is used

I think it unprofessional for this group to ignore their own data while at the same time lending support to activists who are claiming that CWF harms children’s brains. Perhaps they assume that this finding could not be hyped to promote their standing and ambitions. So, instead, they have diverted attention to another part of their work – the relationship between child IQ and measures of fluoride consumption.

Occasional weak relationships of child IQ with fluoride intake

While ignoring some other data – which is unprofessional in itself – they have devoted their promotional material to just one part of their findings: the few cases where they are able to demonstrate a relationship, albeit only a weak one, of child IQ with fluoride intake as measured by drinking-water fluoride content, estimated fluoride intake or urinary fluoride levels.

I have discussed problems with this approach in my articles listed above but will stress here that the relationships are usually not statistically significant, or very weak when significant (explaining only a few per cent of the variance in IQ), and suffer from inadequate consideration of possible important confounders or other risk-modifying factors. A common problem with the sort of “fishing expedition” involving statistical searching of existing databases in an attempt to confirm a bias.

The figure below shows the relationships considered in the two studies. Most are simply not statistically significant. In a recent article (see Perrott, K. W. (2020). Health effects of fluoridation on IQ are unproven. New Zealand Medical Journal, 133(1522), 177–179) I describe it this way:

“Multiple measures for both cognitive factors and of fluoride exposure are used, producing many relationships. Only four of the ten relationships reported by Green et al were statistically significant (p<0.05). Similarly, only three of the twelve relationships reported by Till et al were statistically significant. There is a danger that reported relationships could be misleading – as the proverb says, ‘If you torture your data long enough, they will tell you whatever you want to hear.’”

Relationships of cognitive measures with exposure to fluoride obtained by linear regression analyses using the Canadian MIREC database. Red data points are statistically significant (p<0.05); green data points are not. Bars represent 95% confidence intervals.

Even if the reported relationships correctly reflected reality (being the product of a “fishing expedition”, the chances are they don’t), their concentration on such weak relationships (explaining only a few per cent of the variance in the data) could be actively diverting attention away from the factors which are more important. Although this group has been very shy about making their data available for other researchers to check, the data they have published indicate that regional and ethnic differences may be making a much bigger contribution to child IQ.

A big problem (always glossed over by those promoting this work) is that the studies are exploratory, using existing databases rather than experiments specifically designed to answer the relevant questions. Reported relationships may support preconceived beliefs, but it is easy to ignore important confounders or risk-modifying factors (which properly designed experiments would attempt to minimize).

I highlighted the problem of inadequate consideration of other factors in my article critiquing an early paper from this group (see Perrott, K. W. (2018). Fluoridation and attention deficit hyperactivity disorder – a critique of Malin and Till (2015). British Dental Journal, 223(11), 819–822). In this case, I showed that when regional factors (in that case elevation) were included in the statistical analysis, the relationship of ADHD prevalence with the extent of fluoridation that Malin & Till (2015) reported simply disappeared.

It is worth adding that in subsequent reports from this group my critique has been completely ignored and they still report the flawed Malin & Till (2015) study as being reliable. I think that is very unprofessional, but it does align with their tactics of self-promotion and over-hyping of their work.

The down-side of self-promotional hype in science

The article I introduced at the beginning (Hype isn’t just annoying, it’s harmful to science and innovation) finished by concluding:

“Acting this way has a cost. It’s not just about allowing people to feel awe: it’s about empowering those who are not professional scientists or technologists to be able to participate, instead of being spoon-fed a whizz-bang watered-down version of science as cheap entertainment. Hype doesn’t just obscure the reality of what’s going on in science and technology – it makes it less interesting. It’s time we start to look past it and delight in what lies beyond.”

So as honest working researchers we were right to resent self-promotional hype and, perhaps, to wish that a place in hell was reserved for these ambitious self-promoters.

But, looking back, I can recognize that scientists are human and, like everyone else, fallible. It is easy to see how people will place ambition over the truth and why they would resort to hyping their science for ambitious reasons. I can also recognise, as Ioannidis (2005) reported, that “most published research findings are false.” I believe Ioannidis is basically correct and there are big problems with the scientific literature, which contains reports from so many studies based on the sort of exploratory statistical analysis indulged in by this North American group.

It’s inevitable that such poor science will be seized on by those with political, commercial and ideological agendas to support their claims. This has been done by the anti-fluoride activist groups. For the rest of us, it is a matter of reading the scientific literature intelligently and critically. And I mean all the literature, not just that related to fluoridation, vaccination and similar “hot topics.”

And, in the end, the truth will out. Poor science and self-promoting ambition and hype do get exposed. The faults in the promotional messages do get exposed. And new research and data usually provide the context for a proper evaluation of the claims made by those who currently hype their work.
