February ’21 – NZ blogs sitemeter ranking

Image credit: POLITICAL BLOG

I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See  NZ Blog Rankings FAQ.

This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters that allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for February 2021. The ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or want help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks

Subscribe to NZ blog rankings by Email. Find out how to get subscription and email updates.


Data dredging, p-hacking and motivated discussion in anti-fluoride paper

Image credit: Quick Data Lessons: Data Dredging

Oh dear – another scientific paper claiming evidence of toxic effects from fluoridation. But a critical look at the paper shows evidence of p-hacking, data dredging and motivated reasoning to derive their conclusions. And it was published in a journal shown to be friendly to such poor science.

The paper is:

Cunningham, J. E. A., Mccague, H., Malin, A. J., Flora, D., & Till, C. (2021). Fluoride exposure and duration and quality of sleep in a Canadian population-based sample. Environmental Health, 1–10.

Data dredging

This study used data from a Canadian database – the Canadian Health Measures Survey. Databases with large numbers of variables tempt researchers to dredge for data or relationships which confirm their biases. Although checking multiple relationships in this way undermines statistical significance, data dredging or data mining is quite common in epidemiological studies.

Cunningham et al (2021) looked for relationships using two separate measures of fluoride exposure and four different measures of possible sleep disturbance. They found a “statistically significant” (p<0.05) relationship between lower sleep duration and water fluoride, but no relationships of higher sleep duration, trouble sleeping or daytime sleepiness with either water fluoride or urinary fluoride. Their results for the logistic regression analysis are summarised in this figure (error bars crossing an Odds Ratio value of 1.0 indicate that the relationship is not statistically significant at p<0.05).
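To show what those odds ratios and error bars represent, here is a minimal sketch of a logistic regression in Python (simulated data only; none of the Cunningham et al variables or values are reproduced here). The odds ratio comes from exponentiating the fitted coefficient, and if its 95% confidence interval spans 1.0 the association is not statistically significant:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
water_f = rng.normal(size=n)                 # standardised "fluoride exposure" (simulated)
short_sleep = rng.binomial(1, 0.3, size=n)   # binary outcome, deliberately unrelated to exposure

fit = sm.Logit(short_sleep, sm.add_constant(water_f)).fit(disp=0)
odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# A confidence interval that crosses 1.0 means the relationship is not significant at p < 0.05
```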

Of the 8 relationships investigated only 1 was statistically significant.

p-hacking

I discussed the problem of p-hacking in Statistical manipulation to get publishable results.

With a large dataset, one can inevitably find relationships that satisfy the p<0.05 criterion – because this p-value is meaningless when multiple relationships are considered. One can even find such “statistically significant relationships” when random datasets are investigated (see Science is often wrong – be critical, I don’t “believe” in science – and neither should you, The promotion of weak statistical relationships in science and Can we trust science?). Once multiple relationships are investigated, the chance of finding accidental relationships is much greater than the 1 in 20 signified by the p<0.05 value.

So, one of the eight relationships above satisfied the p<0.05 criterion when considered alone. But as part of multiple investigations, the chance of finding such a relationship by chance is much greater than 1 in 20.
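The arithmetic behind that claim is straightforward: with eight independent tests each at p<0.05, the chance of at least one false positive is about 1 – 0.95^8 ≈ 0.34. A minimal simulation sketch (simulated data only, not the Cunningham et al dataset) makes the same point:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_tests, n_trials = 500, 8, 2000

runs_with_a_hit = 0
for _ in range(n_trials):
    exposure = rng.normal(size=n_subjects)              # random "exposure" measure
    outcomes = rng.normal(size=(n_tests, n_subjects))   # eight random "outcome" measures
    p_values = [pearsonr(exposure, y)[1] for y in outcomes]
    if min(p_values) < 0.05:                            # at least one "significant" result
        runs_with_a_hit += 1

print(runs_with_a_hit / n_trials)   # roughly 0.34, matching 1 - 0.95**8
```

Each individual test looks respectable on its own; it is the freedom to report whichever of the eight happens to cross the threshold that makes the nominal 1-in-20 figure misleading.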

Motivated reasoning

This paper smacks of motivated reasoning. The authors obviously have a commitment to the concept that fluoride causes problems with the pineal gland and drag up anything they can find in the literature to support this – without critically assessing the quality of the cited work or even mentioning that the cited studies were made at much higher fluoride concentrations in non-human animals. In effect, they are attempting to convert very weak results, obtained by data dredging and p-hacking, into a fact. They are attempting to make a silk purse out of a sow’s ear.

This research group is not new to this game. I commented on this in my critique of another sleep disorder paper from the group.

Many of the same researchers are listed as authors on both papers – yet Cunningham et al (2021) cite the previous paper as if it were an independent study. They say “As far as we are aware, this is only the second human study investigating the effects of fluoride exposure on sleep outcomes,” which is simply disingenuous considering the involvement of the same researchers in both papers.

Both these papers were also published in the same journal – Environmental Health – a pay-to-publish journal that is known to be friendly to anti-fluoride researchers and uses very sympathetic peer reviewers. The chief editor, Philippe Grandjean, is well known for his opposition to fluoridation. I commented on his refusal to consider a paper of mine that critiqued an anti-fluoride paper published in his journal (see Fluoridation not associated with ADHD – a myth put to rest).

Conclusion

Yet another very weak study, published in an anti-fluoride friendly pay-to-publish journal with poor peer review. Despite the weaknesses due to data dredging, p-hacking and motivated reasoning, anti-fluoride activists will cite the single “statistically significant” result as gospel and ignore the 7 relationships that are not significant. As for inadequate consideration of confounders or other risk-modifying factors, this study ignores completely the fact that city size and geographic factors have a strong effect on both sleep patterns and water fluoride concentrations (see Perrott 2018). Such inadequate consideration of confounders is another common problem in epidemiological studies.

Oh, well, we are not a rational species. More a rationalising one. And in such areas motivated rationalisation and confirmation bias are rife.

Similar studies

Censorship: Thinking you are right – even if you’re wrong

I posted this video four years ago – but repost it now because the message is still very valid (see Are you really right? and Warriors, scouts, Trump’s election and your news media). In fact, it’s more valid today than it was four years ago. Things have got worse – far worse than I expected they would.

To recap – Julia describes the two different mindsets required in fighting a war:

  • The  “Warrior mindset” – emotively based and fixated on success. Not interested in stopping to think about the real facts or rationally analyse the situation.
  • The “Scout mindset” – objective and rational, ready to consider the facts (in fact, searching them out) and logically consider possibilities.

Unfortunately the “warrior mindset” – emotively based and not considering the facts or rationally analysing the situation – may be required in war but now seems to be the standard approach in politics (maybe even in science for some people). The “scout mindset” is unfortunately rare – and actually disapproved of when it occurs. Things have got worse in that respect.

There is another unfortunate dimension to this. Just as people have become convinced that they themselves are the fountains of truth, and their opponents fountains of untruth, there is now the drive to censor. Many people are arguing that people they disagree with should be denied access to the media, especially social media. And they jump on the media when their opponents are given space.

Hell, the media itself is encouraging this. Nowadays we have Silicon Valley corporations, which control social media, determining who should have a voice. And some people who should know better are applauding them for this. Applauding selfishly because they want to see their discussion opponents denied a voice. This is not just cowardly but extremely short-sighted of them – what will they do when they, in turn, are denied a voice by these very same corporations?

Whatever happened to the old adage – “I disapprove of what you say, but I will defend to the death your right to say it”? I grew up thinking this was a widely approved ideal but now find the people who I had considered “right-minded,” “free thinkers,” and “liberals” have almost completely abandoned it. They seem to be the first in line to impose censorship and the last to complain when censorship happens.

The folly of censorship

Censorship, the attempt to close down a rational discussion, hardly gives the impression that the supporters of censorship have truth on their side. If anything it suggests they do not have the arguments and this, in effect, hands victory to the ones censored.

It is short-sighted and cowardly and does not close down the discussion – it simply moves it elsewhere. And usually to a forum in which the instigators or supporters of the censorship have little input. 

Censorship usually hands the moral high ground to the ones being censored – and cheaply as they have gained this without even having their ideas (rational or irrational) tested in meaningful discussion.

There is a lot of truth in the old saying that sunlight is the best disinfectant – and refusal to allow this only encourages bad ideas to proliferate unchecked.

That is not to say all the ideas being censored are “bad” – many plainly aren’t. But will those who censor or support censorship (or indulge in self-censorship –  another common problem with people I used to respect) ever know? In the end, the censors and their supporters simply end up living in their own silo or bubble, seemingly oblivious to many of the ideas circulating in society or the world. Ideas they could perhaps have learned something from. Even if the ideas one wishes to censor are based on misinformation or misunderstanding the exercise of debating them can teach things to both sides. Censorship, and especially self-censorship, prevents self-development.

Arbitrary or ideologically motivated censorship?

Often censorship by social media appears arbitrary, perhaps driven by algorithms. For example, Twitter recently blocked the official account of the Russian Arms Control delegation in Vienna, which is engaged in negotiations on the Open Skies Treaty and other important issues. It was later reinstated – without explanation or apology – but followers were lost. Arbitrary or not, should such an important body, involved in negotiations critical for the whole planet, be censored?

Are those algorithms innocent or objective? These days we see social media like Twitter and Facebook employing staff with political backgrounds. Even people who have previously worked in, or still work in, bodies like the Atlantic Council, which is connected with NATO. And the revolving door by which ex-politicians and intelligence staff get employed in the mainstream media is an open secret. These people openly describe information coming from “the other side” as “disinformation,” “fake news,” or “state-supported propaganda” so have no scruples about censoring it or otherwise working to discredit it (for example, labelling news media as state-controlled – but only for the “other side” – e.g. RT, but not the BBC or Voice of America).

Ben Nimmo, a member of the Atlantic Council and well known for his aggressive political views, recently announced his move to Facebook.

This biased approach to information or social discussion is strongly driven by an “official” narrative. A narrative promoted by military blocs, their governments and their political leaders. But even unaffiliated persons approaching social discussion can be, and usually are, driven by a narrative. A narrative that they often strongly and emotionally adhere to. Julia Galef’s “warrior mindset.” That is only human. It’s a pity, but probably few participants in social discussions get beyond this mindset and adopt the far more useful (for obtaining objective truth) “scout mindset” which is objective and rational, ready to consider the facts (in fact, searching them out) and logically consider possibilities.

But let’s face it. If you support censorship, or even instigate censorship in social media you control, how likely is it that you can get beyond the “warrior mindset?” The “scout mindset” by necessity requires open consideration of views and facts you may initially disagree with. Censorship, especially self-censorship, prevents that. It prevents personal growth.

Similar articles

 

Embarrassing knock-back of second draft review of possible cognitive health effects of fluoride

We have come to expect exaggeration of scientific findings in media reports and institutional press releases. But it can also be a problem in original scientific publications where findings are reported in an unqualified or exaggerated way. Image credit: Curbing exaggerated reporting

This is rather embarrassing for a US group attempting to get the science right about possible toxic effects of fluoride. It’s also embarrassing for the anti-fluoride activists who have “jumped the gun” and been citing the group’s draft review as if it was reliable when it is not.

The US National Academies of Sciences, Engineering, and Medicine (NAS) has released its peer review of the revised US National Toxicology Program (NTP) draft on possible neurodevelopmental effects of fluoride (see Review of the Revised NTP Monograph on the Systematic Review of Fluoride Exposure and Neurodevelopmental and Cognitive Health Effects).

This is the second attempt by the NTP reviewers to get acceptance of their draft and it has now been knocked back by the NAS peer reviewers for a second time.

Diplomatic but damning peer-review

Of course, the NAS peer reviewers use diplomatic language but the peer review is quite damning. It criticises the NTP for ignoring some of the important recommendations in the first peer review. One quite critical omission was the lack of response to the request that the NTP explain how the monograph can be used (or not) to inform water fluoridation concentrations. The second NAS peer review firmly states that the NTP:

“should make it clear that the monograph cannot be used to draw any conclusions regarding low fluoride exposure concentrations, including those typically associated with drinking-water fluoridation.”

And:

“Given the substantial concern regarding health implications of various fluoride exposures, comments or inferences that are not based on rigorous analyses should be avoided.”

It seems to me there is some internal politics involved and some of the NTP authors may be promoting their own, possibly anti-fluoride, agenda. Certainly, the revised NTP draft monograph continues to obfuscate this issue. It continues to state that “fluoride is presumed to be a cognitive neurodevelopmental hazard to humans” – a clause which anti-fluoride campaigners consistently quote out of context. Yes, it does state that this is based on findings demonstrating “that higher fluoride exposure (e.g., >1.5 mg/L in drinking water) is associated with lower IQ and other cognitive effects in children.” But this is separated from the other finding that the results on cognitive neurodevelopment for “exposures in ranges typically found in drinking water in the United States (0.7 mg/L for optimally fluoridated community water systems)” are “inconsistent, and therefore unclear.”

Monograph exaggerates by enabling unfair cherry-picking

So, you see the problem. The draft NTP monograph correctly refers to IQ and other cognitive effects in children exposed to excessive levels of fluoride. The draft also correctly refers to the lack of evidence for such effects at lower fluoride exposure levels typical of community water fluoridation. But in different places in the document.

This enables activist cherry-picking to support an anti-fluoride agenda, and that is a fault of the document itself. It should clearly state that the monograph cannot be used to draw any conclusions at these low exposure levels. This is strongly expressed in the peer reviewers’ comments.

I find the blanket “presumed to be a hazard for humans” quite misleading. For example, no one says that calcium is “presumed to be a cardiovascular hazard to humans.” Or that selenium is “presumed to be a cardiovascular or neurological hazard to humans.” Or what about magnesium – would you accept that it is a “presumed neurological hazard to humans?” Would you accept that iron is a “presumed cardiovascular, cancer, kidney or erectile dysfunction hazard to humans?” Yet all those problems have been reported for humans at high intake levels of these elements.

No, we sensibly accept that various elements and microelements have beneficial, or even essential, roles in humans at reasonable intake levels. Then we sensibly warn that these same elements can be harmful at excessive intake. To proclaim that any of these elements are “presumed” to be hazardous – without clearly saying at excessive intake levels – is simply distorting or exaggerating the data.

What does “presumed” mean?

A lot of readers find the use of “presumed” strange. But its meaning is related to the levels of evidence found by reviewers.

No, don’t believe those anti-fluoride activists who falsely claim that “presumed” is the highest level of evidence and that the finding should be treated as factual. They are simply wrong.

Some idea of the word’s use is presented in this diagram from the NTP revised draft monograph.

So “presumed” means that the evidence for the effect is moderate – the effect is not established as factual or known. But as further evidence comes in, the ranking of fluoride as a hazard may increase or decline.

As the monograph bases this “presumed” rating solely on evidence from areas of endemic fluorosis where fluoride intake levels are high it is correct to avoid stating the effects as factual. For example, consider these images from areas of endemic fluorosis in China (taken from a slide presentation by Xiang 2014):

Clearly, people in these areas suffer a range of health effects related to the high fluoride intake. The cognitive effects like IQ loss from these areas could result from these other health effects, not directly from fluoride (although excessive fluoride intake leads to the health effects).

So we can “presume” that fluoride (in areas of endemic fluorosis where fluoride intake is excessive) is a “cognitive neurodevelopmental hazard for humans” but we cannot factually state that the neurodevelopmental effects are directly caused by fluoride. That would require further scientific work to elucidate the specific mechanisms involved in creating that effect.

Similar articles

The promotion of weak statistical relationships in science

Image credit: Correlation, Causation, and Their Impact on AB Testing

Correlation is never evidence for causation – but, unfortunately, many scientific articles imply that it is. While paying lip service to the correlation-causation mantra, some (possibly many) authors end up arguing that their data are evidence for an effect based solely on the correlations they observe. This is one of the reasons for the replication crisis in science, where contradictory results are reported and findings cannot be replicated by other workers (see I don’t “believe” in science – and neither should you).

Career prospects, institutional pressure and the need for public recognition encourage scientists to publish poor quality work that they then use to claim they have found an effect. The problem is that the public, the news media and even many scientists simply do not properly scrutinise the published papers. In most cases, they don’t have the specific skills required for this.

There is nothing wrong with doing statistical analyses and producing correlations. However, such correlations should be used to suggest future, more meaningful and better-designed research like randomised controlled trials (see Smith & Ebrahim 2002, Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers). They should never be used as “proof” of an effect, let alone to argue that the correlation is evidence to support regulations and advise policymakers.

Hunting for correlations

However, researchers will continue to publish correlations and make great claims for them because they face powerful incentives to promote even unreliable research results. Scientific culture and institutional pressures demand that academic researchers produce publishable results. This pressure is so great that they will often clutch at straws to produce correlations even when the initial statistical analysis produces none. They end up “torturing the data.”

These days epidemiological researchers use large databases and powerful statistical software in their search for correlations. Unfortunately, this leads to data mining which, by suitable selection of variables, makes the discovery of statistically significant correlations easy. The data mining approach also means that the often-cited p-values are meaningless. P-values measure the probability that the relationship occurs by chance and are often cited as evidence of the “robustness” of the correlations. But that probability is much greater when researchers check a range of variables, and this isn’t properly reflected in the p-values.

Where data mining occurs, even to a limited extent, researchers are simply attempting to make a silk purse out of a sow’s ear when they support their correlations merely by citing a p-value < 0.05, because these values are meaningless in such cases. The fact that so many of these authors ignore more meaningful results from their statistical analyses (like R-squared values, which indicate the extent to which the correlation “explains” the variation in their data) underlines their deceptive approach.
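The gap between “statistically significant” and “explains the data” is easy to demonstrate. This sketch (simulated numbers, not the data from any of the papers discussed here) fits a regression where the true effect is tiny: with a few hundred observations the p-value looks impressive while R-squared shows the predictor explains only a few percent of the variance:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n = 500
exposure = rng.normal(size=n)
outcome = 0.2 * exposure + rng.normal(size=n)   # weak true effect buried in noise

fit = linregress(exposure, outcome)
print(f"p-value   = {fit.pvalue:.2g}")          # typically far below 0.05
print(f"R-squared = {fit.rvalue ** 2:.3f}")     # only ~3-4% of the variance "explained"
```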

Poor statistical relationships

Consider the correlations below – two of the data sets are taken from a published paper; the other four use random data provided by Jim Frost in his book Regression Analysis: An Intuitive Guide.

You can probably guess which correlations come from real data (J and M) because there are so many more data points. All of these correlations have low p-values – but of course, those selected from the random data sets resulted from data mining, and their p-values are therefore meaningless because they are just a few of the many checked. Remember, a p-value < 0.05 means that the probability of a chance effect is one in twenty – and more than twenty variable pairs were checked in this random dataset.

The other two correlations are taken from Bashash et al (2017). They do not give details of how many other variables were checked in the dataset used but it is inevitable that some degree of data mining occurred. So, again, the low p-values are probably meaningless.

J provides the correlation of General Cognitive Index (GCI) scores in children at age 4 years with maternal prenatal urinary fluoride, and M provides the correlation of children’s IQ at age 6–12 years with maternal prenatal urinary fluoride. The paper has been heavily promoted by anti-fluoride scientists and activists. None of the promoters has made a critical, objective analysis of the correlations reported. Paul Connett, director of the Fluoride Action Network, was merely supporting his anti-fluoride activist bias when he uncritically described the correlations as “robust.” They just aren’t.

There is a very high degree of scattering in both these correlations, and the R-squared values indicate they cannot explain any more than about 3 or 4% of the variance in the data. Hardly something to hang one’s hat on, or to be used to argue that policymakers should introduce new regulations controlling community water fluoridation or ban it altogether.

In an effort to make their correlations look better, these authors superimposed confidence intervals on the graphs (see below). This Xkcd cartoon on curve fitting gives a cynical take on that. The grey areas in the graphs may impress some people but they do not hide the wide scatter of the data points. The confidence intervals refer to estimates of the regression coefficient, but when it comes to using the correlations to predict likely effects one must use the prediction intervals, which are very large (see Paul Connett’s misrepresentation of maternal F exposure study debunked). In fact, the estimated slopes in these graphs are meaningless when it comes to predictions.

Correlations reported by Bashash et al (2017). The regressions explain very little of the variance in the data and cannot be used to make meaningful predictions.
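For readers who want to see the difference between the two kinds of interval, here is a minimal sketch using simulated data with the same sort of scatter (statsmodels assumed; these are not the Bashash et al numbers). The confidence band describes uncertainty in the fitted line itself, while the prediction interval describes where an individual observation is expected to fall, and with this much scatter it is far wider:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
exposure = rng.normal(size=n)
outcome = 0.2 * exposure + rng.normal(size=n)      # weak relationship, heavy scatter

X = sm.add_constant(exposure)
fit = sm.OLS(outcome, X).fit()
bands = fit.get_prediction(X).summary_frame(alpha=0.05)

print("average width of 95% confidence band:    ",
      (bands.mean_ci_upper - bands.mean_ci_lower).mean())
print("average width of 95% prediction interval:",
      (bands.obs_ci_upper - bands.obs_ci_lower).mean())
# The prediction interval is several times wider than the confidence band
```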

In critiquing the Bashash et al (2017) paper I must concede that at least they made their data available – the data points in the two figures. While they did not provide full or proper results from their statistical analysis (for example they didn’t cite the R-squared values) the data does at least make it possible for other researchers to check their conclusions.

Unfortunately, many authors simply cite p-values and possible confidence intervals for the estimate of the regression coefficient without providing any data or images. This is frustrating for the intelligent scientific reader attempting to critically evaluate their claims.

Conclusions

We should never forget that correlations, no matter how impressive, do not mean causation. It is very poor science to suggest they do.

Nevertheless, many researchers resort to correlations they have managed to glean from databases, usually with some degree of data mining, to claim they have found an effect and to get published. The drive to publish means that even very poor correlations get promoted and are used by ideologically or career-minded scientists, and by activists, to attempt to convince policymakers of their cause.

Image credit: Xkcd – Correlation

Remember, correlations are never evidence of causation.

Similar articles

Can we trust science?

Image credit: Museum collections as research data

Studies based simply on statistically significant relationships found by mining data from large databases are a big problem in the scientific literature. Problematic because data mining, or worse data dredging, easily produces relationships that are statistically significant but meaningless. And problematic because authors wishing to confirm their biases and promote their hypotheses conveniently forget the warning that correlation is not evidence for causation and go on to promote their relationships as proof of effects. Often they seem to be successful in convincing regulators and policymakers that these relationships should result in regulations. Then there are the activists who don’t need convincing but will willingly and tiresomely promote these studies if they confirm their agendas.

Even random data can provide statistically significant relationships

The graphs below show the fallacy of relying only on statistically significant relationships as proof of an effect. They show linear regression results for a number of data sets. One data set is taken from a published paper – the rest use random data provided by Jim Frost in his book Regression Analysis: An Intuitive Guide.

All these regressions look “respectable.” They have low p-values (less than the conventional 0.05 limit) and the R-squared values indicate they “explain” a large fraction of the data – up to 49%. But the regressions are completely meaningless for at least 7 of the 8 data sets because the data were randomly generated and have no relevance to real physical measurements.
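A sketch of how such respectable-looking regressions can arise from pure noise (this is only an illustration; it does not reproduce the book’s actual datasets): generate many small random datasets, fit a line to each, and keep the one that happens to fit best.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
n_points, n_datasets = 10, 200        # small samples, many attempts

fits = [linregress(rng.normal(size=n_points), rng.normal(size=n_points))
        for _ in range(n_datasets)]
best = max(fits, key=lambda fit: fit.rvalue ** 2)

print(f"best R-squared = {best.rvalue ** 2:.2f}")   # often around 0.5, from noise alone
print(f"its p-value    = {best.pvalue:.3f}")        # usually below the 0.05 threshold
```

Nothing in the winning dataset is real; it has simply won a lottery among two hundred tries.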

This should be a warning that correlations reported in scientific papers may be quite meaningless.

Can you guess which of the graphs is based on real data? It is actually graph E – published by members of a North American group currently publishing data which they claim shows community water fluoridation reduces child IQ. This was from one of their first papers, where they claimed childhood ADHD was linked to fluoridation (see Malin, A. J., & Till, C. 2015. Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association).

The group used this paper to obtain funding for subsequent research. They obviously promoted this paper as showing real effects – and so have the anti-fluoride activists around the world, including the Fluoride Action Network (FAN) and its director Paul Connett.

But the claims made for this paper, and its promotion, are scientifically flawed:

  1. Correlation does not mean causation. Such relationships in larger datasets often occur by chance – hell, they even occur with random data as the figure above shows.
  2. Yes, the authors argue there is a biologically plausible mechanism to “explain” their association. But that is merely cherry-picking to confirm a bias, and there are other biologically plausible mechanisms they did not consider which would say there should not be an effect. The unfortunate problem with these sorts of arguments is that they are used to justify the findings as “proof” of an effect – to violate the warning that correlation is not causation.
  3. There is the problem of correcting for confounders or other risk-modifying factors. While acknowledging the need for future studies considering other confounders, the authors considered their choice of socio-economic factors was sufficient, and their peer reviewers limited their suggestion of other confounders to lead. However, when geographic factors were included in a later analysis of the data the reported relationship disappeared.

Confounders often not properly considered

Smith & Ebrahim (2002) discuss this problem in an article – Data dredging, bias, or confounding. They can all get you into the BMJ and the Friday papers. The title itself indicates how the poor use of statistics and unwarranted promotion of statistical analyses can be used to advance scientific careers and promote bad science in the public media.

These authors say:

“it is seldom recognised how poorly the standard statistical techniques “control” for confounding, given the limited range of confounders measured in many studies and the inevitable substantial degree of measurement error in assessing the potential confounders.”

This could be a problem even for studies where a range of confounders is included in the analyses. But Malin & Till (2015) considered the barest minimum of confounders and didn’t include ones which would be considered important to ADHD prevalence. In particular, they ignored geographic factors, and these were shown to be important in another study using the same dataset. Huber et al (2015) reported a statistically significant relationship of ADHD prevalence with elevation. These relationships are shown in this figure.

Of course, this is merely another statistically significant relationship – not proof of a real effect and no more justified than the one reported by Malin and Till (2015). But it does show an important confounder that Malin & Till should have included in their statistical analysis.

I did my own statistical analysis using the data sets of Malin & Till (2015) and Huber et al (2015) and showed (Perrott 2018) that inclusion of geographic factors removed the statistically significant relationship of ADHD prevalence with fluoridation suggested by Malin & Till (2015). Their study was flawed and it should never have been used to justify funding for future research on the effect of fluoridation. Nor should it have been used by activists promoting an anti-fluoridation agenda.
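The mechanism is easy to illustrate with a simulation (this is a generic sketch, not a re-analysis of the Malin & Till or Huber et al data): build an outcome that depends only on a geographic variable, let the “exposure” correlate with that same variable, and compare regressions with and without the confounder.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
elevation = rng.normal(size=n)                    # the real driver of the outcome
exposure = 0.6 * elevation + rng.normal(size=n)   # "exposure" correlated with geography
outcome = -0.5 * elevation + rng.normal(size=n)   # outcome depends on geography only

# Without the confounder the exposure looks "significantly" related to the outcome
naive = sm.OLS(outcome, sm.add_constant(exposure)).fit()
print("p-value for exposure, confounder omitted: ", round(naive.pvalues[1], 4))

# With the confounder included, the exposure coefficient collapses
X = sm.add_constant(np.column_stack([exposure, elevation]))
adjusted = sm.OLS(outcome, X).fit()
print("p-value for exposure, confounder included:", round(adjusted.pvalues[1], 4))
```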

But, then again, derivation of a statistically significant relationship by Malin & Till (2015) did get them published in the journal Environmental Health which, incidentally, has sympathetic reviewers (see Some fluoride-IQ researchers seem to be taking in each other’s laundry) and an anti-fluoridation chief editor – Philippe Grandjean (see Special pleading by Philippe Grandjean on fluoride). It also enabled the promotion of their research via institutional press releases, newspaper articles and the continual activity of anti-fluoridation activists. Perhaps some would argue this was a good career move!

Conclusion

OK, the faults of the Malin & Till (2015) study have been revealed – even though Perrott (2018) is studiously ignored by the anti-fluoride North American group, which has continued to publish similar statistically significant relationships between measures of fluoride uptake and measures of ADHD or IQ.

But there are many published papers – peer-reviewed papers – which suffer from the same faults and get similar levels of promotion. They are rarely subject to proper post-publication peer-review or scientific critique. But their authors get career advancement and scientific recognition out of their publication. And the relationships are promoted as evidence for real effects in the public media.

No wonder members of the public are so often confused by the contradictory reporting, the health scares of the week, they are exposed to.

No wonder many people feel they can’t trust science.

Similar articles

January ’21 – NZ blogs sitemeter ranking

Image credit: New Zealand Privacy Act 2020: standout features, fines and global comparisons

I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See  NZ Blog Rankings FAQ.

This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for January 2021. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or want help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks

Subscribe to NZ blog rankings by Email. Find out how to get subscription and email updates.


I don’t “believe” in science – and neither should you

We should be very careful about naively accepting claims made by the mainstream media – but this is also true of scientific claims. We should approach them intelligently and critically and not merely accept them on faith.

I cringe every time I read an advocate of science asserting they “believe in science.” Yes, I know they may be responding to an assertion made by supporters of religion or pseudoscience. But “belief” is the wrong word because it implies trust based on faith and that is not the way science works.

Sure, those asserting this may argue that they have this belief because science is based on evidence, not faith. But that is still a copout because evidence can be used to draw conclusions or make claims that are still not true. Anyway, published evidence may be weak, misleading or poorly interpreted.

Here is an example of this dilemma taken from the Vox article Hyped-up science erodes trust. Here’s how researchers can fight back.

The figure is based on data published in Schoenfeld, J. D., & Ioannidis, J. P. A. (2013). Is everything we eat associated with cancer? A systematic cookbook review. American Journal of Clinical Nutrition, 97(1), 127–134.

It is easy to cite a scientific article, for example, as evidence that wine protected one from cancer. Or that it, in fact, causes cancer. Unfortunately, the scientific literature is full of such studies with contradictory conclusions. Usually based on real data and statistical analyses which show a significant relationship. But, if it is easy to find such studies which can be claimed as evidence of opposite effects what good is a “belief” in science? All that simple “belief” does is provide a scientific source for one’s own beliefs, an exercise in confirmation bias.

This figure should be a warning to approach published findings in fields like nutritional epidemiology and environmental epidemiology critically and intelligently. One should simply not take them as factual – we should not “believe” in them simply because they are published in scientific journals.

Schoenfeld, & Ioannidis (2013) say of the studies they investigated that:

“the large majority of these studies were interpreted by their authors as offering evidence for increased or decreased risk of cancer. However, the vast majority of these claims were based on weak statistical evidence.”

They discuss problems such as the “pressure to publish,” undervaluation or not reporting negative results, “biases in the design, execution and reporting of studies” because nutritional ingredients “viewed as “unhealthy” may be demonized.” 

The authors warn that:

“studies that narrowly meet criteria for statistical significance may represent spurious results, especially when there is large flexibility in analyses, selection of contrasts, and reporting.”

And:

” When results are overinterpreted, the emerging literature can skew perspectives and potentially obfuscate other truly significant findings.”

They warn that these sorts of problems may be:

“especially problematic in areas such as cancer epidemiology, where randomized trials may be exceedingly difficult and expensive to conduct; therefore, more reliance is placed on observational studies, but with a considerable risk of trusting false-positive”

These comments are very relevant to consideration of recent scientific studies claiming a link between community water fluoridation and cognitive deficits. Studies that are heavily promoted by anti-fluoridation activists and, more importantly for scientific readers, by the authors of these studies themselves and their institutions. I have discussed specific problems in previous posts about the results from the Till group and their promotion by the authors.

The merging of pseudoscience with science

We seem to make an issue of countering pseudoscience with science but in the process are often oblivious to the fact that the two tend to merge – even for professional scientists. After all, we are human and all have our own biases to confirm and our jobs to advance.

This is a black and white contrast of science with pseudoscience promoted by Skeptics. It’s worth comparing this with the reality of the scientific world.

Do scientists always follow the evidence? Don’t they sometimes start with the conclusion and look for evidence to support it – even clutching at the straws of weak evidence (statistically weak relationships in environmental epidemiological studies which are promoted as proof of harmful effects)?

Oh for the ideal scientist who embraces criticism. Sure, they are out there but so many refuse to accept criticism, “circle the wagons” and end up unfairly and emotively attacking their critics. I describe one example in When scientists get political: Lead fluoride-IQ researcher launches emotional attack on her scientific critics.

Are claims always conservative and tentative? Especially when scientists have a career or institution to promote. And institutions, with their press releases, are a big part of this problem of overpromotion. Unfortunately, in environmental epidemiology, some scientists will take weak research results to argue that they prove a cause and then request regulation by policymakers. Specifically, there is the case of weak scientific data from Till’s research group being used to promote regulatory actions to confirm their biases.

Unfortunately, scientists with biases to confirm find it quite easy to ignore or downgrade the evidence which doesn’t fit. They may even work to prevent publication of countering evidence (see for example Fluoridation not associated with ADHD – a myth put to rest).

Conclusion

I could go on taking each point in order. But, in reality, I think such absolute claims about science are just not realistic. The scientific world is not that perfect.

In the end, the intelligent scientific reader must approach even the published literature very critically if they are to truly sift the wheat from the chaff.

Similar articles

December ’20 – NZ blogs sitemeter ranking

I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing or the numbers seem very low please check this out. After correcting send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See  NZ Blog Rankings FAQ.

This list is compiled automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for December 2020. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or want help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks

Subscribe to NZ blog rankings by Email. Find out how to get subscription and email updates.


Science is often wrong – be critical

Activists, and unfortunately many scientists, use published scientific reports like a drunk uses a lamppost – more for support than illumination

Uncritical use of science to support a preconceived position is widespread – and it really gets up my nose. I have no respect for the person, often an activist, who uncritically cites a scientific report. Often they will cite a report of which they have read only the abstract – or not even that. Sometimes commenters will support their claims by producing “scientific evidence” which is simply a list of citations obtained from PubMed or Google Scholar.

[Yes, readers will recognise this is a common behaviour with anti-fluoride activists]

Unfortunately, this problem is not restricted to activists. Too often I read scientific papers with discussions where authors have simply cited studies that support, or they interpret as supporting, their own preconceived ideas or hypotheses. Compounding this scientific “sin” is the habit of some authors who completely refuse to cite, or even discuss, studies producing evidence that doesn’t fit their scientific prejudices.

Publication does not magically make scientific findings or ideas “true” – far from it. The serious reader of scientific literature must constantly remember that the chances are high that published conclusions or findings are false. John Ioannidis makes this point in his article Why most published research findings are false. Ioannidis concentrates on the poor use, or misuse, of statistics. This is a constant problem in scientific writing – and it certainly underlines the fact that even scientists will consciously or unconsciously manipulate their data to confirm their biases. They are using statistical analysis the way a drunk uses a lamppost – for support rather than illumination.

Poor studies often used to fool policymakers

These problems are often not easily understood by scientists themselves, but the situation is much worse for policymakers. They are not trained in science and don’t have the scientific or statistical experience required for a proper critical analysis of claims made to them by activists. Yet they are often called on to make decisions which rely on the acceptance, or rejection, of scientific claims (or claims about the science).

An example of this is a draft (not peer-reviewed) paper by Grandjean et al  – A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children.

These authors have an anti-fluoride activist position and are campaigning against community water fluoridation (CWF). Their paper uses their own studies, which report very poor and rare statistical relationships of child IQ with fluoride intake, as “proof” of causation sufficiently strong to advocate for regulatory guidelines. Unsurprisingly, their recommended guidelines are very low – much lower than those common with CWF.

Sadly, their sciencey-sounding advocacy may convince some policymakers. It is important that policymakers be exposed to a critical analysis of these studies and their arguments. The authors will obviously not do this – they are selling their own biases. I hope that any regulator or policymaker required to make decisions on these recommendations has the sense to call for an independent, objective and critical analysis of the paper’s claims.

[Note: The purpose of the medRxiv preprints of non-peer-reviewed articles is to enable and invite discussion and comments that will help in revising the article. I submitted comments on the draft article over a month ago (Comments on “A Benchmark Dose Analysis for Maternal Pregnancy Urine-Fluoride and IQ in Children”) and have had no response from the authors.  This lack of response to constructive critiques is, unfortunately, common for this group. I guess one can only comment that scientists are human.]

Observational studies – exploratory fishing expeditions

A big problem with published science today is that many studies are nothing more than observational exploratory studies using existing databases which, by their nature, cannot be used to derive causes. Yet they can easily be used to derive statistically significant links or relationships. These can be used to write scientific papers but they are simply not evidence of causes.

Properly designed studies, with proper controls and randomised populations properly representing different groups, may provide reasonable evidence of causal relationships – but most reported studies are not like this. Most observational studies use existing databases with non-random populations where selection and confounding with other factors are huge problems. Authors are often silent about selection problems and may claim to control for important confounding factors, but it is impossible to include all confounders. The databases used may not include data for relevant confounders, and the authors themselves may not properly select all relevant confounders for inclusion.

This sort of situation makes some degree of data mining likely. This occurs when a number of different variables and measures of outcomes are considered in the search for statistically significant relationships. Jim Frost illustrated the problems with this sort of approach. Using a set of completely fictitious random data he was able to obtain a statistically significant relationship with very low p-values and R-squared values showing the “explanation” of 61% of the variance (see Jim Frost – Regression Analysis: An Intuitive Guide).

That is the problem with observational studies where some degree of data mining is often involved. It is possible to find relationships which look good, have low p-values and relatively high R-squared values, but are entirely meaningless. They represent nothing.
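Jim Frost’s exact example is not reproduced here, but the mechanism behind a high R-squared from random data is easy to sketch: with a small sample and many candidate variables, screen for the ones that happen to correlate with a pure-noise outcome and fit a model using only those. The variable counts below are purely illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, n_candidates = 20, 40                       # small sample, many candidate predictors

outcome = rng.normal(size=n)                   # pure noise - there is nothing to find
candidates = rng.normal(size=(n, n_candidates))

# Screen every candidate separately and keep the four that look "best"
single_p = [sm.OLS(outcome, sm.add_constant(candidates[:, j])).fit().pvalues[1]
            for j in range(n_candidates)]
best_four = np.argsort(single_p)[:4]

model = sm.OLS(outcome, sm.add_constant(candidates[:, best_four])).fit()
print(f"R-squared of the 'discovered' model: {model.rsquared:.2f}")   # often 0.4-0.6
print(f"overall F-test p-value:              {model.f_pvalue:.4f}")   # often "significant"
```

None of the selected predictors means anything; the apparent fit is manufactured entirely by the selection step.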

So readers and users of science should beware. The findings they are given may be completely false, contradictory, or at least meaningless in quantitative terms (as is the case with the relationships produced by the Grandjean et al 2020 group discussed above).

A recent scientific article provides a practical example of this problem. Different authors used the same surgical database but produced completely opposite findings (see Childers et al 2020, Same Data, Opposite Results?: A Call to Improve Surgical Database Research). By themselves, each study may have looked convincing. Both used the same large database from the same year. Over 10,000 samples were used in both cases, and both studies were published in the same journal within a few months. However, the inclusion and exclusion criteria used were different. Large numbers of possible covariates were considered, but these differed. Similarly, different outcome measures were used.

Readers interested in the details can read the original studies or a Skeptical Scalpel blog article, Dangerous pitfalls of database research. However, Childers et al (2020) describe how the number of these sorts of observational studies “has exploded over the past decade.” As they say:

“The reasons for this growth are clear: these sources are easily accessible, can be imported into statistical programs within minutes, and offer opportunities to answer a diverse breadth of questions.”

However:

“With increased use of database research, greater caution must be exercised in terms of how it is performed and documented.”

“. . . because the data are observational, they may be prone to bias from selection or confounding.”

Problems for policymakers and regulators

Given that many scientists do not have the statistical expertise to properly assess published scientific findings it is understandable for policymakers or regulators to be at a loss unless they have proper expert advice. However, it is important that policymakers obtain objective, critical advice and not simply rely on the advocates who may well have scientific degrees. Qualifications by themselves are not evidence of objectivity and, undoubtedly, we often do face situations where scientists become advocates for a cause.

I think policymakers should consciously seek out a range of scientific expert advice, recognising that not all scientists are objective. Given the nature of current observational research, its use of existing databases and the ease with which researchers can obtain statistically significant relationships I also think policymakers should consciously seek the input of statisticians when they seek help in interpreting the science.

Surely they owe this to the people they represent.

Similar articles