Category Archives: SciBlogs

Poor peer review – and its consequences

[Diagram: links between the journal, editors and reviewers involved in publishing Malin & Till (2015)]

See below for citations used

The diagram above displays links between the journal, editors and reviewers in the case of the paper Malin & Till (2015). I discussed these links before in Poor peer-review – a case study but thought a diagram merited a separate post. “A picture is worth a thousand words,” they say.

Unfortunately, I suspect, such incestuous arrangements around the publication of a scientific paper are probably not too unusual. I guess it is human nature for authors to choose a journal which might be sympathetic (or biased) towards their ideas. In this case, the journal and its editors clearly have an orientation towards chemical toxicity hypotheses. The journal even allows authors to suggest possible referees, so again it is only human nature for the authors to suggest referees they consider sympathetic – or for Grandjean or Bellinger to suggest referees they know are sympathetic to their own chemical toxicity hypotheses.

Human nature – but certainly not in the best interests of science – or the best outcome for the paper. The authors could have suggested at least some referees with experience in the field of attention deficit hyperactivity disorder (ADHD). And the editors could have done the same. This way they could have produced a better outcome – proper revision of the paper to consider other factors besides chemical toxicity. Or even the withdrawal of the paper itself once everyone realised that their fluoride toxicity hypothesis didn’t stand up to proper testing.

Just imagine if referees like the seven authors of Huber et al (2015) had been considered. I discussed their paper in ADHD link to fluoridation claim undermined again. It considered the same ADHD data as Malin & Till (2015) but found other, non-chemical factors were implicated. In particular, they found a correlation with altitude. If a referee had suggested that Malin & Till consider factors like altitude, their paper may never have seen the light of day. It would, at least, have been heavily modified.

And we would not have anti-fluoride activists and “natural”/alternative health web pages and magazines promoting the myth that community water fluoridation causes ADHD.


Choi, A. L., Sun, G., Zhang, Y., & Grandjean, P. (2012). Developmental fluoride neurotoxicity: A systematic review and meta-analysis. Environmental Health Perspectives, 120(10), 1362–1368.

Choi, A. L., Zhang, Y., Sun, G., Bellinger, D., Wang, K., Yang, X. J., … Grandjean, P. (2015). Association of lifetime exposure to fluoride and cognitive functions in Chinese children: A pilot study. Neurotoxicology and Teratology, 47, 96–101.

Grandjean, P., & Landrigan, P. J. (2014). Neurobehavioural effects of developmental toxicity. Lancet Neurol, 13(March), 330–338.

Huber, R. S., Kim, T.-S., Kim, N., Kuykendall, M. D., Sherwood, S. N., Renshaw, P. F., & Kondo, D. G. (2015). Association Between Altitude and Regional Variation of ADHD in Youth. Journal of Attention Disorders.

Malin, A. J., & Till, C. (2015). Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association. Environmental Health, 14.

 

Connett fiddles the data on fluoride

I am always suspicious when activists present simple figures to confirm their bias and fool their audience. I think anti-fluoride activists do this a lot. Here is an example in Paul Connett’s presentation to the recent Sydney anti-fluoride conference.

Connett uses data from Xiang et al (2003) and some of Xiang’s other papers and presentations to push his claim that fluoridation is bad for your IQ. Apparently he and Bill Hirzy (currently described as Fluoride Action Network’s “chemist in residence”) are working on a paper attempting to justify a case that the maximum permissible level of fluoride in drinking water should be reduced to practically zero! They use a simplification of data from Xiang’s paper for this.

[Figure: Xiang et al (2003) data – children’s IQ plotted against urine fluoride concentration]

First of all, the figure shows what Xiang’s data looks like. It compares IQ with urine fluoride concentration – unfortunately he did not give a similar figure for fluoride concentration in drinking water. However, drinking water fluoride is well correlated with urine fluoride.

There is certainly a lot of scatter, but Xiang (2003) reports a “Pearson correlation coefficient of 0.174, p=0.003.” So a statistically significant relationship (helped by having a large number of samples) but it still only explains about 3% of the variance!
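For readers who want the arithmetic: the proportion of variance explained is simply the square of the Pearson correlation coefficient.

```latex
r^2 = (0.174)^2 \approx 0.03 \quad\text{– roughly 3\% of the variance in IQ}
```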

This is important because, although Xiang did consider some confounding factors, he could well have missed a factor that explains more of the variance and which, when considered, may make the relationship with serum fluoride concentration non-significant. For example, I would be interested to see a statistical analysis which included the incidence of moderate and severe dental fluorosis, as this may be more important than the drinking water fluoride concentration itself.

[Image: slide from Paul Connett’s Sydney presentation showing the same data as category averages]

But have a look at how Paul Connett presents this data (or the equivalent data for drinking water fluoride concentration) in his Sydney anti-fluoride conference presentation.

The “trick” has been to divide the data into “categories” based on separate water fluoride concentration ranges and then to present only the averages within each category. I can see the point of sometimes using such categories, but this figure conveys a very misleading message.

The Sydney audience could have been excused for thinking that Xiang’s data showed a very strong connection between IQ and drinking water fluoride – a relationship explaining almost all the variance. That is completely misleading, as this relationship probably explains only about 3% of the variance in the original data.
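To see how this averaging “trick” works, here is a minimal sketch using made-up numbers – the fluoride levels, IQ scores and noise are all assumed for illustration and are not Xiang’s data:

```python
# Synthetic demonstration: a weak individual-level relationship looks dramatic
# once the data are binned into ranges and only the category means are plotted.
import numpy as np

rng = np.random.default_rng(0)
n = 290                                             # roughly Xiang's sample size
fluoride = rng.uniform(0.5, 4.5, n)                 # assumed water fluoride levels (mg/L)
iq = 100 - 1.5 * fluoride + rng.normal(0, 10, n)    # weak signal buried in noise

r_raw = np.corrcoef(fluoride, iq)[0, 1]
print(f"raw data:       r = {r_raw:.2f}, r^2 = {r_raw**2:.2f}")   # only a few % of variance

bins = np.digitize(fluoride, [1, 2, 3, 4])          # group into fluoride ranges
mean_f = [fluoride[bins == b].mean() for b in np.unique(bins)]
mean_iq = [iq[bins == b].mean() for b in np.unique(bins)]
r_binned = np.corrcoef(mean_f, mean_iq)[0, 1]
print(f"category means: r = {r_binned:.2f}, r^2 = {r_binned**2:.2f}")  # most scatter averaged away
```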

Paul Connett and William Hirzy are currently campaigning to make IQ the key factor for determining the maximum permissible levels of fluoride in drinking water. They might confuse a few politicians with this sort of distortion, but hopefully the real decision-makers will be awake to such tricks.

It really is a matter of “the reader beware.” Never take on trust what these political activists are saying. Always go to the original sources and consider them critically and intelligently.


ADHD link to fluoridation claim undermined again

Recently I suggested that Attention Deficit Hyperactivity Disorder (ADHD) was better correlated with elevation than with community water fluoridation (see ADHD linked to elevation not fluoridation). I criticised the study of Malin and Till (2015) for limiting their investigation to a chemical toxicity hypothesis and pointed out that once confounding factors like elevation are included their reported relationship between ADHD and community water fluoridation (CWF) disappears.

Seems I am not the only one to notice this. A new paper reports that same relationship:

Huber, R. S., Kim, T.-S., Kim, N., Kuykendall, M. D., Sherwood, S. N., Renshaw, P. F., & Kondo, D. G. (2015). Association Between Altitude and Regional Variation of ADHD in Youth. Journal of Attention Disorders.

They used data sets for the prevalence of ADHD in 2007 and 2010 in US states and found a negative relationship with average state elevation. Their correlation coefficients (R² = .38, p < .001; R² = .31, p < .001 respectively) are similar to the one I found.

This paper effectively supports my earlier conclusion:

“I do not think Malin and Till (2015) are justified in drawing the conclusion that CWF influences ADHD. Their mistaken conclusion has arisen from their limited choice of data considered for the exploratory analysis. That in itself seems to have resulted from a bias inherent in their hypothesis that ‘fluoride is a widespread neurotoxin’.”

I was not advancing an alternative hypothesis, but Huber et al. (2015) did suggest one:

“As decreased dopamine (DA) activity has been reported with ADHD and hypoxia has shown to be associated with increased DA, we hypothesized that states at higher altitudes would have lower rates of ADHD.”

But the important lesson is that once factors like elevation are taken into account, there is no statistically significant relationship with CWF. The Malin & Till (2015) paper, currently heavily promoted by anti-fluoride propagandists, is flawed.

See also: Rates of ADHD appear to decrease at higher altitudes


Commercial and ideological support of anti-fluoride activity

Fluoride Free NZ (FFNZ) promotes a list of “NZ Health Professionals who are calling for an end to fluoridation.” I am generally cynical about such endorsement lists, but the details in this list do give a picture of the commercial and ideological alignment of the FFNZ supporters and activists.

So I did my own analysis, dividing the list into those described as “Science and Environmental PhD Professionals”, “NZ Dentists”, “NZ Doctors” and alternative health professionals (chiropractors, naturopaths, homeopaths, etc.). Of course, this is approximate as, for example, some listed as doctors may have specialised in one or another alternative field. The pie chart below shows the distribution of FFNZ supporters among these groups.

Clearly, with such a large proportion of supporters coming from alternative health fields, this distribution is not representative of professionals in general, let alone health professionals. However, anyone who has looked at the anti-fluoride movement or debated with anti-fluoride activists would not be surprised, as “natural”/alternative health arguments and sources are frequently used.

I wonder, though, to what extent local body councillors are aware of this commercial and ideological orientation when considering the submissions they get on the fluoridation issue. I suspect they aren’t. Yet groups like FFNZ engineer these submissions from their supporters – often providing templates for individuals to sign – and usually dominate the submission process.

Personally, I think this is a defect in our system of representative democracy – councils should actually insist on declarations of conflicts of interest, and details of employment and commercial interests, from submitters. Their failure to do this explains how some local bodies, like the Hamilton City Council, have unwittingly been captured by ideological and commercial interests from the “natural”/alternative health industry during such submission processes.

Financial links

Declarations of conflicts of interest and details of employment, etc., may to some extent help identify big business interests financing this sort of submission in future. At the moment, we are largely left to speculate.

However, there are financial data available showing the money trail involved in at least one anti-fluoride campaign – the High Court case against the South Taranaki District Council aiming for a judicial review of a decision to fluoridate water supplies in Patea and Waverley (see Who is funding anti-fluoridation High Court action? and Corporate backers of anti-fluoride movement lose in NZ High Court). This action was taken by New Health NZ – an incorporated body set up by the NZ Health Trust in November 2013.

Statements of financial performance of these two organisations are available online and show the following movements of large amounts of money during the year to March 2014.

[Image: financial statements of the NZ Health Trust and New Health NZ]

As the NZ Health Trust is a lobby group for the “natural”/alternative health industry, the grants it receives must come out of the profits of this industry – which is actually a big business in New Zealand. Although the financial statements do not identify sources and recipients, the $100,000 grant to New Health NZ clearly came from its parent body and is included in its declared $125,000 of grants and donations. The $95,156 paid out by New Health NZ in professional and consulting fees would have covered the costs involved in their High Court action.

So this is a clear example of pretty direct funding of anti-fluoride activity (the High Court action) by corporate interests – the “natural”/alternative health industry. But none of the reporting of this High Court action identified the commercial interests involved. Readers were given the impression that New Health NZ was just another one of these anti-fluoride activist groups and possibly assumed funds for the legal action came from donations.

Again, this is a flaw in our representative democratic system. There should be more transparency of financial links. Corporate interests should not be able to hide behind astroturf organisations and the pretence that their actions are the result of concerned citizens rather than the ideological and commercial interests of big business.

Is confirmation bias essential to anti-fluoride “research”?

Anti-fluoride propagandists like Declan Waugh and Paul Connett avidly scan the scientific literature looking for anything they can present as evidence for harmful effects of community water fluoridation (CWF). Sometimes they will even do their own “research” using published and online health data, looking for any correlations with CWF, or even just with fluoride levels in drinking water.

Several years ago an activist going under the nom de plume “Fugio” posted images showing correlations of mental retardation, adult tooth loss and ADHD with the incidence of CWF in the US. These images are simply the result of “research” driven by confirmation bias and data dredging. They prove nothing. Correlation is not proof of a cause. And no effort was made to see if other factors could give better correlations.

I go through Fugio’s examples below – partly because I noticed one of their images surfacing recently on an anti-fluoridation Facebook page as “proof” that CWF causes tooth loss. But also because they are just more examples of the type of limited exploratory analysis used in two recently published papers – Peckham et al. (2015) (discussed in my article Paper claiming water fluoridation linked to hypothyroidism slammed by experts) and Malin and Till (2015) (discussed in my articles More poor-quality research promoted by anti-fluoride activists, ADHD linked to elevation not fluoridation and Poor peer-review – a case study).

ADHD

This figure is essentially the same as that reported by Malin & Till (2015). In fact, I wonder if Fugio (who posted December 2012) is the unattributed source of Malin & Till’s hypothesis. Fugio chose the ADHD data for 2007 and fluoridation data for 2006 whereas Malin and Till (2015) concentrated mainly on fluoridation data for 1992 which had the highest correlation with ADHD figures.

I won’t discuss this further here – my earlier article ADHD linked to elevation not fluoridation shows there are a number of other factors which correlate with ADHD prevalence just as well as or better than CWF incidence does, and which should at least have been considered as confounding, if not the main, factors. I found a model using only mean elevation, home ownership and poverty (no CWF included) explained about 48% of the variation, whereas their model using CWF and mean income explained only 22-31% of the variation. And when these confounding factors were considered, the correlation of ADHD with CWF was not statistically significant.

In other words we could do a far better job of predicting ADHD prevalence without involving CWF.
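For anyone wanting to repeat this kind of check, the sketch below shows the general approach – fitting the regression with and without CWF and comparing. It is only a sketch: the file name and column names (adhd_pct, elevation, home_ownership, poverty, cwf_pct) are hypothetical stand-ins for the real state-level data set.

```python
# Hedged sketch of a confounder check with multiple regression.
# The file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

states = pd.read_csv("state_data.csv")   # hypothetical state-level data set

# Model without fluoridation: elevation, home ownership and poverty only
m1 = smf.ols("adhd_pct ~ elevation + home_ownership + poverty", data=states).fit()

# Add CWF to see whether it contributes anything once the confounders are included
m2 = smf.ols("adhd_pct ~ elevation + home_ownership + poverty + cwf_pct", data=states).fit()

print(m1.rsquared, m2.rsquared)          # compare the variance explained
print(m2.pvalues["cwf_pct"])             # is CWF significant after adjustment?
```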

Water Fluoridation and Adult Tooth Loss

Fugio posted a figure showing a correlation of adult tooth loss with CWF incidence in 2008. It was statistically significant, explaining 11% of the variation. But quite a few other factors display better correlations with adult tooth loss. For example, the data for smoking by itself explains 66% of the variation (see the figure below).

[Figure: adult tooth loss plotted against the percentage of smokers, by US state]

Checking out correlations with a range of factors, I found a model involving only smoking and longitude explained about 74% of the variation. The contribution from CWF was not statistically significant – it added nothing to this model.
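A quick way to do this kind of screening is to compare how much of the variation each candidate factor explains on its own before fitting a combined model. Again, this is only a sketch with hypothetical column names:

```python
# Hedged sketch: rank candidate state-level factors (hypothetical column names)
# by the share of variation in adult tooth loss each explains on its own (r squared).
import pandas as pd

states = pd.read_csv("state_data.csv")          # hypothetical state-level data set
candidates = ["smoking_pct", "longitude", "poverty", "cwf_pct"]

r2 = (states[candidates].corrwith(states["tooth_loss_pct"]) ** 2).sort_values(ascending=False)
print(r2)   # a factor like smoking may explain far more of the variation than CWF does
```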

Water Fluoridation and Mental Retardation

Fugio found a better relationship between CWF in 1992 and mental retardation in 1993 – a correlation explaining 19% of the variation. Apparently the concept of “mental retardation” was later abandoned as there do not appear to be any more recent statistics.

But again, if Fugio had not stopped there he/she would have found a number of other factors with better correlations. I give an example in the figure below, where state educational level (% with a Bachelor’s degree in 1993) explained 50% of the variation. This correlation is negative, as we might expect.

[Figure: mental retardation (1993) plotted against state educational level (% with a Bachelor’s degree, 1993)]

Again I used multiple regression analysis to derive a model involving educational level (% with a Bachelor’s degree in 1993), poverty in 1993 and mean state elevation, which explained 69% of the variation. There was no statistically significant contribution from CWF.

Conclusions

I am not suggesting here that the factors I identified have a causal effect – simply that they give better correlations than CWF does. These and similar confounding factors should have been considered by Fugio and by Malin and Till (2015).

My purpose is to show that this sort of exploratory analysis of easily available data can easily produce results for anti-fluoride activists who are searching for some “sciency”-looking arguments to back up their position – provided they don’t look too deeply, stop while they are ahead and refuse to consider the influence of other factors.

Unfortunately, poor peer review by some journals is allowing publication of work that is no better than this. Peckham et al (2015) did nothing to check out other factors except gender in their correlations of hypothyroidism with CWF. The glaring omission was, of course, dietary iodine, which is known to have a causative link with hypothyroidism. (I could not find US data for hypothyroidism so was unable to check out Peckham et al’s hypothesis for the US.) Malin and Till (2015) included only socioeconomic status (as indicated by income) in their analysis, despite the fact that ADHD is known to be related to a number of factors like smoking and alcohol intake.

As I keep saying, when it comes to understanding the scientific literature it really is a matter of “reader beware.” It’s easy to find papers supporting one’s pet obsession if you are not critical and sensible with your literature searches. And it is important not to take at face value the claims of activists who clearly rely on confirmation bias when they explore the literature.

The will to find out

[Image]

 

Something for all of us to keep in mind.

IQ not influenced by water fluoridation

That is what the US data for average state IQ and percent water fluoridation tells us.

I thought I would check out the US IQ and fluoridation data for each state after reading Malin and Till (2015). That paper compared the prevalence of ADHD by state with the percent fluoridation in each state. There are problems with the paper (see ADHD linked to elevation not fluoridation and Poor peer-review – a case study) but what is good for the goose is good for the gander. How do the corresponding statistics for IQ compare?

IQ data for US states are not readily available, but I managed to find a data set of IQ estimates by state in 2000 based on Scholastic Aptitude Test scores. The correlation of these average IQ scores with water fluoridation (1992) is not at all statistically significant. The slope of the trend line in the plot below is not significantly different from zero (-0.04 to +0.01 at the 95% confidence level, as represented by the dashed lines).

[Figure: mean state IQ (2000) plotted against percent water fluoridation (1992), with trend line and 95% confidence limits shown as dashed lines]
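A minimal sketch of how such a trend-line test can be done – again, the file and column names are hypothetical:

```python
# Hedged sketch: fit a simple trend line and check whether its slope differs from zero.
import pandas as pd
from scipy import stats

df = pd.read_csv("state_iq_fluoridation.csv")     # hypothetical data file
res = stats.linregress(df["pct_fluoridated"], df["mean_iq"])

half_width = 1.96 * res.stderr                    # approximate 95% confidence interval
print(f"slope = {res.slope:.3f} +/- {half_width:.3f}")
# If the interval spans zero (e.g. -0.04 to +0.01 as in the figure above),
# the slope is not significantly different from zero.
```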

This lack of correlation is not at all surprising. After all, the only published study to compare IQ and community water fluoridation (CWF) is that of Broadbent et al. (2014) – they also did not find any statistically significant relationship.

So what are the anti-fluoride propagandists on about?

They do not rely on studies involving CWF but instead claim support from studies of areas where fluorosis due to excess fluoride is endemic – e.g. Choi et al. (2012). These and similar studies have reported a correlation of IQ with drinking water fluoride – but there are two problems:

  1. Very little was done in these studies to consider confounding factors. There is the possibility that inclusion of these confounding factors in the correlations would show that fluoride does not make a statistically significant contribution to IQ changes.
  2. Generally the authors have assumed a chemical toxicity explanation without any real justification. The data can be explained by other mechanisms, such as the influence of the disfiguring effect of severe dental fluorosis on quality of life and learning (Perrott, 2015). In the few cases where data for severe dental fluorosis were included, its relationship with IQ is statistically significant (e.g. Choi et al., 2015) (see Severe dental fluorosis the real cause of IQ deficits?). Severe dental fluorosis is not a problem in areas where CWF is used.

There is no need to consider confounding factors for the correlation in the above figure, as CWF does not explain any of the variation in IQ. But I did find statistically significant relationships between IQ and a number of other factors. The plots below show the data for premature births in 1990-1991 and average percent poverty in 2002-2004. These correlations by themselves explain 50% and 63% of the variation in IQ respectively. Combined, they explain 69% of the variation.

[Figures: mean state IQ plotted against premature births (1990-1991) and against average percent poverty (2002-2004)]

The percentage of CWF in each state explains none of the variation.

It would be more rational for those concerned about CWF to get active on issues related to poverty and premature births.

The community water fluoridation issue is a dead duck as far as IQ is concerned.


Making sense of scientific research

This has been a common theme here as I have campaigned against cherry-picking research papers, relying on confirmation bias and putting blind faith in peer-review as a guarantee of research quality.

In short I have pleaded for readers to approach published research critically and intelligently.

The article The 10 stuff-ups we all make when interpreting research from The Conversation gives some specific advice on how to do this. Well worth keeping in mind when you next set out to scan the literature to find the current state of scientific knowledge on a subject that interests you.


UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Understanding what’s being said in any new research can be challenging and there are some common mistakes that people make.

Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?

Well maybe you’re smart and didn’t make any mistakes – but more likely you’re like most humans and accidentally made one of these 10 stuff ups.

1. Wait! That’s just one study!

You wouldn’t judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.

If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.

The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.

People who blindly accepted Andrew Wakefield’s (now retracted) study – when all the other evidence was to the contrary – fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.

2. Significant doesn’t mean important

Some effects might well be statistically significant, but so tiny as to be useless in practice.

You know what they say about statistics? Flickr/Frits Ahlefeldt-Laurvig, CC BY-ND

Associations (like correlations) are great for falling foul of this, especially when studies have huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.

One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.

The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1%. At this effect size – and considering the possible costs associated with taking aspirin – it is dubious whether it is worth taking at all.

3. And effect size doesn’t mean useful

We might have a treatment that lowers our risk of a condition by 50%. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002%), then reducing that might be a little pointless.

We can flip this around and use what is called Number Needed to Treat (NNT).

In normal conditions if two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.
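The arithmetic behind that example, using the standard definition of NNT as the reciprocal of the absolute risk reduction:

```latex
\text{NNT} = \frac{1}{\text{absolute risk reduction}}
           = \frac{1}{\tfrac{2}{100\,000} - \tfrac{1}{100\,000}}
           = 100\,000
```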

4. Are you judging the extremes by the majority?

Biology and medical research are great for reminding us that not all trends are linear.

We all know that people with very high salt intakes have a greater risk of cardio-vascular disease than people with a moderate salt intake.

Too much or too little salt – which is worse? Flickr/JD Hancock, CC BY

But hey – people with a very low salt intake may also have a high risk of cardio-vascular disease too.

The graph is U shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.

5. Did you maybe even want to find that effect?

Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.

There are numerous examples of this confirmation bias but studies such as this reveal how disturbing the effect can be.

In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.

6. Were you tricked by sciencey snake oil?

A classic – The Turbo Encabulator.

You won’t be surprised to hear that sciencey-sounding stuff is seductive. Hey, even the advertisers like to use our words!

But this is a real effect that clouds our ability to interpret research.

In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!

7. Qualities aren’t quantities and quantities aren’t qualities

For some reason, numbers feel more objective than adjectivally-laden descriptions of things. Numbers seem rational, words seem irrational. But sometimes numbers can confuse an issue.

For example, we know people don’t enjoy waiting in long queues at the bank. If we want to find out how to improve this, we could be tempted to measure waiting periods and then strive to try and reduce that time.

But in reality you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.

If you asked people to describe how waiting made them feel, you might discover it’s less about how long it takes, and more about how uncomfortable they are.

8. Models by definition are not perfect representations of reality

A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.

But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.

While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their sub-elements work.

This doesn’t mean people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.

9. Context matters

The US president Harry Truman once whinged about all his economists giving advice, but then immediately contradicting that with an “on the other hand” qualification.

Individual scientists – and scientific disciplines – might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.

To ponder this we can look at bike helmet laws. It’s hard to deny that if someone has a bike accident and hits their head, they’ll be better off if they’re wearing a helmet.

Do bike helmet laws stop some people from taking up cycling? Flickr/Petar, CC BY-NC

But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.

Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and now helmet use may in fact be negatively impacting overall public health.

Valid, reliable research can find that helmet laws are both good and bad for health.

10. And just because it’s peer reviewed that doesn’t make it right

Peer review is held up as a gold standard in science (and other) research at the highest levels.

But even if we assume that the reviewers made no mistakes or that there were no biases in the publication policies (or that there wasn’t any straight out deceit), an article appearing in a peer reviewed publication just means that the research is ready to be put out to the community of relevant experts for challenging, testing, and refining.

It does not mean it’s perfect, complete or correct. Peer review is the beginning of a study’s active public life, not the culmination.

And finally …

Research is a human endeavour and as such is subject to all the wonders and horrors of any human endeavour.

Just like in any other aspect of our lives, in the end, we have to make our own decisions. And sorry, appropriate use even of the world’s best study does not relieve us of this wonderful and terrible responsibility.

There will always be ambiguities that we have to wade through, so like any other human domain, do the best you can on your own, but if you get stuck, get some guidance directly from, or at least originally via, useful experts.


This article is part of a series on Understanding Research.

Further reading:
Why research beats anecdote in our search for knowledge
Clearing up confusion between correlation and causation
Where’s the proof in science? There is none
Positives in negative results: when finding ‘nothing’ means something
The risks of blowing your own trumpet too soon on research
How to find the knowns and unknowns in any research
How myths and tabloids feed on anomalies in science

The frustrations of modern technology

[Image]

And after all that there is the problem of remembering the password(s).

March ’15 – NZ blogs sitemeter ranking

There are now over 300 blogs on the list, although I am weeding out those which are no longer active or have removed public access to their sitemeters. (Let me know if I weed out yours by mistake, or get your stats wrong).

Every month I get queries from people wanting their own blog included. I encourage these queries and am happy to respond to them, but I have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at NZ Blog Rankings FAQ. This is particularly helpful to those wondering how to set up sitemeters.

Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for March 2015. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers.

Meanwhile I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any, or if you would like help adding publicly available stats to your blog.

You can see data for previous months at Blog Ranks

Subscribe to NZ Blog Rankings

Subscribe to NZ blog rankings by Email

Find out how to get Subscription & email updates
