Tag Archives: peer review

We need more post-publication peer review

We often tout peer review as the reason for accepting the veracity of published scientific studies. But how good is it really? Does it ever match the ideal picture people have of it? And what about peer review before and after publication – are we neglecting these important stages?

Pre-publication peer review

Here I mean the collective process of evaluating ideas and presentations together with scientific colleagues. It’s great when it happens. Ideas flow and the critiques help prevent mistakes from persisting.

This happens during discussion of research proposals and of research results. It happens during preparation of presentations.

But, unfortunately, it does not always happen – in fact, I suspect it may be relatively rare. When scientific reforms were introduced into New Zealand almost 30 years ago I noticed some scientific colleagues became less forthcoming about their ideas and research proposals. An air of competition seemed to destroy the previous cooperation.

Maybe things are better now. Hopefully there is less competition between individuals and within groups and institutions – although I imagine the competition between institutions will always be a problem. Quite apart from competing for grants, humans simply identify with their own groups and fall victim to the “them vs us” problem.

Publication peer review

There is an impression that publication peer review happens only when the paper is submitted to a journal. But I think some of the best reviewing of a draft paper actually comes from colleagues before submission. That is why I strongly appreciated the institutional requirements I experienced that a draft paper be peer-reviewed within the institution before submission.

Unfortunately, not all institutions require this. I sometimes think many universities which don’t require this are taking “academic freedom” too far.

Perhaps some scientists see this as only landing extra work on them – but surely knocking a paper into better shape before submission is beneficial to both authors (getting a better draft) and institutions (maintaining a reputation with journals).

Then there is the peer review organised by the journal. Many people think that is the only peer review. Just as well it isn’t because it can be very bad.

I am sure many poor-quality papers slip through to be published simply because reviewers do not do a good job or spend insufficient time on that job. Personally, my impression of reviewers and journals drops when I see reviewers’ comments indicating a lack of attention or responsibility. Even worse, when I have had a paper accepted by an editor saying the reviewers had no comments, I seriously questioned the quality of the journal and the advisability of submitting to it in future.

Still, when an author gets conscientious reviewers, and comments indicating the paper has been read carefully, they can’t help but be appreciative – even if it means more work knocking the paper into shape.

As a reviewer, I always attempted to do a thorough job – even if it meant producing an over-long and detailed report. I once received, via an editor, a response from an author I had reviewed expressing appreciation of the detail, so I know such attention to detail is worthwhile.

I think most scientific authors will have occasionally faced the problem of brief or perfunctory reviewing of their submitted papers and can, therefore, understand the feelings behind that note.

Post-publication peer review

This is hardly ever considered. Once published, the authors move on – their job is done. Readers also tend to be very accepting of published papers – after all, peer review means that the paper’s findings must be trustworthy.

But this is obviously not the case. I think the slogan “reader beware” applies just as much to the scientific literature as it does to the news media. The reader should not automatically accept reported findings or conclusions as correct – just because the paper was peer-reviewed. They should do their own due diligence, consider all papers critically and avoid automatic acceptance.

Formal post-publication peer review can occur – but it is not as common as it should be. Some online journals provide space for readers’ comments. That is helpful to the author but not adequate for proper evaluation.

The best post-publication peer review comes from published critiques because they become part of the established literature and available to anyone following up a subject or reviewing a field. Some journals provide space for shorter critiques of this sort – not requiring these authors to present new and original data but simply critique what has been published. Of course, despite the lower requirements such critiques should undergo their own peer review consistent with the policies of the journal.

The ethics of post-publication review

This is a sore point for me – having had an editor recently refuse to consider a critique of mine (see Fluoridation not associated with ADHD – a myth put to rest).

Surely there is a moral obligation for a journal, and its editor, to consider submissions of critiques of papers they have published? This is the obvious place for a critique – and the journal can normally then offer the right of reply to the original authors. The writer of a critique should not have to search out an alternative journal – especially as the lack of new data or new research in a critique makes its acceptance by an alternative journal problematic. Nor should the original authors be denied an automatic right of reply which can be provided by the original journal.

Authors of a critique can face obstacles like the cost of publication. An original paper may be published in a journal which extracts publication fees from the author. It is the original authors’ decision whether or not to publish in such journals. But it seems unethical to expect the submitter of a critique to pay such fees. That puts a financial hurdle in the way of proper scientific peer review. The original authors’ institution may be prepared to cover the cost of publication, but institutions are unlikely to cover critiques in the same way.

The other obstacle is, of course, the attitude of editors. It is surely just common sense that critiques should undergo the normal peer review but when journals or editors refuse outright to even consider a critique, to not even enable it to undergo peer review, then that is ethically wrong.

Similar articles


Leader of flawed fluoridation study gets money for another go


Professor Christine Till has been given a $300,000 grant to test for harmful effects of fluoride.

Malin and Till (2015) published research indicating a relationship between fluoridation and Attention Deficit Hyperactivity Disorder (ADHD). However, that study was flawed because it omitted important confounders. When these are included the relationship disappears.

I analysed that study in my article ADHD linked to elevation not fluoridation where I showed the relationship of ADHD to elevation was much more important than fluoridation. Huber et al. (2015) published work confirming the relationship of ADHD with elevation. So, obviously, elevation is an important confounder and Malin and Till (2015) did not consider it in their study.

My own analysis indicated that there were a number of other confounders which are related to ADHD – with correlations similar to fluoridation’s (e.g., educational attainment, proportion of the state’s population older than 65, and per capita personal income) or better (mean state elevation, home ownership and % living in poverty). That rings alarm bells – why consider only one factor (fluoridation) if there are other factors which appear equally or more important? Isn’t that confirmation bias? (I concede that Malin and Till did include a socioeconomic measure in their statistical analysis – but this was clearly not enough.)

I tested the relative importance of the different factors using multiple regression and – sure enough – found that once a few important confounders were included, water fluoridation could not explain any of the variance in ADHD! The statistically significant factors were mean elevation, home ownership, and poverty. The contribution of fluoridation was not statistically significant in this multiple regression.

A model including mean state elevation, home ownership and poverty explains about 45% of the variance in ADHD – much better than fluoridation could (Malin and Till explained 27-32% for the fluoridation data).
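The comparison described above can be sketched with a simple least-squares fit. The data below are invented purely for illustration – the variable names mirror the state-level factors discussed, but none of the numbers come from the actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # one observation per US state

# Invented illustrative data: ADHD prevalence is driven by elevation,
# home ownership and poverty; fluoridation is correlated with elevation
# but contributes nothing on its own.
elevation = rng.uniform(0.0, 2.0, n)        # mean state elevation (km)
ownership = rng.uniform(0.55, 0.75, n)      # home ownership rate
poverty = rng.uniform(0.08, 0.20, n)        # fraction below poverty line
fluoridation = 0.9 - 0.2 * elevation + rng.normal(0.0, 0.05, n)
adhd = (0.12 - 0.02 * elevation - 0.1 * ownership
        + 0.3 * poverty + rng.normal(0.0, 0.005, n))

def r_squared(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_fluoride = r_squared(fluoridation[:, None], adhd)
r2_confounders = r_squared(
    np.column_stack([elevation, ownership, poverty]), adhd)
print(f"fluoridation alone: {r2_fluoride:.2f}, confounders: {r2_confounders:.2f}")
```

Because fluoridation here is only a noisy proxy for elevation, it explains some variance on its own, but once the real drivers are in the model, adding it as a fourth column contributes essentially nothing – the same pattern reported above.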

Now, I read that Professor Till has been given research funds to have another go at possible harmful effects of fluoride (see York professor leads study that could help answer fluoride safety questions). She plans to look at data from a Canadian investigation of pregnant women exposed to contaminants. She says:

“Our study employs a prospective design that includes biomarkers of exposure to fluoride, detailed assessment of potential confounders, a comparison group, and the use of sensitive cognitive and behavioural measures that have been collected in one of the world’s most comprehensively characterized national pregnancy cohorts (MIREC).”

Now, I am pleased she aspires to a “detailed assessment of potential confounders” but wonder how detailed this will be after the problems with the Malin and Till (2015) study.

I have not yet seen any published response to the Malin and Till paper – maybe the cost of publication (US$2,020) at that journal is discouraging critics. It certainly discouraged me (I do not have institutional support for publication costs). Nevertheless, I hope Professor Till has been acquainted with some of the criticism of that paper so that she can pay more attention to important confounders in the coming work.

We can draw a few lessons from this.

Be careful of published statistical relationships

These days it is so easy to hunt down data and do this sort of exploratory statistical searching for significant relationships. But a statistically significant relationship is not evidence of a real cause. For example, there is a strong relationship between the sales of organic produce and prevalence of autism – but I have yet to hear anyone seriously suggest the relationship is at all causal.
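A toy simulation makes the point: correlate one outcome against many entirely unrelated “factors” and some will clear the conventional significance threshold by chance alone. Everything below is pure random noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_factors = 50, 100

outcome = rng.normal(size=n_states)               # e.g. some prevalence measure
factors = rng.normal(size=(n_states, n_factors))  # 100 unrelated variables

# Pearson correlation of the outcome with each unrelated factor.
corrs = np.array([np.corrcoef(outcome, factors[:, j])[0, 1]
                  for j in range(n_factors)])

# For n = 50, |r| > 0.28 corresponds roughly to p < 0.05 (two-tailed).
hits = int(np.sum(np.abs(corrs) > 0.28))
print(hits, "spuriously 'significant' relationships out of", n_factors)
```

On average about 5 of the 100 noise factors will look “significant” at the 5% level – which is why a single significant correlation, found by searching through available data, is weak evidence of a real cause.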

But the scientific literature is still full of such studies – and I guess the motivated author can easily find arguments and other data in the literature that they, at least, feel convincing enough to justify publication.

Refereeing of scientific papers is, on the whole, abysmal

All authors have a pretty good idea of which journals, and reviewers, will be friendlier to their work – and which would be antagonistic. It is only natural to submit to the friendlier journal.

Unfortunately, the Malin and Till paper was submitted to a journal with editors known to be friendly to a chemical toxicity model of cognitive deficits. Further, it turns out that the reviewers chosen for the paper were also supportive of such an approach.

While one reviewer did suggest including lead as a possible confounder (again showing a chemical toxicity bias), neither suggested consideration of other confounders more likely to be connected with ADHD.

I discussed the editorial and reviewer problems of the Malin and Till paper in an earlier post. (The journal, Environmental Health, has a transparent peer-review process which provides access to the names and reports of the reviewers.)

Again – another example of readers beware – even readers of scientific papers in credible journals.

Similar articles

Peer review – the “tyranny” of the third reviewer

Yes, I know, this video clip is old hat. It’s been used so many times it has got boring. And, maybe most readers won’t really relate to the way it is used here.

Still, those who have published scientific papers and suffered the emotional roller-coaster of peer-review will be aware of the problem of the “third reviewer.”

The more cynical will describe the peer review process as a bit of a farce. Usually, a journal will use three reviewers. Many times the first one presents a glowing, but undetailed report, recommending publication. The second reviewer will reject the paper out of hand, recommending that no way should it be published. But their recommendation is similarly undetailed.

The third reviewer will have done the work, hopefully in detail and conscientiously. And they may recommend publication – but only after their detailed critique is dealt with. But many authors dread their report because it usually means a lot more work for them – maybe even more experimental work.

I have seen authors get very emotional about specific peer reviews of their papers – and it is usually the “third reviewer” that upsets them. It means more work – and the detailed critique seems to be harder to handle than the undetailed outright rejection of the second reviewer. Perhaps the Hitler video clip is not too far from the truth.

Still, I think the peer-review process hangs on the “third reviewer.” There need to be far more reviewers who take their role seriously in this way. The first two reviewers are just lazy and opinionated – their comments are worth nothing.

For more on the peer-review process and its problems read Is scientific peer review a “sacred cow” ready to be slaughtered? « Science-Based Medicine

Similar articles

Poor peer-review – a case study


“Peer-review” status is often used to endorse scientific papers cherry-picked because they support a bias.

Many scientists are not impressed with the peer-review processes scientific journals use. Like democracy, this peer-review is better than all the available alternatives but it certainly doesn’t guarantee published scientific papers are problem-free.

Sure, peer-reviewed sources are better than others which have no quality control. But it is still a matter of “customer beware.” The intelligent users of scientific literature must do their own filtering – make their own critical judgements of the likely reliability of reported scientific findings.

Despite this, people often use the “peer-reviewed” description to endorse published findings (especially if they confirm their own biases) without any critical assessment. This happens a lot in on-line debates of “controversial” issues.

Here I will go through the details of the peer review of a recently published paper which anti-fluoride activists are endorsing and promoting, but others are criticising. The paper is:

Malin, A. J., & Till, C. (2015). Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association. Environmental Health, 14.

I have discussed this paper in recent posts (see More poor-quality research promoted by anti-fluoride activists and ADHD linked to elevation not fluoridation). The journal, Environmental Health, has a transparent peer-review process which provides access to the names and reports of the reviewers. This reveals problems with the review process in this case. Below I discuss the responsibility of the authors, reviewers and the journal for the problems with this paper and its reported findings.

Authors’ responsibilities

The authors are clearly committed to a pet theory that fluoride is a neurotoxicant which could contribute to ADHD prevalence. Nothing wrong with that – we all feel committed to our hypotheses. We wouldn’t be human if we didn’t. But the best way to produce evidence for a hypothesis is to test it in a way that could prove it wrong.

In this case the authors found a correlation between ADHD prevalence in US states and the amount of community water fluoridation in each state. Trouble is, one can find just as good a correlation, or even a better correlation, with many other criteria for which state prevalence statistics are available. I listed a few in  ADHD linked to elevation not fluoridation. Some of these factors are also correlated with community water fluoridation suggesting the correlation reported by Malin and Till (2015) may be deceptive.

A proper test of the fluoridation hypothesis would include considering the effect of including such confounders together with fluoridation in their statistical analysis. Malin and Till (2015) did include one other criterion – the median household income for 1992 – but did not include any others. I find this surprising because they acknowledged ADHD results from the interaction of genetic and environmental factors. While fluoridation is not usually considered a relevant factor, things like smoking and premature births are, and there is conflicting evidence about the role of economic factors like poverty and income.

In my article ADHD linked to elevation not fluoridation I showed ADHD prevalence is better explained by a few of these factors without any input from water fluoridation.

I can’t help feeling the limited consideration of confounding factors results from a desire to protect the fluoridation hypothesis and therefore not test it properly.

Reviewers’ responsibilities

Again, such a desire is only human. But reviewers should have picked this up during their own considerations.

Interestingly, only one of the two reviewers raised the possibility of other confounders – specifically, lead levels. This is of course valid, as lead is a recognised neurotoxicant – but why did neither reviewer question why other factors like smoking, premature births and social or regional factors were not considered?

I believe that is because both reviewers had research interests directed at chemical toxicity and not ADHD or similar mental characteristics. A matter of someone with a hammer only seeing nails.

The reviewers and their research interests are:
Marc Weisskopf whose reviews are available here and here.

“Some examples of my current work are exploring how exposure to, e.g., lead, manganese, and air pollution affect cognitive function and psychiatric symptoms; how exposure to Agent Orange and other herbicides used in Vietnam relate to the development of PD; and how formaldehyde and lead exposure relate to the development of ALS.”

Anna Choi whose review is here.

“Dr Choi’s research focuses on the effects of environmental exposures on health outcomes. She has been studying the birth cohorts in the Faroe Islands where exposures to environmental chemicals including mercury, PCBs, and PFCs are increased due to traditional marine diets. In addition, she also studies the effects of the contaminants on cardiovascular function and type 2 diabetes among the Faroese septuagenarians. She is also actively involved in the research on the impact of nutrients as possible negative confounders that may have caused an underestimation of methylmercury toxicity. Dr Choi’s other research interests include studying the adverse effects of fluoride exposure in children.”

Why were reviewers with wider research experience not chosen? This journal allows authors to propose suitable reviewers themselves. Or the reviewers may have been chosen by the associate editor handling this paper – Prof David Bellinger. His research focuses on the neurotoxicity of metabolic and chemical insults in children. So again it may just be the blinkered view of someone whose research background stressed the role of neurotoxicants rather than other factors likely to influence ADHD prevalence.

The journal’s responsibility

I noticed that one of the two chief editors (who have final say over acceptance of submitted papers) of this journal is Prof Philippe Grandjean. He himself has been actively promoting the idea that fluoride is a neurotoxicant purely on the evidence of the meta-review of Choi et al. (2012). Yes, he is a co-author of that review, and Choi is one of the reviewers of the Malin and Till paper. The review of Choi et al. (2012) related to areas, mainly in China, where fluoride concentrations are higher than those used in community water fluoridation – areas where endemic fluorosis is common.

I have to wonder if Grandjean’s well-known position on fluoride and community water fluoridation was a consideration in choosing this journal for publication.

Others have commented that the journal Environmental Health is considered low-quality based on its low impact factor. I do not know the area well enough to pass judgement myself. However, I notice that the journal charges authors for publishing their paper (£1290/$2020/€1645 for each article accepted for publication). This sort of charge, associated with poor-quality peer review, makes me suspicious. I have commented on this sort of journal before in my post Peer review, shonky journals and misrepresenting fluoride science.


This is one example of peer review and paper acceptance which brings into question the idea of using publication and peer review as endorsement of a study’s quality. I am sure this is not an isolated case. Even with the best of intentions journal editors and reviewers are limited by their own areas of expertise. Journal publication and peer review is a far from perfect process – even if it is preferable to current alternatives.

Unfortunately activists will promote poor quality studies like this by blindly using the study’s peer-review status.

The intelligent reader should beware of such blind endorsements. Knowing the human foibles which exist in the research and publication processes, such a reader will consider the contents of the paper and not rely on peer-review status. They will consider the evidence and conclusions critically. And if they don’t have enough background to make their own critical assessment they will consider the views of others with the required expertise and not blindly accept what political activists tell them.


Just came across this article referring to peer-review problems in journals published by BioMed Central – Major publisher retracts 43 scientific papers amid wider fake peer-review scandal.

BioMed Central publishes the Journal Environmental Health discussed in this post. I am not suggesting this paper was part of the peer-review racket discussed in the article. But the news item does highlight the point I am making that intelligent readers need to consider published scientific papers carefully and critically and not blindly rely on “peer-review” endorsement.

Similar articles

Sceptical humility and peer review in science

This follows on from my recent post about Rebecca Watson’s condemnation of evolutionary psychology (see Sceptical arrogance and evolutionary psychology). Rebecca has now delivered this lecture several times in New Zealand. None of them was local, so I couldn’t personally check if she had taken criticism on board. However, she was interviewed this morning on Kim Hill’s Saturday Morning (here’s the mp3 link).

She has not withdrawn her overall criticism of evolutionary psychology. She makes clear in her interview that this, as well as pop psychology in the media, are her targets (I’ll come back to that below). In fact, I think she makes her position even more untenable by providing a very naive description of peer review in science.

I wish sceptics when they defend science would describe it more realistically. It doesn’t help when they describe a utopian version which doesn’t exist in reality.

What is peer review?

This is an important social process in science where scientists’ ideas, conference presentations, and publications are reviewed by others in their field. Their peers. This helps reduce, maybe even eliminate, the influence of biases and pet, but unsubstantiated, theories held by the author. I have pointed out before that we are all prone to such cognitive biases – it’s part of being human. And having a PhD doesn’t eliminate this human foible.

Scientists are human. And individually it’s hard to escape from our biases. The social nature of science helps to reduce their effect. Hence the importance of peer review.

But I must stress, such review doesn’t just operate at the time of a paper’s publication as Rebecca asserted in the interview. And it’s not just carried out under the watch of the scientific journal. Peer review occurs at all stages.

Peer review is occurring when hypotheses and ideas are being floated with colleagues. In more formal settings like departmental seminars, ideas are presented and exposed to criticism and improvement. Effectively conferences do the same thing to presentations. But often preparation of a conference presentation will have been reviewed by institutional colleagues – formally and informally. (Many groups will even have practice “dry runs” where the scientific content may be considered by colleagues as well as the details of the presentation, speech and visual aids).

Now, getting around to publications (the only area Rebecca included in her naive description). Good scientific institutes will have procedures which ensure formal internal review well before the paper is sent to the journal. And good journals will also have formal procedures to ensure quality and scientific standards in what they accept. Philosopher Massimo Pigliucci provided details of the procedure he uses as a journal editor in one of his Rationally Speaking podcasts (see 57: Peer Review). If you aren’t familiar with the process it’s worth listening to. Often such review will involve three anonymous referees, with the requirement that authors respond to questions and recommendations, and final decisions on acceptance are made by an editor.

But wait, that’s not all.

The scientific peer review has barely started. Once published, the research and conclusions are exposed to a far greater audience of peers. There’s plenty of opportunity for acceptance or rejection by peers. Often journals will accept “Letters to the Editor” types of response. Other scientists will condemn or support those conclusions when they write discussions in their own publications. There is scope for independent people to repeat the work, or usually something similar rather than exactly the same, and publish different conclusions.

Science is dynamic – our knowledge improves all the time. Publications are not sacred – they are easily and often superseded. (And there is scope for withdrawal of published papers when mistakes or scientific fraud are found).

It’s a mistake to think a published paper is the final authoritative stamp of approval on scientific ideas. It isn’t. And that’s the mistake Rebecca makes in her naive reference to the concept of peer review in science.

Rebecca presented an idealistic version of peer review where all mistakes, particularly scientific ones, are detected during prepublication review of a paper. She says such mistakes should never make it into the published version. Yet, she says, this is happening in evolutionary psychology and she gives specific examples where she critiques research findings and not just media coverage.

Well, guess what Rebecca. Such mistakes are probably made to some degree in all scientific fields. We are human after all. Mistakes do get into published papers (one of mine has my own name spelt wrong – five times). And all publishing scientists are well aware that some journals have much lower standards than others. We have probably all had a paper accepted without any feedback or criticisms from reviewers. Maybe even just on the decision of the editor. I certainly downgrade my impression of the journal when it happens to me.

Those shonky studies

Personally, I think peer review during publication may be a particular problem in the “soft sciences.” At least, I have been surprised to see some ideas presented in this area without supporting evidence, or with obviously selected references. Perhaps these weaker standards are inevitable in some areas. Perhaps this allows more scope for intrusion of “political correctness” and popular ideological positions. Or perhaps authors feel less need to justify ideas if they are consistent with the prevailing ideologies in their field or institutes. Maybe the ideological issues in these areas are just too harsh to handle objectively. I imagine this might be true for feminism in the US and race relations in New Zealand.

I am sure Rebecca can find evolutionary psychology research journals where the quality of review is poor or ideologically compromised. But I am sure she could also find, if she looked, journals and publications where the standard of peer review is much higher. Perhaps her interest in feminist ideology and preoccupation with sex-related research has soured her overall view. I wouldn’t like to make that judgement. But soured it is.

Evolutionary psychology is being targeted

Some US bloggers have defended Rebecca on this issue by claiming her criticisms were only of pop psychology and media presentation. They refuse to acknowledge her inclusion of the whole field of evolutionary psychology in her attacks. Or else they excuse it. Maybe that is just the human propensity to defensiveness coming out. Those sceptics may just be guilty of motivated reasoning (I referred to this in Sceptical arrogance and evolutionary psychology). For their sake I include this slide from Rebecca’s talk where she specifically describes her version of evolutionary psychology and critiques it.


I understand evolutionary psychology in broad terms as the application of an evolutionary perspective to human and animal psychology. This doesn’t require researchers to assume that human evolution stopped in the Pleistocene – or any of the other bullet points she has.

Rebecca has set up a straw man version of evolutionary psychology. Maybe that’s because of limitations in her reading or understanding. Maybe just because of her preoccupation with feminism and gender issues. But a straw man nevertheless.

Peer review for Rebecca

Rebecca Watson would have benefited immensely from some peer review herself before finalising her presentation. And all is not lost. Her presentation is getting peer review now. Yes, some of it will be rubbish which she should ignore. But there are some excellent comments being made she would be wise to take on board.

See also:
Science denialism at a skeptic conference
Science Denialism? The Role of Criticism
Oh gob, evo psych again?
Evolving skeptic psychology
Responsible Reading
Responsible Writing
FTB Blogger Stephanie Zvan Makes A Small Mistake
Let’s Confirm Negative Stereotypes About Women
αEP: Shut up and sing!
Do You Need To Be An Expert To Criticize Science?

Similar articles

ID research and publications

Here is another post to mark Darwin Day.

The pro-intelligent design (ID) internet echo chamber has been making a big thing of late about “peer-reviewed papers supporting intelligent design.” Their “Center for Science and Culture” has even published an updated list. (PZ Myers has provided a more accessible version of the list at More bad science in the literature).

This of course does raise some questions about what they mean by “peer review” and the real nature of some of the journals these papers are in (have a look at their in-house journal Bio-Complexity). But leaving those issues aside for now I just don’t think any of these papers are reporting “ID research.”

The nature of “ID research”

To me research supporting intelligent design should postulate some structured hypotheses for ID and seek to test them or validate them against reality. But none of the articles do that. Most, especially ones that are published in credible journals, deal with aspects of evolutionary science.

Sure they may postulate a problem, an example or issue where they feel current science does not have an answer. That’s what I expect in a scientific paper. Identification of problems and reporting work on them.

Like all areas of science, evolutionary science has its so far unanswered questions, its problems and anomalies. It is perfectly natural and perfectly acceptable to identify and investigate them. But calling such work “supporting intelligent design” is just dishonest. No specific ID hypotheses have been advanced, let alone tested.

This always seems to be the case for any list of “peer-reviewed scientific papers supporting ID.”

“Theistic science” – or argument by default

Nor, by the way, do these papers display any example of the alternative to "materialist" science: their declared aim of replacing modern science with a "theistic science" (see Wedge Strategy and Theistic science? No such thing). If they were doing any work like this, why isn't it demonstrated in their publications? I would love to see examples of such research and to identify the different methods characteristic of such science.

Listing these papers as supporting ID simply assumes that any criticism, any problem, any gap in evolutionary science is, by default, evidence for ID.

It’s not.

Relying on cranks

David L Abel

Another issue with this publication list, one which does supply some mirth, is the frequent occurrence of publications by David L. Abel (17% of the total list). He has attracted some attention because he published a paper in the journal Life, which had recently received attention for publishing the wacky paper "Theory of the Origin, Evolution, and Nature of Life" by Erik D. Andrulis. (See The comparison to jabberwocky is inevitable for PZ Myers' in-depth discussion of that paper). Abel's article is titled "Is Life Unique?" Myers describes this as "Intelligent Design creationism crap" and "drivel" (see More bad science in the literature). But Myers was impressed with Abel's address and affiliation:

Department of ProtoBioCybernetics and ProtoBioSemiotics, Origin of Life Science Foundation, Inc., 113-120 Hedgewood Drive, Greenbelt, MD 20770

Turns out this is a residential house, and Abel is probably the only "employee," but it does have an official name plate beside his front door! As PZ says:

“That’s every intelligent design creationism institute of scientific thinking: a cheap sign tacked up on a garage, with some guy with delusions of competence twiddling his thumbs inside.” (see Zooming in on the Origin of Life Science Foundation)

Abel himself describes his institute as a “science and education foundation with corporate headquarters near NASA’s Goddard Space Flight Center just off the Washington, D. C. Beltway in Greenbelt, MD.”  If you are not careful in your reading you might assume he was actually based at a NASA site!

And here is the information on Abel held in his profile at the ID journal:

David L. Abel

Affiliation: The Gene Emergence Project; The Origin-of-Life Science Foundation
Bio statement: Director, The Gene Emergence Project; Department of ProtoBioCybernetics & ProtoBioSemiotics; The Origin of Life Science Foundation, Inc.

These lists of "peer-reviewed papers supporting ID" are getting rather desperate.

Similar articles

Personal attacks on climate scientists

The American Association for the Advancement of Science (AAAS) released a statement this week expressing concern about the personal attacks currently being made on climate scientists by politicians and others. The text of the statement follows:

Statement of the Board of Directors of the American Association for the Advancement of Science
Regarding Personal Attacks on Climate Scientists
Approved by the AAAS Board of Directors
28 June 2011

We are deeply concerned by the extent and nature of personal attacks on climate scientists. Reports of harassment, death threats, and legal challenges have created a hostile environment that inhibits the free exchange of scientific findings and ideas and makes it difficult for factual information and scientific analyses to reach policymakers and the public. This both impedes the progress of science and interferes with the application of science to the solution of global problems. AAAS vigorously opposes attacks on researchers that question their personal and professional integrity or threaten their safety based on displeasure with their scientific conclusions. The progress of science and protection of its integrity depend on both full transparency about the details of scientific methodology and the freedom to follow the pursuit of knowledge. The sharing of research data is vastly different from unreasonable, excessive Freedom of Information Act requests for personal information and voluminous data that are then used to harass and intimidate scientists. The latter serve only as a distraction and make no constructive contribution to the public discourse.

Scientists and policymakers may disagree over the scientific conclusions on climate change and other policy-relevant topics. But the scientific community has proven and well-established methods for resolving disagreements about research results. Science advances through a self-correcting system in which research results are shared and critically evaluated by peers and experiments are repeated when necessary. Disagreements about the interpretation of data, the methodology, and findings are part of daily scientific discourse. Scientists should not be subjected to fraud investigations or harassment simply for providing scientific results that are controversial. Most scientific disagreements are unrelated to any kind of fraud and are considered a legitimate and normal part of the scientific process. The scientific community takes seriously its responsibility for policing research misconduct, and extensive procedures exist to protect the rigor of the scientific method and to ensure the credibility of the research enterprise.

While we fully understand that policymakers must integrate the best available scientific data with other factors when developing policies, we think it would be unfortunate if policymakers became the arbiters of scientific information and circumvented the peer-review process. Moreover, we are concerned that establishing a practice of aggressive inquiry into the professional histories of scientists whose findings may bear on policy in ways that some find unpalatable could well have a chilling effect on the willingness of scientists to conduct research that intersects with policy-relevant scientific questions.

Real science – warts and all

After the PR hype NASA seemed purposely to promote around the "arsenic bacteria" research published in Science (see NASA and old lace), there has been quite a critical reaction: critical of the way the story was hyped by NASA, but also critical of the work itself.

The whole story does raise issues of how science is done, how it is published and reviewed, and how it is reported in the media.  It also raises issues about the sometimes negative role institutions like NASA can play in all this.

There is a useful discussion of this on the latest Guardian Science Weekly podcast (see Science Weekly podcast: Global criticism of the arsenic bacteria study; plus, we expose some dating myths).

Download mp3

A panel of “those in the know,” including astrobiologist Dr Zita Martins from Imperial College London and science writer David Dobbs who has been blogging and tweeting about this specific research, discuss the issue. David writes for the Atlantic Monthly, New York Times Magazine, Slate, National Geographic, Audubon, and Scientific American Mind, where he is a contributing editor. There is also a clip from Carl Zimmer speaking on NHPR (New Hampshire Public Radio).

The discussion gives a good idea of how science is actually done – warts and all! It looks behind the sometimes idealised public image and considers the problem of scientists' own emotional agendas, the reality of peer review and new issues arising from the way science is conducted in the internet age.

The panelists expect any problems with the "arsenic bacteria" research to be resolved over time by the normal processes of science, and they stress that the issues discussed are more general.

As an extra, and for light relief, the podcast also contains comments from Dr Petra Boynton of UCL exposing four key myths about dating.

The Guardian Science Weekly recently won an award for the best science podcast. It is well presented and informative. Worth subscribing to and following.

Similar articles


Putting the IPCC in its place?

The blog The Climate Scum is worth keeping an eye on. It's satirical, of course, but its content is not too far from what we often find on the internet. By the way, it's written by Baron von Monckhofen.

Here’s an extract from a recent post Reforming the IPCC: how to do it properly!

“The following measures are intended to turn the IPCC and its future assessment reports into vehicles for Truth and Reason instead of vehicles for Eco-Fascist Fraud and Deception, as they have been so far.

  1. No communists like Hansen and Mann should be allowed to participate. Only politically independent and objective people should be allowed. Thus, all participants must have read and memorized “Atlas Shrugged”.
  2. No people who receive grants for doing climate science should be allowed to participate. Such people will just make up scary things so they can get even more grants. Only economically independent people should be allowed.
  3. Likewise, no people who publish climate science articles in peer-reviewed journals should be allowed. They just want to cite their own papers and those of their tribe.
  4. No Chinese or Indians, who just want to weaken the competitiveness of the West. Tricky bastards!
  5. No previous IPCC participant can participate in the new IPCC (in particular not Pachauri)! As everybody who has any experience with management knows, if you want to change an organization the first thing you must do is to get rid of all members/employees.
  6. All previous IPCC participants must release all the email correspondence they have ever had. Releasing email correspondence is vital for the auditing of science and to guarantee repeatability and transparency.
  7. All IPCC prisoners must be released and all weapons of mass destructions must be disarmed.
  8. Any IPCC participant that claims that CO2 can affect the climate must, in order to be credible, abstain from travelling in airplanes and in cars, living indoors, eating warmed food and breathing.
  9. No use of models. Good science is based on empirical observations, and not models. In particular, any “predictions” and “projections” about the future must be entirely based on observations, and not models. If Galileo and Newton and Maxwell and Einstein had been diddling with models, science would never have progressed.
  10. No use of temperature data. Temperature data, whether from thermometers on the ground or those mounted on satellites, are notoriously unreliable and affected by the urban heat island effect.
  11. Likewise, sea level data, carbon dioxide data, precipitation data, arctic ice volume data and climate proxies must be avoided, as they are inherently unreliable and unscientific.
  12. Climate data from other planets must be included, so we can compare the warming on Earth, the Moon, Mars, Jupiter, Halley’s comet and the iron-core Sun. No theory that cannot explain all these warming incidents should be taken seriously.
  13. Anecdotal evidence, such as medieval Chinese fleets navigating around the North Pole, should not be dismissed unless proven wrong beyond the shadow of a doubt. To rely more on instruments than on human observers and chroniclers is elitistic and in its essence anti-human.
  14. No references should be allowed to any shady grey literature, like WWF reports. Only shiningly white NGOs working for the benefit of mankind, like the Heartland Institute, should be referenced. White humans are more important than grey frogs!
  15. No references should be allowed to journals like Nature and Science, which have been participating in the suppression of AGW-skeptical papers. Only truly openminded and unbiased journals like Energy & Environment should be referenced.
  16. For each unbalanced alarmist reference, there must be at least one skeptical reference in order to assure fairness and balance.
  17. Uncertainty should be specified according to the scale “Uncertain”, “Highly uncertain”, “Extremely uncertain” and “Completely wrong”.
  18. The best science nowadays is done on blogs, where new ideas easily can be proposed and peer review is instant. Hence, the focus of the assessment reports should be moved from reviewing what is published in the ivory-tower journals to what is published on the blog science blogs. The blogs belonging to journals like Science and Nature do not count – they are just ivory tower blogs masquerading as blog science blogs.
  19. The assessment reports should not exceed 20 pages, and all information should be presented as comic strips. In that way, even illiterate people with a limited attention span will be able to comprehend it. (Like Al Gore, he he!)
  20. In order to ensure its independence, the IPCC should not receive any funding from governments. Instead, it has to find its own financing, for instance by selling advertisements in the assessment reports. The taxpayer money that is saved can be used for more important things, like eradicating malaria and giving tax cuts for productive citizens.”

A desperate plea to be noticed?

Quite a few local bloggers* have commented on the legal action some New Zealand climate deniers are taking to get NIWA to change its national temperature record. This is only the latest step in a nasty little campaign by these people to deny the reality of climate change. Nasty because it distorts the data and facts and makes outrageous attacks on the integrity and honesty of New Zealand scientists. The latest step – but I do wonder if it is the last step – seeing it is likely to backfire.

Initially this campaign attempted to take advantage of the “climategate” email hysteria to whip up local anti-science feelings. Of late, as this hysteria has dispersed, the local deniers have deteriorated into a small but vocal clique making carping and dishonest attacks on NIWA. I guess they see this legal action as a way of somehow revitalising their campaign.

Continue reading