
Scientific papers, civil disobedience and personal networks


Image credit: 4 tips on finding and reading scientific papers…

Jerry Coyne raises an important issue about science publishing on his blog, Why Evolution is True: the problem of most published scientific papers being behind a paywall and so inaccessible to readers who do not have an institutional subscription – unless they pay an exorbitant fee of US$30 or more per paper.

His article, Scientists engage in civil disobedience, share copyrighted papers, is aimed mainly at scientists, but the problem is probably greater for the non-scientist, as most working scientists already have institutional subscriptions and libraries that can source papers where there is no subscription.

Incidentally, this is also a big problem for the retired scientist. Since the advent of Human Resources Departments, one loses all privileges and access on retirement. Cards and PINs for building access no longer work. Email addresses are lost. And access to institutional networks, databases, libraries and journal subscriptions disappears too.

It is particularly a problem for people who wish to discuss scientific evidence online – whether they have a scientific background or not. Paywalls often mean that discussion is hindered because people rely only on abstracts (and sometimes only titles!). Sure, we are all familiar with trolls who will make confident assertions on even less evidence – but they are dishonest. I strongly believe that participants in these discussions have a responsibility to at least read the papers they cite.

So, it is frustrating to post a blog article about a new paper knowing that many readers simply don’t have access to more than the abstract. Providing a link to a copy of these papers violates copyright and there are limits to the amount of text that can be quoted in a blog post.

So what is a reader to do? I wouldn’t recommend paying exorbitant fees for a paper which may or may not be useful – that only encourages science publishers in a practice that is little more than blackmail or piracy.

Here are two suggestions – first, the “civil disobedience” described by Jerry Coyne, which is most probably illegal because it violates copyright. Second, one that is legal and better for one’s conscience.

Sharing copyrighted papers by civil disobedience

Jerry describes a method using the hashtag #icanhazpdf on Twitter. The procedure is described in the Atlantic article, How to Get Free Access to Academic Papers on Twitter. Have a read – but I find it impersonal and a bit sneaky (it involves deleting one’s tweet once a paper is downloaded, and there is no real contact with the person who made the paper available). However, it will appeal to some people attracted to the idea of civil disobedience and “sticking it to the man.”

This method is also discussed in the articles The scientists encouraging online piracy with a secret codeword and I can haz PDF: Academics tweet secret code word to get expensive research papers for free.

Using personal and online networks

One could always try a public library for an interlibrary loan – but that hardly appeals to the modern reader who wants more immediate access.

I have found that using Google Scholar to search for a title will often produce a link to a pdf copy already online – possibly itself in violation of copyright. It’s amazing how many papers used by anti-fluoride activists are available from links on anti-fluoride web pages.

And, in the old days we used to request reprints from authors. Why not give that a go – send an email to the corresponding author asking if they could send you a pdf.

But what about your own personal and online networks?

Do you have a family member, friend or even an acquaintance (or several) who works in a scientific institution? It wouldn’t hurt to politely ask if they could get a pdf of the paper you are after and send it to you. Surely it is legally OK for staff in such institutes to discuss their work, and other aspects of science, with interested people via email. I can’t see that such communications, sometimes involving attached scientific papers, violate copyright – at least in spirit.

Then there are the online networks we seem to have these days – usually via Facebook groups. Most scientists would be cagey about attaching a link to a Facebook comment but sending a pdf via personal message or email would be OK. If you don’t already belong to a science or sceptical group then this is a good reason for joining. There will be people in these groups willing to help – and if the group is a closed one there is little risk.

Perhaps join several groups – after all if you have several people or networks to call on you will feel less guilty about asking others to spend time on your request.

Finally, it is not enough to acquire these pdfs – one should always read them before discussing them. And I mean read them critically and intelligently. This infographic gives you an idea of what can be involved.


Credit: Natalia Rodrigue –  Infographic: How to read a scientific paper


Making sense of scientific research

This has been a common theme here as I have campaigned against cherry-picking research papers, relying on confirmation bias and putting blind faith in peer-review as a guarantee of research quality.

In short I have pleaded for readers to approach published research critically and intelligently.

The article The 10 stuff-ups we all make when interpreting research from The Conversation gives some specific advice on how to do this. Well worth keeping in mind when you next set out to scan the literature to find the current state of scientific knowledge on a subject that interests you.


UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Understanding what’s being said in any new research can be challenging and there are some common mistakes that people make.

Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?

Well maybe you’re smart and didn’t make any mistakes – but more likely you’re like most humans and accidentally made one of these 10 stuff-ups.

1. Wait! That’s just one study!

You wouldn’t judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.

If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.

The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.

People who blindly accepted Andrew Wakefield’s (now retracted) study – when all the other evidence was to the contrary – fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.

2. Significant doesn’t mean important

Some effects might well be statistically significant, but so tiny as to be useless in practice.

You know what they say about statistics? Flickr/Frits Ahlefeldt-Laurvig, CC BY-ND

Associations (like correlations) are great for falling foul of this, especially in studies with huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.

One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.

The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1%. At this effect size – and considering the possible costs associated with taking aspirin – it is dubious whether it is worth taking at all.
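To see how a large sample can make a tiny difference come out as “significant”, here is a minimal sketch of a two-proportion z-test using only the Python standard library. The group sizes and event counts are illustrative numbers of my own, not the actual figures from the aspirin study:

```python
import math

# Illustrative counts (NOT the study's real data): two groups of 11,000,
# with heart-attack rates of 1.0% and 1.7% -- a difference under 1%.
n1, n2 = 11_000, 11_000
x1, x2 = 110, 187          # number of heart attacks in each group

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the normal approximation, via the error function
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"absolute difference: {abs(p1 - p2):.2%}")  # well under 1%
print(f"z = {z:.2f}, p = {p_value:.6f}")           # yet highly significant
```

The point is the contrast: a difference of 0.7 percentage points is clinically tiny, but with 22,000 participants the p-value is far below conventional significance thresholds.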

3. And effect size doesn’t mean useful

We might have a treatment that lowers our risk of a condition by 50%. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002%), then reducing that might be a little pointless.

We can flip this around and use what is called Number Needed to Treat (NNT).

If, under normal conditions, two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.
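The arithmetic behind that claim can be sketched in a few lines of Python, using the hypothetical figures from the example above (a baseline lifetime risk of 2 in 100,000 and a treatment that halves it):

```python
# NNT = 1 / absolute risk reduction
baseline_risk = 2 / 100_000        # lifetime risk of 0.002%
treated_risk = baseline_risk / 2   # the "50% lower risk" treatment

arr = baseline_risk - treated_risk # absolute risk reduction
nnt = 1 / arr                      # Number Needed to Treat

print("Relative risk reduction: 50%")
print(f"Absolute risk reduction: {arr:.5%}")
print(f"NNT: {nnt:,.0f} people treated to prevent one case")
```

A 50% relative reduction sounds impressive, but the NNT of 100,000 shows why the absolute numbers matter.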

4. Are you judging the extremes by the majority?

Biology and medical research are great for reminding us that not all trends are linear.

We all know that people with very high salt intakes have a greater risk of cardio-vascular disease than people with a moderate salt intake.

Too much or too little salt – which is worse? Flickr/JD Hancock, CC BY

But hey – people with a very low salt intake may also have a high risk of cardio-vascular disease too.

The graph is U shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.

5. Did you maybe even want to find that effect?

Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.

There are numerous examples of this confirmation bias, but studies such as this reveal how disturbing the effect can be.

In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.

6. Were you tricked by sciencey snake oil?

A classic – The Turbo Encabulator.

You won’t be surprised to hear that sciencey-sounding stuff is seductive. Hey, even the advertisers like to use our words!

But this is a real effect that clouds our ability to interpret research.

In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information. And if you add in a nice-and-shiny fMRI scan, look out!

7. Qualities aren’t quantities and quantities aren’t qualities

For some reason, numbers feel more objective than adjectivally-laden descriptions of things. Numbers seem rational, words seem irrational. But sometimes numbers can confuse an issue.

For example, we know people don’t enjoy waiting in long queues at the bank. If we want to find out how to improve this, we could be tempted to measure waiting periods and then strive to reduce that time.

But in reality you can only reduce the wait time so far. And a purely quantitative approach may miss other possibilities.

If you asked people to describe how waiting made them feel, you might discover it’s less about how long it takes, and more about how uncomfortable they are.

8. Models by definition are not perfect representations of reality

A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.

But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons.

While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their sub-elements work.

This doesn’t mean people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.

9. Context matters

The US president Harry Truman once whinged about all his economists giving advice, but then immediately contradicting it with an “on the other hand” qualification.

Individual scientists – and scientific disciplines – might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account.

To ponder this we can look at bike helmet laws. It’s hard to deny that if someone has a bike accident and hits their head, they’ll be better off if they’re wearing a helmet.

Do bike helmet laws stop some people from taking up cycling? Flickr/Petar, CC BY-NC

But if we are interested in whole-of-society health benefits, there is research suggesting that a subset of the population will choose not to cycle at all if they are legally required to wear a helmet.

Balance this against the number of accidents where a helmet actually makes a difference to the health outcome, and now helmet use may in fact be negatively impacting overall public health.

Valid, reliable research can find that helmet laws are both good and bad for health.

10. And just because it’s peer reviewed that doesn’t make it right

Peer review is held up as a gold standard in science (and other) research at the highest levels.

But even if we assume that the reviewers made no mistakes or that there were no biases in the publication policies (or that there wasn’t any straight out deceit), an article appearing in a peer reviewed publication just means that the research is ready to be put out to the community of relevant experts for challenging, testing, and refining.

It does not mean it’s perfect, complete or correct. Peer review is the beginning of a study’s active public life, not the culmination.

And finally …

Research is a human endeavour and as such is subject to all the wonders and horrors of any human endeavour.

Just like in any other aspect of our lives, in the end, we have to make our own decisions. And sorry, appropriate use even of the world’s best study does not relieve us of this wonderful and terrible responsibility.

There will always be ambiguities that we have to wade through, so like any other human domain, do the best you can on your own, but if you get stuck, get some guidance directly from, or at least originally via, useful experts.


This article is part of a series on Understanding Research.

Further reading:
Why research beats anecdote in our search for knowledge
Clearing up confusion between correlation and causation
Where’s the proof in science? There is none
Positives in negative results: when finding ‘nothing’ means something
The risks of blowing your own trumpet too soon on research
How to find the knowns and unknowns in any research
How myths and tabloids feed on anomalies in science

The good(?) old days of scientific writing

I recently read the first volume of Richard Dawkins’s memoirs An Appetite for Wonder: The Making of a Scientist. It brought back a few memories for me.

The political and trade union battles of the early 1970s in the UK, for example. I was working in Aberdeen, Scotland, in 1973-1975. That period saw 3 general elections and massive power cuts. I remember the problems of trying to do research, and even writing, when we had power for only three days a week!

Dawkins took advantage of that time to begin writing the book which established him as a popular science writer – The Selfish Gene.

His description of what writing was like in those days also brought back strong memories. We wrote and rewrote, with copious use of Sellotape and paste. A few scientists, often considered eccentric, had their own portable typewriters, but most of us relied on the “typing pool.” That was an interesting social phenomenon – all female, it reminds me now of the way women were employed to do the tedious calculations for astronomers. They were called “computers.”

I remember one stroppy typist who just could not understand why I kept rewriting manuscripts. She would often complain – but one had to keep on the good side of people like that otherwise your typing would go to the bottom of the pile.

Here’s how Dawkins described the process:

“I now find it quite hard to comprehend how we all used to tolerate the burden of writing in the age before computer word processors. Pretty much every sentence I write is revised, fiddled with, re-ordered, crossed out and reworked. I reread my work obsessively, subjecting the text to a kind of Darwinian sieving which, I hope and believe, improves it with every pass. Even as I type a sentence for the first time, at least half the words are deleted and changed before the sentence ends. I have always worked like this. But while a computer is naturally congenial to this way of working, and the text itself remains clean with every revision, on a typewriter the result was a mess. Scissors and sticky tape were tools of the trade as important as the typewriter itself. The growing typescript of The Selfish Gene was covered with xxxxxxx deletions, handwritten insertions, words ringed and moved with arrows to other places, strips of paper inelegantly taped to the margin or the bottom of the page. One would think it a necessary part of composition that one should be able to read one’s text fluently. This would seem to be impossible when working on paper. Yet, mysteriously, writing style does not seem to have shown any general improvement since the introduction of computer word processors. Why not?”

By the way, his book is a good read if you enjoy scientific biographies.
