Publication bias within climate science!

What on earth is happening within science these days?

Here is an article on publication bias published in the journal Climatic Change:
No evidence of publication bias in climate change science

Abstract:
Non-significant results are less likely to be reported by authors and, when submitted for peer review, are less likely to be published by journal editors. This phenomenon, known collectively as publication bias, is seen in a variety of scientific disciplines and can erode public trust in the scientific method and the validity of scientific theories. Public trust in science is especially important for fields like climate change science, where scientific consensus can influence state policies on a global scale, including strategies for industrial and agricultural management and development. Here, we used meta-analysis to test for biases in the statistical results of climate change articles, including 1154 experimental results from a sample of 120 articles.

That first passage contains an illogical inference:
The article analyzes a sample of published articles to test whether non-significant results are less likely to be published!

That seems like a silly thing to say. Wikipedia makes a similar point:

Where publication bias is present, published studies do not represent the universe of valid studies. This bias distorts the results of meta-analyses and systematic reviews.
Wikipedia – Publication bias

Setting aside this illogical inference, the abstract concludes:

Funnel plots revealed no evidence of publication bias given no pattern of non-significant results being under-reported, even at low sample sizes.
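For readers unfamiliar with the method: a funnel plot simply plots each reported effect size against a measure of the study's precision (typically the inverse of its standard error). If small or non-significant results are being suppressed, the cloud of points becomes asymmetric. Here is a minimal sketch of that idea in Python, using simulated effect sizes rather than the paper's actual data; the variable names and the Egger-style asymmetry test are my own illustration, not the authors' code:

```python
# Minimal sketch of a funnel-plot check for publication bias.
# All numbers below are simulated for illustration, not the paper's data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(42)

# Simulate 120 studies with a true effect of 0.3; standard errors shrink
# as sample sizes grow.
n = rng.integers(10, 200, size=120)
se = 1.0 / np.sqrt(n)
effect = rng.normal(0.3, se)

# Funnel plot: effect size on x, precision (1/SE) on y. A roughly symmetric
# funnel suggests small/non-significant results are not being under-reported;
# a missing lower corner suggests they are.
plt.scatter(effect, 1.0 / se, s=12)
plt.axvline(effect.mean(), linestyle="--")
plt.xlabel("effect size")
plt.ylabel("precision (1 / standard error)")
plt.title("Funnel plot (simulated data)")
plt.show()

# Egger-style asymmetry test: regress the standardized effect (effect/SE)
# on precision (1/SE); an intercept far from zero indicates asymmetry.
res = stats.linregress(1.0 / se, effect / se)
t_int = res.intercept / res.intercept_stderr
p_int = 2 * stats.t.sf(abs(t_int), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_int:.3f}")
```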

The other findings of the paper are remarkable. The abstract goes on to report what certainly look like biasing practices:

However, we discovered three other types of systematic bias relating to writing style, the relative prestige of journals, and the apparent rise in popularity of this field:

First, the magnitude of statistical effects was significantly larger in the abstract than the main body of articles.

Second, the difference in effect sizes in abstracts versus main body of articles was especially pronounced in journals with high impact factors.

Finally, the number of published articles about climate change and the magnitude of effect sizes therein both increased within 2 years of the seminal report by the Intergovernmental Panel on Climate Change 2007.
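The first of those findings, that effects reported in abstracts are larger than those in the main body, amounts to a paired comparison: for each article, pair the effect sizes quoted in the abstract with those in the results section and test whether the difference is systematically positive. A minimal sketch of such a test, with invented numbers standing in for the paper's 1154 results from 120 articles:

```python
# Paired comparison of effect sizes reported in abstracts vs. the main body.
# The numbers are invented for illustration; they are not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One mean effect size per article, for abstract and main body respectively.
abstract_effects = rng.normal(0.55, 0.15, size=120)
body_effects = abstract_effects - rng.normal(0.10, 0.08, size=120)

# Wilcoxon signed-rank test: are abstract effects systematically larger
# than the effects reported in the same article's results section?
stat, p = stats.wilcoxon(abstract_effects, body_effects, alternative="greater")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")

# Median within-article difference (abstract minus main body).
print(f"median difference = {np.median(abstract_effects - body_effects):.3f}")
```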

If we now recall that the title of the paper is "No evidence of publication bias in climate change science", the title does not seem to reflect the paper's main conclusions. It seems as if the scientists, reviewers and journal editor must have been irrational, or were misled by the title. Luckily so, I would say, as the article would probably not have been published in the journal Climatic Change if it had a title that properly reflected its conclusions – e.g.: High impact journals publish articles with significantly larger effects in abstracts than in the main body of the articles.

Another thing to note is that publication bias is not properly defined in the paper. The only definition is the one in the first sentence of the abstract:

Non-significant results are less likely to be reported by authors and, when submitted for peer review, are less likely to be published by journal editors. This phenomenon, known collectively as publication bias ….

For comparison, here is the Wikipedia definition of publication bias:

Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested, and the significance and direction of effects detected. The term was first used in 1959 by statistician Theodore Sterling to refer to fields in which “successful” research is more likely to be published. As a result, “the literature of such a field consists in substantial part of false conclusions resulting from type-I errors.” [A type I error is the incorrect rejection of a true null hypothesis.]

However, it is interesting to observe that the paper concludes with the following statements:

The number of articles about climate change in ocean ecosystems has increased annually since 1997, peaking within 2 years after IPCC 2007 and subsiding after Climategate 2009…

Before Climategate, reported effect sizes were significantly larger in article abstracts than in the main body of articles, suggesting a systematic bias in how authors are communicating results in scientific articles:

Large, significant effects were emphasized where readers are most likely to see them (in abstracts), whereas small or non-significant effects were more often found in the technical results sections where we presume they are less likely to be seen by the majority of readers, especially non-scientists.

Other conclusions in the article are:

journals reported significantly larger effect sizes in abstracts than in the main body of articles

and

However, our meta-analysis did find multiple lines of evidence of biases within our sample of articles, which were perpetuated in journals of all impact factors and related largely to how science is communicated:

The large, statistically significant effects were typically showcased in abstracts and summary paragraphs, whereas the lesser effects, especially those that were not statistically significant, were often buried in the main body of reports.

and:

For example, our results corroborate with others by showing that high impact journals typically report large effects based on small sample sizes (Fraley and Vazire 2014), and high impact journals have shown publication bias in climate change research (Michaels 2008, and further discussed in Radetzki 2010).

Finally, one of the concluding remarks in the article is:

The onus to effectively communicate science does not fall entirely on the reader; rather, it is the responsibility of scientists and editors to remain vigilant, to understand how biases may pervade their work, and to be proactive about communicating science to non-technical audiences in transparent and un-biased ways.

I think the article itself is an extraordinarily good example of a paper whose title is likely to contribute to a biased perception of the state of climate science.

The title of the article is a misleading statement that supports the orthodoxy on climate change. My guess is that the review was sloppy and that the article was published in the journal Climatic Change precisely because its misleading title supported that orthodoxy: “No evidence of publication bias in climate change science”. But that is a guess.

The following quote may indicate how this might have happened.

“.. I’ve never requested data/codes to do a review and I don’t think others should either. I do many of my reviews on travel. I have a feel for whether something is wrong – call it intuition. If analyses don’t seem right, look right or feel right, I say so. Some of my reviews for [Climatic Change] could be called into question!”
UEA’s renowned Director of the Climatic Research Unit (CRU), Phil Jones

(Hat tip to Hilary Ostrov (aka hro001) for that quote:
Phil Jones keeps peer-review process humming … by using “intuition”)


Here is another article demonstrating how a misleading title contributed to a biased perception: United Nations has taken part in scientific malpractice – and contributed to an overly negative perception of the state of the ocean!
