Saturday, September 25, 2010

It's not what you publish, it's where you publish it

Last post I mentioned the Matthew Effect, or "the rich get richer." In bibliometrics, it means that the more you're cited and/or the more you publish, the more you'll continue to be cited and to publish. Applied to journals, it means that papers published in high-impact journals get cited more often; as a result, the impact factor (IF) of those journals remains high, and so on. In short, a positive feedback loop.

However, there's always the question of quality: perhaps the papers published in high-impact journals are indeed better? Lariviere and Gingras (2010) tried to address the problem by using duplicates: the same paper published twice, once in a high-IF journal and once in a low-IF journal. To find those papers, they searched the WoS database, matching papers by title, first author, and number of cited references. Out of 4,918 pairs of papers identified, they ended up using 4,532. The publication years of the pairs were either identical or within one year of each other for about 80% of the papers.
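The matching step is easy to picture in code. Here's a minimal sketch of that kind of duplicate detection, assuming a simple list of paper records; the field names and sample papers are hypothetical, not the study's actual WoS data or code.

```python
from collections import defaultdict

# Hypothetical paper records (in the study these came from the WoS database).
papers = [
    {"title": "On Citation Patterns", "first_author": "Smith", "n_refs": 42, "journal": "A"},
    {"title": "On Citation Patterns", "first_author": "Smith", "n_refs": 42, "journal": "B"},
    {"title": "Another Study", "first_author": "Jones", "n_refs": 17, "journal": "C"},
]

def find_duplicates(records):
    """Group records that share title, first author, and cited-reference count."""
    groups = defaultdict(list)
    for p in records:
        key = (p["title"].lower(), p["first_author"].lower(), p["n_refs"])
        groups[key].append(p)
    # Keep only keys matched by more than one record, i.e. candidate duplicates.
    return [group for group in groups.values() if len(group) > 1]

pairs = find_duplicates(papers)
print(len(pairs))  # 1 candidate pair: the same paper in journals A and B
```

A real pipeline would of course need fuzzier title matching and year constraints, but the three-field key captures the basic idea.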

They compared the average number of citations and the average relative citations for disciplines with more than 30 duplicates. The biggest Matthew Effect was found in Clinical Medicine (21.46 citations for the high-IF version vs. 12.08 for the low-IF version) and Biomedical Research (19.77 vs. 8.15). Significant differences were also found for Chemistry, Engineering and Technology, Physics, and the Social Sciences. However, there were no significant differences for Biology, Earth and Space, Health (Social Sciences), Math, and Psychology.
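The comparison itself is just a per-group mean over the duplicate pairs. A toy sketch, with made-up citation counts rather than the study's data:

```python
# Hypothetical citation counts for four duplicate pairs (not the study's data):
# element i of each list is the same paper, published in a high- vs. low-IF journal.
high_if_citations = [30, 15, 22, 19]
low_if_citations = [12, 8, 14, 10]

def mean(xs):
    return sum(xs) / len(xs)

# How much more the high-IF copies were cited, on average.
ratio = mean(high_if_citations) / mean(low_if_citations)
print(round(mean(high_if_citations), 2), round(mean(low_if_citations), 2), round(ratio, 2))
```

With paired duplicates like this, any gap in the means is attributable to the journal rather than the paper, which is what makes the design a "natural experiment."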

Personally, I wonder if part of the effect can be accounted for by paywalls: libraries tend to buy more subscriptions to high-impact journals, so the chances of getting access to a paper in one of those journals are higher.

Speaking of the Matthew Effect, John Wilbanks, vice president of science at Creative Commons, just published a short article in Seed Magazine about the subject. He points out that in 1968 (the year Merton named the “Matthew Effect”) “the average age of a biomedical researcher in the US receiving his or her first significant funding was 35 or younger.” Today, it’s almost 42 for NIH grants. That means that fewer young, talented scientists get opportunities for independent research, while the already established scientists get even more funding.

Wilbanks recommends that we “start rethinking the way we reward and fund science and assess researchers using more than just citations.”

All I can say, Mr. Wilbanks, is that bibliometricians are working on it…

P.S. Of course, as soon as I finished this post I ran into this post in The Scholarly Kitchen about the very same paper. Oh, well.

Lariviere, V., & Gingras, Y. (2010). The impact factor's Matthew effect: A natural experiment in bibliometrics. Journal of the American Society for Information Science and Technology, 61(2), 424-427. doi:10.1002/asi.21232


  1. FYI, Joseph Stiglitz, Nobel prize winning economist, has argued that prizes are a good way to stimulate academic research here:

    Now, he was comparing prizes to patents in the field of medicine, but his main point is interesting and worth exploring for other fields as well.

  2. I'm not sure prizes will work that well in medicine. The problem with medical research is that it's *expensive*. Prizes will go to those who already have enough funding to do the research. However, prizes can work better, I think, in other fields.

    Thank you for the FYI!