The ResearchGate Score: a good example of a bad metric

According to ResearchGate, the academic social networking site, their RG Score is “a new way to measure your scientific reputation”. With such high aims, Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at the opaque metric. By reverse engineering the score, they find that a significant weight is linked to ‘impact points’ – a similar metric to the widely discredited journal impact factor. Transparency in metrics is the only way scholarly measures can be put into context and the only way biases – which are inherent in all socially created metrics – can be uncovered.

Altmetrics could enable scholarship from developing countries to receive due recognition.

The Web of Science and its corresponding Journal Impact Factor are inadequate for an understanding of the impact of scholarly work from developing regions, argues Juan Pablo Alperin. Alternative metrics offer the opportunity to redirect incentive structures towards problems that contribute to development, or at least to local priorities. But the altmetrics community needs to actively engage with scholars from developing regions to ensure the new metrics do not continue to cater to well-known and well-established networks.

LSE Impact Blog.

Franzen’s The Kraus Project on Altmetrics

Over the weekend I read "The Kraus Project", the newest book by Jonathan Franzen. In it, Franzen translates a couple of texts written by the Viennese satirist Karl Kraus roughly 100 years ago in "Die Fackel". The translations are annotated with a multitude of explanatory and autobiographical footnotes put together by Franzen, Paul Reitter and Daniel Kehlmann. Kraus was generally sceptical of technological change, and Franzen describes him as follows:
"Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse."
And somewhere on page 273 of the book a passage in the footnotes struck me as a noteworthy counterargument to the current altmetrics craze:
The work of yakkers and tweeters and braggers, and of people with the money to pay somebody to churn out hundreds of five-star reviews for them, will flourish in that world. (Kraus's dictate "Sing, bird, or die" could now read "Tweet, bird, or die.") But what happens to the people who want to communicate in depth, individual to individual, in the quiet and permanence of the printed word, and who were shaped by their love of writers who wrote when publication still assured some kind of quality control and literary reputations were more than a matter of self-promotional decibel levels?
A video on the book project can be found here. SF

What is impact?

Mendeley, Open Access, and the Altmetrics Revolution: Implications for Japanese Research

Keita Bando, Mendeley Advisor, Digital Repository Librarian Coordinator for Scholarly Communication, My Open Archive on open access, Mendeley and Altmetrics.

Altmetrics Collection

The Promise Of Another Open: Open Impact Tracking

Webometrics and altmetrics: digital world measurements

Baseball Stats and Impact Factors

Baseball is a game of statistics. Every detail of every game gets scored; every ball a player hits or misses gets noted and ends up in a database. Baseball fans love getting lost in cascades of numbers and love comparing players and teams. Every baseball fan knows that statistics are an essential element of what makes this game so exciting to follow.

There is an interesting link between baseball and science: science is increasingly becoming a game of statistics, too. By that I mean that impact factors in many cases have a higher priority than the actual content of the scientific work. In other words, many scientists would prefer a dull paper that hardly anyone is interested in, but that is published in a highly ranked journal, over a great paper that conveys important findings but does not get published in a top-tier journal. That is kind of odd.

Now, there is a noteworthy difference between baseball and science: science fixates on a single measure, the impact factor. In baseball there are dozens of statistics that provide insight into how a player performs in every aspect of the game. Yet there is no overall statistic that simply says who the greatest player is. There is no such statistic because it wouldn't make sense. A great defensive player might not be as great offensively; a great pitcher might not have the stamina of a weaker pitcher. Statistics help fans understand who is good and who is not, and they point to the strengths and weaknesses of players, but they cannot decide who is the most valuable player. That decision is made annually by humans, by the Baseball Writers' Association of America, and they never decide unanimously.

In science, however, people believe that a single measure is enough to rank scientists. It is somewhat absurd that the very people whose job it is to create knowledge have created so little knowledge on how to comprehensively evaluate scientific work. Why don't we have a database that shows how often a journal article gets downloaded, how often it gets mentioned outside the scientific community, how many words a scientist publishes per year, how often he or she helps other scientists, or how many other scientists find an article actually insightful?