Why do we still have journals?

Here is an editorial essay by Gerald F. Davis that appeared recently in Administrative Science Quarterly.
Abstract: The Web has greatly reduced the barriers to entry for new journals and other platforms for communicating scientific output, and the number of journals continues to multiply. This leaves readers and authors with the daunting cognitive challenge of navigating the literature and discerning contributions that are both relevant and significant. Meanwhile, measures of journal impact that might guide the use of the literature have become more visible and consequential, leading to “impact gamesmanship” that renders the measures increasingly suspect. The incentive system created by our journals is broken. In this essay, I argue that the core technology of journals is not their distribution but their review process. The organization of the review process reflects assumptions about what a contribution is and how it should be evaluated. Through their review processes, journals can certify contributions, convene scholarly communities, and curate works that are worth reading. Different review processes thereby create incentives for different kinds of work. It’s time for a broader dialogue about how we connect the aims of the social science enterprise to our system of journals.
It's open access.

Open data and open access – what society loses when knowledge is offline

What is impact?

Open Access, the Impact Agenda and resistance to the neoliberal paradigm

Impact Factors — Letter to RCUK

Narrating impacts in the Arts & Humanities

The end of journals? Open access, impact and the production of knowledge

10 years of BOAI (Budapest Open Access Initiative)

Ijad Madisch, CEO of Researchgate.com in an interview with Neue Zürcher Zeitung

The weakening relationship between the Impact Factor and papers’ citations in the digital age

Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research

Baseball Stats and Impact Factors

Baseball is a game of statistics. Every detail of every game gets scored; every ball a player hits or misses gets noted and ends up in a database. Baseball fans love getting lost in cascades of numbers and love comparing players and teams. Every baseball fan knows that statistics are an essential part of what makes this game so exciting to follow.

There is an interesting link between baseball and science: science, too, is increasingly becoming a game of statistics. By that I mean that impact factors in many cases now have a higher priority than the actual content of the scientific work. In other words: many scientists would prefer a dull paper that hardly anyone is interested in, but that is published in a highly ranked journal, over a great paper that conveys important findings but does not get published in a top-tier journal. That is kind of odd.

Now, there is a noteworthy difference between baseball and science: science fixates on a single measure, the impact factor. In baseball there are dozens of statistics that provide insight into how a player performs in every aspect of the game. Yet there is no overall statistic that simply says who the greatest player is. There is no such statistic because it wouldn't make sense. A great defensive player might not be as great offensively; a great pitcher might not have the stamina of a weaker one. Statistics help fans understand who is good and who is not, and they point to the strengths and weaknesses of players, but they cannot decide who is the most valuable player. That decision gets made annually by humans, by the Baseball Writers' Association of America, and they never decide unanimously.

In science, however, people believe that a single measure is enough to rank scientists. It is somewhat absurd that the very people whose job it is to create knowledge have created so little knowledge about how to comprehensively evaluate scientific work. Why don't we have a database that shows how often a journal article gets downloaded, how often it gets mentioned outside the scientific community, how many words a scientist publishes per year, how often he or she helps other scientists, or how many other scientists find an article actually insightful?