Data sharing as social dilemma: Influence of the researcher’s personality

Here is our article on data sharing and the researcher’s personality, published in PLoS ONE.
Abstract: It is widely acknowledged that data sharing has great potential for scientific progress. However, so far, making data available has had little impact on a researcher’s reputation. Thus, data sharing can be conceptualized as a social dilemma. In the presented study we investigated the influence of the researcher’s personality within the social dilemma of data sharing. The theoretical background was the appropriateness framework. We conducted a survey among 1564 researchers about data sharing, which also included standardized questions on selected personality factors, namely the so-called Big Five, Machiavellianism and social desirability. Using regression analysis, we investigated how these personality domains relate to four groups of dependent variables: attitudes towards data sharing, the importance of factors that might foster or hinder data sharing, the willingness to share data, and actual data sharing. Our analyses showed the predictive value of personality for all four groups of dependent variables. However, there was no globally consistent pattern of influence, but rather different compositions of effects. Our results indicate that the implications of data sharing depend on age, gender, and personality. In order to foster data sharing, it seems advantageous to provide more personal incentives and to address researchers’ individual responsibility.
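To give a feel for the kind of analysis described in the abstract, here is a minimal sketch of such a regression in Python. All column names (`openness`, `machiavellianism`, `willingness_to_share`, etc.) and the input file `survey.csv` are illustrative assumptions, not the authors’ actual variables, data, or model specification.

```python
# Minimal sketch of a regression relating personality scores to
# willingness to share data. Variable names and the input file are
# hypothetical; the published study's model may differ.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey export

# Big Five traits, Machiavellianism, and social desirability as
# predictors; age and gender as controls.
model = smf.ols(
    "willingness_to_share ~ openness + conscientiousness + extraversion"
    " + agreeableness + neuroticism + machiavellianism"
    " + social_desirability + age + C(gender)",
    data=df,
).fit()

print(model.summary())  # coefficients indicate each trait's predictive value
```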

A reputation economy: how individual reward considerations trump systemic arguments for open access to data

Fecher, B., Friesike, S., Hebing, M., Linek, S. (2017). A reputation economy: how individual reward considerations trump systemic arguments for open access to data. Palgrave Communications 3, Article number: 17051.
Open access to research data has been described as a driver of innovation and a potential cure for the reproducibility crisis in many academic fields. Against this backdrop, policy makers are increasingly advocating for making research data and supporting material openly available online. Despite its potential to further scientific progress, widespread data sharing in small science is still an ideal practised in moderation. In this article, we explore the question of what drives open access to research data using a survey among 1564 mainly German researchers across all disciplines. We show that, regardless of their disciplinary background, researchers recognize the benefits of open access to research data for both their own research and scientific progress as a whole. Nonetheless, most researchers share their data only selectively. We show that individual reward considerations conflict with widespread data sharing. Based on our results, we present policy implications that are in line with both individual reward considerations and scientific progress.

OpenAIRE as the basis for a European Open Access Platform

An exciting recent article on the LSE Impact Blog proposes a European Open Access Platform for research. This idea is very much in line with OpenAIRE’s mission of building a public research publication infrastructure, and as such we welcome the authors’ vision. A public platform for the dissemination of research will become essential infrastructure to finally fully integrate research publishing and dissemination into the research lifecycle, rather than treating it as an added extra to be outsourced. OpenAIRE is already contributing to making such a vision a reality. Here we discuss how OpenAIRE can contribute further to creating a participatory, federated OA platform.

Bird watchers discuss changes in trends at annual Christmas bird count

This is a great example of citizen science with a long tradition.
According to the Audubon Society’s website, ornithologist Frank Chapman organized the first Christmas bird count in 1900. The activity was an alternative to the “side hunts” that were popular at the time, the goal of which was to shoot as many animals as possible. The first count featured 27 birders and stretched from Canada to California. The birders made note of about 90 species. The tradition has since continued.

Setting up crowd science projects

In this research paper, Kaja Scheliga, Sascha Friesike, Cornelius Puschmann and Benedikt Fecher examine how crowd science projects are set up.
Crowd science is scientific research that is conducted with the participation of volunteers who are not professional scientists. Thanks to the Internet and online platforms, project initiators can draw on a potentially large number of volunteers. This crowd can be involved to support data-rich or labour-intensive projects that would otherwise be unfeasible. So far, research on crowd science has mainly focused on analysing individual crowd science projects. In our research, we focus on the perspective of project initiators and explore how crowd science projects are set up. Based on multiple case study research, we discuss the objectives of crowd science projects and the strategies of their initiators for accessing volunteers. We also categorise the tasks allocated to volunteers and reflect on the issue of quality assurance as well as feedback mechanisms. With this article, we contribute to a better understanding of how crowd science projects are set up and how volunteers can contribute to science. We suggest that our findings are of practical relevance for initiators of crowd science projects, for science communication as well as for informed science policy making.

Could #Blockchain provide the technical fix to solve science’s reproducibility crisis?

Soenke Bartling and Benedikt Fecher on the use of blockchain technology in research.
Blockchain is currently being hyped. Many claim that the blockchain revolution will affect not only our online life but will profoundly change many more aspects of our society. Many foresee these changes as potentially being more far-reaching than those brought by the internet in the last two decades. If this holds true, it is certain that research and knowledge creation will also be affected. So, what is blockchain all about? More importantly, could knowledge creation benefit from it? One potential area where it could be useful is addressing the credibility and reproducibility crisis in science.
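As a concrete illustration of one mechanism commonly proposed in this context: a study’s data can be hashed, and the hash anchored on a blockchain as a tamper-evident timestamp, so that anyone can later verify the data existed unchanged at analysis time. The sketch below shows only the hashing step; the file name is hypothetical, and the actual on-chain anchoring is omitted because it depends on the service used.

```python
# Sketch: create a tamper-evident fingerprint of a dataset that could be
# timestamped on a blockchain. The file name is hypothetical; submitting
# the hash to a chain or timestamping service is left out.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

digest = fingerprint("trial_data.csv")  # hypothetical dataset
print(digest)  # anchored on-chain, this proves the data existed unchanged
```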

Article on #openaccess scholarly innovation and research infrastructure

In this article Benedikt Fecher and Gert Wagner argue that the current endeavors to achieve open access in scientific literature require a discussion about innovation in scholarly publishing and research infrastructure. Drawing on path dependence theory and addressing different open access (OA) models and recent political endeavors, the authors argue that academia is once again running the risk of outsourcing the organization of its content.

Research data explored: an extended analysis of citations and altmetrics

In this study, we explore the citedness of research data, its distribution over time and its relation to the availability of a digital object identifier (DOI) in the Thomson Reuters database Data Citation Index (DCI). We investigate if cited research data “impacts” the (social) web, reflected by altmetrics scores, and if there is any relationship between the number of citations and the sum of altmetrics scores from various social media platforms. Three tools are used to collect altmetrics scores, namely PlumX, ImpactStory, and Altmetric.com, and the corresponding results are compared. We found that out of the three altmetrics tools, PlumX has the best coverage. Our experiments revealed that research data remain mostly uncited (about 85 %), although there has been an increase in citing data sets published since 2008. The percentage of cited research data with a DOI in the DCI has decreased in recent years. Only nine repositories are responsible for research data with DOIs and two or more citations. The number of cited research data with altmetrics “footprints” is even lower (4–9 %) but shows a higher coverage of research data from the last decade. In our study, we also found no correlation between the number of citations and the total number of altmetrics scores. Yet, certain data types (i.e. survey, aggregate data, and sequence data) are more often cited and also receive higher altmetrics scores. Additionally, we performed citation and altmetric analyses of all research data published between 2011 and 2013 in four different disciplines covered by the DCI. In general, these results correspond very well with the ones obtained for research data cited at least twice and also show low numbers in citations and in altmetrics. Finally, we observed that there are disciplinary differences in the availability and extent of altmetrics scores.
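To make this kind of analysis concrete: Altmetric.com offers a free REST endpoint for looking up a DOI’s attention score, and a rank correlation can then be computed against citation counts. The sketch below is not the authors’ pipeline; the DOI list and citation counts are made-up examples, and rate limits and response fields should be checked against Altmetric’s current documentation.

```python
# Sketch: fetch Altmetric scores for a few DOIs and correlate them with
# citation counts. DOI list and citation counts are illustrative only.
import requests
from scipy.stats import spearmanr

records = [  # (DOI, citation count) -- illustrative values
    ("10.1371/journal.pone.0183216", 12),
    ("10.1057/palcomms.2017.51", 30),
]

scores, citations = [], []
for doi, cites in records:
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 200:  # 404 means no altmetrics footprint
        scores.append(resp.json().get("score", 0))
        citations.append(cites)

if len(scores) > 1:
    rho, p = spearmanr(citations, scores)
    print(f"Spearman rho={rho:.2f} (p={p:.3f})")
```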

Misconceptions about academic data sharing #datasharing #openscience

Gert Wagner and Benedikt Fecher reply to an editorial about data sharing in medicine.
Longo and Drazen miss the very point of scientific research when they write that researchers may “even use the data to try to disprove what the original investigators had posited”. It is at the core of the scientific paradigm that researchers take nothing as final truth. This is what Popper proposed in his critical rationalism and Merton in his conceptualization of organized skepticism.

NEJM editorial and the journal’s reply #datasharing

Last week, Longo and Drazen published a frantic editorial in the New England Journal of Medicine on academic data sharing, implying that researchers who use data from other researchers are "research parasites". The journal replied:
We want to clarify, given recent concern about our policy, that the Journal is committed to data sharing in the setting of clinical trials. As stated in the Institute of Medicine report from the committee on which I served and the recent editorial by the International Committee of Medical Journal Editors (ICMJE), we believe there is a moral obligation to the people who volunteer to participate in these trials to ensure that their data are widely and responsibly used.

The ResearchGate Score: a good example of a bad metric

According to ResearchGate, the academic social networking site, their RG Score is “a new way to measure your scientific reputation”. With such high aims, Peter Kraker, Katy Jordan and Elisabeth Lex take a closer look at the opaque metric. By reverse engineering the score, they find that a significant weight is linked to ‘impact points’ – a similar metric to the widely discredited journal impact factor. Transparency in metrics is the only way scholarly measures can be put into context and the only way biases – which are inherent in all socially created metrics – can be uncovered.
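The reverse-engineering approach the authors describe can be illustrated with a small regression: if a composite score is a weighted combination of observable profile metrics, fitting a linear model on collected profiles recovers the weights. The profile numbers below are entirely fabricated and the component names are assumptions; the sketch only demonstrates the method, not the actual RG Score formula.

```python
# Sketch: reverse-engineer a composite score by regressing it on its
# suspected components. All numbers are fabricated for illustration.
import numpy as np

# Columns per profile: impact points, publications, questions, followers
X = np.array([
    [120.0, 40,  2, 15],
    [ 10.5,  8, 30,  4],
    [ 55.0, 20,  5, 60],
    [  3.2,  5,  1,  2],
    [200.0, 70,  0, 90],
])
rg_score = np.array([38.2, 12.1, 24.5, 4.9, 51.3])  # hypothetical scores

# Least-squares fit: a dominant weight on the first column would mirror
# the authors' finding that 'impact points' drive the RG Score.
weights, *_ = np.linalg.lstsq(X, rg_score, rcond=None)
print(dict(zip(["impact_points", "publications", "questions", "followers"],
               weights.round(3))))
```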