Public butterfly count aims to check countryside health
The charity Butterfly Conservation is calling on the public to help survey the state of Britain's countryside by counting our most colourful insects.
To create societies where everyone has both access to key information and the ability to use it to understand and shape their lives, we must build knowledge into the heart of all of our activities. This is a big task which requires not just a global shift in mindset, but also that we build the tools and communities to make such a society possible. We invite you to join us from 15-17 July in Berlin for OKFestival 2014 as we consider how to translate Open Minds to Open Action.
Researchers say European commission-funded initiative to simulate human brain suffers from 'substantial failures'
The report of the UK Working Group on Expanding Access to Published Research Findings, chaired by Dame Janet Finch, “Accessibility, sustainability, excellence: how to expand access to research publications,” helped to crystallize a long-simmering debate within the open access (OA) community: should the focus for OA advocates be “green” open access – that is, the use of repositories to make research published through traditional subscription-based venues openly available – or “gold” open access – that is, publication in venues that are themselves open access?
... in the ten countries that publish the highest proportions of Open Access journals
scinoptica. Ulrich Herb.
For many purchases, price comparisons are a few mouse clicks away. Not for academic journals. Universities buy access to most of their subscription journals through large bundled packages, much like home cable subscriptions that include hundreds of TV stations. But whereas cable TV providers largely stick to advertised prices, universities negotiate with academic publishing companies behind closed doors, and those deals usually come with nondisclosure agreements that keep the bundled prices secret. After several years of digging, and even legal action, a team of economists has pried out some of those numbers.
AAAS Science. John Bohannon.
Abstract: The Web has greatly reduced the barriers to entry for new journals and other platforms for communicating scientific output, and the number of journals continues to multiply. This leaves readers and authors with the daunting cognitive challenge of navigating the literature and discerning contributions that are both relevant and significant. Meanwhile, measures of journal impact that might guide the use of the literature have become more visible and consequential, leading to “impact gamesmanship” that renders the measures increasingly suspect. The incentive system created by our journals is broken. In this essay, I argue that the core technology of journals is not their distribution but their review process. The organization of the review process reflects assumptions about what a contribution is and how it should be evaluated. Through their review processes, journals can certify contributions, convene scholarly communities, and curate works that are worth reading. Different review processes thereby create incentives for different kinds of work. It’s time for a broader dialogue about how we connect the aims of the social science enterprise to our system of journals. It’s open access.
The conclusions of research articles generally depend on bodies of data that cannot be included in the articles themselves. The sharing of this data is important for reasons of both transparency and possible reuse. Science, Technology and Medicine journals have an obvious role in facilitating sharing, but how they might do that is not yet clear.
Would more researchers share data if they got more for it? Possibly. The currency would not even have to change. What is missing in the academic system is recognition for intermediate outputs, including data. Those who publish well get cited; the h-index rises, and with it the chances of professional advancement. Good articles are good for the career. Good data, however, are still not as important as they should be.
Writer Seth Godin explains why it’s absolutely fine if you steal his ideas … provided you promise to make them better.
Looking at Open Science and Open Data from a broad perspective. This is the idea behind “Scientific data sharing: an interdisciplinary workshop”, an initiative designed to foster dialogue between scholars from different scientific domains which was organized by the Istituto Italiano di Antropologia in Anagni, Italy, 2-4 September 2013. We here report summaries of the presentations and discussions at the meeting. They deal with four sets of issues: (i) setting a common framework, a general discussion of open data principles, values and opportunities; (ii) insights into scientific practices, a view of the way in which the open data movement is developing in a variety of scientific domains (biology, psychology, epidemiology and archaeology); (iii) a case study of human genomics, which was a trail-blazer in data sharing, and which encapsulates the tension that can occur between large-scale data sharing and one of the boundaries of openness, the protection of individual data; (iv) open science and the public, based on a round table discussion about the public communication of science and the societal implications of open science.
Despite widespread support from policy makers, funding agencies, and scientific journals, academic researchers rarely make their research data available to others. At the same time, data sharing in research is attributed a vast potential for scientific progress. It allows the reproducibility of study results and the reuse of old data for new research questions. Based on a systematic review of 98 scholarly papers and an empirical survey among 603 secondary data users, we develop a conceptual framework that explains the process of data sharing from the primary researcher’s point of view. We show that this process can be divided into six descriptive categories: data donor, research organization, research community, norms, data infrastructure, and data recipients. Drawing from our findings, we discuss theoretical implications regarding knowledge creation and dissemination as well as research policy measures to foster academic collaboration. We conclude that research data cannot be regarded as a knowledge commons; research policies that better incentivise data sharing are needed to improve the quality of research results and foster scientific progress.
Aside from the ethics and etiquette of fully open data-sharing, there are practical issues that journals still need to address. One is the cost of sharing data. Both the Public Library of Science and the UK Royal Society recommend the storage repository Dryad, which currently charges US$15 for the first gigabyte of data over its 10-gigabyte limit, and $10 per gigabyte thereafter. However, studies in areas such as neuroscience can generate terabytes of raw data (1 terabyte is 1,000 gigabytes) — a quantity that few labs could afford to upload.
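To make the scale of that cost concrete, here is a minimal sketch of the fee schedule as described above (function name and exact tiering are assumptions for illustration): the first 10 gigabytes fall under the base limit, the first gigabyte over it costs US$15, and each further gigabyte costs $10.

```python
def excess_storage_fee(total_gb: int) -> int:
    """Return the assumed excess-storage fee in US dollars for a
    deposit of `total_gb` gigabytes, under the schedule described
    in the text (hypothetical sketch, not Dryad's official API)."""
    over = total_gb - 10           # gigabytes beyond the 10-GB limit
    if over <= 0:
        return 0                   # within the limit: no excess fee
    return 15 + (over - 1) * 10    # $15 for the first excess GB, $10 each thereafter

# A terabyte-scale raw dataset (1,000 GB), as in the neuroscience example:
print(excess_storage_fee(1000))    # → 9905
```

Under these assumptions, a single terabyte of raw data would incur nearly $10,000 in excess-storage fees, which illustrates why few labs could afford to upload such datasets.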
Open science is a practice in which the scientific process is shared completely and in real time. It offers the potential to support information flow, collaboration and dialogue among professional and non-professional participants. Using semi-structured interviews and case studies, this research investigated the relationship between open science and public engagement. This article concentrates on three particular areas of concern that emerged: first, how to effectively contextualise and narrate information to render it accessible, as opposed to simply available; second, concerns about data quantity and quality; and third, concerns about the skills required for effective contextualisation, mapping and interpretation of information.
Long before the term "citizen science" was coined, the field of astronomy was benefiting from countless men and women who studied the sky in their spare time.
Today, most research projects are considered complete when a journal article based on the analysis has been written and published. The trouble is, unlike Galileo's report in Sidereus Nuncius, the amount of real data and data description in modern publications is almost never sufficient to repeat or even statistically verify a study being presented. Worse, researchers wishing to build upon and extend work presented in the literature often have trouble recovering data associated with an article after it has been published. More often than scientists would like to admit, they cannot even recover the data associated with their own published works.