
OSKB

This endorsement is the seal of approval for resources included in the OSKB Library collections.

These resources have been vetted by the OSKB Team.

329 affiliated resources

Ten Simple Rules for Reproducible Computational Research
Unrestricted Use
CC BY

Replication is the cornerstone of a cumulative science. However, new tools and technologies, massive amounts of data, interdisciplinary approaches, and the complexity of the questions being asked are complicating replication efforts, as are increased pressures on scientists to advance their research. As full replication of studies on independently collected data is often not feasible, there has recently been a call for reproducible research as an attainable minimum standard for assessing the value of scientific claims. This requires that papers in experimental science describe the results and provide a sufficiently clear protocol to allow successful repetition and extension of analyses based on original data. The importance of replication and reproducibility has recently been exemplified through studies showing that scientific papers commonly leave out experimental details essential for reproduction, studies showing difficulties with replicating published experimental results, an increase in retracted papers, and through a high number of failing clinical trials. This has led to discussions on how individual researchers, institutions, funding bodies, and journals can establish routines that increase transparency and reproducibility. To foster these developments, it has been suggested that the scientific community needs to develop a “culture of reproducibility” for computational science, and to require it for published claims.

We want to emphasize that reproducibility is not only a moral responsibility with respect to the scientific field, but that a lack of reproducibility can also be a burden for you as an individual researcher. As an example, good reproducibility practice is necessary to allow previously developed methodology to be effectively applied to new data, or to allow reuse of code and results for new projects. In other words, good habits of reproducibility may actually turn out to be a time-saver in the longer run. We further note that reproducibility is just as much about the habits that ensure reproducible research as the technologies that can make these processes efficient and realistic.

Each of the following ten rules captures a specific aspect of reproducibility, and discusses what is needed in terms of information handling and tracking of procedures. If you are taking a bare-bones approach to bioinformatics analysis, i.e., running various custom scripts from the command line, you will probably need to handle each rule explicitly. If you are instead performing your analyses through an integrated framework (such as GenePattern, Galaxy, LONI pipeline, or Taverna), the system may already provide full or partial support for most of the rules. What is needed on your part is then merely the knowledge of how to exploit these existing possibilities.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Reading
Provider:
PLOS Computational Biology
Author:
Anton Nekrutenko
Eivind Hovig
Geir Kjetil Sandve
James Taylor
Date Added:
08/07/2020
Ten Simple Rules for the Care and Feeding of Scientific Data
Unrestricted Use
CC BY

This article offers a short guide to the steps scientists can take to ensure that their data and associated analyses continue to be of value and to be recognized. In just the past few years, hundreds of scholarly papers and reports have been written on questions of data sharing, data provenance, research reproducibility, licensing, attribution, privacy, and more—but our goal here is not to review that literature. Instead, we present a short guide intended for researchers who want to know why it is important to “care for and feed” data, with some practical advice on how to do that. The final section (Links to Useful Resources) offers links to the types of services referred to throughout the text.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Author:
Alberto Pepe
Aleksandra Slavkovic
Alexander W. Blocker
Alyssa Goodman
Aneta Siemiginowska
Ashish Mahabal
Christine L. Borgman
David W. Hogg
Kyle Cranmer
Margaret Hedstrom
Merce Crosas
Paul Groth
Rosanne Di Stefano
Vinay Kashyap
Yolanda Gil
Date Added:
04/24/2014
Tools for Reproducible Research
Read the Fine Print

Course summary
A minimal standard for data analysis and other scientific computations is that they be reproducible: the code and data are assembled in such a way that another group can re-create all of the results (e.g., the figures in a paper). The importance of such reproducibility is now widely recognized, but it is still not as widely practiced as it should be, in large part because many computational scientists (and particularly statisticians) have not fully adopted the required tools for reproducible research.

In this course, we will discuss general principles for reproducible research but will focus primarily on the use of relevant tools (particularly make, git, and knitr), with the goal that the students leave the course ready and willing to ensure that all aspects of their computational research (software, data analyses, papers, presentations, posters) are reproducible.
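
To make the course's central idea concrete: a result is reproducible when a single command rebuilds it from the raw data, with no manual steps. Below is a minimal sketch of one such rebuild step in Python (the course itself works with make, git, and knitr; the file names and column names here are hypothetical placeholders):

# reproduce_figure.py -- rebuild a paper figure from raw data with one command.
# File and column names are hypothetical placeholders.
import csv
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # render without a display, e.g. on a build server
import matplotlib.pyplot as plt

rows = list(csv.DictReader(open("data/measurements.csv", newline="")))
x = [float(r["dose"]) for r in rows]
y = [float(r["response"]) for r in rows]

Path("figures").mkdir(exist_ok=True)
plt.plot(x, y, "o-")
plt.xlabel("dose")
plt.ylabel("response")
plt.savefig("figures/fig1.png", dpi=300)  # the exact artifact cited in the paper

In the course's actual workflow, make would record that the figure depends on this script and the data file, git would track the history of all three, and knitr would weave the results into the paper.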

Subject:
Applied Science
Information Science
Material Type:
Full Course
Author:
Karl Broman
Date Added:
08/07/2020
Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals
Unrestricted Use
CC BY

Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher, finding that higher-impact journals are more likely to have open data and code policies, and that journals published by scientific societies are more likely to have them than those published commercially. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one-year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Reading
Provider:
PLOS ONE
Author:
Peixuan Guo
Victoria Stodden
Zhaokun Ma
Date Added:
08/07/2020
Trainer Space for the Introduction to Open and Reproducible Research Workshop
Unrestricted Use
CC BY

Central location housing curriculum materials and planning tools for trainers of the COS Introduction to Open and Reproducible Research workshop.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Activity/Lab
Provider:
Center for Open Science
Author:
Courtney K. Soderberg
Ian Sullivan
Jennifer Freeman Smith
Jolene Esposito
Matthew Spitzer
Natalie Meyers
Date Added:
04/24/2019
Transparency and Open Science Symposium GSA 2019
Unrestricted Use
Public Domain

The past decade has seen rapid growth in conversations around and progress towards fostering a more transparent, open, and cumulative science. Best practices are being codified and established across fields relevant to gerontology, from cancer science to psychological science. Many of the areas currently under development are of particular relevance to gerontologists, such as best practices in balancing open science with participant confidentiality or best practices for preregistering archival, longitudinal data analysis. The present panel showcases one of the particular strengths of the open science movement: the contribution that early career researchers are making to these ongoing conversations on best practices. Early career researchers have the opportunity to blend their expertise with technology, their knowledge of their disciplines, and their vision for the future in shaping these conversations. In this panel, three early career researchers share their insights. Pfund presents an introduction to preregistration and the value of preregistration from the perspective of “growing up” within the open science movement. Seaman discusses efforts in and tools for transparency and reproducibility in neuroimaging of aging research. Ludwig introduces the idea of registered reports as a particularly useful form of publication for researchers who use longitudinal methods and/or those who work with hard-to-access samples. The symposium will include time for the audience to engage the panel in questions and discussion about current efforts in and future directions for transparent, open, and cumulative science efforts in gerontology.

Subject:
Life Science
Social Science
Material Type:
Reading
Author:
Eileen K Graham
Gabrielle N
Jennifer Lodi-smith
Kendra Leigh Seaman
Rita M
Date Added:
08/03/2021
Transparency of CHI Research Artifacts: Results of a Self-Reported Survey
Unrestricted Use
CC BY

Several fields of science are experiencing a "replication crisis" that has negatively impacted their credibility. Assessing the validity of a contribution via replicability of its experimental evidence and reproducibility of its analyses requires access to relevant study materials, data, and code. Failing to share them limits the ability to scrutinize or build upon the research, ultimately hindering scientific progress. Understanding how the diverse research artifacts in HCI impact sharing can help produce informed recommendations for individual researchers and policy-makers in HCI. Therefore, we surveyed authors of CHI 2018–2019 papers, asking if they share their papers' research materials and data, how they share them, and why they do not. The results (N = 460/1356, 34% response rate) show that sharing is uncommon, partly due to misunderstandings about the purpose of sharing and reliable hosting. We conclude with recommendations for fostering open research practices. This paper and all data and materials are freely available at https://osf.io/csy8q.

Subject:
Life Science
Social Science
Material Type:
Reading
Author:
Chatchavan Wacharamanotham
Florian Echtler
Lukas Eisenring
Steve Haroz
Date Added:
08/07/2020
Transparent, Reproducible, and Open Science Practices of Published Literature in Dermatology Journals: Cross-Sectional Analysis
Unrestricted Use
CC BY

Background: Reproducible research is a foundational component for scientific advancements, yet little is known regarding the extent of reproducible research within the dermatology literature.
Objective: This study aimed to determine the quality and transparency of the literature in dermatology journals by evaluating for the presence of 8 indicators of reproducible and transparent research practices.
Methods: By implementing a cross-sectional study design, we conducted an advanced search of publications in dermatology journals from the National Library of Medicine catalog. Our search included articles published between January 1, 2014, and December 31, 2018. After generating a list of eligible dermatology publications, we then searched for full text PDF versions by using Open Access Button, Google Scholar, and PubMed. Publications were analyzed for 8 indicators of reproducibility and transparency—availability of materials, data, analysis scripts, protocol, preregistration, conflict of interest statement, funding statement, and open access—using a pilot-tested Google Form.
Results: After exclusion, 127 studies with empirical data were included in our analysis. Certain indicators were more poorly reported than others. We found that most publications (113, 88.9%) did not provide unmodified, raw data used to make computations, 124 (97.6%) failed to make the complete protocol available, and 126 (99.2%) did not include step-by-step analysis scripts.
Conclusions: The studies in our sample of dermatology journals do not appear to include sufficient detail to be accurately and successfully reproduced in their entirety. Solutions to increase the quality, reproducibility, and transparency of dermatology research are warranted. More robust reporting of key methodological details, open data sharing, and stricter standards journals impose on authors regarding disclosure of study materials might help to improve the climate of reproducible research in dermatology. [JMIR Dermatol 2019;2(1):e16078]

Subject:
Applied Science
Biology
Genetics
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
JMIR Dermatology
Author:
Andrew Niemann
Austin L. Johnson
Courtney Cook
Daniel Tritz
J. Michael Anderson
Matt Vassar
Date Added:
08/07/2020
Two Years Later: Journals Are Not Yet Enforcing the ARRIVE Guidelines on Reporting Standards for Pre-Clinical Animal Studies
Unrestricted Use
CC BY

A study by David Baker and colleagues reveals poor quality of reporting in pre-clinical animal research and a failure of journals to implement the ARRIVE guidelines. There is growing concern that poor experimental design and lack of transparent reporting contribute to the frequent failure of pre-clinical animal studies to translate into treatments for human disease. In 2010, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were introduced to help improve reporting standards. They were published in PLOS Biology and endorsed by funding agencies and publishers and their journals, including PLOS, Nature research journals, and other top-tier journals. Yet our analysis of papers published in PLOS and Nature journals indicates that there has been very little improvement in reporting standards since then. This suggests that authors, referees, and editors generally are ignoring guidelines, and the editorial endorsement is yet to be effectively implemented.

Subject:
Applied Science
Health, Medicine and Nursing
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Ana Sottomayor
David Baker
Katie Lidster
Sandra Amor
Date Added:
08/07/2020
UKRN Open Research Primers
Unrestricted Use
CC BY

The UKRN primer series is designed to introduce a broad audience to important topics in open and reproducible scholarship. Each primer includes an overview of the topic in the introductory “What?” section, reasons for undertaking these practices in the “Why?” section, followed by a longer “How?” section that provides practical guidance on carrying out the open research behaviour in question. Throughout the primers there are embedded explanatory weblinks, and at the end of each is a collated list of links to useful further resources.

Subject:
Applied Science
Life Science
Physical Science
Social Science
Material Type:
Reading
Provider:
UK Reproducibility Network
Author:
Emma Henderson
Jackie Thompson
Date Added:
08/07/2020
UKRN Primers
Unrestricted Use
CC BY

Open Research Action Plan, Data Sharing, Open Access, Open Code & Software, Open Research Awards, Preprints, Preregistration & Registered Reports

Subject:
Education
Material Type:
Reading
Author:
UKRN
Date Added:
12/21/2021
The Unix Shell
Unrestricted Use
CC BY

Software Carpentry lesson on how to use the shell to navigate the filesystem and write simple loops and scripts. The Unix shell has been around longer than most of its users have been alive. It has survived so long because it’s a power tool that allows people to do complex things with just a few keystrokes. More importantly, it helps them combine existing programs in new ways and automate repetitive tasks so they aren’t typing the same things over and over again. Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including “high-performance computing” supercomputers). These lessons will start you on a path towards using these resources effectively.
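
As a taste of what "automating repetitive tasks" means in practice, the lesson's classic exercise is a loop that runs an existing program over many files. The lesson itself teaches this pattern in bash; here is the same idea sketched in Python for readers who know it (the data directory and .dat files are hypothetical):

# Run an existing program (wc -l) over every .dat file in a directory,
# mirroring the shell loop: for f in data/*.dat; do wc -l "$f"; done
import subprocess
from pathlib import Path

for path in sorted(Path("data").glob("*.dat")):
    result = subprocess.run(["wc", "-l", str(path)],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())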

Subject:
Applied Science
Computer Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Adam Huffman
Adam James Orr
Adam Richie-Halford
AidaMirsalehi
Alex Kassil
Alex Mac
Alexander Konovalov
Alexander Morley
Alix Keener
Amy Brown
Andrea Bedini
Andrew Boughton
Andrew Reid
Andrew T. T. McRae
Andrew Walker
Ariel Rokem
Armin Sobhani
Ashwin Srinath
Bagus Tris Atmaja
Bartosz Telenczuk
Ben Bolker
Benjamin Gabriel
Bertie Seyffert
Bill Mills
Brian Ballsun-Stanton
BrianBill
Camille Marini
Chris Mentzel
Christina Koch
Colin Morris
Colin Sauze
Damien Irving
Dan Jones
Dana Brunson
Daniel Baird
Daniel McCloy
Daniel Standage
Danielle M. Nielsen
Dave Bridges
David Eyers
David McKain
David Vollmer
Dean Attali
Devinsuit
Dmytro Lituiev
Donny Winston
Doug Latornell
Dustin Lang
Elena Denisenko
Emily Dolson
Emily Jane McTavish
Eric Jankowski
Erin Alison Becker
Ethan P White
Evgenij Belikov
Farah Shamma
Fatma Deniz
Filipe Fernandes
Francis Gacenga
François Michonneau
Gabriel A. Devenyi
Gerard Capes
Giuseppe Profiti
Greg Wilson
Halle Burns
Hannah Burkhardt
Harriet Alexander
Hugues Fontenelle
Ian van der Linde
Inigo Aldazabal Mensa
Jackie Milhans
Jake Cowper Szamosi
James Guelfi
Jan T. Kim
Jarek Bryk
Jarno Rantaharju
Jason Macklin
Jay van Schyndel
Jens vdL
John Blischak
John Pellman
John Simpson
Jonah Duckles
Jonny Williams
Joshua Madin
Kai Blin
Kathy Chung
Katrin Leinweber
Kevin M. Buckley
Kirill Palamartchouk
Klemens Noga
Kristopher Keipert
Kunal Marwaha
Laurence
Lee Zamparo
Lex Nederbragt
M Carlise
Mahdi Sadjadi
Marc Rajeev Gouw
Marcel Stimberg
Maria Doyle
Marie-Helene Burle
Marisa Lim
Mark Mandel
Martha Robinson
Martin Feller
Matthew Gidden
Matthew Peterson
Megan Fritz
Michael Zingale
Mike Henry
Mike Jackson
Morgan Oneka
Murray Hoggett
Nicola Soranzo
Nicolas Barral
Noah D Brenowitz
Noam Ross
Norman Gray
Orion Buske
Owen Kaluza
Patrick McCann
Paul Gardner
Pauline Barmby
Peter R. Hoyt
Peter Steinbach
Philip Lijnzaad
Phillip Doehle
Piotr Banaszkiewicz
Rafi Ullah
Raniere Silva
Robert A Beagrie
Ruud Steltenpool
Ry4an Brase
Rémi Emonet
Sarah Mount
Sarah Simpkin
Scott Ritchie
Stephan Schmeing
Stephen Jones
Stephen Turner
Steve Leak
Stéphane Guillou
Susan Miller
Thomas Mellan
Tim Keighley
Tobin Magle
Tom Dowrick
Trevor Bekolay
Varda F. Hagh
Victor Koppejan
Vikram Chhatre
Yee Mey
csqrs
earkpr
ekaterinailin
nther
reshama shaikh
s-boardman
sjnair
Date Added:
03/20/2017
Update on the endorsement of CONSORT by high impact factor journals: a survey of journal “Instructions to Authors” in 2014
Unrestricted Use
CC BY

The CONsolidated Standards Of Reporting Trials (CONSORT) Statement provides a minimum standard set of items to be reported in published clinical trials; it has received widespread recognition within the biomedical publishing community. This research aims to provide an update on the endorsement of CONSORT by high impact medical journals.
Methods: We performed a cross-sectional examination of the online “Instructions to Authors” of 168 high impact factor (2012) biomedical journals between July and December 2014. We assessed whether the text of the “Instructions to Authors” mentioned the CONSORT Statement and any CONSORT extensions, and we quantified the extent and nature of the journals’ endorsements of these. These data were described by frequencies. We also determined whether journals mentioned trial registration and the International Committee of Medical Journal Editors (ICMJE; other than in regard to trial registration) and whether either of these was associated with CONSORT endorsement (relative risk and 95% confidence interval). We compared our findings to the two previous iterations of this survey (in 2003 and 2007). We also identified the publishers of the included journals.
Results: Sixty-three percent (106/168) of the included journals mentioned CONSORT in their “Instructions to Authors.” Forty-four endorsers (42%) explicitly stated that authors “must” use CONSORT to prepare their trial manuscript, 38% required an accompanying completed CONSORT checklist as a condition of submission, and 39% explicitly requested the inclusion of a flow diagram with the submission. CONSORT extensions were endorsed by very few journals. One hundred and thirty journals (77%) mentioned ICMJE, and 106 (63%) mentioned trial registration.
Conclusions: The endorsement of CONSORT by high impact journals has increased over time; however, specific instructions on how CONSORT should be used by authors are inconsistent across journals and publishers. Publishers and journals should encourage authors to use CONSORT and set clear expectations for authors about compliance with CONSORT.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Trials
Author:
David Moher
Douglas G. Altman
Kenneth F. Schulz
Larissa Shamseer
Sally Hopewell
Date Added:
08/07/2020
Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations
Unrestricted Use
CC BY

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a relied-upon metric used to evaluate research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and mention reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities from the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master’s, or comprehensive (M-type) institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type) institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to indirectly refer to the JIF. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric’s use in at least one of their RPT documents, while 13% of institutions expressed caution about the JIF’s use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that work remains to be done to improve evaluation processes and avoid the potential misuse of metrics like the JIF.

Subject:
Applied Science
Health, Medicine and Nursing
Information Science
Life Science
Social Science
Material Type:
Reading
Author:
Carol Muñoz Nieves
Erin C. McKiernan
Juan Pablo Alperin
Lesley A. Schimanski
Lisa Matthias
Meredith T. Niles
Date Added:
08/07/2020
Version Control with Git
Unrestricted Use
CC BY

This lesson is part of the Software Carpentry workshops that teach how to use version control with Git. Wolfman and Dracula have been hired by Universal Missions (a space services spinoff from Euphoric State University) to investigate if it is possible to send their next planetary lander to Mars. They want to be able to work on the plans at the same time, but they have run into problems doing this in the past. If they take turns, each one will spend a lot of time waiting for the other to finish, but if they work on their own copies and email changes back and forth things will be lost, overwritten, or duplicated. A colleague suggests using version control to manage their work.

Version control is better than mailing files back and forth: nothing that is committed to version control is ever lost, unless you work really, really hard at it. Since all old versions of files are saved, it’s always possible to go back in time to see exactly who wrote what on a particular day, or what version of a program was used to generate a particular set of results. Because we have this record of who made what changes when, we know who to ask if we have questions later on and, if needed, can revert to a previous version, much like the “undo” feature in an editor. When several people collaborate on the same project, it’s possible to accidentally overlook or overwrite someone’s changes. The version control system automatically notifies users whenever there’s a conflict between one person’s work and another’s.

Teams are not the only ones to benefit from version control: lone researchers can benefit immensely. Keeping a record of what was changed, when, and why is extremely useful for all researchers if they ever need to come back to the project later on (e.g., a year later, when memory has faded). Version control is the lab notebook of the digital world: it’s what professionals use to keep track of what they’ve done and to collaborate with other people. Every large software development project relies on it, and most programmers use it for their small jobs as well. And it isn’t just for software: books, papers, small data sets, and anything that changes over time or needs to be shared can and should be stored in a version control system.
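
For a feel of the workflow before taking the lesson, the sketch below drives the lesson's opening steps (create a repository, record a change, inspect the history) from Python for illustration; the lesson itself types the same git commands directly at the shell prompt, and mars.txt is its running example.

# First steps of the lesson, scripted for illustration: initialize a
# repository, commit a file, and read back the history.
# Assumes git is installed and user.name/user.email are configured.
import subprocess

def git(*args):
    subprocess.run(["git", *args], check=True)

git("init", "planets")                       # create the lesson's example repo
with open("planets/mars.txt", "w") as f:
    f.write("Cold and dry, but everything is my favorite color\n")
git("-C", "planets", "add", "mars.txt")      # stage the new file
git("-C", "planets", "commit", "-m", "Start notes on Mars")
git("-C", "planets", "log", "--oneline")     # the permanent who/what/when record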

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Alexander G. Zimmerman
Amiya Maji
Amy L Olex
Andrew Lonsdale
Annika Rockenberger
Begüm D. Topçuoğlu
Ben Bolker
Bill Sacks
Brian Moore
Casey Youngflesh
Charlotte Moragh Jones-Todd
Christoph Junghans
David Jennings
Erin Alison Becker
François Michonneau
Garrett Bachant
Grant Sayer
Holger Dinkel
Ian Lee
Jake Lever
James E McClure
James Tocknell
Janoš Vidali
Jeremy Teitelbaum
Jeyashree Krishnan
Jimmy O'Donnell
Joe Atzberger
Jonah Duckles
Jonathan Cooper
João Rodrigues
Katherine Koziar
Katrin Leinweber
Kunal Marwaha
Kurt Glaesemann
L.C. Karssen
Lauren Ko
Lex Nederbragt
Madicken Munk
Maneesha Sane
Marie-Helene Burle
Mark Woodbridge
Martino Sorbaro
Matt Critchlow
Matteo Ceschia
Matthew Bourque
Matthew Hartley
Maxim Belkin
Megan Potterbusch
Michael Torpey
Michael Zingale
Mingsheng Zhang
Nicola Soranzo
Nima Hejazi
Oscar Arbeláez
Peace Ossom Williamson
Pey Lian Lim
Raniere Silva
Rayna Michelle Harris
Rene Gassmoeller
Rich McCue
Richard Barnes
Ruud Steltenpool
Rémi Emonet
Samniqueka Halsey
Samuel Lelièvre
Sarah Stevens
Saskia Hiltemann
Schlauch, Tobias
Scott Bailey
Simon Waldman
Stefan Siegert
Thomas Morrell
Tommy Keswick
Traci P
Tracy Teal
Trevor Keller
TrevorLeeCline
Tyler Crawford Kelly
Tyler Reddy
Umihiko Hoshijima
Veronica Ikeshoji-Orlati
Wes Harrell
Will Usher
Wolmar Nyberg Åkerström
abracarambar
butterflyskip
jonestoddcm
Date Added:
03/20/2017
Version control with the OSF
Unrestricted Use
CC BY

This webinar will introduce the concept of version control and the version control features that are built into the Open Science Framework (OSF; https://osf.io). The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, Github and Mendeley, to streamline workflows and increase efficiency. This webinar will discuss how keeping track of the different file versions is important for efficient reproducible research practices, how version control works on the OSF, and how researchers can view and download previous versions of files.
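
Beyond the web interface, the OSF also exposes a public JSON:API at https://api.osf.io/v2/ that can be scripted against. The sketch below lists a file's stored versions in Python; the versions endpoint path is an assumption based on the API's documented conventions (verify against the current docs), and the file id is a placeholder:

# List the stored versions of an OSF file via the public JSON:API.
# The file id is a placeholder, and the versions endpoint is an assumption
# based on the OSF API's conventions -- check the current documentation.
import requests

file_id = "abcde"  # placeholder, not a real OSF file
resp = requests.get(f"https://api.osf.io/v2/files/{file_id}/versions/")
resp.raise_for_status()

for version in resp.json()["data"]:
    print(version["id"], version["attributes"].get("size"))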

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
The Weak Spots in Contemporary Science (and How to Fix Them)
Unrestricted Use
CC BY

In this review, the author discusses several of the weak spots in contemporary science, including scientific misconduct, the problems of post hoc hypothesizing (HARKing), outcome switching, theoretical bloopers in formulating research questions and hypotheses, selective reading of the literature, selective citing of previous results, improper blinding and other design failures, p-hacking, or researchers’ tendency to analyze data in many different ways to find positive (typically significant) results, errors and biases in the reporting of results, and publication bias. The author presents some empirical results highlighting problems that lower the trustworthiness of reported results in scientific literatures, including that of animal welfare studies. Some of the underlying causes of these biases are discussed based on the notion that researchers are only human and hence are not immune to confirmation bias, hindsight bias, and minor ethical transgressions. The author discusses solutions in the form of enhanced transparency, sharing of data and materials, (post-publication) peer review, pre-registration, registered reports, improved training, reporting guidelines, replication, dealing with publication bias, alternative inferential techniques, power, and other statistical tools.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
Animals
Author:
Jelte M. Wicherts
Date Added:
08/07/2020
The What, Why, and How of Preregistration
Unrestricted Use
CC BY

More researchers are preregistering their studies as a way to combat publication bias and improve the credibility of research findings. Preregistration is at its core designed to distinguish between confirmatory and exploratory results. Both are important to the progress of science, but when they are conflated, problems arise. In this webinar, we discuss the What, Why, and How of preregistration and what it means for the future of science. Visit cos.io/prereg for additional resources.

Subject:
Applied Science
Computer Science
Information Science
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Center for Open Science
Date Added:
08/07/2020
What incentives increase data sharing in health and medical research? A systematic review
Unrestricted Use
CC BY

The foundation of health and medical research is data. Data sharing facilitates the progress of research and strengthens science. Data sharing in research is widely discussed in the literature; however, there are seemingly no evidence-based incentives that promote data sharing.
Methods: A systematic review (registration: doi.org/10.17605/OSF.IO/6PZ5E) of the health and medical research literature was used to uncover any evidence-based incentives, with pre- and post-empirical data that examined data sharing rates. We were also interested in quantifying and classifying the number of opinion pieces on the importance of incentives, the number of observational studies that analysed data sharing rates and practices, and strategies aimed at increasing data sharing rates.
Results: Only one incentive (using open data badges) has been tested in health and medical research that examined data sharing rates. The number of opinion pieces (n = 85) outweighed the number of articles testing strategies (n = 76), and the number of observational studies exceeded them both (n = 106).
Conclusions: Given that data is the foundation of evidence-based health and medical research, it is paradoxical that there is only one evidence-based incentive to promote data sharing. More well-designed studies are needed in order to increase the currently low rates of data sharing.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Research Integrity and Peer Review
Author:
Adrian G. Barnett
Anisa Rowhani-Farid
Michelle Allen
Date Added:
08/07/2020