Search Resources

329 Results

Selected filters:
  • oskb
Deep Dive into Open Scholarship: Preregistration and Registered Reports
Unrestricted Use
CC BY

In this deep dive session, Amanda Montoya (UCLA) and Karen Rambo-Hernandez (Texas A&M University) introduce the basics of preregistration and Registered Reports: two methods for creating a permanent record of a research plan prior to conducting data collection. They discuss the conceptual similarities and practical differences between preregistration and Registered Reports, and provide practical advice from their own experiences using these practices in research labs, along with resources available for researchers interested in these approaches. The session concludes with questions and discussion about adopting these practices and unique considerations for implementing them in education research.

Subject:
Education
Material Type:
Lesson
Author:
Karen Rambo-Hernandez
Amanda Montoya
Date Added:
03/15/2021
Deep Dive on Open Practices: Understanding Data Sharing with Sara Hart
Unrestricted Use
Public Domain

As openly sharing data becomes the norm, and not only because of federal funding mandates, more researchers may become interested in sharing their data. Benefits of data sharing for educational research include increased collaboration, acceleration of knowledge through novel and creative research questions, and an increase in equitable opportunities for early career researchers and faculty at under-resourced institutions. In this session, Sara Hart covers the benefits of data sharing as well as the practicalities of preparing data for sharing. Participants are provided information about data sharing and resources to support their own data sharing.

Subject:
Education
Material Type:
Lecture
Author:
Sara Hart
Date Added:
04/20/2022
Deep Dive on Open Practices: Understanding Preregistration with Scott Peters & Karen Rambo-Hernandez
Unrestricted Use
Public Domain

In this deep dive session, we introduce the basics of pre-registration: a method for creating a permanent record of a research plan prior to conducting data collection and/or data analysis. We discuss the conceptual similarities and practical differences among pre-registration, registered reports, and traditional approaches to educational research. We offer practical advice from our own experiences using this practice, along with resources available for researchers interested in pre-registering their work. Finally, we end with questions and discussion about adopting pre-registration practices and unique considerations for implementing pre-registration in education research.

Subject:
Education
Material Type:
Lecture
Author:
Karen Rambo-Hernandez
Scott Peters
Date Added:
04/20/2022
Deep Dive on Open Practices: Understanding Registered Reports in Education Research
Unrestricted Use
Public Domain

Registered reports are a new publication mechanism in which peer review and the decision to publish occur prior to data collection and/or analysis. Registered reports share many characteristics with preregistration but are distinct in that they involve the journal before the study is completed. Journals in the field of education are increasingly offering opportunities to publish registered reports, which offer a variety of benefits to both the researcher and the research field. In this workshop, we discuss the basics of registered reports, their benefits and limitations, and which journals in education accept them. We provide practical advice on deciding which projects are appropriate for registered reports, implementing registered reports, and time management throughout the process. We also discuss how special cases, such as secondary data analysis, replications, meta-analyses, and longitudinal studies, can be implemented as registered reports.

Subject:
Education
Material Type:
Lecture
Author:
Betsy McCoach
Amanda Montoya
Date Added:
04/20/2022
Deep Dive on Open Practices: Understanding Replication in Education Research with Matt Makel
Unrestricted Use
Public Domain

In this deep dive session, we introduce the purpose of replication, different conceptions of replication, and some models for implementation in education. Relevant terms, methods, publication possibilities, and existing funding mechanisms are reviewed. Frequently asked questions and potential answers are shared.

Subject:
Education
Material Type:
Lesson
Author:
Matt Makel
Date Added:
04/20/2022
Defining and Growing the Field of Metascience
Unrestricted Use
CC BY

In this talk, Professor Fidler argues that the field of metascience contrasts with many scientific disciplines because it works in service to science, with the goal of improving the process by which science is conducted. The importance of creating a defined community is that it allows norms to develop and proper credit to be given for this work, without which the work will be marginalized or demeaned.
---
Are you a funder interested in supporting research on the scientific process? Learn more about the communities mobilizing around the emerging field of metascience by visiting metascience.com. Funders are encouraged to review and adopt the practices overviewed at cos.io/top-funders as part of the solution to issues discussed during the Funders Forum.

Subject:
Education
Material Type:
Lecture
Provider:
Center for Open Science
Author:
Fiona Fidler
Date Added:
03/21/2021
Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking
Unrestricted Use
CC BY

The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting of psychological research. The list can be used in research methods education, and as a checklist to assess the quality of preregistrations and to determine the potential for bias due to (arbitrary) choices in unregistered studies.

Subject:
Psychology
Social Science
Material Type:
Reading
Provider:
Frontiers in Psychology
Author:
Coosje L. S. Veldkamp
Hilde E. M. Augusteijn
Jelte M. Wicherts
Marcel A. L. M. van Assen
Marjan Bakker
Robbie C. M. van Aert
Date Added:
08/07/2020
Dementia, Big Data and Open Science
Unrestricted Use
Public Domain

Although there is clear potential to improve science and innovation systems through big data and open science, barriers still remain with respect to data sharing efforts. How can the available massive and diverse data collections be used and shared more efficiently to boost global research and innovation and improve care? What actions are needed to facilitate open access to research data generated with public funding?

The OECD is bringing together policy makers, funding agencies and researchers to tackle the issue of open access to data, focused around developing good practice and principles on data governance. Four case studies highlight best practice and identify barriers to progress.

Following an OECD-hosted consultation with the Ontario Brain Institute (OBI), the United Kingdom Medical Research Council (MRC), and the US Alzheimer’s Association, two concrete examples of global data sharing have been created. The first, focused on providing a wealth of open-source biomedical data for the community (deep data), builds upon GAAIN, the Global Alzheimer’s Association Interactive Network, and links eleven international partners through a federated network of data resources. The capability of this network is being extended significantly through connections with the French National Alzheimer’s Database (BNA), the European Medicines Informatics Framework (EMIF), and the Canadian based Longitudinal Online Research and Imaging System (LORIS). The second, focused on linking big data approaches at the population level (broad data), is a complementary collaboration between the Canadian Consortium on Neurodegeneration in Ageing and the Dementias Platform UK to share and analyse large-scale complex population-wide datasets from up to 2 million individuals, including imaging, genomics and health data.

As a result, these collaborations will enable the aggregation of an unprecedented volume of individual and population-level data, offering an open science solution to help research to more efficiently tackle Alzheimer’s disease and related disorders.

Subject:
Biology
Life Science
Material Type:
Reading
Author:
OECD
Date Added:
07/16/2021
Did awarding badges increase data sharing in BMJ Open? A randomized controlled trial
Unrestricted Use
CC BY

Sharing data and code are important components of reproducible research. Data sharing in research is widely discussed in the literature; however, there are no well-established evidence-based incentives that reward data sharing, nor randomized studies that demonstrate the effectiveness of data sharing policies at increasing data sharing. A simple incentive, such as an Open Data Badge, might provide the change needed to increase data sharing in health and medical research. This study was a parallel group randomized controlled trial (protocol registration: doi:10.17605/OSF.IO/PXWZQ) with two groups, control and intervention, with 80 research articles published in BMJ Open per group, with a total of 160 research articles. The intervention group received an email offer for an Open Data Badge if they shared their data along with their final publication and the control group received an email with no offer of a badge if they shared their data with their final publication. The primary outcome was the data sharing rate. Badges did not noticeably motivate researchers who published in BMJ Open to share their data; the odds of awarding badges were nearly equal in the intervention and control groups (odds ratio = 0.9, 95% CI [0.1, 9.0]). Data sharing rates were low in both groups, with just two datasets shared in each of the intervention and control groups. The global movement towards open science has made significant gains with the development of numerous data sharing policies and tools. What remains to be established is an effective incentive that motivates researchers to take up such tools to share their data.

Subject:
Applied Science
Information Science
Material Type:
Reading
Provider:
Royal Society Open Science
Author:
Adrian Aldcroft
Adrian G. Barnett
Anisa Rowhani-Farid
Date Added:
08/07/2020
Discrepancies in the Registries of Diet vs Drug Trials
Unrestricted Use
CC BY

This cross-sectional study examines discrepancies between registered protocols and subsequent publications for drug and diet trials whose findings were published in prominent clinical journals in the last decade. ClinicalTrials.gov was established in 2000 in response to the Food and Drug Administration Modernization Act of 1997, which called for registration of trials of investigational new drugs for serious diseases. Subsequently, the scope of ClinicalTrials.gov expanded to all interventional studies, including diet trials. Presently, prospective trial registration is required by the National Institutes of Health for grant funding and by many clinical journals for publication [1]. Registration may reduce risk of bias from selective reporting and post hoc changes in design and analysis [1,2]. Although a study [3] of trials with ethics approval in Finland in 2007 identified numerous discrepancies between registered protocols and subsequent publications, the consistency of diet trial registration and reporting has not been well explored.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
JAMA Network Open
Author:
Cara B. Ebbeling
David S. Ludwig
Steven B. Heymsfield
Date Added:
08/07/2020
Dissemination and publication of research findings: an updated review of related biases
Read the Fine Print

Objectives
To identify and appraise empirical studies on publication and related biases published since 1998; to assess methods to deal with publication and related biases; and to examine, in a random sample of published systematic reviews, measures taken to prevent, reduce and detect dissemination bias.

Data sources
The main literature search, in August 2008, covered the Cochrane Methodology Register Database, MEDLINE, EMBASE, AMED and CINAHL. In May 2009, PubMed, PsycINFO and OpenSIGLE were also searched. Reference lists of retrieved studies were also examined.

Review methods
In Part I, studies were classified as evidence or method studies and data were extracted according to types of dissemination bias or methods for dealing with it. Evidence from empirical studies was summarised narratively. In Part II, 300 systematic reviews were randomly selected from MEDLINE and the methods used to deal with publication and related biases were assessed.

Results
Studies with significant or positive results were more likely to be published than those with non-significant or negative results, thereby confirming findings from a previous HTA report. There was convincing evidence that outcome reporting bias exists and has an impact on the pooled summary in systematic reviews. Studies with significant results tended to be published earlier than studies with non-significant results, and empirical evidence suggests that published studies tended to report a greater treatment effect than those from the grey literature. Exclusion of non-English-language studies appeared to result in a high risk of bias in some areas of research such as complementary and alternative medicine. In a few cases, publication and related biases had a potentially detrimental impact on patients or resource use. Publication bias can be prevented before a literature review (e.g. by prospective registration of trials), or detected during a literature review (e.g. by locating unpublished studies, funnel plot and related tests, sensitivity analysis modelling), or its impact can be minimised after a literature review (e.g. by confirmatory large-scale trials, updating the systematic review). The interpretation of funnel plot and related statistical tests, often used to assess publication bias, was often too simplistic and likely misleading. More sophisticated modelling methods have not been widely used. Compared with systematic reviews published in 1996, recent reviews of health-care interventions were more likely to locate and include non-English-language studies and grey literature or unpublished studies, and to test for publication bias.

Conclusions
Dissemination of research findings is likely to be a biased process, although the actual impact of such bias depends on specific circumstances. The prospective registration of clinical trials and the endorsement of reporting guidelines may reduce research dissemination bias in clinical research. In systematic reviews, measures can be taken to minimise the impact of dissemination bias by systematically searching for and including relevant studies that are difficult to access. Statistical methods can be useful for sensitivity analyses. Further research is needed to develop methods for qualitatively assessing the risk of publication bias in systematic reviews, and to evaluate the effect of prospective registration of studies, open access policy and improved publication guidelines.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Health Technology Assessment
Author:
Aj Sutton
C Hing
C Pang
Cs Kwok
F Song
I Harvey
J Ryder
L Hooper
S Parekh
Yk Loke
Date Added:
08/07/2020
Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review
Unrestricted Use
CC BY

Background
The Consolidated Standards of Reporting Trials (CONSORT) Statement is intended to facilitate better reporting of randomised clinical trials (RCTs). A systematic review recently published in the Cochrane Library assesses whether journal endorsement of CONSORT impacts the completeness of reporting of RCTs; those findings are summarised here.

Methods
Evaluations assessing the completeness of reporting of RCTs against any of 27 outcomes formulated from the 1996 or 2001 CONSORT checklists were included; two primary comparisons were evaluated. The 27 outcomes were: the 22 items of the 2001 CONSORT checklist, four sub-items describing blinding and a ‘total summary score’ of aggregate items, as reported. Relative risks (RR) and 99% confidence intervals were calculated to determine effect estimates for each outcome across evaluations.

Results
Fifty-three reports describing 50 evaluations of 16,604 RCTs were assessed for adherence to at least one of 27 outcomes. Sixty-nine of 81 meta-analyses show relative benefit from CONSORT endorsement on completeness of reporting. Between endorsing and non-endorsing journals, 25 outcomes are improved with CONSORT endorsement, five of these significantly (α = 0.01). The number of evaluations per meta-analysis was often low with substantial heterogeneity; validity was assessed as low or unclear for many evaluations.

Conclusions
The results of this review suggest that journal endorsement of CONSORT may benefit the completeness of reporting of RCTs they publish. No evidence suggests that endorsement hinders the completeness of RCT reporting. However, despite relative improvements when CONSORT is endorsed by journals, the completeness of reporting of trials remains sub-optimal. Journals are not sending a clear message about endorsement to authors submitting manuscripts for publication. As such, fidelity of endorsement as an ‘intervention’ has been weak to date. Journals need to take further action regarding their endorsement and implementation of CONSORT to facilitate accurate, transparent and complete reporting of trials.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
Systematic Reviews
Author:
David Moher
Douglas G Altman
Kenneth F Schulz
Larissa Shamseer
Lucy Turner
Date Added:
08/07/2020
Economics Lesson with Stata
Unrestricted Use
CC BY

A Data Carpentry curriculum for Economics is being developed by Dr. Miklos Koren at Central European University. These materials are being piloted locally. Development for these lessons has been supported by a grant from the Sloan Foundation.

Subject:
Applied Science
Computer Science
Economics
Information Science
Mathematics
Measurement and Data
Social Science
Material Type:
Module
Provider:
The Carpentries
Author:
Andras Vereckei
Arieda Muço
Miklós Koren
Date Added:
08/07/2020
The Economics of Reproducibility in Preclinical Research
Unrestricted Use
CC BY

Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.

Subject:
Biology
Life Science
Material Type:
Reading
Provider:
PLOS Biology
Author:
Iain M. Cockburn
Leonard P. Freedman
Timothy S. Simcoe
Date Added:
08/07/2020
Educational Psychologist - Educational Psychology in the Open Science Era
Unrestricted Use
CC BY

Special issue of Educational Psychologist: Educational Psychology in the Open Science Era. Recently, scholars have noted how several “old school” practices—a host of well-regarded, long-standing scientific norms—in combination, sometimes compromise the credibility of research. In response, other scholarly fields have developed several “open science” norms and practices to address these credibility issues. Against this backdrop, this special issue explores the extent to which and how these norms should be adopted and adapted for educational psychology and education more broadly.

Subject:
Education
Material Type:
Reading
Author:
OSKB Admin
Date Added:
03/22/2021
Effect of Population Heterogenization on the Reproducibility of Mouse Behavior: A Multi-Laboratory Study
Unrestricted Use
CC BY

In animal experiments, animals, husbandry and test procedures are traditionally standardized to maximize test sensitivity and minimize animal use, assuming that this will also guarantee reproducibility. However, by reducing within-experiment variation, standardization may limit inference to the specific experimental conditions. Indeed, we have recently shown in mice that standardization may generate spurious results in behavioral tests, accounting for poor reproducibility, and that this can be avoided by population heterogenization through systematic variation of experimental conditions. Here, we examined whether a simple form of heterogenization effectively improves reproducibility of test results in a multi-laboratory situation. Each of six laboratories independently ordered 64 female mice of two inbred strains (C57BL/6NCrl, DBA/2NCrl) and examined them for strain differences in five commonly used behavioral tests under two different experimental designs. In the standardized design, experimental conditions were standardized as much as possible in each laboratory, while they were systematically varied with respect to the animals' test age and cage enrichment in the heterogenized design. Although heterogenization tended to improve reproducibility by increasing within-experiment variation relative to between-experiment variation, the effect was too weak to account for the large variation between laboratories. However, our findings confirm the potential of systematic heterogenization for improving reproducibility of animal experiments and highlight the need for effective and practicable heterogenization strategies.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
PLOS ONE
Author:
Benjamin Zipser
Berry Spruijt
Britta Schindler
Chadi Touma
Christiane Brandwein
David P. Wolfer
Hanno Würbel
Johanneke van der Harst
Joseph P. Garner
Lars Lewejohann
Niek van Stipdonk
Norbert Sachser
Peter Gass
S. Helene Richter
Sabine Chourbaji
Vootele Võikar
Date Added:
08/07/2020
El Control de Versiones con Git
Unrestricted Use
CC BY

Software Carpentry lesson on version control with Git. To illustrate the power of Git and GitHub, the lesson uses the following story as a motivating example. The Wolfman and Dracula have been hired by Universal Missions to investigate whether it is possible to send their next planetary explorer to Mars. They want to be able to work on the plans at the same time, but they have run into problems doing something similar in the past. If they take turns, each will spend a lot of time waiting for the other to finish; but if they work on their own copies and exchange changes by email, things will get lost, overwritten, or duplicated. A colleague suggests using version control to manage the work.

Version control is better than exchanging files by email: nothing is lost once it is placed under version control, unless substantial effort is made to delete it. Since all previous versions of the files are saved, it is always possible to go back in time and see exactly who wrote what on a particular day, or which version of a program was used to generate a particular set of results. Because these records exist of who did what and when, it is possible to know whom to ask if a question arises later and, if necessary, to revert the content to an earlier version, much like the “undo” command in a text editor. When several people collaborate on the same project, it is possible to accidentally overlook or overwrite someone else’s changes. The version control system automatically notifies users whenever there is a conflict between one person’s work and another’s. Teams are not the only ones who benefit from version control: independent researchers can benefit enormously. Keeping a record of what changed, when, and why is extremely useful for any researcher who needs to return to a project later (e.g. a year later, when memory of the details has faded).
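The workflow the lesson introduces can be sketched in a few commands. This is a minimal example, not part of the lesson itself; the repository name and file follow the lesson's Mars story, and the identity values are illustrative.

```shell
# Create a repository for the exploration plans
# (repository name and identity values are illustrative)
git init planets
git -C planets config user.name "Dracula"
git -C planets config user.email "vlad@tran.sylvan.ia"

# Record a first version of the plans under version control
echo "Cold and dry, but everything is my favorite color" > planets/mars.txt
git -C planets add mars.txt
git -C planets commit -m "Start notes on Mars as a base"

# Every committed version remains recoverable; inspect the history
git -C planets log --oneline
```

From here, `git log` answers the "who changed what, and when" questions described above, and `git checkout <commit> -- mars.txt` restores an earlier version of a file.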

Subject:
Applied Science
Computer Science
Information Science
Mathematics
Measurement and Data
Material Type:
Module
Provider:
The Carpentries
Author:
Alejandra Gonzalez-Beltran
Amy Olex
Belinda Weaver
Bradford Condon
Casey Youngflesh
Daisie Huang
Dani Ledezma
Francisco Palm
Garrett Bachant
Heather Nunn
Hely Salgado
Ian Lee
Ivan Gonzalez
James E McClure
Javier Forment
Jimmy O'Donnell
Jonah Duckles
K.E. Koziar
Katherine Koziar
Katrin Leinweber
Kevin Alquicira
Kevin MF
Kurt Glaesemann
LauCIFASIS
Leticia Vega
Lex Nederbragt
Mark Woodbridge
Matias Andina
Matt Critchlow
Mingsheng Zhang
Nelly Sélem
Nima Hejazi
Nohemi Huanca Nunez
Olemis Lang
P. L. Lim
Paula Andrea Martinez
Peace Ossom Williamson
Rayna M Harris
Romualdo Zayas-Lagunas
Sarah Stevens
Saskia Hiltemann
Shirley Alquicira
Silvana Pereyra
Tom Morrell
Valentina Bonetti
Veronica Ikeshoji-Orlati
Veronica Jimenez
butterflyskip
dounia
Date Added:
08/07/2020
Embedding open and reproducible science into teaching: A bank of lesson plans and resources
Unrestricted Use
CC BY

Recently, there has been a growing emphasis on embedding open and reproducible approaches into research. One essential step in accomplishing this larger goal is to embed such practices into undergraduate and postgraduate research training. However, this often requires substantial time and resources to implement. Also, while many pedagogical resources are regularly developed for this purpose, they are not often openly and actively shared with the wider community. The creation and public sharing of open educational resources is useful for educators who wish to embed open scholarship and reproducibility into their teaching and learning. In this article, we describe and openly share a bank of teaching resources and lesson plans on the broad topics of open scholarship, open science, replication, and reproducibility that can be integrated into taught courses, to support educators and instructors. These resources were created as part of the Society for the Improvement of Psychological Science (SIPS) hackathon at the 2021 Annual Conference, and we detail this collaborative process in the article. By sharing these open pedagogical resources, we aim to reduce the labour required to develop and implement open scholarship content to further the open scholarship and open educational materials movement.

Subject:
Education
Material Type:
Reading
Author:
Alaa Aldoh
Catherine V. Talbot
Charlotte Rebecca Pennington
David Moreau
Flavio Azevedo
John J Shaw
Loukia Tzavella
Mahmoud Elsherif
Martin Rachev Vasilev
Matthew C. Makel
Meng Liu
Myrthe Vel Tromp
Natasha April Tonge
Olly Robertson
Ronan McGarrigle
Ruth Horry
Sam Parsons
Madeleine Pownall
Date Added:
07/29/2021
Empirical Study of Data Sharing by Authors Publishing in PLoS Journals
Unrestricted Use
CC BY

Background
Many journals now require that authors share their data with other investigators, either by depositing the data in a public repository or by making it freely available upon request. These policies are explicit but remain largely untested. We sought to determine how well authors comply with such policies by requesting data from authors who had published in one of two journals with clear data sharing policies.

Methods and Findings
We requested data from ten investigators who had published in either PLoS Medicine or PLoS Clinical Trials. All responses were carefully documented. In the event that we were refused data, we reminded authors of the journal's data sharing guidelines. If we did not receive a response to our initial request, a second request was made. Following the ten requests for raw data, three investigators did not respond, four authors responded and refused to share their data, two email addresses were no longer valid, and one author requested further details. A reminder of PLoS's explicit requirement that authors share data did not change the reply from the four authors who initially refused. Only one author sent an original data set.

Conclusions
We received only one of ten raw data sets requested. This suggests that journal policies requiring data sharing do not lead to authors making their data sets available to independent investigators.

Subject:
Applied Science
Health, Medicine and Nursing
Material Type:
Reading
Provider:
PLOS ONE
Author:
Andrew J. Vickers
Caroline J. Savage
Date Added:
08/07/2020