Workshop goals
- Why are we teaching this / why is this important?
  - For future and current you
  - For research as a whole: lack of reproducibility in research is a real problem

Materials and how we'll use them
- Workshop landing page, with links to the materials and the schedule
- Structure oriented along the Four Facets of Reproducibility:

How this workshop is run
- This is a Carpentries workshop; that means:
  - a friendly learning environment, with a Code of Conduct
  - active learning: work with the people next to you, ask for help
Background: There is wide agreement in the biomedical research community that research data sharing is a primary ingredient for ensuring that science is more transparent and reproducible. Publishers could play an important role in facilitating and enforcing data sharing; however, many journals have not yet implemented data sharing policies, and the requirements vary widely across journals. This study set out to analyze the pervasiveness and quality of data sharing policies in the biomedical literature.

Methods: The online author's instructions and editorial policies of 318 biomedical journals were manually reviewed to analyze each journal's data sharing requirements and characteristics. The data sharing policies were ranked using a rubric to determine whether data sharing was required, recommended, required only for omics data, or not addressed at all. The data sharing method and licensing recommendations were examined, as well as any mention of reproducibility or similar concepts. The data were analyzed for patterns relating to publishing volume, Journal Impact Factor, and the publishing model (open access or subscription) of each journal.

Results: A total of 11.9% of the journals analyzed explicitly stated that data sharing was required as a condition of publication. A further 9.1% required data sharing but did not state that it would affect publication decisions. 23.3% had a statement encouraging authors to share their data but did not require it; 9.1% mentioned data sharing indirectly; and only 14.8% addressed protein, proteomic, and/or genomic data sharing. There was no mention of data sharing in 31.8% of journals. Impact Factors were significantly higher for journals with the strongest data sharing policies than for journals in all other data sharing categories. Open access journals were not more likely to require data sharing than subscription journals.

Discussion: Our study confirmed earlier investigations, which observed that only a minority of biomedical journals require data sharing, and found a significant association between higher Impact Factors and a data sharing requirement. Moreover, while 65.7% of the journals in our study that required data sharing addressed the concept of reproducibility, as in earlier investigations we found that most data sharing policies did not provide specific guidance on the practices that ensure data is maximally available and reusable.
The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology publications.

Methods: The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018. A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined if the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.

Results: Our search identified 223,932 publications meeting the inclusion criteria, from which 400 were randomly sampled. Only 389 articles were accessible, yielding 271 publications with empirical data for analysis. Our results indicate that 9.4% provided access to materials, 9.2% provided access to raw data, 0.7% provided access to the analysis scripts, 0.7% linked the protocol, and 3.7% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis.

Conclusions: Currently, published neurology research does not consistently provide information needed for reproducibility. The implications of poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.
Currently, there is a growing interest in ensuring the transparency and reproducibility of the published scientific literature. According to a previous evaluation of 441 biomedical journal articles published in 2000–2014, the biomedical literature largely lacked transparency in important dimensions. Here, we surveyed a random sample of 149 biomedical articles published between 2015 and 2017 and determined the proportion reporting sources of public and/or private funding and conflicts of interests, sharing protocols and raw data, and undergoing rigorous independent replication and reproducibility checks. We also investigated what can be learned about reproducibility and transparency indicators from open access data provided on PubMed. The majority of the 149 studies disclosed some information regarding funding (103, 69.1% [95% confidence interval, 61.0% to 76.3%]) or conflicts of interest (97, 65.1% [56.8% to 72.6%]). Among the 104 articles with empirical data in which protocols or data sharing would be pertinent, 19 (18.3% [11.6% to 27.3%]) discussed publicly available data; only one (1.0% [0.1% to 6.0%]) included a link to a full study protocol. Among the 97 articles in which replication in studies with different data would be pertinent, there were five replication efforts (5.2% [1.9% to 12.2%]). Although clinical trial identification numbers and funding details were often provided on PubMed, only two of the articles without a full text article in PubMed Central that discussed publicly available data at the full text level also contained information related to data sharing on PubMed; none had a conflicts of interest statement on PubMed. Our evaluation suggests that although there have been improvements over the last few years in certain key indicators of reproducibility and transparency, opportunities exist to improve reproducible research practices across the biomedical literature and to make features related to reproducibility more readily visible in PubMed.
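The bracketed figures are 95% binomial confidence intervals for the observed proportions. As a sketch of how such an interval can be obtained in R (using the exact binomial test in base R, which may not be the exact method the authors used):

    # 103 of 149 articles disclosed funding information
    binom.test(x = 103, n = 149)$conf.int
    # approx. 0.61 to 0.76, in line with the reported 61.0% to 76.3%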
Scientific code is different from production software. Scientific code, by producing results that are then analyzed and interpreted, participates in the elaboration of scientific conclusions. This imposes specific constraints on the code that are often overlooked in practice. We articulate, with a small example, five characteristics that a scientific code in computational science should possess: re-runnable, repeatable, reproducible, reusable and replicable. The code should be executable (re-runnable) and produce the same result more than once (repeatable); it should allow an investigator to reobtain the published results (reproducible) while being easy to use, understand and modify (reusable), and it should act as an available reference for any ambiguity in the algorithmic descriptions of the article (replicable).
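As a minimal illustration of these characteristics (an invented script, not an example from the paper), a fixed random seed makes a simulation repeatable, and recording the computational environment supports reproducibility:

    # analysis.R -- a sketch of a re-runnable, repeatable script
    # Re-runnable: uses only base R, so it executes on a fresh install
    set.seed(42)                       # repeatable: same seed, same draws
    x <- rnorm(100, mean = 0, sd = 1)  # simulated measurements
    print(mean(x))                     # identical output on every run
    # Reproducible: record the exact environment alongside the result
    writeLines(capture.output(sessionInfo()), "sessionInfo.txt")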
The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
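Statcheck-style consistency checks work by recomputing the p-value from the reported test statistic and degrees of freedom; a minimal sketch in R (the reported values here are invented for illustration):

    # A thesis reports t(48) = 2.10, p = .04 -- is that internally consistent?
    t_stat <- 2.10
    df <- 48
    p_recomputed <- 2 * pt(abs(t_stat), df, lower.tail = FALSE)  # two-sided p
    round(p_recomputed, 3)  # ~0.041, consistent with the reported p = .04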
Confirmation through competent replication is a founding principle of modern science. However, biomedical researchers are rewarded for innovation, not for confirmation, and confirmatory research is often stigmatized as unoriginal and consequently faces barriers to publication. As a result, the current biomedical literature is dominated by exploration, which, to complicate matters further, is often disguised as confirmation. Only recently have scientists and the public begun to realize that high-profile research results in biomedicine often cannot be replicated. Consequently, confirmation has taken center stage in the quest to safeguard the robustness of research findings. Research that pushes the boundaries of, or challenges, what is currently known must necessarily produce a plethora of false positive results. Thus, since discovery, the driving force of scientific progress, is unavoidably linked to high false positive rates and cannot support confirmatory inference, dedicated confirmatory investigation is needed for pivotal results. In this chapter I will argue that the tension between the two modes of research, exploration and confirmation, can be resolved if we conceptually and practically separate them. I will discuss the idiosyncrasies of exploratory and confirmatory studies, with a focus on the specific features of their design, analysis, and interpretation.
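The link between exploration and false positives can be made concrete with a short calculation: when only a minority of tested hypotheses are true, a substantial share of statistically significant findings are false positives even at conventional error rates. A sketch in R (the prevalence and power values are illustrative assumptions, not taken from the chapter):

    # Positive predictive value of a 'significant' exploratory finding
    prior <- 0.10  # assume 1 in 10 tested hypotheses is actually true
    power <- 0.80  # probability of detecting a true effect
    alpha <- 0.05  # false positive rate per test
    ppv <- (power * prior) / (power * prior + alpha * (1 - prior))
    ppv            # ~0.64, so roughly a third of significant findings are false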
This is a list of resources for researchers, editors, and reviewers interested in practicing open science principles, particularly in education research. The list is not exhaustive but is meant as a starting point for individuals wanting to learn more about doing open science work, specifically for qualitative research. It was compiled by the following contributors: Rachel Renbarger, Sondra Stegenga, Thomas, Sebastian Karcher, and Crystal Steltenpohl. The list grew out of a hackathon at the Virtual Unconference on Open Scholarship Practices in Education Research.
This lesson is part of the Software Carpentry workshop: an introduction to R for non-programmers using the gapminder data. The goal of this lesson is to teach novice programmers to write modular code and best practices for using R for data analysis. R is commonly used in many scientific disciplines for statistical analysis and its array of third-party packages. We find that many scientists who come to Software Carpentry workshops use R and want to learn more. The emphasis of these materials is to give attendees a strong foundation in the fundamentals of R, and to teach best practices for scientific computing: breaking down analyses into modular units, task automation, and encapsulation. Note that this workshop will focus on teaching the fundamentals of the programming language R, and will not teach statistical analysis. The lesson contains more material than can be taught in a day; the instructor notes page has some suggested lesson plans suitable for a one-day or half-day workshop. A variety of third-party packages are used throughout this workshop. These are not necessarily the best, nor are they comprehensive, but they are packages we find useful and have been chosen primarily for their usability.
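For a flavour of the modular style the lesson teaches, here is a small example in its spirit (the function name is our own; the country and lifeExp columns are from the gapminder data the lesson uses):

    # Encapsulate one analysis step in a reusable function
    # instead of copy-pasting the same lines for every country
    mean_lifeexp <- function(data, country) {
      rows <- data[data$country == country, ]
      mean(rows$lifeExp)
    }

    gapminder <- read.csv("gapminder.csv")  # assumes a local copy of the data
    mean_lifeexp(gapminder, "Norway")       # the same unit works for any country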
From Data Carpentry: Data Carpentry's aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. The lessons below were designed for those interested in working with social sciences data in R. This is an introduction to R designed for participants with no programming experience. These lessons can be taught in a day (~6 hours). They start with some basic information about R syntax and the RStudio interface, and move through how to import CSV files, the structure of data frames, how to deal with factors, how to add/remove rows and columns, how to calculate summary statistics from a data frame, and a brief introduction to plotting.
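The flow of these lessons can be previewed in a few lines of base R; the file and column names below are placeholders, not the lesson's actual dataset:

    interviews <- read.csv("survey_data.csv")  # import a CSV into a data frame
    str(interviews)                            # inspect the structure and types
    interviews$notes <- NULL                   # remove a column
    summary(interviews$household_size)         # summary statistics for one column
    plot(table(interviews$village))            # a first, minimal plot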
A Data Carpentry lesson, part of the Social Sciences curriculum, teaching how to analyse and visualise data used by social scientists.
Efforts to Instill the Fundamental Principles of Rigorous Research
Rigorous experimental procedures and transparent reporting of research results are vital to the continued success of the biomedical enterprise at both the preclinical and the clinical levels; therefore, NINDS convened major stakeholders in October 2018 to discuss how best to encourage rigorous biomedical research practices. The attendees discussed potential improvements to current training resources meant to instill the principles of rigorous research in current and future scientists, ideal attributes of a potential new educational resource, and the cultural factors needed to ensure the success of such training. Please see the event website for more information about this workshop, including video recordings of the discussion, or the recent publication summarizing the workshop.

Rigor Champions
As described in this publication, enthusiastic individuals ("champions") who want to drive improvements in rigorous research practices, transparent reporting, and comprehensive education may come from all career stages and sectors, including undergraduate students, graduate students, postdoctoral fellows, researchers, educators, institutional leaders, journal editors, scientific societies, private industry, and funders. We encouraged champions to organize themselves into intra- and inter-institutional communities to effect change within and across scientific institutions. These communities can then share resources and best practices, propose changes to current training and research infrastructure, build new tools to support better research practices, and support rigorous research on a daily basis. If you are interested in learning more, you can join this grassroots online workspace or email us at RigorChampions@nih.gov.

Rigor Resources
In order to understand the current landscape of training in the principles of rigorous research, NINDS is gathering a list of public resources that are, or can be made, freely accessible to the scientific community and beyond. We hope that compiling these resources will help identify gaps in training and stimulate discussion about proposed improvements and the building of new resources that facilitate training in transparency and other rigorous research practices. Please peruse the resources compiled thus far below, and contact us at RigorChampions@nih.gov to let us know about other potential resources. NINDS does not endorse any of these resources and leaves it to the scientific community to judge their quality.

Resources Table
Categories of resources listed in the table include Books and Articles, Guidelines and Protocols, Organizations and Training Programs, Software and Other Digital Resources, and Videos and Courses.
The information provided on this website is designed to assist the extramural community in addressing rigor and transparency in NIH grant applications and progress reports. Scientific rigor and transparency in conducting biomedical research are key to the successful application of knowledge toward improving health outcomes.
Definition
Scientific rigor is the strict application of the scientific method to ensure unbiased and well-controlled experimental design, methodology, analysis, interpretation and reporting of results.
Goals
The NIH strives to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science. Grant application instructions and the criteria by which reviewers are asked to evaluate the scientific merit of the application are intended to:
- ensure that NIH is funding the best and most rigorous science,
- highlight the need for applicants to describe details that may have been previously overlooked,
- highlight the need for reviewers to consider such details in their reviews through updated review language, and
- minimize additional burden.
The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of randomisation in work published in journals of high impact, and very limited reporting of measures to reduce the risk of bias in publications from leading United Kingdom institutions. Ascertainment of differences between institutions might serve both as a measure of research quality and as a tool for institutional efforts to improve research quality.
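Randomisation, one of the under-reported measures here, can be as simple as a scripted random allocation; a hypothetical sketch in R:

    # Randomly allocate 20 animals to treatment and control (10 each),
    # so that group assignment cannot be influenced by the experimenter
    set.seed(2024)  # record the seed for transparency
    animals <- paste0("animal_", 1:20)
    groups <- sample(rep(c("treatment", "control"), each = 10))
    data.frame(animal = animals, group = groups)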
No restrictions on your remixing, redistributing, or making derivative works. Give credit to the author, as required.
Your remixing, redistributing, or making derivative works comes with some restrictions, including how it is shared.
Your redistributing comes with some restrictions. Do not remix or make derivative works.
Most restrictive license type. Prohibits most uses, sharing, and any changes.
Copyrighted materials, available under Fair Use and the TEACH Act for US-based educators, or other custom arrangements. Go to the resource provider to see their individual restrictions.