Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
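To make the connection to the underlying statistics concrete, the following is a minimal sketch, not JASP's or BayesFactor's actual code, of the default Bayes factor for a one-sample t-test under the Jeffreys-Zellner-Siow (Cauchy) prior that these tools use by default, computed by numerical integration in Python with SciPy. The sample data are simulated purely for illustration.

```python
# Illustrative sketch only (not JASP/BayesFactor source code): default JZS
# Bayes factor for a one-sample t-test, following Rouder et al. (2009).
import numpy as np
from scipy import integrate, stats

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """BF10 for a one-sample t-test with a Cauchy(0, r) prior on effect size."""
    nu = n - 1
    # Marginal likelihood of the t statistic under H0 (effect size exactly 0)
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    # Under H1: delta | g ~ N(0, g), g ~ InverseGamma(1/2, r^2/2); integrate g out
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r**2 / (2 * g)))
    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0

rng = np.random.default_rng(1)
x = rng.normal(0.4, 1.0, size=30)            # simulated data for demonstration
res = stats.ttest_1samp(x, 0.0)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}, "
      f"BF10 = {jzs_bf10(res.statistic, len(x)):.2f}")
```

In JASP itself the same analysis is a point-and-click operation; the sketch is only meant to show what the reported Bayes factor quantifies: evidence for an effect relative to a point null.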
Experienced Registered Reports editors and reviewers come together to discuss the format and best practices for handling submissions. The panelists also share insights into what editors are looking for from reviewers, as well as practical guidelines for writing a Registered Report.

ABOUT THE PANELISTS:

Chris Chambers | Chris is a professor of cognitive neuroscience at Cardiff University, Chair of the Registered Reports Committee supported by the Center for Open Science, and one of the founders of Registered Reports. He has helped establish the Registered Reports format for over a dozen journals.

Anastasia Kiyonaga | Anastasia is a cognitive neuroscientist who uses converging behavioral, brain stimulation, and neuroimaging methods to probe memory and attention processes. She is currently a postdoctoral researcher with Mark D'Esposito in the Helen Wills Neuroscience Institute at the University of California, Berkeley. Before coming to Berkeley, she received her Ph.D. with Tobias Egner in the Duke Center for Cognitive Neuroscience. She will be an Assistant Professor in the Department of Cognitive Science at UC San Diego starting in January 2020.

Jason Scimeca | Jason is a cognitive neuroscientist at UC Berkeley. His research investigates the neural systems that support high-level cognitive processes such as executive function, working memory, and the flexible control of behavior. He completed his Ph.D. at Brown University with David Badre and is currently a postdoctoral researcher in Mark D'Esposito's Cognitive Neuroscience Lab.

Moderated by David Mellor, Director of Policy Initiatives for the Center for Open Science.
Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.

Methods: We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.

Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

Conclusions: All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals' willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT's mechanisms for enforcement, and novel strategies for research on methods and reporting.
Registered Reports: Peer review before results are known to align scientific values and practices.
Registered Reports is a publishing format used by over 250 journals that emphasizes the importance of the research question and the quality of methodology by conducting peer review prior to data collection. High quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology.
This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings.
This page includes information on Registered Reports, including readings, Participating Journals, Details & Workflow, Resources for Editors, Resources for Funders, FAQs, and Allied Initiatives.
Dashboards for data visualisation, such as those built with R Shiny or Tableau, allow interactive exploration of data by means of drop-down lists and checkboxes, with no coding required of the user. Such apps can be useful for both the data analyst and the public.
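For readers curious about what building such an app involves, here is a minimal sketch of the same idea in Python using ipywidgets in a Jupyter notebook (the entry itself refers to R Shiny and Tableau; this is only an analogue). The file name surveys.csv and its columns are hypothetical placeholders.

```python
# Minimal dashboard-style sketch with ipywidgets in a Jupyter notebook;
# "surveys.csv" and its columns are hypothetical placeholders.
import pandas as pd
from ipywidgets import interact

df = pd.read_csv("surveys.csv")

@interact(column=sorted(df.select_dtypes("number").columns),
          group=sorted(df.select_dtypes("object").columns))
def summarise(column, group):
    # Re-runs whenever the user changes a drop-down selection
    print(df.groupby(group)[column].describe())
```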
Breaking Continuous Flash Suppression (bCFS) has been adopted as an appealing means to study human visual awareness, but the literature is beclouded by inconsistent and contradictory results. Although previous reviews have focused chiefly on design pitfalls and instances of false reasoning, we show in this study that the choice of analysis pathway can have severe effects on the statistical output when applied to bCFS data. Using a representative dataset designed to address a specific controversy in the realm of language processing under bCFS, namely whether psycholinguistic variables affect access to awareness, we present a range of analysis methods based on real instances in the published literature, and indicate how each approach affects the perceived outcome. We provide a summary of published bCFS studies indicating the use of data transformation and trimming, and highlight that more compelling analysis methods are sparsely used in this field. We discuss potential interpretations based on both classical and more complex analyses, to highlight how these differ. We conclude that an adherence to openly available data and analysis pathways could provide a great benefit to this field, so that conclusions can be tested against multiple analyses as standard practices are updated.
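To illustrate the general point about analysis pathways (this is not the paper's own data or code), the toy simulation below applies several pipelines drawn from common practice (raw versus log-transformed suppression times, with and without outlier trimming) to the same simulated two-condition dataset and shows that the test output can differ across pipelines.

```python
# Toy simulation only: how transformation and trimming choices can change the
# outcome of a test on skewed, RT-like suppression times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated suppression times (s) for two hypothetical conditions
cond_a = rng.lognormal(mean=0.9, sigma=0.6, size=40)
cond_b = rng.lognormal(mean=1.1, sigma=0.6, size=40)

def trim(x, sd=2.5):
    """Drop observations more than `sd` standard deviations from the mean."""
    return x[np.abs(x - x.mean()) < sd * x.std()]

pipelines = {
    "raw":           (cond_a, cond_b),
    "log":           (np.log(cond_a), np.log(cond_b)),
    "trimmed":       (trim(cond_a), trim(cond_b)),
    "log + trimmed": (trim(np.log(cond_a)), trim(np.log(cond_b))),
}
for name, (a, b) in pipelines.items():
    res = stats.ttest_ind(a, b)
    print(f"{name:>13}: t = {res.statistic:5.2f}, p = {res.pvalue:.3f}")
```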
Many clinical trials conducted by academic organizations are not published, or are not published completely. Following the US Food and Drug Administration Amendments Act of 2007, "The Final Rule" (compliance date April 18, 2017) and a National Institutes of Health policy clarified and expanded trial registration and results reporting requirements. We sought to identify policies, procedures, and resources to support trial registration and reporting at academic organizations.

Methods: We conducted an online survey from November 21, 2016 to March 1, 2017, before organizations were expected to comply with The Final Rule. We included active Protocol Registration and Results System (PRS) accounts classified by ClinicalTrials.gov as a "University/Organization" in the USA. PRS administrators manage information on ClinicalTrials.gov. We invited one PRS administrator to complete the survey for each organization account, which was the unit of analysis.

Results: Eligible organization accounts (N = 783) included 47,701 records (e.g., studies) in August 2016. Participating organizations (366/783; 47%) included 40,351/47,701 (85%) records. Compared with other organizations, Clinical and Translational Science Award (CTSA) holders, cancer centers, and large organizations were more likely to participate. A minority of accounts have a registration (156/366; 43%) or results reporting policy (129/366; 35%). Of those with policies, 15/156 (11%) and 49/156 (35%) reported that trials must be registered before institutional review board approval is granted or before beginning enrollment, respectively. Few organizations use computer software to monitor compliance (68/366; 19%). One organization had penalized an investigator for non-compliance. Among the 287/366 (78%) accounts reporting that they allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent staff is 0.08 (interquartile range = 0.02–0.25). Because of non-response and social desirability, this could be a "best case" scenario.

Conclusions: Before the compliance date for The Final Rule, some academic organizations had policies and resources that facilitate clinical trial registration and reporting. Most organizations appear to be unprepared to meet the new requirements. Organizations could enact the following: adopt policies that require trial registration and reporting, allocate resources (e.g., staff, software) to support registration and reporting, and ensure there are consequences for investigators who do not follow standards for clinical research.
Clinical trial registries can improve the validity of trial results by facilitating comparisons between prospectively planned and reported outcomes. Previous reports on the frequency of planned and reported outcome inconsistencies have reported widely discrepant results. It is unknown whether these discrepancies are due to differences between the included trials, or to methodological differences between studies. We aimed to systematically review the prevalence and nature of discrepancies between registered and published outcomes among clinical trials.

Methods: We searched MEDLINE via PubMed, EMBASE, and CINAHL, and checked references of included publications to identify studies that compared trial outcomes as documented in a publicly accessible clinical trials registry with published trial outcomes. Two authors independently selected eligible studies and performed data extraction. We present summary data rather than pooled analyses owing to methodological heterogeneity among the included studies.

Results: Twenty-seven studies were eligible for inclusion. The overall risk of bias among included studies was moderate to high. These studies assessed outcome agreement for a median of 65 individual trials (interquartile range [IQR] 25–110). The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31 %; substantial variability in the prevalence of these primary outcome discrepancies was observed among the included studies (range 0 % (0/66) to 100 % (1/1), IQR 17–45 %). We found less variability within the subset of studies that assessed the agreement between prospectively registered outcomes and published outcomes, among which the median observed discrepancy rate was 41 % (range 30 % (13/43) to 100 % (1/1), IQR 33–48 %). The nature of observed primary outcome discrepancies also varied substantially between included studies. Among the studies providing detailed descriptions of these outcome discrepancies, a median of 13 % of trials introduced a new, unregistered outcome in the published manuscript (IQR 5–16 %).

Conclusions: Discrepancies between registered and published outcomes of clinical trials are common regardless of funding mechanism or the journals in which they are published. Consistent reporting of prospectively defined outcomes and consistent utilization of registry data during the peer review process may improve the validity of clinical trial publications.
This webinar (recorded Sept. 27, 2017) introduces how to connect other services as add-ons to projects on the Open Science Framework (OSF; https://osf.io). Connecting services to your OSF projects via add-ons enables you to pull together the different parts of your research efforts without having to switch away from tools and workflows you wish to continue using. The OSF is a free, open source web application built to help researchers manage their workflows. The OSF is part collaboration tool, part version control software, and part data archive. The OSF connects to popular tools researchers already use, like Dropbox, Box, Github and Mendeley, to streamline workflows and increase efficiency.
This video will go over three issues that can arise when scientific studies have low statistical power. All materials shown in the video, as well as the content from our other videos, can be found here: https://osf.io/7gqsi/
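As an illustration of the kind of problem the video discusses (the parameter values here are arbitrary assumptions, not taken from the video), the short simulation below shows that a small study of a modest true effect is usually non-significant, and that the effect-size estimates which do reach significance are inflated.

```python
# Illustrative simulation: low statistical power and the "winner's curse".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n_per_group, alpha, n_sims = 0.3, 20, 0.05, 5000

sig_d = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_d, 1.0, n_per_group)
    if stats.ttest_ind(b, a).pvalue < alpha:
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        sig_d.append((b.mean() - a.mean()) / pooled_sd)

print(f"empirical power:              {len(sig_d) / n_sims:.2f}")   # low (~0.15 here)
print(f"mean significant effect size: {np.mean(sig_d):.2f}")        # inflated above 0.3
```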
A collection of course syllabi from any discipline featuring content to examine or improve open and reproducible research practices. Email to join project, access articles, or add other syllabi.
Curate Science is a unified curation system and platform to verify that research is transparent and credible. It will allow researchers, journals, universities, funders, teachers, journalists, and the general public to ensure:
- Transparency: research meets minimum transparency standards appropriate to the article type and employed methodologies.
- Credibility: follow-up scrutiny is linked to its parent paper, including critical commentaries, reproducibility/robustness re-analyses, and new sample replications.
We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness (i.e., publication record and career success). Here, we develop an optimality model that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study. We show that, for parameter values derived from the scientific literature, researchers acting to maximise their fitness should spend most of their effort seeking novel results and conduct small studies that have only 10%–40% statistical power. As a result, half of the studies they publish will report erroneous conclusions. Current incentive structures are in conflict with maximising the scientific value of research; we suggest ways that the scientific ecosystem could be improved.
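The paper's optimality model is more elaborate, but a simple Ioannidis-style positive predictive value calculation conveys why a portfolio of low-powered exploratory studies yields so many erroneous published findings. The prior probability and other values below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope sketch (not the paper's model): share of significant
# findings that are true, as a function of power, alpha, and the prior
# probability that a tested novel hypothesis is true. Values are illustrative.
def ppv(power, alpha, prior_true):
    true_pos = power * prior_true          # true effects correctly detected
    false_pos = alpha * (1.0 - prior_true) # null effects declared significant
    return true_pos / (true_pos + false_pos)

for power in (0.2, 0.4, 0.8):
    v = ppv(power=power, alpha=0.05, prior_true=0.1)
    print(f"power = {power:.1f}: {v:.0%} of significant findings true, {1 - v:.0%} false")
```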
Background: All clinical research benefits from transparency and validity. The transparency and validity of studies may be increased by prospective registration of protocols and by publication of statistical analysis plans (SAPs) before data have been accessed, to distinguish data-driven analyses from pre-planned analyses.

Main message: As in clinical trials, SAPs for observational studies can increase the transparency and validity of findings. We appraised the applicability of recently developed guidelines for the content of SAPs for clinical trials to SAPs for observational studies. Of the 32 items recommended for a SAP for a clinical trial, 30 items (94%) were identically applicable to a SAP for our observational study. Power estimations and adjustments for multiplicity are equally important in observational studies and clinical trials, as both types of studies usually address multiple hypotheses. Only two clinical trial items (6%), concerning randomisation and the definition of adherence to the intervention, did not seem applicable to observational studies. We suggest including one new item specifically applicable to observational studies, describing how adjustment for possible confounders will be handled in the analyses.

Conclusion: With only a few amendments, the guidelines for the SAP of a clinical trial can be applied to a SAP for an observational study. We suggest that SAPs should be required for observational studies as well as clinical trials to increase their transparency and validity.
Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in one and a half days (~ 10 hours). They start with some basic information about Python syntax and the Jupyter notebook interface, move through importing CSV files, using the pandas package to work with data frames, and calculating summary information from a data frame, and finish with a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.
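As a rough preview of what the lessons cover, a condensed sketch is shown below; the file name surveys.csv and the column names are hypothetical placeholders, not part of the official lesson materials.

```python
# Condensed preview of the workflow the lessons cover; "surveys.csv" and its
# columns ("species", "weight") are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

surveys = pd.read_csv("surveys.csv")        # import a CSV file into a data frame
print(surveys.head())                       # inspect the first rows
print(surveys.dtypes)                       # column names and types

# Summary information grouped by a categorical column
summary = surveys.groupby("species")["weight"].agg(["count", "mean", "std"])
print(summary)

# A brief introduction to plotting
summary["mean"].plot(kind="bar", title="mean weight by species")
plt.show()
```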
Data Carpentry lesson from the Ecology curriculum on how to analyse and visualise ecological data in R. Data Carpentry's aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain. The lessons below were designed for those interested in working with ecology data in R. This is an introduction to R designed for participants with no programming experience. These lessons can be taught in a day (~ 6 hours). They start with some basic information about R syntax and the RStudio interface, move through importing CSV files, the structure of data frames, dealing with factors, adding and removing rows and columns, and calculating summary statistics from a data frame, and finish with a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from R.
Python is a general purpose programming language that is useful for writing scripts to work effectively and reproducibly with data. This is an introduction to Python designed for participants with no programming experience. These lessons can be taught in a day (~ 6 hours). They start with some basic information about Python syntax and the Jupyter notebook interface, move through importing CSV files, using the pandas package to work with data frames, and calculating summary information from a data frame, and finish with a brief introduction to plotting. The last lesson demonstrates how to work with databases directly from Python.
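For the final lesson on databases, a minimal sketch of querying a database directly from Python is shown below, using the standard-library sqlite3 module together with pandas; the database file portal.sqlite and the table surveys are hypothetical placeholders.

```python
# Minimal sketch of querying a database directly from Python; the file
# "portal.sqlite" and table "surveys" are hypothetical placeholders.
import sqlite3
import pandas as pd

con = sqlite3.connect("portal.sqlite")

# Pull a query result straight into a pandas data frame
df = pd.read_sql_query(
    "SELECT species_id, AVG(weight) AS mean_weight "
    "FROM surveys GROUP BY species_id",
    con,
)
print(df.head())
con.close()
```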
Data Carpentry trains researchers in the core data skills for efficient, shareable, and reproducible research practices. We run accessible, inclusive training workshops; teach openly available, high-quality, domain-tailored lessons; and foster an active, inclusive, diverse instructor community that promotes and models reproducible research as a community norm.
No restrictions on your remixing, redistributing, or making derivative works. Give credit to the author, as required.
Your remixing, redistributing, or making derivative works comes with some restrictions, including how it is shared.
Your redistributing comes with some restrictions. Do not remix or make derivative works.
Most restrictive license type. Prohibits most uses, sharing, and any changes.
Copyrighted materials, available under Fair Use and the TEACH Act for US-based educators, or other custom arrangements. Go to the resource provider to see their individual restrictions.