In 2016 the LIS-Bibliometrics Forum commissioned the development of a set of bibliometric competencies (2017 Model), available at https://thebibliomagician.wordpress.com/2017-competencies-archived/. The work, sponsored by a small research grant from Elsevier Research Intelligence Division, was led by Dr. Andrew Cox at the University of Sheffield, and Dr. Sabrina Petersohn of the Bergische Universität Wuppertal, Germany. The aim of the competency statements was to ensure that bibliometric practitioners were equipped to do their work responsibly and well.
The Competency Model was updated in July 2021 and includes a colour gradient to reflect the Levels and how they build upon one another. In particular, the 2021 competencies can help:
To identify skills gaps
To support progression through career stages for practitioners in the field of bibliometrics
To prepare job descriptions
The paper underpinning this work is available here: http://journals.sagepub.com/doi/abs/10.1177/0961000617728111. The competencies are intended to be a living document and will be reviewed over time.
Created as a supplement for the Impact Measurement collection of the Scholarly Communication Notebook (SCN) to describe some of the core literature in the field, as well as resources that cannot be included on the SCN because they are not openly licensed but are free to read.
This annotated bibliography is separated into three sections: Peer-reviewed scholarly articles; Blog posts, initiatives, and guides; and Resources for further education and professional development. The first section is intended to help practitioners in the field of research assessment and bibliometrics to understand high-level core concepts in the field. The second section offers resources that are more applicable to practice. The final section includes links to blogs, communities, discussion lists, paid and free educational courses, and archived conferences, so that practitioners and professionals can stay abreast of emerging trends, improve their skills, and find community. Most of these resources could not be included on the Scholarly Communication Notebook because they are not openly licensed. However, all resources in this bibliography are freely available to access and read.
Virginia Tech's Open Access Week 2020 keynote speaker, Elizabeth (Lizzie) Gadd, Research Policy Manager (Publications) at Loughborough University in the UK, gives a talk about how what we reward through recruitment, promotion and tenure processes is not always what we actually value about research activity. The talk explores how we can pursue value-led evaluations - and how we can persuade senior leaders of their benefits.
The keynote talk is followed by a panel discussion with faculty members at Virginia Tech: Thomas Ewing (Associate Dean for Graduate Studies and Research and Professor of History), Carla Finkielstein (Associate Professor of Biological Sciences), Bikrum Gill (Assistant Professor of Political Science), and Sylvester Johnson (Professor and Director of the Center for Humanities). The panel is moderated by Tyler Walters (Dean, University Libraries).
The slides from this presentation are in Loughborough University's repository under a CC BY-NC-SA 4.0 license. https://repository.lboro.ac.uk/articles/presentation/Counting_what_counts_in_recruitment_promotion_and_tenure/13113860
We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
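The power figures in this abstract come from the authors' own analysis of the literature; as a generic illustration of how statistical power depends on effect size and sample size, the minimal sketch below computes two-sample t-test power in Python. The group size of 20 and the 0.05 alpha are assumptions chosen purely for illustration.

```python
# Power of a two-sided, two-sample t-test for a standardized effect size (Cohen's d).
import numpy as np
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)       # noncentrality parameter under the alternative
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Power for conventional "small", "medium", and "large" effects with 20 subjects per group
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label:6s} (d = {d}): power = {two_sample_power(d, 20):.2f}")
```

With samples this small, even large effects are missed more than a quarter of the time, which mirrors the pattern the paper reports.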
A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
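The paper's text-mining and meta-analytic methods are not reproduced here, but the core idea of testing for p-hacking can be sketched simply: look for an excess of p-values sitting just below the 0.05 threshold. The bin boundaries and the ten p-values below are invented for illustration and differ from the authors' exact procedure.

```python
# Illustrative check for a pile-up of p-values just below the significance threshold.
from scipy.stats import binomtest

# Hypothetical significant p-values extracted from a set of studies
p_values = [0.049, 0.031, 0.012, 0.047, 0.046, 0.008, 0.044, 0.021, 0.048, 0.038]

upper = sum(0.045 < p < 0.050 for p in p_values)   # just below the threshold
lower = sum(0.040 < p <= 0.045 for p in p_values)  # slightly further below

# Genuine effects tend to produce more very small p-values (right skew);
# an excess in the upper bin is the signature associated with p-hacking.
result = binomtest(upper, upper + lower, p=0.5, alternative="greater")
print(f"upper bin: {upper}, lower bin: {lower}, one-sided p = {result.pvalue:.3f}")
```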
Get an overview of journal-level bibliometrics such as Journal Impact Factor, CiteScore, Eigenfactor Score, and others. Find out how they are calculated and where they can be found! Recommended for faculty, graduate students, postdoctoral researchers, or anyone interested in scholarly publications.
For a self-graded quiz and Certificate of Completion, go to https://bit.ly/scs-quiz1
More information about journal-level metrics: https://bit.ly/scs-impact-find
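As a companion to the video, the arithmetic behind a two-year impact factor is simple enough to show directly. The counts below are invented; the official figure is computed by Clarivate from Web of Science data.

```python
# A journal's year-Y impact factor: citations received in year Y to items it published
# in years Y-1 and Y-2, divided by the number of citable items published in those years.
citations_2023_to_2021_items = 410
citations_2023_to_2022_items = 530
citable_items_2021 = 180
citable_items_2022 = 200

jif_2023 = (citations_2023_to_2021_items + citations_2023_to_2022_items) / (
    citable_items_2021 + citable_items_2022
)
print(f"Illustrative 2023 Journal Impact Factor: {jif_2023:.2f}")  # -> 2.47
```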
This review was commissioned by the joint UK higher education (HE) funding bodies as part of the Future Research Assessment Programme (FRAP). It revisits the findings of the 2015 review The Metric Tide to take a fresh look at the use of indicators in research management and assessment.
While this review feeds into the larger FRAP process, the authors have taken full advantage of their independence and sought to stimulate informed and robust discussion about the options and opportunities of future REF exercises. The report should be read in that spirit: as an input to ongoing FRAP deliberations, rather than a reflection of their likely or eventual conclusions.
The report is written in three sections. Section 1 plots the development of the responsible research assessment agenda since 2015 with a focus on the impact of The Metric Tide review and progress against its recommendations. Section 2 revisits the potential use of metrics and indicators in any future REF exercise, and proposes an increased uptake of ‘data for good’. Section 3 considers opportunities to further support the roll-out of responsible research assessment policies and practices across the UK HE sector. Appendices include an overview of progress against the recommendations of The Metric Tide and a literature review.
The programme aims to equip learners with the skills and knowledge required to engage in the use of a range of metrics around research impact and gain understanding of the research landscape. This is a flexible programme – you can do as much or as little as suits you. While some Things are interlinked, each of the Things is designed to be completed separately, in any order and at any level of complexity. Choose your own adventure!
There are three levels for each Thing:
Getting started is for you if you are just beginning to learn about each topic
Learn more is for you if you know a bit but want to know more
Challenge me is often more in-depth or assumes that you are familiar with at least the basics of each topic
What does it mean to have meaningful metrics in today’s complex higher education landscape? With a foreword by Heather Piwowar and Jason Priem, this highly engaging and activity-laden book serves to introduce readers to the fast-paced world of research metrics from the unique perspective of academic librarians and LIS practitioners. Starting with the essential histories of bibliometrics and altmetrics, and continuing with in-depth descriptions of the core tools and emerging issues at stake in the future of both fields, Meaningful Metrics is a convenient all-in-one resource that is designed to be used by a range of readers, from those with little to no background on the subject to those looking to become movers and shakers in the current scholarly metrics movement. Authors Borchardt and Roemer offer tips, tricks, and real-world examples that illustrate how librarians can support the successful adoption of research metrics, whether in their institutions or across academia as a whole.
This resource links to the full course (all 13 weeks of modules) on the Internet Archive. The video lectures for the courses are also available on YouTube at https://www.youtube.com/watch?v=maRP_Wvc4eY&list=PLWYwQdaelu4en5MZ0bbg-rSpcfb64O_rd
This series was designed and taught by Chris Belter, Ya-Ling Lu, and Candace Norton at the NIH Library. It was originally presented in weekly installments to NIH Library staff from January-May 2019 and adapted for web viewing later the same year.
The goal of the series is to provide free, on-demand training on how we do bibliometrics for research evaluation. Although demand for bibliometric indicators and analyses in research evaluation is growing, broadly available and easily accessible training on how to provide those analyses is scarce. We have been providing bibliometric services for years, and we wanted to share our experience with others to facilitate the broader adoption of accurate and responsible bibliometric practice in research assessment. We hope this series acts as a springboard for others to get started with bibliometrics so that they feel more comfortable moving beyond this series on their own.
The training series consists of 13 individual courses, organized into 7 thematic areas, with links to each course provided on the series site. Each course includes a training video with audio transcription, supplemental reading to reinforce the concepts introduced in the course, and optional practice exercises.
We recommend that the courses be viewed in the order in which they are listed. The courses are listed in the same order as the analyses that we typically perform to produce one of our standard reports. Many of the courses also build on concepts introduced in previous courses, and may be difficult to understand if viewed out of order. We also recommend that the series be taken over the course of 13 consecutive weeks, viewing one course per week. A lot is covered in these courses, so it is a good idea to take your time with them to make sure you understand each course before moving on to the next. We also recommend you try to complete the practice exercises that accompany many of the courses, because the best way to learn bibliometrics is by doing it.
Objective: To investigate the replication validity of biomedical association studies covered by newspapers. Methods: We used a database of 4723 primary studies included in 306 meta-analysis articles. These studies associated a risk factor with a disease in three biomedical domains: psychiatry, neurology and four somatic diseases. They were classified into a lifestyle category (e.g. smoking) and a non-lifestyle category (e.g. genetic risk). Using the database Dow Jones Factiva, we investigated the newspaper coverage of each study. Their replication validity was assessed using a comparison with their corresponding meta-analyses. Results: Among the 5029 articles of our database, 156 primary studies (of which 63 were lifestyle studies) and 5 meta-analysis articles were reported in 1561 newspaper articles. The percentage of covered studies and the number of newspaper articles per study strongly increased with the impact factor of the journal that published each scientific study. Newspapers almost equally covered initial (5/39, 12.8%) and subsequent (58/600, 9.7%) lifestyle studies. In contrast, initial non-lifestyle studies were covered more often (48/366, 13.1%) than subsequent ones (45/3718, 1.2%). Newspapers never covered initial studies reporting null findings and rarely reported subsequent null observations. Only 48.7% of the 156 studies reported by newspapers were confirmed by the corresponding meta-analyses. Initial non-lifestyle studies were less often confirmed (16/48) than subsequent ones (29/45) and than lifestyle studies (31/63). Psychiatric studies covered by newspapers were less often confirmed (10/38) than the neurological (26/41) or somatic (40/77) ones. This is correlated to an even larger coverage of initial studies in psychiatry. Whereas 234 newspaper articles covered the 35 initial studies that were later disconfirmed, only four press articles covered a subsequent null finding and mentioned the refutation of an initial claim. Conclusion: Journalists preferentially cover initial findings although they are often contradicted by meta-analyses and rarely inform the public when they are disconfirmed.
Background: There is increasing interest to make primary data from published research publicly available. We aimed to assess the current status of making research data available in highly-cited journals across the scientific literature. Methods and Results: We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factor. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, there was wide variation in journal requirements, ranging from requiring the sharing of all primary data related to the research to just including a statement in the published manuscript that data can be available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers that were covered by some data availability policy, 208 papers (59%) did not fully adhere to the data availability instructions of the journals they were published in, most commonly (73%) by not publicly depositing microarray data. The other 143 papers that adhered to the data availability instructions did so by publicly depositing only the specific data type as required, making a statement of willingness to share, or actually sharing all the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available. Conclusion: A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policies, or do not adhere to the data availability instructions in their respective journals. This empiric evaluation highlights opportunities for improvement.
These research metric source cards provide the citation for a scholarly work and the research metrics of that work, which can include the Altmetric Attention Score, scholarly citation counts from different data sources, and field-weighted citation indicators. Abstracts and important context for some of the metrics are also included, e.g., citation statements and the titles of select online mentions (such as news and blog article titles, Wikipedia pages, and patent citations), along with the context behind those online mentions. There are four printable source cards (front and back) followed by activity questions for each source card. These cards help students engage with and interrogate the meaning behind the bibliometrics and altmetrics of specific scholarly works, as well as evaluate the credibility, authority, and reliability of the scholarly work itself.
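For instructors who want to show where a field-weighted indicator comes from, here is a deliberately simplified sketch: a work's citation count divided by the average citations of comparable works (same field, year, and document type). Real indicators such as Scopus's FWCI use much larger, carefully defined benchmark sets; all numbers here are invented.

```python
# Simplified field-weighted citation indicator for one work.
paper_citations = 24
benchmark_citations = [3, 7, 15, 5, 10, 8, 12, 4, 6, 10]  # comparable works (same field/year/type)

expected = sum(benchmark_citations) / len(benchmark_citations)
fwci = paper_citations / expected
print(f"Field-weighted citation impact = {fwci:.1f}")  # values above 1 are above the field average
```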
The reliability of experimental findings depends on the rigour of experimental design. Here we show limited reporting of measures to reduce the risk of bias in a random sample of life sciences publications, significantly lower reporting of randomisation in work published in journals of high impact, and very limited reporting of measures to reduce the risk of bias in publications from leading United Kingdom institutions. Ascertainment of differences between institutions might serve both as a measure of research quality and as a tool for institutional efforts to improve research quality.
The Declaration on Research Assessment (DORA) recognizes the need to improve the ways in which the outputs of scholarly research are evaluated. The declaration was developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco. It has become a worldwide initiative covering all scholarly disciplines and all key stakeholders including funders, publishers, professional societies, institutions, and researchers. The DORA initiative encourages all individuals and organizations who are interested in developing and promoting best practice in the assessment of scholarly research to sign DORA.
Other resources are available on their website, such as case studies of universities and national consortia that demonstrate key elements of institutional change to improve academic career success.
By combining a range of research outputs, including articles, grants, patents, and clinical trials, Dimensions is a state-of-the-art academic database that aims to enhance research discovery and evaluation. This study examines the search methods used in Dimensions, highlighting its user-friendly interface, comprehensive publishing database, and advanced contextual search capabilities. Through features that connect related outputs and provide analytical views that gather relevant data, the platform allows users to explore the relationships among research entities. Additionally, Dimensions offers both free and subscription-based options; the latter offers more sophisticated tools for in-depth research. Dimensions helps researchers and evaluators make well-informed decisions in academic contexts by encouraging a comprehensive grasp of the research ecosystem.
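For programmatic access, Dimensions offers an API alongside the web interface. The sketch below uses the dimcli Python client and assumes you hold an API key with the necessary subscription; the query string and returned fields follow the Dimensions Search Language but are chosen here purely for illustration.

```python
# Minimal Dimensions API query via the dimcli client (requires an API key / subscription).
import dimcli

dimcli.login(key="YOUR_API_KEY", endpoint="https://app.dimensions.ai")
dsl = dimcli.Dsl()

result = dsl.query(
    'search publications for "responsible metrics" '
    'return publications[title+doi+year+times_cited] limit 5'
)
for pub in result.publications:
    print(pub.get("year"), pub.get("times_cited"), pub.get("title"))
```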
Background: Sharing research data provides benefit to the general scientific community, but the benefit is less obvious for the investigator who makes his or her data available. Principal Findings: We examined the citation history of 85 cancer microarray clinical trial publications with respect to the availability of their data. The 48% of trials with publicly available microarray data received 85% of the aggregate citations. Publicly available data was significantly (p = 0.006) associated with a 69% increase in citations, independently of journal impact factor, date of publication, and author country of origin using linear regression. Significance: This correlation between publicly available data and increased literature impact may further motivate investigators to share their detailed research data.
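The abstract reports the data-sharing effect as a 69% citation increase from a linear regression. The authors' exact specification is not reproduced here; the sketch below shows one common way such a percentage is read off a regression on log-transformed citation counts. The data are simulated and every variable name is an assumption.

```python
# Converting a regression coefficient on log(citations) into a percentage change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "data_shared": rng.integers(0, 2, n),
    "impact_factor": rng.uniform(1, 30, n),
})
# Simulate citation counts with a built-in boost for shared data
df["citations"] = np.exp(1.5 + 0.5 * df["data_shared"] + 0.05 * df["impact_factor"]
                         + rng.normal(0, 0.5, n)).round()

model = smf.ols("np.log1p(citations) ~ data_shared + impact_factor", data=df).fit()
beta = model.params["data_shared"]
print(f"Estimated citation change for shared data: {np.expm1(beta):+.0%}")
```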
Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher and find higher impact journals more likely to have open data and code policies and scientific societies more likely to have open data and code policies than commercial publishers. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.
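The predictive model described here relates policy adoption to impact factor and publisher type. As a loosely analogous sketch (not the authors' model), a logistic regression on simulated journal-level data might look like this; all variable names and numbers are invented.

```python
# Sketch: probability that a journal has an open-data policy, modelled from
# impact factor and a society-vs-commercial publisher flag. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 170
journals = pd.DataFrame({
    "impact_factor": rng.uniform(1, 35, n),
    "society_publisher": rng.integers(0, 2, n),
})
logit_p = -2 + 0.08 * journals["impact_factor"] + 0.9 * journals["society_publisher"]
journals["has_data_policy"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("has_data_policy ~ impact_factor + society_publisher",
                  data=journals).fit(disp=False)
print(np.exp(model.params))  # odds ratios: values above 1 indicate higher odds of a policy
```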
Presentation from a University of York Library workshop on bibliometrics. The session covers how published research outputs are measured at the article, author and journal level, with discussion of the limitations of a bibliometric approach.
Science advances through rich, scholarly discussion. More than ever before, digital tools allow us to take that dialogue online. To chart a new future for open publishing, we must consider alternatives to the core features of the legacy print publishing system, such as an access paywall and editorial selection before publication. Although journals have their strengths, the traditional approach of selecting articles before publication (“curate first, publish second”) forces a focus on “getting into the right journals,” which can delay dissemination of scientific work, create opportunity costs for pushing science forward, and promote undesirable behaviors among scientists and the institutions that evaluate them. We believe that a “publish first, curate second” approach with the following features would be a strong alternative: authors decide when and what to publish; peer review reports are published, either anonymously or with attribution; and curation occurs after publication, incorporating community feedback and expert judgment to select articles for target audiences and to evaluate whether scientific work has stood the test of time. These proposed changes could optimize publishing practices for the digital age, emphasizing transparency, peer-mediated improvement, and post-publication appraisal of scientific articles.
No restrictions on your remixing, redistributing, or making derivative works. Give credit to the author, as required.
Your remixing, redistributing, or making derivative works comes with some restrictions, including how it is shared.
Your redistributing comes with some restrictions. Do not remix or make derivative works.
Most restrictive license type. Prohibits most uses, sharing, and any changes.
Copyrighted materials, available under Fair Use and the TEACH Act for US-based educators, or other custom arrangements. Go to the resource provider to see their individual restrictions.