This is not a comprehensive list of resources, but a selection of topical resources that may be of interest. If you are aware of other resources that you feel are important to highlight, please forward them to us at

Equity, Diversity and Inclusion

“Science has a racism problem. And it is not limited to scientific discoveries and their attendant usage. The scientific establishment, scientific education, and the metrics used to define scientific success have a racism problem as well.”

– From

  • Erosheva EA et al. (2020). NIH peer review: Criterion scores completely account for racial disparities in overall impact scores. Science Advances, 6:eaaz4868. Examined the full set of research project grant (R01) applications submitted by black and white principal investigators and reviewed by the US National Institutes of Health’s Center for Scientific Review for the years 2014–2016. The authors found that the overall award rate for black applications was 55% of that for white applications (10.2% versus 18.5%), resulting in a funding gap of 45%. Black investigators, on average, received worse preliminary scores on all five criteria—Significance, Investigator(s), Innovation, Approach, and Environment—even after matching on key variables including career stage, gender, degree type, and area of science.
  • Guthrie S et al. (2019). Measuring bias, burden and conservatism in research funding. F1000Research, 8:851. This article highlights innovations in grant funding approaches based on a rapid review of evidence and key informant interviews. Table 11 provides a conceptual mapping of the grant funding process and potential implications for bias, burden, and conservatism.
  • Tamblyn R et al. (2018). Assessment of potential bias in research grant peer review in Canada. CMAJ, 190(16):E489–E499. Examined all grant applications to the CIHR investigator-initiated open operating grant competition between 2012 and 2014. There was evidence of potential systematic bias in peer review that penalized female applicants, and this bias was associated with peer reviewer characteristics; it was of sufficient magnitude to change application scores from fundable to nonfundable.
  • Witteman HO et al. (2019). Are gender gaps due to evaluations of the applicant or the science? A natural experiment at a national funding agency. Lancet, 393:531–4. Examined application success among grant applications from principal investigators in the investigator-initiated CIHR grant programs from 2011–2016. The authors found that gender gaps in grant funding were attributable to less favourable assessments of women as principal investigators rather than to the quality of their proposed research.
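The funding-gap figures reported by Erosheva et al. above follow directly from the two award rates; a quick check of the arithmetic (rates taken from the summary above):

```python
# Award rates reported by Erosheva et al. (2020) for R01 applications, 2014-2016.
black_award_rate = 0.102   # black principal investigators
white_award_rate = 0.185   # white principal investigators

# Relative award rate: black applications were funded at ~55% of the white rate.
relative_rate = black_award_rate / white_award_rate
print(f"relative award rate: {relative_rate:.0%}")   # 55%

# The funding gap is the complement of the relative award rate.
funding_gap = 1 - relative_rate
print(f"funding gap: {funding_gap:.0%}")             # 45%
```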

Recognition and Funding Decisions

  • Academic Recognition of Team Science: How to Optimize the Canadian Academic System, published in 2017, is a comprehensive report from the Canadian Academy of Health Sciences that proposes 12 recommendations addressing cultural and behavioural issues as well as measurement, assessment, and evaluation issues in team science.
  • David Moher from the Ottawa Health Research Institute and other leaders in the field synthesized six principles for assessing scientists and associated research and policy implications, which culminated from an expert panel workshop conducted in Washington DC in January 2017. The principles are well-described in Table 2 of the paper: Moher D et al. (2018). Assessing scientists for hiring, promotion, and tenure. PLoS Biology, 16(3):e2004089.
  • Science Europe’s 2020 Position Statement and Recommendations on Research Assessment Processes identifies 27 recommendations related to 1) approaches used to assess and select proposals and researchers; 2) challenges faced during assessment processes; 3) current developments in the assessment of proposals and researchers.
  • Kerzendorf WE et al. (2020). Distributed peer review enhanced with natural language processing and machine learning. Nature Astronomy, 4:711–7. Describes the three features of the “distributed peer review” approach: 1) when scientists submit a proposal for evaluation, they are first asked to review several of their competitors’ proposals, distributing the review workload across applicants; 2) using machine learning, funding agencies match reviewers with proposals in fields in which they are experts, reducing human bias by taking self-reported expertise out of the equation; 3) a feedback system lets proposers judge whether the reviews they received were helpful, in the hope that scientists who consistently provide constructive criticism will be recognized for their contribution.
  • Liu M et al. (2020). The acceptability of using a lottery to allocate research funding: a survey of applicants. Research Integrity and Peer Review, 5:3. Describes applicants’ views of the Explorer Grant program, a program offered by the Health Research Council of New Zealand to support early-stage ‘transformative’ research. Anonymized applications are initially reviewed for fundability; those deemed fundable are assigned a random number and then selected up to the available budget for the program. Applicants may re-apply regardless of previous outcomes. [Application guidelines for the 2020 program are available here:]
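The lottery mechanism described by Liu et al. is simple to model. A minimal sketch, assuming equal-cost awards and illustrative application IDs and budget figures (none of these specifics are from the HRC programme): only applications already judged fundable enter the draw, and winners are selected in random order until the budget is exhausted.

```python
import random

def lottery_allocate(applications, budget, seed=None):
    """Allocate funding by lottery, in the spirit of the HRC Explorer Grant
    model: only applications already deemed fundable enter the draw, and
    winners are taken in random order until the budget runs out.

    `applications` is a list of (app_id, fundable, cost) tuples.
    """
    rng = random.Random(seed)
    pool = [(app_id, cost) for app_id, fundable, cost in applications if fundable]
    rng.shuffle(pool)  # random draw order stands in for assigning random numbers

    funded, remaining = [], budget
    for app_id, cost in pool:
        if cost <= remaining:
            funded.append(app_id)
            remaining -= cost
    return funded

# Illustrative example: three fundable applications, budget covers only two.
apps = [("A", True, 150_000), ("B", False, 150_000),
        ("C", True, 150_000), ("D", True, 150_000)]
print(lottery_allocate(apps, budget=300_000, seed=1))
```

Application “B” can never be funded (it was screened out as non-fundable), while which two of the remaining three receive funding depends only on the random draw.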

Rigor and Reproducibility in Science

  • The journal Nature has a curated series of articles on the challenges of reproducibility, which can be accessed at
  • The “Reproducibility Project: Cancer Biology” initiative independently replicates selected results from high-profile papers in the field of cancer biology. This project is a collaboration between the Center for Open Science and Science Exchange. For more information, see
  • Wass MN et al. (2019). Understanding of researcher behavior is required to improve data reliability. GigaScience, 8(5): giz017. This paper provides a succinct review of data reproducibility and the gaps in understanding this issue and how best to address it. As the title of the paper indicates, the authors make a pitch for improved knowledge of researcher behaviour to inform development of ways to incentivize researchers to adhere to the highest standards in their research pursuits.

Measuring/Assessing Research Impact

Trainee Support

  • University of Florida’s ReTOOL program: The long-term goal of the ReTOOL program is to increase the pool of minority prostate cancer researchers in Florida. It targets minority undergraduate and graduate students from diverse disciplines.
  • Hurst JH et al. (2019). Cultivating Research Skills During Clinical Training to Promote Pediatric-Scientist Development. Pediatrics, 144(2): e20190745. Describes Duke University’s Pediatric Research Scholars Program for Physician-Scientist Development (DPRS).
  • Wortman-Wunder E, Wefes I. (2020). Scientific Writing Workshop Improves Confidence in Critical Writing Skills among Trainees in the Biomedical Sciences. Journal of Microbiology & Biology Education, 21(1): 21.1.5. An approach used at the University of Colorado Denver to enhance written communication skills among pre- and postdoctoral trainees.
  • Yin C et al. (2017). Training the next generation of Canadian Clinician-Scientists: charting a path to success. Clinical and Investigative Medicine (Online), 40(2):E95–E101. Proposes strategies to improve Canada’s MD-PhD programs.