Over the past few years, criticism of the narrowly metrics-focused evaluation of scientists has grown, making it clear that a cultural shift is needed to modernise our assessment system. Many universities and funding agencies worldwide have already signed the San Francisco Declaration on Research Assessment (DORA), thereby committing to a broader, more holistic assessment of researchers and their research proposals.
For the last several decades, quantitative indices such as the number of publications, the h-index and the journal impact factor have served as near-exclusive measures of scientific success. Other key areas, such as education, leadership, and institutional and societal engagement, have been undervalued. While being a good educator, having strong leadership skills and serving the scientific community are appreciated, these qualities have become requirements in addition to an impressive publication record rather than alternatives to it. Scientists are thus expected to excel in all academic activities (research, teaching and service) to be considered successful, which places unrealistic expectations on individuals and significantly increases their workload.
By allowing for more diversity in academic career paths and a broader definition of what constitutes scientific excellence, there would be more scope for honouring and nurturing individual talents and motivations. This could lead to a more balanced academic system that is better equipped to tackle today’s scientific challenges, including through a stronger focus on team performance. Many opponents of a revised assessment system, however, fear that moving away from quantitative measures will make it more difficult to assess and compare academics objectively, leading to a loss of quality. Qualitative characterisations are harder to compare and depend on the make-up of the evaluating team, but that is no reason to dismiss them as part of the evaluation process.
In this great debate, we ask: (1) Is there a mechanism to integrate qualitative assessment with quantitative metrics when evaluating academics? (2) How could a revised assessment system be organised and implemented universally, given that adoption by all is needed for it to be effective? (3) How will broader assessment criteria strengthen scientific leadership in the future?
Invited panelists:
Olivier Pourret, Caroline Slomp, Fabio Crameri and Catherine McCammon