Peer review of grant applications: criteria used and qualitative study of reviewer practices.

Document record

Date

2012

Relations

This document is related to:
info:eu-repo/semantics/altIdentifier/doi/10.1371/journal.pone.0046054

This document is related to:
info:eu-repo/semantics/altIdentifier/pmid/23029386

Collection

Archives ouvertes




Cite this document

Hendy Abdoul et al., « Peer review of grant applications: criteria used and qualitative study of reviewer practices. », HAL-SHS : sociologie, ID : 10.1371/journal.pone.0046054



Abstract (EN)

BACKGROUND: Peer review of grant applications has been criticized as lacking reliability. Studies showing poor agreement among reviewers supported this possibility but usually focused on reviewers' scores and failed to investigate reasons for disagreement. Here, our goal was to determine how reviewers rate applications, by investigating reviewer practices and grant assessment criteria.

METHODS AND FINDINGS: We first collected and analyzed a convenience sample of French and international calls for proposals and assessment guidelines, from which we created an overall typology of assessment criteria comprising nine domains: relevance to the call for proposals, usefulness, originality, innovativeness, methodology, feasibility, funding, ethical aspects, and writing of the grant application. We then performed a qualitative study of reviewer practices, particularly regarding the use of assessment criteria, among reviewers of the French Academic Hospital Research Grant Agencies (Programmes Hospitaliers de Recherche Clinique, PHRCs). Semi-structured interviews and observation sessions were conducted. Both the time spent assessing each grant application and the assessment methods varied across reviewers. The assessment criteria recommended by the PHRCs were listed by all reviewers as frequently evaluated and useful. However, use of the PHRC criteria was subjective and varied across reviewers. Some reviewers gave the same weight to each assessment criterion, whereas others considered originality to be the most important criterion (12/34), followed by methodology (10/34) and feasibility (4/34). Conceivably, this variability might adversely affect the reliability of the review process, and studies evaluating this hypothesis would be of interest.

CONCLUSIONS: Variability across reviewers may result in mistrust among grant applicants about the review process. Consequently, ensuring transparency is of the utmost importance. Consistency in the review process could also be improved by providing common definitions for each assessment criterion and uniform requirements for grant application submissions. Further research is needed to assess the feasibility and acceptability of these measures.
