Assessing the quality of supervisors' completed clinical evaluation reports

Med Educ. 2008 Aug;42(8):816-22. doi: 10.1111/j.1365-2923.2008.03105.x. Epub 2008 Jun 14.

Abstract

Context: Although concern has been raised about the value of clinical evaluation reports for discriminating among trainees, there have been few efforts to formalise the dimensions and qualities that distinguish effective from less useful styles of form completion.

Methods: Using brainstorming and a modified Delphi technique, a focus group determined the key features of high-quality completed evaluation reports. These features were used to create a rating scale to evaluate the quality of completed reports. The scale was pilot-tested locally; the results were psychometrically analysed and used to modify the scale. The scale was then tested on a national level. Psychometric analysis and final modification of the scale were completed.

Results: Sixteen features of high-quality reports were identified and used to develop a rating scale: the Completed Clinical Evaluation Report Rating (CCERR). The reliability of the scale after a national field test with 55 raters assessing 18 in-training evaluation reports (ITERs) was 0.82. Further revisions were made; the final version of the CCERR contains nine items rated on a 5-point scale. With this version, the mean ratings of three groups of 'gold-standard' ITERs (previously judged to be of high, average and poor quality) differed significantly (P < 0.05).

Discussion: The CCERR is a validated scale that can be used to assess the quality of completed evaluation reports and to help train supervisors to complete them.

Publication types

  • Research Support, Non-U.S. Gov't
  • Validation Study

MeSH terms

  • Clinical Competence / standards*
  • Documentation*
  • Education, Medical, Undergraduate
  • Pilot Projects
  • Reproducibility of Results