Wednesday, August 21, 2013

Article review: Faculty skills impact their rating of residents

One of the frustrating things about reviewing evaluation cards of medical students and residents is the degree of variability in how faculty rate them. Some faculty can be generalized as "hawks" and others as "doves" -- really tough or really lenient graders, respectively.

Why do faculty rate the same learner differently? This has been a topic of much debate over the years. In this study in Academic Medicine, Dr. Kogan and colleagues test two interesting hypotheses:
  • Faculty with better clinical skills and experience would be more stringent raters.
  • These relationships would be stronger in competency-specific domains ("i.e., faculty with more complete history-taking approaches would rate history-taking more stringently").
Methodology:
  • 48 faculty members volunteered to participate in this study (and were paid for their participation).
  • Each faculty member completed eight 15-minute standardized patient (SP) encounters on common outpatient scenarios.
  • The standardized patients scored each faculty member using checklists on history, physical exam, counseling, interpersonal, and professionalism skills.
  • Subsequently, each faculty member reviewed 4 videos of residents participating in SP encounters.
  • Each resident was graded using the mini-Clinical Evaluation Exercise (mini-CEX) tool, which assesses seven competencies on a 9-point scale: interviewing, physical exam, humanistic/professional qualities, clinical judgment, counseling skills, organization/efficiency, and overall competence.
Results:
Before I present the results, let's review the "linear correlation coefficient," usually designated as "r". It measures the strength and direction of a linear relationship between two variables and ranges from -1 to +1. Generally, an r greater than 0.8 or less than -0.8 suggests a strong correlation.
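
To make r a bit more concrete, here is a minimal sketch in Python (my own illustration; the numbers are made up and are not data from the study):

import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance of x and y divided by the product
    # of their standard deviations (the n terms cancel out).
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical example (not study data): faculty skill scores and the mean
# ratings those same faculty gave residents on a 9-point scale.
faculty_skill = [62, 70, 75, 81, 88, 93]
ratings_given = [7.2, 6.8, 6.5, 6.1, 5.6, 5.2]
print(round(pearson_r(faculty_skill, ratings_given), 2))
# A negative r here means better-skilled faculty gave lower (tougher) ratings.
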
  • There was a significant negative correlation between faculty's history-taking scores and their ratings of residents in interviewing skills (r = -0.55) and organizational skills (r = -0.35). This means faculty who were better history-takers were more stringent raters in the areas of interviewing and organizational skills.
  • Faculty with better process performance scores (e.g., not interrupting patients during the chief complaint, using open-ended questions, avoiding jargon) were more stringent raters in the areas of interviewing, physical exam, and organization (r = -0.41, -0.42, and -0.36, respectively).

Bottom Line:
This study suggests that some of the variability in learner assessment depends on the faculty's own clinical skills. Should we then have only our most skilled faculty evaluate our learners? At the very least, the study highlights the importance of faculty's own competencies in residency competency assessment.

Thinking about myself, I know that I usually give learners relatively high scores unless they are blatantly poor performers. I'm more of a "dove" than a "hawk". So, what does that say about me? Can I extrapolate that I'm not as clinically skilled as my peers? Wait, I'm offended by the implication...

References
Kogan JR, Hess BJ, Conforti LN, Holmboe ES. What drives faculty ratings of residents' clinical skills? The impact of faculty's own clinical skills. Academic Medicine. 2010;85:S25-S28. PMID: 20881697.
