Assessment of otoscopy: how does observation compare to a review of clinical evidence?
Permanent link
https://hdl.handle.net/10037/11407
Date
2015
Type
Journal article
Peer reviewed
Author
Davis, Simon; Norvik, Jon Viljar; Hansen, Kristin Elisa Ruud; Vognild, Ingrid; Reierth, Eirik
Abstract
Background and Purpose: To investigate how well the method of observation agrees with a standardised review of evidence from the clinical examination when assessing clinical otoscopic competence.
Methods: Sixty-five medical students took part in an Objective Structured Clinical Examination (OSCE) station using patients with real pathology. Examiners assessed otoscopic competency in tympanic membrane examination solely by distant observation. An external examiner later reviewed candidates’ documented findings on a schematic drawing of the tympanic membranes. Observed agreement between the two methods and Cohen’s kappa coefficient were calculated (an illustrative sketch of these statistics follows the abstract).
Results: Mean otoscopy scores for examiner 1 and examiner 2 were 67.7% and 29.4%, respectively; the difference was significant on the Mann-Whitney U-test. OSCE observation declared 47.7% of candidates (31/65) clinically competent, whereas drawing-based analysis deemed only 4.6% (3/65) to have achieved this competency, a more than ten-fold overestimation of clinical competency by OSCE assessment. Observed agreement between the assessment methods was 59.6%, and Cohen’s kappa coefficient was 0.1.
Conclusions: OSCE observational assessment of otoscopic clinical competency correlates poorly with review of evidence from the clinical examination. If evidence review is accepted as the better marker of competency, observation should not be used alone in OSCE assessment. Evidence review itself is vulnerable to candidate guesswork. As an improved standard of clinical competency assessment, OSCEs could explore candidate demonstration with explanation of findings, using digital otoscopy to offer a shared view of the tympanic membranes.
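For readers unfamiliar with the agreement statistics reported above, the following is a minimal illustrative Python sketch, not the authors' analysis code, of how observed agreement and Cohen's kappa are computed for two raters making binary competent/not-competent judgements on the same candidates. The counts in the example call are hypothetical and chosen purely for illustration.

# Minimal illustrative sketch (hypothetical counts, not the study's data):
# observed agreement and Cohen's kappa for two raters making binary
# competent / not-competent judgements on the same candidates.

def agreement_and_kappa(both_yes, only_rater1_yes, only_rater2_yes, both_no):
    n = both_yes + only_rater1_yes + only_rater2_yes + both_no
    # Observed agreement: proportion of candidates on whom the raters agree.
    p_o = (both_yes + both_no) / n
    # Expected chance agreement, from each rater's marginal "competent" rate.
    p1 = (both_yes + only_rater1_yes) / n
    p2 = (both_yes + only_rater2_yes) / n
    p_e = p1 * p2 + (1 - p1) * (1 - p2)
    # Cohen's kappa: agreement corrected for chance.
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table for 65 candidates (illustration only):
p_o, kappa = agreement_and_kappa(both_yes=3, only_rater1_yes=28,
                                 only_rater2_yes=0, both_no=34)
print(f"observed agreement = {p_o:.1%}, kappa = {kappa:.2f}")

A kappa near 0 (as in the abstract) indicates agreement little better than chance, even when raw observed agreement looks moderate, because chance agreement is high when one rater's "competent" rate is very low.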