dc.contributor.author | Davis, Simon | |
dc.contributor.author | Norvik, Jon Viljar | |
dc.contributor.author | Hansen, Kristin Elisa Ruud | |
dc.contributor.author | Vognild, Ingrid | |
dc.contributor.author | Reierth, Eirik | |
dc.date.accessioned | 2017-09-06T09:46:38Z | |
dc.date.available | 2017-09-06T09:46:38Z | |
dc.date.issued | 2015 | |
dc.description.abstract | Background and Purpose: To investigate the extent to which assessment by observation agrees with a standardised review of evidence from clinical examination in the assessment of clinical otoscopic competence.
<br>
Methods: Sixty-five medical students took part in an Objective Structured Clinical Examination (OSCE) station involving patients with real pathology. Examiners assessed otoscopic competency in tympanic membrane examination solely by distant observation. An external examiner later reviewed candidates’ documented findings on a schematic drawing of the tympanic membranes. Observed agreement between the two methods and Cohen’s kappa coefficient were calculated.
<br>
Results: Mean otoscopy scores for examiner 1 and examiner 2 were 67.7% and 29.4%, respectively; the difference was statistically significant (Mann-Whitney U-test). OSCE observation declared 47.7% of candidates (31/65) to be clinically competent. Drawing-based analysis, however, deemed only 4.6% (3/65) to have achieved this competency, representing more than a ten-fold overestimation of clinical competency by OSCE assessment. Observed agreement between assessment methods was 59.6%, and Cohen’s kappa coefficient was 0.1.
<br>
Conclusions: OSCE observational assessment of otoscopic clinical competency correlates poorly with review of evidence from clinical examination. If evidence review is accepted as a better marker of competency, observation should not be used alone in OSCE assessment. Evidence review itself is vulnerable to candidate guesswork. OSCEs could possibly explore candidate demonstration with explanation of findings, using digital otoscopy to offer a shared view of the tympanic membranes, as an improved standard of clinical competency assessment. | en_US |
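For reference, the Cohen’s kappa reported in the abstract is the standard chance-corrected agreement statistic; a minimal sketch of its definition (the abstract reports only the observed agreement and the resulting kappa, not the expected-agreement term) is:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of agreement between the two assessment methods (59.6% here) and \(p_e\) is the agreement expected by chance from the marginal rating frequencies. Taking the reported values at face value, \(\kappa = 0.1\) with \(p_o = 0.596\) would imply \(p_e \approx 0.55\), consistent with the conclusion that agreement is little better than chance.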
dc.description | Source at <a href="http://dx.doi.org/10.22037/jme.v15i3.13866">http://dx.doi.org/10.22037/jme.v15i3.13866</a> | en_US |
dc.identifier.citation | Davis S, Norvik JV, Hansen KE, Vognild I, Reierth E. Assessment of otoscopy: how does observation compare to a review of clinical evidence? Journal of Medical Education. 2015;15(3) | en_US |
dc.identifier.cristinID | FRIDAID 1482105 | |
dc.identifier.issn | 1735-4005 | |
dc.identifier.uri | https://hdl.handle.net/10037/11407 | |
dc.language.iso | eng | en_US |
dc.publisher | Shaheed Beheshti University of Medical Sciences and Health Services | en_US |
dc.relation.journal | Journal of Medical Education | |
dc.rights.accessRights | openAccess | en_US |
dc.subject | VDP::Medisinske Fag: 700::Klinisk medisinske fag: 750::Otorhinolaryngologi: 755 | en_US |
dc.subject | VDP::Medical disciplines: 700::Clinical medical disciplines: 750::Otolaryngology: 755 | en_US |
dc.title | Assessment of otoscopy: how does observation compare to a review of clinical evidence? | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |