Royal College of Surgeons in Ireland

Investigating a self-scoring interview simulation for learning and assessment in the medical consultation.

Version 2 2021-12-06, 15:58
Version 1 2019-11-22, 16:37
Journal contribution posted on 2021-12-06, 15:58, authored by Catherine Bruen, Clarence Kreiter, Vincent Wade, Teresa Pawlikowska

Experience with simulated patients supports undergraduate learning of medical consultation skills, and adaptive simulations are now being introduced into this environment. The authors investigate whether such simulations can underpin valid and reliable assessment by conducting a generalizability analysis of IT data analytics captured from medical students (in psychiatry) interacting with an adaptive simulation, thereby exploring the feasibility of adaptive simulations for supporting automated learning and assessment. The generalizability (G) study focused on two clinically relevant variables: clinical decision points and communication skills. While the G study on the communication skills score yielded low levels of true-score variance, the decision-point scores, which reflect clinical decision-making and confirm user knowledge of the Calgary-Cambridge model of consultation, produced reliability levels similar to what might be expected with rater-based scoring. The findings indicate that adaptive simulations have potential as a teaching and assessment tool for medical consultations.
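For readers unfamiliar with generalizability analysis, the reliability figures described above come from decomposing score variance into person ("true score") and error components. The sketch below is a minimal illustration only, not the authors' code or data: it assumes a hypothetical one-facet persons x decision-points design with simulated scores, estimates variance components from a two-way ANOVA without replication, and computes a relative G coefficient.

# Minimal one-facet G-study sketch (simulated data; hypothetical
# persons x decision-points layout, not the study's dataset).
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 30, 8          # hypothetical students x decision points

# Simulated score matrix: person effect + item effect + residual noise.
person_eff = rng.normal(0, 1.0, (n_persons, 1))
item_eff = rng.normal(0, 0.5, (1, n_items))
scores = 5 + person_eff + item_eff + rng.normal(0, 0.8, (n_persons, n_items))

# Mean squares from a two-way ANOVA without replication.
grand = scores.mean()
ms_p = n_items * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_persons - 1)
ms_i = n_persons * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_items - 1)
resid = (scores - scores.mean(axis=1, keepdims=True)
         - scores.mean(axis=0, keepdims=True) + grand)
ms_res = (resid ** 2).sum() / ((n_persons - 1) * (n_items - 1))

# Expected-mean-square solutions for the variance components.
var_res = ms_res
var_p = max((ms_p - ms_res) / n_items, 0.0)
var_i = max((ms_i - ms_res) / n_persons, 0.0)

# Relative G coefficient: person variance over person variance plus
# residual error averaged across the n_items conditions.
g_coef = var_p / (var_p + var_res / n_items)
print(f"var(person)={var_p:.3f}  var(item)={var_i:.3f}  "
      f"var(resid)={var_res:.3f}  G={g_coef:.3f}")

In this framing, the low true-score variance reported for the communication skills score corresponds to a small var(person) relative to error, while the decision-point scores yield a G coefficient in the range expected for rater-based scoring.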

History

Comments

The original article is available at https://www.dovepress.com

Published Citation

Bruen C, Kreiter C, Wade V, Pawlikowska T. Investigating a self-scoring interview simulation for learning and assessment in the medical consultation. Advances in Medical Education and Practice. 2017;8:353-358.

Publication Date

2017-05-26

PubMed ID

28603434

Department/Unit

  • Health Professions Education Centre