
Investigating a self-scoring interview simulation for learning and assessment in the medical consultation.

journal contribution
posted on 22.11.2019 by Catherine Bruen, Clarence Kreiter, Vincent Wade, Teresa Pawlikowska

Experience with simulated patients supports undergraduate learning of medical consultation skills, and adaptive simulations are now being introduced into this environment. To explore whether such simulations can underpin valid and reliable automated learning and assessment, the authors conducted a generalizability analysis of interaction data from medical students (in psychiatry) working with an adaptive simulation. The generalizability (G) study focused on two clinically relevant variables: clinical decision points and communication skills. While the communication skills score yielded low levels of true-score variance, the decision points, which index clinical decision-making and confirm user knowledge of the Calgary-Cambridge model of consultation, produced reliability levels similar to those expected with rater-based scoring. The findings indicate that adaptive simulations have potential as a teaching and assessment tool for medical consultations.
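To illustrate the kind of analysis described above, the following is a minimal, hypothetical sketch (not the authors' code) of a one-facet person × item generalizability study: variance components are estimated from a fully crossed design, and a relative G coefficient is computed for a score averaged over the items (e.g. decision points scored across simulated cases). The function name and the example data are assumptions for illustration only.

```python
# Hypothetical sketch of a one-facet (person x item) G study.
# Estimates variance components via the standard ANOVA decomposition
# for a crossed design, then computes the relative G coefficient.

def g_study(scores):
    """scores: one row per person, one score per item (fully crossed)."""
    n_p, n_i = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in scores]
    i_means = [sum(scores[p][i] for p in range(n_p)) / n_p
               for i in range(n_i)]

    # Sums of squares for persons, items, and the residual
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_tot = sum((scores[p][i] - grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ss_res = ss_tot - ss_p - ss_i

    # Mean squares
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    # Variance components (negative estimates truncated at zero)
    var_p = max((ms_p - ms_res) / n_i, 0.0)  # true-score (person) variance
    var_res = ms_res                         # person-x-item interaction + error

    # Relative G coefficient for a mean score over n_i items
    denom = var_p + var_res / n_i
    g = var_p / denom if denom else 0.0
    return var_p, var_res, g
```

In this framing, a low G coefficient (as reported for the communication skills score) reflects little person variance relative to error, whereas the decision-point scores would show enough true-score variance to approach rater-based reliability.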

History

Comments

The original article is available at https://www.dovepress.com

Published Citation

Bruen C, Kreiter C, Wade V, Pawlikowska T. Investigating a self-scoring interview simulation for learning and assessment in the medical consultation. Advances in Medical Education and Practice. 2018;8:353-358

Publication Date

26/05/2017

Publisher

Dove Press

PubMed ID

28603434
