Royal College of Surgeons in Ireland

Evaluation of MCQs from MOOCs for common item writing flaws

Version 2: 2022-01-21
Version 1: 2019-11-22

Journal contribution posted on 2019-11-22, authored by Eamon Costello, Jane C. Holland and Colette Kirwan.

Objective: There is a dearth of research into the quality of assessments based on Multiple Choice Question (MCQ) items in Massive Open Online Courses (MOOCs). This dataset was generated to determine whether item-writing flaws exist in a selection of MOOC MCQ assessments and, if so, to evaluate their prevalence. To that end, the researchers reviewed MCQs from a sample of MOOCs using an evaluation protocol derived from the medical health education literature, which has an extensive evidence base on writing high-quality MCQ items.

Data description: This dataset was collated from MCQ items in 18 MOOCs in the areas of medical health education, life sciences and computer science. Two researchers critically reviewed 204 questions using an evidence-based evaluation protocol. In the data presented, 55% of the MCQs (112 of 204) have one or more item-writing flaws, while 28% (57 of 204) contain two or more flaws. Thus, a majority of the MCQs in the dataset violate item-writing guidelines, mirroring the findings of previous research on rates of flaws in MCQs in traditional formal educational contexts.
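As a quick illustrative sketch (not part of the published dataset), the prevalence percentages above can be recomputed from the raw counts given in the data description; the variable names and helper function below are hypothetical:

# Minimal sketch: recompute flaw prevalence from the counts reported above.
# The counts come from the data description; everything else is illustrative.

total_mcqs = 204          # MCQs reviewed across 18 MOOCs
one_or_more_flaws = 112   # MCQs with at least one item-writing flaw
two_or_more_flaws = 57    # MCQs with two or more flaws

def prevalence(count: int, total: int) -> float:
    """Return count as a percentage of total."""
    return 100.0 * count / total

print(f"One or more flaws: {prevalence(one_or_more_flaws, total_mcqs):.1f}%")  # 54.9%
print(f"Two or more flaws: {prevalence(two_or_more_flaws, total_mcqs):.1f}%")  # 27.9%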

Comments

The original article is available at www.biomedcentral.com

Published Citation

Costello E, Holland JC, Kirwan C. Evaluation of MCQs from MOOCs for common item writing flaws. BMC Research Notes. 2018;11(1):849.

Publication Date

2018-12-01

Publisher

BioMed Central

PubMed ID

30509321
