AI-generated biochemistry test item parameters in MST test conditions
POLAT M., KARADAĞ E.

BMC MEDICAL EDUCATION, vol.25, no.1, 2025 (SCI-Expanded, SSCI, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 25 Issue: 1
  • Publication Date: 2025
  • DOI: 10.1186/s12909-025-08292-3
  • Journal Name: BMC MEDICAL EDUCATION
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Social Sciences Citation Index (SSCI), Scopus, MEDLINE, Directory of Open Access Journals
  • Affiliated with Anadolu University: Yes

Abstract

Background: This study investigated whether ChatGPT 4o could accurately estimate the difficulty of medical assessment items by comparing its predictions with empirically derived parameters from multistage testing (MST) simulations.

Methods: Using a hybrid simulation-validation design, the researchers had ChatGPT 4o generate 80 multiple-choice biochemistry questions together with difficulty estimates (b-parameters), which were then administered via simulated multistage testing to 5,000 virtual examinees.

Results: The analysis revealed moderate agreement between AI-generated and simulation-derived difficulty parameters (r = 0.612, 95% CI [0.472, 0.725]), though ChatGPT systematically overestimated item difficulty, with a mean bias of 0.240 logits (SD = 0.503). The mean absolute error was relatively modest at 0.447 logits, and 91% of items showed errors below 1.0 logits; however, the AI's estimates were particularly inaccurate for very easy items, 83% of which exhibited absolute errors exceeding 0.5 logits, compared with only 29% of medium-difficulty items. These findings suggest that, while ChatGPT 4o shows promise as a tool for preliminary item generation in medical education assessment, it requires empirical calibration and expert oversight before operational use, as the systematic bias indicates the AI lacks access to real-world performance feedback.

Conclusions: The study's conclusions are tempered by important limitations, including its reliance on simulation-based validation rather than actual student performance data and its single-institution sample, underscoring the need for rigorous psychometric validation when integrating artificial intelligence into medical education assessment.
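As a rough illustration of the kind of agreement analysis the abstract describes (Pearson correlation, mean bias, mean absolute error, and the share of items whose errors stay under fixed logit thresholds), the Python sketch below compares hypothetical AI-predicted b-parameters against calibrated values. All data and variable names are illustrative assumptions; this is not the authors' actual analysis pipeline or dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 80 items with simulation-calibrated difficulties (logits)
# and AI-predicted difficulties carrying a positive bias plus noise.
n_items = 80
b_calibrated = rng.normal(loc=0.0, scale=1.0, size=n_items)
b_ai = b_calibrated + 0.24 + rng.normal(scale=0.5, size=n_items)

# Agreement: Pearson correlation between predicted and calibrated difficulties.
r, p_value = stats.pearsonr(b_ai, b_calibrated)

# Bias and accuracy, both expressed in logits.
errors = b_ai - b_calibrated
mean_bias = errors.mean()                        # positive => difficulty overestimated
mae = np.abs(errors).mean()                      # mean absolute error
within_1_logit = np.mean(np.abs(errors) < 1.0)   # share of items with |error| < 1.0

# Error rates by difficulty band (easy / medium / hard), mirroring the abstract's
# contrast between very easy and medium-difficulty items.
bands = np.digitize(b_calibrated, bins=[-1.0, 1.0])  # 0 = easy, 1 = medium, 2 = hard
for band, label in enumerate(["easy", "medium", "hard"]):
    mask = bands == band
    share_large = np.mean(np.abs(errors[mask]) > 0.5) if mask.any() else float("nan")
    print(f"{label:>6}: n={mask.sum():2d}, share |error| > 0.5 logits = {share_large:.2f}")

print(f"r = {r:.3f}, mean bias = {mean_bias:.3f}, MAE = {mae:.3f}, "
      f"share |error| < 1.0 logits = {within_1_logit:.2f}")
```

With real data, the two arrays would instead hold the ChatGPT-supplied difficulty estimates and the b-parameters calibrated from (simulated or operational) examinee responses; the summary statistics are then directly comparable to those reported in the Results.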