Prompt engineering to increase GPT-3.5's performance on the Plastic Surgery In-Service Exams

J Plast Reconstr Aesthet Surg. 2024 Nov:98:158-160. doi: 10.1016/j.bjps.2024.09.001. Epub 2024 Sep 5.

Abstract

This study assessed ChatGPT's (GPT-3.5) performance on the 2021 ASPS Plastic Surgery In-Service Examination using prompt modifications and Retrieval Augmented Generation (RAG). ChatGPT was instructed to act as a "resident," "attending," or "medical student," and RAG drew context from a curated vector database. Neither approach produced a significant improvement: the "resident" prompt yielded the highest accuracy at 54%, and RAG failed to enhance performance, with accuracy remaining at 54.3%. Although the model reasoned appropriately when its answers were correct, its overall performance fell in the 10th percentile, indicating the need for fine-tuning and more sophisticated approaches to improve AI's utility in complex medical tasks.
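The abstract describes two techniques: role ("persona") prompting and RAG over a curated vector database. The sketch below shows how such a setup is commonly wired together; it is not the authors' pipeline, and the model names, reference passages, and sample question are placeholder assumptions for illustration.

```python
# Minimal sketch of role prompting plus retrieval-augmented generation (RAG)
# for multiple-choice exam questions. Illustrative only: the passages and the
# question are placeholders, not the study's data.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical curated reference passages standing in for the vector database.
PASSAGES = [
    "Tissue expansion relies on mechanical creep and biological growth of skin.",
    "The radial forearm free flap is based on the radial artery and venae comitantes.",
]

def embed(texts):
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

PASSAGE_VECS = embed(PASSAGES)

def retrieve(question, k=1):
    """Return the k passages most similar (cosine similarity) to the question."""
    q = embed([question])[0]
    sims = PASSAGE_VECS @ q / (np.linalg.norm(PASSAGE_VECS, axis=1) * np.linalg.norm(q))
    return [PASSAGES[i] for i in np.argsort(sims)[::-1][:k]]

def answer(question, role="plastic surgery resident", use_rag=True):
    """Ask the model to answer a board-style question under a given persona."""
    context = "\n".join(retrieve(question)) if use_rag else ""
    messages = [
        {"role": "system",
         "content": f"You are a {role} taking the Plastic Surgery In-Service "
                    "Examination. Answer with the single best option and a brief rationale."},
        {"role": "user",
         "content": (f"Reference material:\n{context}\n\n" if context else "") + question},
    ]
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, temperature=0
    )
    return resp.choices[0].message.content

print(answer(
    "Which vessel supplies the radial forearm free flap?\n"
    "A. Ulnar artery\nB. Radial artery\nC. Brachial artery\nD. Anterior interosseous artery"
))
```

Swapping the `role` string ("resident," "attending," "medical student") and toggling `use_rag` reproduces, in spirit, the two manipulations the study evaluated.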

Keywords: ChatGPT; In-Service Examination; Performance evaluation; Plastic surgery resident; Prompt engineering; Simulation.

MeSH terms

  • Clinical Competence*
  • Educational Measurement* / methods
  • Humans
  • Internship and Residency
  • Surgery, Plastic*