Healthcare professionals and the public sentiment analysis of ChatGPT in clinical practice

Sci Rep. 2025 Jan 7;15(1):1223. doi: 10.1038/s41598-024-84512-y.

Abstract

To explore the attitudes of healthcare professionals and the public toward applying ChatGPT in clinical practice. The successful application of ChatGPT in clinical practice depends not only on technical performance but also, critically, on the attitudes and perceptions of healthcare professionals and the public. This study used a qualitative design supported by artificial intelligence and proceeded in five steps: data collection, data cleaning, validation of relevance, sentiment analysis, and content analysis using the K-means algorithm. The dataset comprised 3,130 comments totaling 1,593,650 words. The dictionary method classified comments into positive and negative emotion categories: anger, disgust, fear, sadness, surprise, good, and happy. Healthcare professionals prioritized ChatGPT's efficiency but raised ethical and accountability concerns, while the public valued its accessibility and emotional support but expressed worries about privacy and misinformation. Bridging these perspectives by improving reliability, safeguarding privacy, and clearly defining ChatGPT's role is essential for its practical and ethical integration into clinical practice.
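As a rough illustration of the dictionary method described above, the sketch below scores a comment against a small emotion lexicon. The lexicon entries here are hypothetical placeholders (real studies use lexicons with thousands of entries, e.g. NRC-style emotion dictionaries), and the seven categories mirror those reported in the abstract.

```python
from collections import Counter

# Hypothetical mini lexicon mapping words to the seven emotion
# categories reported in the study; a real analysis would use a
# full emotion dictionary with thousands of entries.
EMOTION_LEXICON = {
    "scared": "fear", "risky": "fear",
    "angry": "anger", "frustrating": "anger",
    "misleading": "disgust",
    "sad": "sadness", "disappointed": "sadness",
    "unexpected": "surprise",
    "helpful": "good", "reliable": "good",
    "happy": "happy",
}

def score_comment(text):
    """Count emotion-category hits in one comment (dictionary method)."""
    counts = Counter()
    for token in text.lower().split():
        word = token.strip(".,!?")          # drop trailing punctuation
        if word in EMOTION_LEXICON:
            counts[EMOTION_LEXICON[word]] += 1
    return counts

comment = "ChatGPT is helpful, but misleading answers make me angry and sad."
print(dict(score_comment(comment)))
```

In the full pipeline, per-comment scores like these would then be fed into K-means to group comments into thematic clusters for content analysis.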

Keywords: Artificial intelligence; Attitude; ChatGPT; Clinical competence; Medicine.

MeSH terms

  • Adult
  • Artificial Intelligence / ethics
  • Attitude of Health Personnel
  • Emotions*
  • Female
  • Health Personnel* / psychology
  • Humans
  • Male
  • Social Media