[Paper published in JAMA Netw Open]: Artificial Intelligence–Generated Draft Replies to Patient Inbox Messages
June 4, 2024

Original Investigation 

Health Informatics

March 20, 2024

Artificial Intelligence–Generated Draft Replies to Patient Inbox Messages

Patricia Garcia, Stephen P. Ma, Shreya Shah, et al.

JAMA Netw Open. 2024;7(3):e243201. doi:10.1001/jamanetworkopen.2024.3201

Key Points

Question  What is the adoption of and clinician experience with clinical practice deployment of a large language model used to draft responses to patient inbox messages?

Findings  In this 5-week, single-group, quality improvement study of 162 clinicians, the mean draft utilization rate was 20%, there were statistically significant reductions in burden and burnout score derivatives, and there was no change in time.

Meaning  These findings suggest that the use of large language models in clinical workflows was spontaneously adopted, usable, and associated with improvement in clinician well-being.

Abstract

Importance  The emergence and promise of generative artificial intelligence (AI) represent a turning point for health care. Rigorous evaluation of generative AI deployment in clinical practice is needed to inform strategic decision-making.

Objective  To evaluate the implementation of a large language model used to draft responses to patient messages in the electronic inbox.

Design, Setting, and Participants  A 5-week, prospective, single-group quality improvement study was conducted from July 10 through August 13, 2023, at a single academic medical center (Stanford Health Care). All attending physicians, advanced practice practitioners, clinic nurses, and clinical pharmacists from the Divisions of Primary Care and Gastroenterology and Hepatology were enrolled in the pilot.

Intervention  Draft replies to patient portal messages generated by a Health Insurance Portability and Accountability Act–compliant electronic health record–integrated large language model.

Main Outcomes and Measures  The primary outcome was AI-generated draft reply utilization as a percentage of total patient message replies. Secondary outcomes included changes in time measures and clinician experience as assessed by survey.
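The primary outcome above can be expressed as a simple proportion. A minimal sketch, assuming a hypothetical per-reply record with a `used_ai_draft` flag (field names are illustrative, not from the study):

```python
def utilization_rate(replies):
    """Percentage of patient message replies that used the AI-generated draft."""
    if not replies:
        return 0.0
    used = sum(1 for r in replies if r["used_ai_draft"])
    return 100.0 * used / len(replies)

# Illustrative data only: 2 of 5 replies used the draft.
replies = [
    {"used_ai_draft": True},
    {"used_ai_draft": False},
    {"used_ai_draft": False},
    {"used_ai_draft": True},
    {"used_ai_draft": False},
]
print(utilization_rate(replies))  # 40.0
```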

Results  A total of 197 clinicians were enrolled in the pilot; 35 clinicians who were prepilot beta users, out of office, or not tied to a specific ambulatory clinic were excluded, leaving 162 clinicians included in the analysis. The survey analysis cohort consisted of 73 participants (45.1%) who completed both the presurvey and postsurvey. In gastroenterology and hepatology, there were 58 physicians and APPs and 10 nurses. In primary care, there were 83 physicians and APPs, 4 nurses, and 8 clinical pharmacists. The mean AI-generated draft response utilization rate across clinicians was 20%. There was no change in reply action time, write time, or read time between the prepilot and pilot periods. There were statistically significant reductions in the 4-item physician task load score derivative (mean [SD], 61.31 [17.23] presurvey vs 47.26 [17.11] postsurvey; paired difference, −13.87; 95% CI, −17.38 to −9.50; P < .001) and work exhaustion scores (mean [SD], 1.95 [0.79] presurvey vs 1.62 [0.68] postsurvey; paired difference, −0.33; 95% CI, −0.50 to −0.17; P < .001).
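The survey comparisons above are paired: each clinician's postsurvey score is subtracted from their own presurvey score, and the paired differences are summarized. A minimal sketch of that computation with fabricated scores (not the study's data):

```python
import statistics

def paired_difference(pre, post):
    """Mean and SD of per-clinician (post - pre) score changes."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Fabricated task-load scores for four clinicians, for illustration only.
pre = [62.0, 58.0, 70.0, 55.0]
post = [48.0, 45.0, 55.0, 42.0]
mean_diff, sd_diff = paired_difference(pre, post)
print(round(mean_diff, 2))  # -13.75
```

A negative mean difference, as reported in the study, indicates lower burden after the pilot.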

Conclusions and Relevance  In this quality improvement study of an early implementation of generative AI, there was notable adoption, usability, and improvement in assessments of burden and burnout. There was no improvement in time. Further code-to-bedside testing is needed to guide future development and organizational strategy.
