As the burden of documentation and other administrative duties has increased, physician burnout has reached historic levels. In response, EHR vendors are embedding generative AI tools to aid physicians by drafting their responses to patient messages. However, much remains unknown about these tools' accuracy and effectiveness.
Researchers at Mass General Brigham recently set out to learn more about how these generative AI solutions are performing. In a study published last week in The Lancet Digital Health, they showed that these AI tools can be effective at reducing physicians' workloads and improving patient education, but also that the tools have limitations that require human oversight.
For the study, the researchers used OpenAI’s GPT-4 large language model to produce 100 different hypothetical questions from patients with cancer.
The researchers had GPT-4 answer these questions, and also had six radiation oncologists respond to them manually. Then, the research team provided those same six physicians with the GPT-4-generated responses, which they were asked to review and edit.
The oncologists could not tell whether GPT-4 or a human physician had written the responses — and in nearly a third of cases, they believed that a GPT-4-generated response had been written by a physician.
The study showed that physicians usually wrote shorter responses than GPT-4. The large language model's responses were longer because they typically included more educational information for patients, but at the same time, these responses were also less direct and instructional, the researchers noted.
Overall, the physicians reported that using a large language model to help draft their patient message responses was helpful in reducing their workload and associated burnout. They deemed GPT-4-generated responses to be safe in 82% of cases and acceptable to send with no further editing in 58% of cases.
But it's important to remember that large language models can be dangerous without a human in the loop. The study also found that 7% of GPT-4-produced responses could pose a risk to the patient if left unedited. Most of the time, this is because the GPT-4-generated response has an "inaccurate conveyance of the urgency with which the patient should come into clinic or be seen by a doctor," said Dr. Danielle Bitterman, an author of the study and a Mass General Brigham radiation oncologist.
"These models go through a reinforcement learning process where they are kind of trained to be polite and give responses in a way that a person might want to hear. I think occasionally, they almost become too polite, where they don't appropriately convey urgency when it is there," she explained in an interview.
Moving forward, there needs to be more research about how patients feel about large language models being used to interact with them in this way, Dr. Bitterman noted.
Photo: Halfpoint, Getty Images
Topics: burnout, ChatGPT, clinical messaging, EHR, generative AI, Mass General Brigham, patient messaging