
Editorial 

December 10, 2024

Do Sepsis Alerts Help?

Derek C. Angus

JAMA. Published online December 10, 2024. doi:10.1001/jama.2024.25818

A great hope of moving from paper to electronic health records (EHRs) is that health data could be scanned in real time, alerting the care team to potential gaps in care before untoward consequences occur. While leveraging this type of case-specific clinical decision support could dramatically improve the quality and consistency of health care delivery, reality has thus far fallen far short of expectation. Sepsis serves as an illustrative example.

Sepsis, the development of life-threatening organ dysfunction following infection, is the most common cause of in-hospital death and one of the most common reasons for hospital readmission.1 Guidelines for the prompt identification and treatment of sepsis are associated with better outcomes, but adherence is poor.2,3 Concerns include delayed recognition, failure to order appropriate diagnostic tests, and delayed or inadequate treatment with antibiotics and resuscitation measures. A proposed solution is automated screening of the EHR to generate sepsis alerts for the care team. Many such alerts have been developed, with mixed results.4 Although many studies reported benefit, they largely relied on observational designs, limiting confidence that findings were not due to residual confounding. Alerts have also been criticized as weak, inaccurate, biased in underrepresented populations, a cause of alert fatigue in the care team, and leading to overtreatment or care discordant with patient preferences.

Against this background, Arabi et al5 report their findings from the SCREEN (Stepped-wedge Cluster Randomized Trial of Electronic Early Notification of Sepsis in Hospitalized Ward Patients) trial, the first large, randomized trial of sepsis alerts. SCREEN was a stepped-wedge randomized clinical trial of a sepsis alert in 45 wards in 5 hospitals, comparing outcomes among 30 613 patients admitted prior to alert activation and 29 442 after alert activation.

The alert was based on the quick Sequential Organ Failure Assessment (qSOFA) score, a composite measure of hypotension, tachypnea, and altered mental status that is strongly associated with adverse outcome in patients with suspected infection.6 An alert was triggered when any 2 of the 3 criteria were met within a 12-hour window. The alert ran in silent mode during all control periods, with no feedback to the clinical team. Once a ward entered the intervention period, alerts consisted of pop-up messages in the EHR to both the bedside nurse and on-call physician, together with an audible and visual alarm on a handheld device carried by the ward charge nurse. Prior to study launch, all clinical staff received an educational program emphasizing the importance of sepsis and reviewing sepsis care guidelines. Alert-specific training was then provided in each ward 1 month before activation. Nurses were asked to assess the patient and contact the physician; physicians were asked to document their assessment of whether the patient had sepsis. Diagnostic and therapeutic interventions were at the discretion of the clinical team. An alert adherence dashboard with audit and feedback was provided to the clinical team via each hospital's quality improvement service.
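The trigger logic described above — any 2 of the 3 qSOFA criteria met within a rolling 12-hour window — can be sketched as follows. This is a minimal illustration, not the SCREEN trial's actual implementation; the field names and the standard qSOFA thresholds (systolic blood pressure ≤100 mm Hg, respiratory rate ≥22/min, altered mentation) are assumptions for the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical observation record; field names are illustrative only.
@dataclass
class Vitals:
    time: datetime
    systolic_bp: float       # mm Hg
    resp_rate: float         # breaths/min
    altered_mentation: bool  # e.g., any drop from baseline mental status

def alert_triggered(history: list[Vitals],
                    window: timedelta = timedelta(hours=12)) -> bool:
    """Fire when >= 2 distinct qSOFA criteria are met within the window."""
    if not history:
        return False
    cutoff = max(v.time for v in history) - window
    criteria = set()
    for v in history:
        if v.time < cutoff:
            continue
        if v.systolic_bp <= 100:
            criteria.add("hypotension")
        if v.resp_rate >= 22:
            criteria.add("tachypnea")
        if v.altered_mentation:
            criteria.add("altered_mentation")
    return len(criteria) >= 2
```

Note that the criteria need not be met simultaneously: hypotension at one observation and tachypnea at another suffice, as long as both fall inside the 12-hour window.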

Alerts occurred in about 1 in 6 patients, and these patients were at significantly increased odds of death compared with patients in whom no alert occurred. The alert measures organ dysfunction, not infection, and could occur for any patient. However, most patients for whom an alert occurred were already receiving antibiotics and many already had an established infection source. During the intervention period, 3 of 4 alerts were acknowledged by the bedside team, and one-third of those were considered to have sepsis by the physician. Receiving the alert prompted the team to more frequently order lactate and start intravenous fluids. The primary outcome, 90-day hospital mortality in the entire population, was significantly improved during the intervention period (adjusted relative risk, 0.85; 95% CI, 0.77-0.93; P < .001). This finding remained robust to extensive sensitivity analyses.

This large-scale and thorough randomized evaluation of an EHR-based sepsis alert is a substantial addition to the existing literature. At first blush, the results seem highly encouraging; the clinical team responded to alerts most of the time, they adopted more aggressive resuscitation efforts, and mortality improved. But, before recommending that health care systems adopt similar measures, several issues about both the design and results of this study require some consideration.

An EHR alert is a complex health care delivery intervention. The nature, accuracy, and timeliness of the information used by the alert; the accuracy and performance of the prediction model or logic commands underlying the alert; the manner by which the alert is provided to the clinical team; and the manner in which the team is prepared to receive it all affect the alert's effectiveness. As such, a stepped-wedge cluster design is a good choice. The stepped wedge acknowledges that, once the team has been trained and gained experience with the alert, reverting to no alert would not be a valid control. Importantly, though, it still allows random concurrent exposure in any given time period to screening or no screening, and thus leverages randomization to isolate effects of the intervention from secular trends.
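The stepped-wedge mechanics can be sketched as below: each cluster (here, a ward) is randomized to a crossover step, runs as a control before that step, and stays on the intervention from that step onward. The number of steps and the even split across steps are illustrative assumptions; this editorial does not describe SCREEN's actual randomization procedure.

```python
import random
from collections import Counter

def stepped_wedge_schedule(clusters: list[str], n_steps: int,
                           seed: int = 0) -> dict[str, int]:
    """
    Randomly assign each cluster to one of n_steps crossover times.
    A cluster is a control before its step and exposed from its step
    onward; no cluster ever reverts to control.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)
    per_step = -(-len(order) // n_steps)  # ceiling division
    return {c: i // per_step + 1 for i, c in enumerate(order)}

def exposed(schedule: dict[str, int], cluster: str, period: int) -> bool:
    """Is this cluster under the intervention during the given period?"""
    return period >= schedule[cluster]
```

Because clusters cross over at different times, every period after the first contains both exposed and unexposed wards, which is what lets the design separate the intervention's effect from secular trends such as the pandemic.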

The randomization of wards rather than patients also avoids the confusing situation for a clinical team of wondering why alerts did not occur in some patients and yet occurred in others, potentially eroding trust in the alert. It also avoids the possibility of any alert-induced changes in practice from drifting into the care of control patients. Importantly, though, EHR alerts could have holistic effects on overall practice. For example, as the clinical team gains experience in response to alerts about patients with 2 new qSOFA points, they may change their care for patients with any new qSOFA point. These potential halo effects are one reason for measuring the primary outcome on all patients, and not only those in whom an alert occurs.

The authors of the SCREEN trial understood their intervention was complex and thus chose the appropriate evaluation design. But while the design robustly assessed whether the intervention worked, it provides little help in discerning why. Although the authors collected considerable information on patterns of care, the findings are puzzling, and any inference is largely circumstantial. Although some patterns of care changed, it is not known which changes were most important, nor is it known which patients were most likely to be helped (or harmed) by the intervention. Importantly, despite mortality benefit for the entire patient population studied, the authors found no effect of the intervention on mortality for patients with an alert, raising the possibility that the intervention was exerting broader effects on care patterns. Furthermore, the complexity of the intervention and the use of a cluster design make subsequent analyses of treatment heterogeneity and mediating factors difficult to conduct and interpret.

Because the clinical team is an integral part of these types of interventions, a qualitative assessment of their experience can provide insight on how the intervention worked and on features considered unhelpful. A mixed-methods approach can also help to characterize the context and setting, crucial for understanding the generalizability of findings about complex interventions. Of note, the COVID-19 pandemic occurred in the middle of this study. The study design afforded good protection against confounding from the pandemic's effects on secular trends in case mix and outcome, and it is reassuring that the benefits of the alert appeared similar in both the pandemic and nonpandemic periods.

Leveraging the EHR to provide smart clinical decision support is arguably one of the greatest, and least realized, opportunities to improve the quality of health care delivery. Sepsis is a prime use case, but enthusiasm has outstripped the quality of the evidence base. The SCREEN trial provides the first large-scale demonstration of benefit in a robust randomized evaluation. However, it also raises questions around just how such alerts might actually work, in whom, and in what settings. While the authors are to be congratulated, perhaps the next step is not widespread adoption, but rather to encourage other investigators and health care systems to engage in similar evaluations. Only when efforts such as SCREEN become routine will we likely be best placed to understand when and how to deliver on the true potential of digital health records, not just for sepsis but across many areas of medicine.
