Online tools to synthesize real-world evidence of comparative effectiveness research to enhance formulary decision making


Abstract

Results of randomized controlled trials (RCTs) provide valuable comparisons of 2 or more interventions to inform health care decision making; however, far more comparisons are needed than available time and resources allow. Moreover, RCTs have limited generalizability. Comparative effectiveness research (CER) using real-world evidence (RWE) can increase generalizability and is important for decision making, but the use of nonrandomized designs makes these studies challenging to evaluate. Several tools are available to assist. In this study, we comparatively characterize 5 tools used to evaluate RWE studies in the context of health care adoption decisions: (1) Good Research for Comparative Effectiveness (GRACE) Checklist, (2) IMI GetReal RWE Navigator (Navigator), (3) Center for Medical Technology Policy (CMTP) RWE Decoder, (4) CER Collaborative tool, and (5) Real World Evidence Assessments and Needs Guidance (REAdi) tool. We describe each tool and then compare their features along 8 domains: (1) objective/user/context, (2) development/scope, (3) platform/presentation, (4) user design, (5) study-level internal/external validity of evidence, (6) summarizing the body of evidence, (7) assisting in decision making, and (8) sharing results/making improvements. Our summary suggests that the GRACE Checklist aids stakeholders in evaluating the quality and applicability of individual CER studies. Navigator is a collection of educational resources to guide demonstration of effectiveness, a guidance tool to support development of medicines, and a directory of authoritative resources for RWE. The CMTP RWE Decoder aids in assessing the relevance and rigor of RWE. The CER Collaborative tool aids in assessing credibility and relevance. The REAdi tool aids in refining the research question, retrieving studies, assessing quality, and grading the body of evidence, and prompts the user with questions to facilitate coverage decisions.
All tools specify a framework, were designed with stakeholder input, assess internal validity, are available online, and are easy to use. They vary in complexity and comprehensiveness. The RWE Decoder, CER Collaborative tool, and REAdi tool synthesize evidence and were specifically designed to aid formulary decision making. This study clarifies what each tool provides so that users can determine which best fits a given purpose.

DISCLOSURES: This work was supported by the Health Tech Fund, which was provided to the University of Washington School of Pharmacy by its Corporate Advisory Board. This consortium of pharmaceutical and biotech companies supports the research program of the University of Washington School of Pharmacy across the competitive space. The sponsors seeded the idea for the project and contributed to study design and improvement. The authors had full control of all content development, manuscript drafting, and submission for publication. The REAdi tool was developed by the authors. Chen, Bansal, Barthold, Carlson, Veenstra, Basu, Devine, Yun, Ta, and Beal were supported by a training grant from the University of Washington-Allergan Fellowship, unrelated to this work. Basu reports personal fees from Salutis Consulting, unrelated to this work. Graff is an employee of the National Pharmaceutical Council, which was a partner in the development of the CER Collaborative and a funding partner for the CMTP RWE Decoder and the GRACE Checklist. A previous version of this work was presented as an invited workshop at AMCP Nexus 2018; October 22-25, 2018; Orlando, FL.
