The fourth installment of Ropes & Gray’s podcast series, Non-binding Guidance, dives into the use of real-world evidence in drug development. In this episode, Ropes & Gray lawyers Kellie Combs and Sarah Blankstein discuss FDA’s current thinking on real-world evidence and how industry has been using real-world evidence studies in FDA submissions, as well as with payors and in product promotion. The speakers highlight key takeaways from FDA’s framework for real-world evidence, issued in December 2018, including the types of data sources and study designs that FDA considers to be “real-world,” and the Agency’s approach to assessing real-world evidence intended to support a determination of effectiveness. Additionally, our presenters discuss highlights from FDA and the Duke-Margolis Center’s public workshop on real-world evidence, which took place on July 11-12. Tune in to this discussion to learn more about FDA’s approach to real-world evidence, the impact on industry, and what companies should be thinking about in this space.
Transcript:
Kellie Combs: Hi, I'm Kellie Combs, a partner in the life sciences regulatory and compliance practice group at Ropes & Gray, and based in our Washington, D.C. office. Welcome to Non-binding Guidance, a podcast series from Ropes & Gray focused on current trends in FDA regulatory law, as well as other important developments affecting the life sciences industry. I'm here today with my colleague Sarah Blankstein, who's based in our Boston office and rejoined the firm last year after serving as commercial and regulatory legal counsel for a Boston-based biotech company. Today's podcast will discuss the use of real-world evidence in drug development, looking at FDA's real-world evidence framework and related guidance, and how industry has been using real-world evidence studies.
The 21st Century Cures Act, which was passed at the end of 2016, required FDA to establish a program to evaluate the potential use of real-world evidence to help support approval of a new indication for a drug, and to help satisfy post-approval study requirements. The Act also required the Agency to issue a framework for the Agency's real-world evidence program, which FDA did in December 2018. Sarah, to kick us off, what are some of the key takeaways from FDA's framework?
Sarah Blankstein: So FDA's framework did a number of things. One of the key things it did was to define the concepts of real-world data (RWD) and real-world evidence (RWE), taking a pretty broad view of what FDA considers to be reliance on real-world evidence to support an effectiveness determination for a drug. Taking a step back, real-world data is data relating to patient health status or the delivery of health care that is routinely collected from a variety of sources – sources like electronic health records, claims databases, and registries. Real-world evidence, on the other hand, is clinical evidence about a medicinal product derived from analysis of that real-world data. And reliance on real-world evidence can range from fairly minimal applications – say, a randomized controlled clinical trial where real-world data is used to develop enrollment criteria or select outcomes for the study, but is not actually generated in the course of that study. An example where real-world data is utilized in the course of the study would be a randomized clinical trial with a pragmatic study design where some of the outcomes, for example, are derived from electronic health records for the subjects. And then you can move into designs that rely more heavily on real-world evidence collected outside of the study – that would be, for example, a single-arm clinical trial that uses real-world data to develop an external or synthetic control arm. And finally, you can have something like an observational study utilizing real-world data, for example, from a claims database or electronic health records, with that evidence collected prospectively or retrospectively to support a determination of efficacy. Not surprisingly, FDA in its framework expressed much greater comfort with the study designs that rely less on real-world data – the RCTs I mentioned – and expressed pretty significant skepticism about observational real-world evidence studies. That skepticism about observational studies using real-world data has definitely received a lot of pushback from industry, including from PhRMA in comments on the docket for the framework.
In addition to defining real-world data and real-world evidence, FDA's framework also lays out, at a fairly high level, what the Agency's approach might be to assessing real-world evidence for effectiveness: that assessment will be guided by the fitness of the underlying real-world data, the scientific adequacy of the study design, and the conduct of the study, so things like monitoring and data collection considerations. The framework also discusses FDA's plans for stakeholder engagement, various demonstration projects to get more information on how real-world evidence might be used, and guidance development around real-world evidence. On July 11th and 12th, FDA and the Duke-Margolis Center held a public workshop as part of the engagement with stakeholders on real-world evidence that FDA promised in its framework. Kellie, what should we know about that workshop, and how might we expect it to impact FDA's thinking on real-world evidence in drug development?
Kellie Combs: Well, Sarah, this was a two-day workshop, but it was focused on a very narrow subset of real-world evidence – by that I mean real-world evidence that is generated within a randomized clinical trial. Now, we obviously don't have time here today to talk about everything that we learned from the workshop, but we can hit some of the high-level takeaways. Speakers and panelists included a number of people from FDA, such as Robert Temple, who's the CDER Deputy Director, and Peter Stein, who is the Director of FDA's Office of New Drugs, as well as several clinical trial experts from both industry and academia. In addition to discussing several examples of real-world evidence studies that have been conducted so far, and various scientific and statistical considerations associated with these types of studies, one clear takeaway from the workshop is that FDA is very open to hearing from industry with proposals for clinical trials that incorporate various elements of real-world evidence, particularly if they are randomized clinical trials. At the workshop, FDA really encouraged sponsors to reach out to review divisions as early as possible in the drug development process, and also to contact CDER's real-world evidence subcommittee of the Medical Policy and Program Review Council. This subcommittee was established a couple of years ago to guide policy development and provide advisory recommendations in this space, particularly as they relate to whether underlying data and study design elements may be appropriate to ultimately provide support for a regulatory decision about effectiveness.
While FDA expressed willingness at the workshop to work with industry on incorporating elements of real-world evidence in drug development programs, many in industry have pointed out that we should not actually expect industry to start conducting a lot of real-world evidence studies until we see significantly more clarity from FDA about how the Agency thinks about these studies. FDA may be willing, for example, to discuss study design and work with sponsors, but as a practical matter, there may be significant back-and-forth between the sponsor and the Agency before any sort of clarity or agreement is reached on study design, use of real-world evidence, and other issues that are critical to drug development in clinical studies, like endpoint selection. Many big players in industry may ultimately decide, at least in the short term, that it's more efficient to do a traditional clinical study until we get the clarity that we're looking for from FDA. At this point, we haven't seen a significant number of product approvals or supplemental approvals involving the use of real-world evidence, but we have seen some. And here, Sarah, I'd really be interested in your perspective on some of the ways we've already seen real-world evidence being incorporated into drug development programs and product approvals.
Sarah Blankstein: That's right, Kellie. Real-world evidence has been used to support several drug approvals, though the number is certainly still limited, and we've seen that in a few different ways. I would bucket it into three categories. The first is using real-world evidence for safety signal evaluation, either pre- or post-approval. Then, on the efficacy side, we've seen real-world evidence being used to develop a synthetic control arm for single-arm clinical trials, both for initial drug approvals and for supplemental approvals. Also on the efficacy side, real-world evidence from observational studies – looking at things like registries and claims databases – can be used to support an expanded approval for a drug product.
As I mentioned, real-world evidence has been used for safety assessments as part of product applications, either in post-market studies to further assess safety signals or in pre-market assessments of a safety signal seen in a clinical trial. Amgen, for example, used its deCODE genetics database, which includes the health care records of the entire population of Iceland, to assess a cardiovascular safety signal for its osteoporosis drug Evenity that was seen in one of its Phase III clinical trials. FDA has, of course, been doing a lot of its own work with real-world evidence on the safety side through its Sentinel system, and is generally more comfortable with the use of real-world evidence in safety assessments than in efficacy assessments.
On the efficacy side, as I mentioned, a number of drugs have been approved using real-world evidence to develop a synthetic control arm. This has really been limited to ultra-rare diseases and oncology indications to date, but we have seen a number of products approved in this way. FDA highlighted Amgen's Blincyto approval in its real-world evidence framework, noting that the oncology drug initially received accelerated approval based on a single-arm trial, the response rate of which was compared to historical data from E.U. and U.S. patient records. Further study in a randomized controlled trial was required by FDA in this case to verify the clinical benefit. Another example is Novartis' drug Zolgensma, which was approved in 2019 for spinal muscular atrophy and relied on a natural history study to provide context for the single-arm clinical trial results. Likewise, the approval of BioMarin's drug Brineura in 2017 for CLN2 disease was based on a non-randomized single-arm trial and a comparison with patients from an untreated natural history cohort. Both of these drugs, Zolgensma and Brineura, have labeling that discusses the natural history real-world evidence comparator arm.
And the last bucket I mentioned was reliance on an observational study to develop real-world evidence in support of a new indication for an already-approved drug. This is more unusual than something like synthetic control arms, but we did see a recent approval in April of this year for Pfizer's Ibrance to expand its indication to treat breast cancer in men. That approval was based on data from electronic health records and post-marketing reports of the real-world use of Ibrance in male patients sourced from three different databases – the IQVIA Insurance database, Flatiron Health Breast Cancer database, and the Pfizer global safety database.
Kellie Combs: And, Sarah, that's a really interesting point because, in at least some of the examples you've been describing, manufacturers have worked with health technology companies to develop the data sets. Is that a trend that we can expect to continue?
Sarah Blankstein: I think we can definitely expect that trend to continue. There have been a number of health care technology companies partnering with both industry and FDA to conduct real-world evidence studies utilizing vast databases of real-world data. In 2018, for example, Roche acquired Flatiron Health and its database of 2.1 million patients for $1.9 billion. And it's not only Roche that has been relying on Flatiron's database to collect real-world data – Flatiron has also partnered with other companies, including Pfizer and BMS. Another health data company, Aetion, is partnering with Brigham & Women's Hospital on an FDA demonstration project seeking to replicate and predict the results of over 30 completed and ongoing randomized clinical trials. And a new player in this space, Acorn AI, was launched this April by Medidata and has already brought on board FDA's former principal deputy commissioner, Rachel Sherman, as Chief Scientific and Medical Officer. Of course, the current principal deputy commissioner at FDA is Amy Abernethy, who came to the Agency from a position as CMO and CSO at Flatiron. So I think we can expect continued momentum on real-world evidence at FDA and in the industry.
It's not just in the regulatory approval and submissions context where industry is generating real-world evidence, though. Companies are using real-world evidence studies extensively in the commercial context as well. Kellie, how have you seen companies using real-world evidence studies outside the FDA regulatory approval and submissions context?
Kellie Combs: Well, of course, Sarah, as you know, we've spent a lot of time here at Ropes advising companies on product communications, whether scientific communications with payors or health care professionals, or even with patients in some cases. And companies have, for a very long time now, been using real-world evidence in the promotional context and in the context of scientific exchange, particularly in the payor communications space. Payors, of course, seek information from a variety of sources when making coverage and reimbursement decisions. And so for many years now, payors have been requesting information and evidence that may not appear in a product label, and in some cases even doing their own analyses of real-world evidence, including information that speaks to product effectiveness, safety, and value in the real-world setting. When real-world evidence is not available to payors from the companies themselves, payors have in fact conducted their own real-world evidence analyses, particularly when that is helpful for assessing the comparative effectiveness of products indicated to treat the same disease or condition. Just as one example, Kaiser described in its comments on FDA's real-world evidence framework that it had conducted an analysis of European real-world data related to Inflectra, the Remicade biosimilar, to assess comparative safety and efficacy to support conversions of patients in its network to the biosimilar product.
Companies' incentives to develop real-world evidence and use it in promotion, not just with payors but also with prescribers and other audiences, have really been bolstered in the past couple of years by two guidance documents that FDA recently released – the payor communications guidance and the guidance on communications consistent with labeling, both of which were issued in final form by FDA in the summer of 2018. The Payor Guidance provides substantially more clarity to manufacturers around health care economic information in particular, and a lot of health care economic information, or HCEI, relies in some way on real-world data. The Communications Consistent with Labeling Guidance is also critical when thinking about the use of real-world evidence in promotion, because it makes clear that evidence that is consistent with the label need not meet the substantial evidence standard that has traditionally been required for promotional claims about drugs and biologic products. The new standard, according to the guidance, is that the evidence be scientifically appropriate and statistically sound. And we help clients on a daily basis analyze data sets, including real-world data sets, to determine whether they meet that substantiation standard. Interestingly enough, one of the examples that FDA provides in this guidance relates to promotional discussion of information from a post-marketing registry study. Now, the example is specific to medical devices rather than drugs, but, of course, FDA is at least contemplating that real-world evidence could be used in promotion in this way.
Well, unfortunately, I think we're out of time for today. Thanks very much, everyone, for tuning in to our podcast Non-binding Guidance, which is brought to you by our attorneys in the life sciences regulatory and compliance practice at Ropes & Gray. For more information about our practice, or other topics of interest to life sciences companies, please visit our FDA regulatory practice page at www.ropesgray.com. You can also subscribe to Non-binding Guidance and other RopesTalk podcasts in Ropes & Gray's podcast newsroom on our website, or by searching for Ropes & Gray podcasts on Apple Podcasts or Spotify. Thanks again for listening.