The Ropes & Gray Decoding Digital Health podcast series discusses the digital health industry and related legal, business and regulatory issues. On this episode, IP transactions partner Regina Sam Penti joins the Digital Health Initiative co-leads, Kellie Combs, Christine Moundas and Megan Baca, to discuss the increasing prominence of artificial intelligence (“AI”) and machine learning technology in the health care and life sciences industry. The discussion reviews key applications of AI, the IP protections available for machine learning, transactional and licensing issues that arise in the context of AI, and the health care and FDA regulatory issues to consider.
Transcript:
Kellie Combs: Welcome to Decoding Digital Health, a Ropes & Gray podcast series focused on legal, business and regulatory issues impacting the digital health space. My name is Kellie Combs, and I am joined today by my co-leads of the Ropes & Gray Digital Health Initiative, Christine Moundas and Megan Baca, as well as Regina Sam Penti from our intellectual property transactions group. Ropes & Gray's Digital Health Initiative comprises a cross-practice team of attorneys who advise pharma and biotech, medical device, and health technology companies, investors, and others on a variety of legal and business issues that arise in digital health transactions, litigation, and regulatory matters. On this episode, we will discuss the increasing prominence of artificial intelligence and machine learning technology in health care and life sciences. Before we get started, let's take a moment and introduce ourselves to our listeners. Megan, would you like to go first?
Megan Baca: Hi, everyone. My name is Megan Baca, and I am a partner in Ropes & Gray's Silicon Valley office. My practice is in intellectual property and technology transactions, including complex licensing and collaborations in life sciences. I represent pharmaceutical and biotech companies, as well as software and other technology companies. My background is originally in computer science, and with my range of experience across life sciences and health care, I am well situated to help co-lead Ropes & Gray's Digital Health Initiative, which I do together with Christine and Kellie. Christine, why don't you go next?
Christine Moundas: Thanks, Megan. My name's Christine Moundas. I'm a partner in Ropes & Gray's New York office. I sit in the health care practice and I also actively participate in our data practice. Generally, I represent digital health companies large and small, health care providers, pharmaceutical companies, medical device manufacturers, and other startups in the space. I focus on regulatory, transactional, and data matters. Before joining Ropes, I worked at the HHS Office of Inspector General. Kellie, would you like to do your introduction?
Kellie Combs: I'm a partner in the life sciences regulatory and compliance practice, based in Ropes' Washington, D.C. office. I provide legal and strategic advice to pharmaceutical, biotechnology, and medical device companies, as well as hospitals and academic institutions, on a broad range of FDA regulatory issues. With respect to digital health in particular, I have extensive experience advising clients on product development and regulatory classification issues, as well as regulatory strategy and post-approval compliance. Let me now hand it over to Regina.
Regina Sam Penti: Thank you, Kellie. I am Regina Penti. I'm a partner in our tech transactions practice group here at Ropes & Gray. I focus primarily on advising emerging tech companies and their investors on IP and technology-related issues across the entire corporate life cycle. I regularly advise in connection with structuring and negotiating strategic transactions, such as mergers, acquisitions, asset purchases, and bet-the-company collaborations that are driven by intellectual property, technology, and data. I also provide strategic counseling regarding the development and management of critical IP and data assets, and it's in that context that I focus on a lot of AI-related issues on the IP side.
To kick us off today, I wanted to share a brief overview of what we mean when we say “AI.” I think of AI as the demonstration of intelligence by machines. It generally refers to the concept of machines mimicking human intelligence, such as by learning and problem solving. However, some will argue that this definition underestimates the power of AI, which can surpass human intelligence, as we've seen in some cases. So, one way to think about AI is as systems that change their behavior, without being explicitly programmed to do so, based on data that's collected, usage analysis, and other observations. AI learns by using large collections of data. And so in many ways, the AI revolution that we're seeing has been enabled by our ability to generate very large amounts of data, as well as the fact that we can now harness that data at a small fraction of the cost, using relatively simple, everyday devices like cell phones and laptops.
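To make that distinction concrete, here is a minimal sketch in Python, using scikit-learn purely for illustration (the toy data and the choice of model are our own assumptions, not anything discussed on the episode). Rather than hand-coding decision rules, the program fits its behavior to example data:

```python
# Minimal illustration: behavior learned from data rather than hand-coded rules.
# Hypothetical toy example for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "training data": feature vectors and the labels the model should learn.
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([0, 0, 1, 1])  # two classes to be separated

# No classification rules are written by hand; the model infers its own
# parameters by fitting to the training data.
model = LogisticRegression()
model.fit(X_train, y_train)

# The learned behavior then generalizes to new, unseen inputs.
print(model.predict(np.array([[0.15, 0.15], [0.85, 0.85]])))  # expected: [0 1]
```

Feed the same code different training data and its behavior changes without any change to the program itself; that is the sense in which these systems change behaviors without being explicitly programmed.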
With that, I wanted to turn it over to you, Christine, to tell us a bit about some key applications of AI in the health care and life sciences space.
Christine Moundas: Sure. I think what we're seeing is really an explosion of AI applications in the health care and life sciences space, and the applications are really varied. We're seeing everything from artificial intelligence being implemented in the drug development process, to radiology and imaging applications, to pathology applications, to other things like assisting clinical decision making or aiding robotic surgery. The players that we're seeing in the space are increasing every year, and that acceleration has really shifted how the market approaches these deals. Now, we have a cumulative experience with these AI applications that we didn't have even just a few years ago. The hopes for these initiatives are also varied. Sometimes it's to accelerate drug development. In other cases, it's to make diagnostic procedures more accurate. Other times, it's simply to increase efficiency in the health care space. But overall, we think we've reached a critical mass of knowing how to tackle and address the different regulatory and transactional issues that we're coming across. So Regina, do you want to talk a little bit about some of those key issues?
Regina Sam Penti: Sure. Thank you, Christine. I wanted to touch a little bit on how companies that are generating, developing, and deploying AI systems think about IP protection, and some of the challenges that they face. For a long time, the gold standard for protecting technological innovations, such as the algorithms used in AI, has been patents. But existing patent frameworks, in addition to being quite expensive for many companies, are generally not that well suited to the protection of AI systems, and that's due to a number of reasons. One is that oftentimes, the core of the AI innovation is the machine-generated program that comes out after the algorithm has ingested the data. Many patent regimes do not recognize machines as inventors, and so, that raises some really fundamental issues when you think about how you protect something where the core innovation, the crux of it, really is generated by a machine. In addition, it has historically been difficult, not just for AI systems but for software in general, to obtain and enforce broad patent claims for software innovations. This is due in part to changes in the U.S. patent system, including the AIA (the America Invents Act), which expanded administrative paths for challenging patents, including business method patents. And we've seen similar issues arise in connection with copyright law, which is an alternative path that one could use to protect software innovations in some cases. Many of you may recall the so-called “monkey selfie” case, where the Ninth Circuit held that a monkey could not own the copyright in a picture it had taken.
So, where does that leave us? I should point out that some of the issues that I described above aren't necessarily universal. There are some jurisdictions that are more welcoming of AI innovation than others, and the law is certainly evolving and trying to catch up with this rapidly growing technology. So, it's certainly worth exploring the specific patent rules of the jurisdictions that are of interest when you're thinking about filing a patent application for AI. In addition, when it comes to IP protection, it really helps to think of the AI system in terms of its components. You have the software algorithm that's human-generated, you have the data that's fed to that algorithm, and then you have the output that's generated by the machine after the data is processed by the algorithm. When it comes to IP protection, you really want to think carefully and specifically about each of these components, because IP regimes treat them somewhat differently. A sound IP protection strategy will be multifaceted, and so, it will address each of these components. For example, for the software algorithm, trade secrets can be particularly impactful where you determine that patents aren't well suited, or even as a complement to patent protection where they are. A few years ago, Congress passed the Defend Trade Secrets Act, which provides a federal cause of action and strong remedies for trade secret misappropriation. And some states also have their own statutes. So certainly, AI systems are well suited to be protected as trade secrets, particularly since a lot of times the secret sauce of the algorithm, the human-generated code, doesn't need to be exposed to the consumer of the algorithm.
Many of the elements of AI systems can be protected. We're talking about the structure and components of things like your neural networks, the training sets and the test data, the software code, and the algorithms that drive the AI system. One example, just to put a finer point on it, comes from a case involving an AI-driven online chat platform. The Ninth Circuit held that XML data generated by the chat platform's analytics could constitute a trade secret under New York law, as it reflected the application of the plaintiff's rules and models to test real-world situations. That's just one example, but there are many instances. The key to being able to avail yourself of trade secret protection is to make sure you have the right practices in place. So, you want internal controls, you want to restrict access to the source code to those who absolutely need it in order to perform their jobs, and you want to make sure that you've taken reasonable steps with respect to users and other external actors to protect the software and other components. Of course, AI software can also be protected by copyright law, although that protection would not extend to the functionality of the software, because copyright is more about creative expression, and so, it can be somewhat limited.
Turning to the data—protecting the data requires a slightly more nuanced approach. In the U.S., and really in most global IP frameworks, there is no built-in statutory protection for data, and so data protection, from an IP standpoint, is primarily driven by contracts. Careful thought is needed to ensure that contracts are drafted to protect the data, both the data that is generated by the algorithm and the data that is used by the algorithm.
I will now turn it over to Megan to talk about some transactional and licensing issues that come up in the context of AI.
Megan Baca: Thanks, Regina. Next, we thought we would highlight some of the interesting transactional and licensing issues that come up in AI matters. For the purposes of the discussion here, I'll focus on agreements between the AI company, the AI provider, on one hand, and its partners, or customers, or users on the other. These could be structured as license agreements, or services agreements, or collaborations—there's really quite a wide variety these days as to how these agreements get structured. But for the purposes of the conversation, let's call the AI company the “provider,” the third party the “user,” and they'll sign, let's call it, an “AI agreement.” There are many important agreements relating to AI—you can have data licenses for the inputs, you can have services agreements, you can have agreements regarding training the algorithm—but for now, let's just talk about the agreement between the AI provider and its ultimate users. So first, I'll think about some of the key issues from the perspective of the AI provider, and then from the user's perspective.
If you are an AI provider, first, I think it's fair to observe that you don't typically grant a license to the AI software itself. So, it becomes pretty important how you define the software and the platform, and what the scope of that license is. Absent some sort of co-development arrangement, the source code would typically stay confidential to the AI provider and would not be accessed in any way by the user. If, in a co-development or other unusual arrangement, you do provide access, the license should certainly contain adequate restrictions, field limitations, and territorial restrictions that match the business relationship. As a footnote to that, I would observe that you sometimes see escrow arrangements in software licenses where, for example, a key customer of a software provider might have the source code put into escrow in case the software company goes out of business. In my experience so far, I haven't run into escrow arrangements with AI companies. That certainly could change, but because this is not your typical service arrangement, where static software provides a service that could easily be taken over, I think AI may be treated differently, and we may see fewer escrow-type relationships.
Second, there will likely be some form of basic access rights for the user, with respect to either something like a web portal or the output from the AI. If it's a web portal, one important thing to think about is how the terms and conditions on the web portal, including the privacy policies, dovetail with the partnering agreement, the AI agreement. You want to make sure that those do in fact dovetail correctly, without conflict, in terms of how the content on that web portal can be used.
Third, I think it's important to realize you do need a license from your users to whatever inputs they are providing, whether they be images, or video, or data feeds, or other information that feeds into the AI. So, think carefully about how to define those inputs in the contract. You would also want corresponding representations and warranties on things like ownership of that content and the rights to use and disclose it, and perhaps other reassurances on some of the compliance matters that Christine will talk about next.
Finally, you do frequently need to give your user a license to the outputs, and those should be specifically defined: What exactly are the outputs? How do you define them in a clear way that makes them distinct from the AI itself? And then, how can the user continue to use the outputs, either on a one-time basis or for ongoing use? Should it be internal use only? Should it be broader? That brings up one of the more interesting questions when it comes to the outputs, and that is: What should be the downstream economics of those outputs? If the user has the ability to really use, leverage, commercialize, and draw economic benefits from the output, I think it's important for the AI company to consider whether it should have some reasonable share in those economics. That might apply to the output itself, but perhaps even more broadly to insights that are drawn from, or inventions that are created from, the AI output, which, of course, gets difficult when those are unpredictable, downstream, hypothetical rights. One way to handle this might be, instead of trying to anticipate every possible downstream situation or context that might give rise to economics, to simply license the output for internal use only, and then require the user to negotiate any rights for downstream uses if they think they're going to take it that way, at which point you can evaluate the economics.
So with that, let's switch gears to the user side. If you are the recipient of these services or of the output, in addition to getting the rights to the output that you want and need, and negotiating terms and economics, I think one of the more interesting topics is the real value of the inputs that you, as the user, are providing. These could be images, or video, or data, etc. On one hand, it might be that you're simply one of a thousand customers providing data on which this AI will run, processing it to create some output for you. Or it could be that what you're providing is really rare, unique, valuable, and useful—it could be a highly curated, highly rare data set that no one else has. In that case, it is really important to think about: What is the value to the AI company of having access to that kind of data, which could then be used to improve and create new versions of, new features of, and improvements to the AI software?
Now, there is no universal market for this yet, and no very predictable terms for the type of data that a user might provide. But it is, I think, important to consider the menu of options available—things like a discounted service, if something of value is being provided to the AI company, or a service credit. AI companies are also coming up with programs for more interconnected contributions. If a user, for example, suggests a new feature, or provides data that gives rise to a new service or a new improvement to the technology, perhaps the user in some cases could earn a cut of the AI company's economics attributable to that feature or service in the future. Again, this is all highly fact-specific and highly negotiated, but as a user of another company's AI, it's an important factor to consider. There are lots more interesting issues to explore—complexities around joint ownership, AI improvements, and lots of non-IP provisions of these agreements. But I hope that gives listeners a sense of how interesting and unique the transactional issues facing AI companies and AI users are these days.
With that, I will turn it over to Christine to discuss some of the health care and regulatory issues that arise in this context.
Christine Moundas: Thanks, Megan. We are typically involved in helping our clients navigate all parts of the process. Typically, that means trying to structure a deal for them, whether at the development or deployment phase, that not only meets their business needs and their IP goals, but also ensures that they're compliant with the relevant regulatory restrictions that are sometimes in place. Generally, this starts all the way back at the training of the AI. This is frequently a hot-button area, because we need to make sure that the training of the AI is done in a way that's compliant with applicable laws, so that the AI that's ultimately created is not tainted by some underlying regulatory issue or a claim that the developers didn't have the rights to use the underlying data, specimens, or images to train the AI in the first place. So, when we look at that, what we first need to analyze is: What is the data that's going into the training of the AI, and what's the context around the training? Frequently, that means we have to deal with de-identified data from a HIPAA perspective. There's more complexity if you're looking at European data and the GDPR. From the U.S. perspective, you're usually trying to make sure that you first satisfy the HIPAA de-identification standard, and that even if the data is de-identified, you still have the appropriate rights to use the data in the way that's contemplated. For certain types of data, there might be regulatory issues that make the use of that data even more restricted, so confirming that data is de-identified under HIPAA alone is not sufficient. Sometimes, for instance, if you have data that came from another clinical trial, there could have been consents from the research subjects in place that say the data can't be used for further research or other purposes. So, you really have to do a good amount of diligence and make sure that there are not additional restrictions on the data to begin with.
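For listeners who want a feel for what HIPAA Safe Harbor de-identification involves in practice, here is a highly simplified, hypothetical sketch in Python. The field names are our own invention, and the sketch covers only a few of the 18 identifier categories that the Safe Harbor method requires removing; it is illustrative only, not a compliance tool:

```python
# Highly simplified, hypothetical sketch of HIPAA Safe Harbor-style
# de-identification. The Safe Harbor method requires removing all 18
# identifier categories; this toy example handles only a few of them.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "full_face_photo",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers in one record."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Date elements other than the year must be removed; keep only the year.
    if "admission_date" in clean:
        clean["admission_year"] = clean.pop("admission_date")[:4]
    # Ages over 89 must be aggregated into a single category.
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"
    return clean

record = {"name": "Jane Doe", "ssn": "000-00-0000", "age": 93,
          "admission_date": "2021-03-14", "diagnosis_code": "E11.9"}
print(deidentify(record))
# -> {'age': '90+', 'diagnosis_code': 'E11.9', 'admission_year': '2021'}
```

A real de-identification effort would need to address all 18 identifier categories, confirm there is no actual knowledge that the remaining information could identify an individual, and, as noted above, still verify that contractual and consent-based restrictions permit the intended use.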
Then, you also need to look at whether the training activities themselves need to be conducted under a research protocol pursuant to IRB oversight. I'll give an example—in the pathology space, frequently, you're not just using de-identified images. Sometimes you actually have to do some re-sectioning or re-staining of sample blocks of human tissue or otherwise, then image that material, and then create the de-identified images. The act of even performing that re-sectioning or re-staining could be viewed as something that needs to be subject to a protocol and IRB oversight. So, we need to make sure that, before you even get to the data, the data is being collected and harvested in a way that's compliant. Sometimes, that requires working out with the parties who's going to undertake that responsibility, who's going to establish the protocols, who's involved in the research, who the PIs are, who's sponsoring the research, etc. Then, once you get out from under that, you also need to think about AI partnerships where there are two different parties, and the parties are bringing different things to the table. Frequently, we work on collaborations between AI companies and academic medical centers—you have a tax-exempt entity partnering with a for-profit entity, and you need to be aware of issues related to the AMC providing things of value to that for-profit company, making sure that its data is appropriately valued, and being able to make the argument that there is a fair market value exchange. Increasingly, the fair market value issues, the private inurement issues, and the tax-exemption issues are all coming to the fore, and both parties need to be comfortable that they can say the exchange and the transaction were compliant. So, those are just a couple of examples on the training end and on establishing the partnerships that sometimes feed into the training.
Once we get to AI that's developed and on the threshold of being approved and deployed, I frequently work with Kellie, because then we get into the FDA regulatory framework, under which AI and machine learning can be treated as a medical device. So, with that, maybe Kellie, you could talk a little bit more about what we think about from an FDA perspective?
Kellie Combs: Sure. Thanks, Christine. There are a variety of FDA regulatory issues that are implicated by AI and ML technology. Back in March, my colleagues Greg Levine, Sarah Blankstein and I actually did a deep dive podcast on FDA regulation of this type of software as a medical device, on another Ropes podcast called Non-binding Guidance. If you are an FDA regulatory attorney or just need to know how to spot potential FDA issues in a deal or project that you're working on, I'd recommend that you give that deep dive podcast episode a listen. So today, I'm just going to hit a few topics at a very high level.
First, the FDA regulatory framework was just not really built to handle software as a medical device that incorporates AI and ML technology, in particular when that device is continually evolving, undergoing updates and improvements over time. There are lots of questions about whether and when changes to an algorithm may ultimately impact the safety or effectiveness of a device, such that FDA review of a new submission should be required. Additionally, there are all sorts of issues related to early development, validation, and training of algorithms that may implicate FDA regulations, as well as many other issues that Christine has already touched upon. You should also consider whether training studies must be submitted to FDA or otherwise held for FDA inspection. It's important to consider these sorts of issues whether your company is developing its own technology or licensing or acquiring the technology from another party. Even though the Agency has been focused on policy development in this space for quite some time, most recently with its AI/ML Action Plan, there's still a lot of ambiguity here. So, we certainly recommend that developers liaise with FDA early and often to ensure that the agency's expectations with respect to regulatory requirements are understood.
As Christine, Megan, Regina, others, and I work together on AI and ML projects, we're commonly thinking about key diligence considerations in the context of a deal and what we should be focused on. With respect to FDA issues in particular, we're thinking about things like the proposed use of the technology, both currently and eventually. So, for example, is it for research or diagnostic use? It's important in this context to consider longer-term possibilities, like next-generation technology, as well. We're also thinking about what sorts of data have already been collected, and what's planned for an FDA submission, if applicable. We're thinking about whether the company has already sought FDA feedback in some form, for example, through the pre-submission process. And then, we're also really attuned to whether there's other relevant regulatory or compliance history that should come into play. So, for example, giving a very close review to any FDA correspondence, and thinking about things like FDA inspection history, as well as internal policies and procedures to assure FDA compliance. Oftentimes, we find in these deals that developers of technology may not be fully up to speed on the FDA implications and have not given these issues the careful attention they deserve, even in the early stages of the process.
With that, I think we're unfortunately out of time today. Thanks so much to Regina, Megan, and Christine for joining us. And thanks to our listeners. We appreciate you tuning into our Decoding Digital Health podcast series. If we can help you navigate any of the topics that we've been discussing, please don't hesitate to get in touch with us. For more information about our practice or other topics of interest in the digital health space, or to sign up for our mailing list with access to alerts and updates on medical developments, as well as invitations to digital health-focused events, please visit ropesgray.com/digitalhealth. You can also subscribe to this podcast series wherever you regularly listen to podcasts, including on Apple and Spotify. Thanks again for listening.