BCTV: Learning By Doing -- Rethinking the rules dividing medical care from scientific inquiry

Learning By Doing

Transcript of BioCentury This Week TV Episode 141

 

GUESTS

Joel Kupersmith, Chief Research and Development Officer, Veterans Health Administration

Sean Tunis, Founder, President and CEO of the Center for Medical Technology Policy

Claudia Grossmann, Senior Program Officer, Institute of Medicine

 

PRODUCTS, COMPANIES, INSTITUTIONS AND PEOPLE MENTIONED

Veterans Administration

Hastings Center

Center for Medical Technology Policy

Institute of Medicine

 

HOST

Steve Usdin, Senior Editor

 

SEGMENT 1

 

STEVE USDIN: Can our healthcare system learn to fix itself and protect patients at the same time? I'm Steve Usdin. Welcome to BioCentury This Week.

 

NARRATOR: Your trusted source for biotechnology information and analysis, BioCentury This Week.

 

STEVE USDIN: Revelations of unethical medical experiments, especially horrific research on African-American men who were denied treatment for syphilis, led the United States to adopt strict rules to protect research subjects in the late 1970s. Separating medical practice from research is a core principle of those protections.

 

Patients cannot be enrolled in research without their informed consent, and research plans must be approved by oversight committees called Institutional Review Boards. But researchers now are using techniques that were unimagined four decades ago, like searching electronic medical records for patterns that will help identify which patients can benefit from a treatment or might be harmed.

 

In this way, medical practice is able to learn from its past. For example, the Veterans Administration is leading efforts to improve the quality of care based on data collected during routine clinical practice. Many ethicists, government officials, and academic researchers say it's time to reevaluate the rules.

 

They warn that requirements for informed consent and institutional review board approval could make it impossible to create the learning health care system of the future. But there are still concerns that relaxing the restrictions could harm patients. The Hastings Center, a bioethics research institution in New York, has published a set of papers arguing both sides of the debate, and today we're joined by three of the contributors.

 

Dr. Joel Kupersmith is the Chief Research and Development Officer of the Veterans Health Administration. Dr. Sean Tunis is the Founder, President, and CEO of the Center for Medical Technology Policy, a nonprofit focused on improving the quality and relevance of clinical research. Claudia Grossmann is a Senior Program Officer with the Institute of Medicine's Roundtable on Value and Science-Driven Health Care. She leads the roundtable's work on clinical effectiveness research and on using health information technology to create a learning healthcare system. I want to start with the basics. We're talking about the ethics of creating a learning healthcare system. So, what is a learning healthcare system?

 

JOEL KUPERSMITH: Well, a learning healthcare system is a system in which every aspect of clinical care becomes part of learning. So when the doctor sees a patient, or the health system runs a program, these are evaluated. These can now be recorded in electronic health records and other kinds of information technology. And they become part of learning to improve the system for the future.

 

STEVE USDIN: So that sounds like something that anybody would support. How is that in conflict with the ethics rules that we have today?

 

SEAN TUNIS: Yeah. Well, the current ethics rules, in some ways, depend on a couple of fundamental distinctions between what's research and what's clinical practice or healthcare delivery. And perhaps the most problematic of those, from the perspective of a learning healthcare system, is that research is typically characterized as an activity that has the intent to produce generalizable knowledge. And that's a key feature of what, historically, has characterized research.

 

But if you listen to this description of the learning healthcare system, the intention of the learning healthcare system is to produce generalizable knowledge that will then inform how patients are managed in the future. And so, in a sense, the whole notion of a learning healthcare system would be considered under the existing framework to be a research activity.

 

STEVE USDIN: So the idea, Claudia, would be that under that framework, everybody would be a research subject. And it would be untenable, because you'd have to get consent from everybody and have IRB approval for every activity, right?

 

CLAUDIA GROSSMANN: Yes. And a key component of that is that not all research is equal, obviously. So the system is set up to make a bright line distinction between research and not research. And what we have now is a series of activities that really encompass a gray area between the two.

 

And so subjecting research that poses minimal additional risk to patients to the same kind of oversight as a clinical trial of a new drug that has never been widely used creates an untenable system. There's just a lot of bureaucracy, and a lot of useful activities that could lead to more learning and improvement end up not being done, for reasons that have little to do with their potential benefits.

 

STEVE USDIN: And maybe one way to make this kind of real for people is to give some examples of what we're talking about. From the VA, Dr. Kupersmith, can you describe the Point of Care Program?

 

JOEL KUPERSMITH: Yes. The Point of Care research is research in which -- and I'll give you the exact example -- we're studying administration of insulin after a surgery in patients with diabetes. And there are various ways that doctors give this. They can give it based on the patient's weight, or what's called a sliding scale where they keep changing it.

 

So what we're doing is comparing those two ways of giving insulin. The doctor that treats the patient is the one that does the study. The study is totally recorded on the patient's own electronic health record and the data is collected from the patient's own electronic health record. So it's really part of the patient's clinical care.

 

It's just that two ways of giving insulin, which are commonly used everywhere, are being compared. And that is really right on the borderline, as Claudia was saying, between research and clinical care. Right now, we're treating it as research. There's informed consent. We go through the entire process. But as you can see, it's different. And the patients aren't being treated any differently than any other patients who are not in the research.

 

STEVE USDIN: So the decision between using one of these methods of administering insulin and the other is, in a way, kind of random, in that there's no evidence to suggest that one way is better than the other?

 

JOEL KUPERSMITH: That's correct. And it's random, so the doctor does not actually make the decision. The decision is made randomly. And so that's a difference. But other than that, the treatment is exactly the same, and the electronic health record records the same information.
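
A point-of-care study like the one Dr. Kupersmith describes hinges on randomizing patients between two accepted regimens and recording everything in the ordinary electronic health record. The sketch below is purely illustrative Python, not the VA's actual system; record_to_ehr is a hypothetical placeholder for an EHR interface, and the regimen names are invented labels for the two standard approaches discussed above.

    import random

    # Two commonly used post-surgical insulin regimens; both are accepted
    # standards of care, so neither arm deviates from routine practice.
    REGIMENS = ["weight_based", "sliding_scale"]

    def record_to_ehr(patient_id: str, field: str, value: str) -> None:
        # Placeholder: a real system would write to the EHR via its API.
        print(f"EHR[{patient_id}].{field} = {value}")

    def assign_regimen(patient_id: str, rng: random.Random) -> str:
        # The random assignment, like the rest of the encounter, is
        # written to the patient's own electronic health record.
        regimen = rng.choice(REGIMENS)
        record_to_ehr(patient_id, field="insulin_regimen", value=regimen)
        return regimen

    rng = random.Random()  # in practice, a vetted randomization service
    assign_regimen("patient-001", rng)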

 

SEAN TUNIS: And I think a key point in that example -- and I can give another example that's very similar -- but a key point is that, in general, these will be studies where the practices to which patients are randomized are all commonly used. So clinicians will do these things. They're approved by the FDA, and they're done consistent with medical practice. But we don't have good, robust knowledge about which is actually better for patients.

 

STEVE USDIN: So are there a lot of situations like that, where it's really kind of a flip of the coin between doing one thing and another thing for a particular patient who is in front of a doctor?

 

SEAN TUNIS: Oh, yeah. I mean, there's an almost infinite number. Another example, not too dissimilar from this one, involves a blood pressure medicine that's been approved by the FDA. It's not well known whether it's better to give that drug at nighttime or in the morning, in terms of how the patient's blood pressure is controlled over a 24-hour period.

 

So a study to actually compare a morning dose versus a nighttime dose of the same blood pressure medication would be quite useful, and potentially important. And either of those is something that any given clinician would choose just based on their own judgment. So the idea that a study like that would require the same sort of ethical oversight as a randomized Phase III trial of a new product that isn't yet FDA approved doesn't make a lot of sense.

 

STEVE USDIN: And we're going to talk more about that, and about what the obligations of the system are, in a moment. The Hastings Center's report proposes seven ethical obligations for integrating research and medical practice. We'll discuss what they mean shortly. First, here's what the authors say about the obligations of patients.

 

NARRATOR: You're watching BioCentury This Week.

 

SEGMENT 2

 

STEVE USDIN: We're discussing the ethics of 21st century medicine with Joel Kupersmith, Sean Tunis, and Claudia Grossmann. We just saw a slide there that said that the authors of one of the papers in the Hastings Report are suggesting that patients have a moral obligation, an ethical obligation, to contribute knowledge to a learning healthcare system. What does that really mean?

 

JOEL KUPERSMITH: Well, it means that they have an obligation to participate in learning and to have their information and some other aspects be part of it. My own view on that -- and I wrote it in the commentary that I did on that -- is that I do think patients have a moral obligation. But they have a right not to be part of it, in my view. And if they wish not to be part of it, they can express that. And this is related, of course, to informed consent.

 

But I think there is a moral obligation. But it's not the same as the moral obligation on the physician to do this, because the patient incurs at least some bother, or possible bother. I don't think the patient incurs much risk, but there are risks of loss of privacy in databases and so on. So I think the patient is in a different position. I have a slight disagreement on that with others.

 

STEVE USDIN: OK, Sean, Claudia, what do you think about it?

 

SEAN TUNIS: Yeah, I think the moral obligation, in a sense, derives from a notion of common purpose. And, in a way, given that everyone who is a user of the healthcare system, a patient, is benefiting from knowledge that's generated from the care of others, then there's a sort of a reciprocal obligation to be willing to consider participation in helping to generate that knowledge.

 

And I think the subtleties around how much opportunity there is to opt out and in what ways patients are informed that their data, the data from their care will be used for learning, I think those things remain to be debated and discussed. But I think that the general notion is important.

 

CLAUDIA GROSSMANN: Yeah, and I would agree. And I would add that nobody's suggesting that everybody should be part of a clinical trial. But it's a cycle. You can't learn without having an input of information, and that learning is only made richer by greater participation. And so, as patients will benefit from the learning, there is somewhat of an obligation to participate in the input.

 

STEVE USDIN: So isn't there a reciprocal obligation for the system to actually make use of that information?

 

CLAUDIA GROSSMANN: Absolutely.

 

STEVE USDIN: Because now -- and we've talked about this on the show before -- most clinical trials, more than 50% of them, never get reported, at least not in a timely way. And a lot of them never even get completed. How does that fit into this?

 

JOEL KUPERSMITH: I absolutely agree with that. I think that there are two aspects. One, it has to be of high scientific quality so it's usable. And the other is there has to be an intention to use it. Otherwise, there's no point in doing it.

 

I want to just say one more thing. I do agree that the notion of the common good is what creates a moral obligation on the part of the patient. I think the other side of that is autonomy, and that the patient has a certain right to autonomy. And that's where this discussion is. And I also agree that a lot of the discussion is going to center around how you approach this -- not so much what the basic philosophy is, but how you approach consent and that sort of thing.

 

STEVE USDIN: And what are the kind of ideas about changing consent and how you might do that?

 

SEAN TUNIS: Yeah. Well, I think one approach would be that patients would be informed, through signs in their rooms, newsletters, and other forms of communication, that the institution where they're receiving care is a learning health institution, and that, in many cases, data generated by their care may be used in one study or another. And, in some cases, there will be a notification that a particular study is going on. But what wouldn't be routine, for certain kinds of studies, is asking individual patients for consent to use their data for that particular study.
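
To make the model Dr. Tunis outlines concrete, here is a minimal sketch, in hypothetical Python, of a notification-plus-opt-out policy: data from routine care flows into low-risk studies unless the patient has recorded an opt-out, while higher-risk studies still require individual informed consent. All names and structures here are invented for illustration, not any institution's actual rules.

    # Patients who have declined to have their care data used for learning.
    opt_outs = {"patient-007"}

    # Studies that still require traditional individual informed consent.
    HIGH_RISK_STUDIES = {"new-drug-trial"}

    def may_use_data(patient_id: str, study: str, has_consent: bool) -> bool:
        if study in HIGH_RISK_STUDIES:
            return has_consent                  # individual consent still required
        return patient_id not in opt_outs       # low-risk: notification + opt-out

    print(may_use_data("patient-001", "insulin-timing", has_consent=False))  # True
    print(may_use_data("patient-007", "insulin-timing", has_consent=False))  # False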

 

STEVE USDIN: And is there a notion, maybe, of some kind of consent that would allow things to happen in the future? Because one of the problems now, I think, is that people get consented for something particular, and then there's a question: can you use the database with their information for a different kind of study in the future?

 

CLAUDIA GROSSMANN: Yeah, absolutely. And, I mean, I think it harkens back to this need for an appropriate set of oversight mechanisms for the specific kind of study and the specific kind of use that we're talking about.

 

STEVE USDIN: The Hastings Report also says an ethical learning healthcare system has an obligation to address health inequalities in society. Here are the key points.

 

[MUSIC PLAYING]

 

NARRATOR: BioCentury, named the 2012 Commentator of the Year by the European Mediscience Awards for excellence in communications and clear, concise commentary.

 

[MUSIC PLAYING]

 

SEGMENT 3

 

NARRATOR: Now, back to BioCentury This Week.

 

STEVE USDIN: We're back with Joel Kupersmith, Sean Tunis, and Claudia Grossmann to continue our conversation about protecting patients and making medical progress. We just saw a graph there with some ideas about protecting against or addressing inequalities in health. Can you talk a little bit about how a learning healthcare system and the ethics of clinical trial participation, or trial research participation, can address inequalities?

 

JOEL KUPERSMITH: Well, first of all, we have a lot of information on patients now, with electronic health records and in some other ways, so we can look and see if there have been disparities in care. And one of the things about the VA is that the economic aspect is not there. So we can look directly at disparities in care among different groups of patients, apart from their economic status. So we do a fair amount of studies in this area.

 

And we've done intervention studies. We've looked at how to inform people of certain things. We've looked, for example, at how to discuss surgical knee replacement, how informed consent discussions might differ for African-Americans versus Caucasians, and how the information given might be presented in a somewhat different way. And we've done other studies along those lines. But this is a big area of research in the VA and, I think, elsewhere as well.

 

STEVE USDIN: We mentioned, in the introduction, the Tuskegee experiment. And that's kind of the elephant in the room for any of these discussions about inequalities and about research subject protections. Is that something that has to be explicitly discussed to move forward here?

 

SEAN TUNIS: Well, I think the main point to make about Tuskegee, which was a large part of the impetus of the original framework for informed consent and IRB review, is that there's nothing in the new proposed framework of a learning healthcare system that would lower the amount of oversight that would go into high risk studies. And, of course, a study like the Tuskegee study would never be approved under any conceivable IRB system.

 

But one of the legacies, of course, of Tuskegee and many other experiences is some greater level of distrust of the research enterprise amongst minority communities and underrepresented populations. And I think part of the inclusion of special attention to disparities and vulnerable populations is to just highlight the need to be sure that they're not left out of the learning healthcare system and the potential benefits.

 

STEVE USDIN: The whole idea of the learning healthcare system, when I think about it, is really about integrating electronic health records -- a big part of it is, anyway. And when you're talking about that, you're talking about another kind of risk, aren't you? The risk of loss of privacy.

 

CLAUDIA GROSSMANN: Yeah, absolutely. I think it's an evolving discussion. Certainly privacy and the way that people think about privacy now is very different than it was several years ago, even five years ago. And it's probably going to change in the future. But certainly it's something that exists, and it's something that is well taken care of, in many ways, with some of the existing rules.

 

And one of the other problems is that some of the oversight rules -- for example, HIPAA and the Common Rule -- are not necessarily consistent. So a lot of systems are stuck in a place where they don't have clear guidance on where to go.

 

STEVE USDIN: And HIPAA is the, well, I can't remember the acronym now, but it's the privacy rule?

 

CLAUDIA GROSSMANN: Right.

 

JOEL KUPERSMITH: It's part of HIPAA. The privacy rule is part of HIPAA. Yeah, obviously, we have had electronic health records for 15 years, so it's a big concern for us. I think that there is a lot more possibility of loss of privacy as many of these large databases become public. People can compare, for example, voter registration lists with electronic health record data, and all kinds of things, so that they can re-identify people.
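
The linkage attack Dr. Kupersmith alludes to is well documented: joining a "de-identified" health dataset to a public list, such as a voter roll, on quasi-identifiers like ZIP code, birth date, and sex can uniquely re-identify patients. Here is a toy Python sketch of the technique; all data are invented.

    # A "de-identified" health record still carries quasi-identifiers.
    health_records = [
        {"zip": "20001", "dob": "1960-03-14", "sex": "M", "diagnosis": "diabetes"},
    ]

    # A public voter roll pairs names with the same quasi-identifiers.
    voter_roll = [
        {"name": "John Doe", "zip": "20001", "dob": "1960-03-14", "sex": "M"},
    ]

    QUASI_IDENTIFIERS = ("zip", "dob", "sex")

    def link(records, roll):
        # Return (name, record) pairs where quasi-identifiers match uniquely.
        matches = []
        for rec in records:
            key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
            hits = [v for v in roll
                    if tuple(v[q] for q in QUASI_IDENTIFIERS) == key]
            if len(hits) == 1:  # a unique match re-identifies the patient
                matches.append((hits[0]["name"], rec))
        return matches

    for name, rec in link(health_records, voter_roll):
        print(f"{name} re-identified with diagnosis {rec['diagnosis']}")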

 

STEVE USDIN: So I want to go to another issue that was brought up in the Hastings report, which is the notion that everyone in the healthcare system has an obligation to contribute to learning -- and that means providers, but also payers and hospitals, for example. Do you think that's true? And how does it work?

 

JOEL KUPERSMITH: Well, I think it is true. And I think the VA has acted on that. In fact, it was the brilliance of the people who created the VA system that they built it as a learning healthcare system back in the 1940s. We have a research program. We have a teaching program. And now, in the last 15 years, we've collected large amounts of data, which we use. So that's very much a learning healthcare system.

 

SEAN TUNIS: Yeah. And I would focus on, say, the obligation of clinicians to contribute to learning. I think that's really, mainly, an extension of the obligations of professionalism. But also the primary commitment to do what's best for their patients. Not just the individual that's in front of them at the time, but all the patients that they care for. And clearly the ability to learn to be able to provide better care in the future is part of that obligation.

 

STEVE USDIN: But the report also suggests that that obligation falls on payers, and even on players like drug companies. Is that realistic? And how does that fit into the kind of healthcare system that we have today?

 

CLAUDIA GROSSMANN: Sure. I mean, I think there are some complications in there, and there are some competing interests, for sure. But in the end, the idea behind a learning health system is really one that can continuously improve. And that's really to the advantage of all players -- higher efficiency, better quality, higher value overall is something that, I think, is a shared priority for all.

 

SEAN TUNIS: And maybe just one additional thought. While a lot of this is assumed to derive from some degree of altruism, there is also self-interest. Specifically, for the life-sciences industry, a learning healthcare system could be a tremendously important platform for developing products more efficiently, and for demonstrating their relative effectiveness and value. So there's a self-interest argument for a learning healthcare system from that perspective, too.

 

STEVE USDIN: And doesn't that also suggest a greater degree of transparency?

 

SEAN TUNIS: The need for a greater degree of transparency?

 

STEVE USDIN: Yeah.

 

SEAN TUNIS: Yeah, I would say so. And that's, obviously, a whole separate conversation. But given that there's a combination of self-interest and altruism here, and that central to all of it is high-quality, meaningful data, I think the issues of transparency, and of how that data is shared and protected, are critical.

 

STEVE USDIN: We're going to be right back with some final thoughts about blurring the lines between research and care.

 

NARRATOR: Now in its 21st year, visit biocentury.com for the most in-depth biotech news and analysis. And visit biocenturytv.com for exclusive free content.

 

SEGMENT 4

 

STEVE USDIN: Let's get some final thoughts from Joel Kupersmith of the VA, Sean Tunis of the Center for Medical Technology Policy, and Claudia Grossmann from the Institute of Medicine. I want to ask all three of you maybe to start with, is it necessary to have either legislation, or changes in regulation, changes in the Common Rule in order to make some of these changes that we've been talking about today happen?

 

JOEL KUPERSMITH: I think a lot can be done just by changes in practice, but I don't think it all can be. So I think there will have to be something else that's done. I don't know offhand what form it should take -- we often in government talk about regulation and rule-making and laws and whatnot, and how binding they are -- or, not how binding they are, but how permanent they are, in a way. So I do think something else has to be done here to reinforce this concept. And if I -- can I just --

 

STEVE USDIN: Yeah, sure.

 

JOEL KUPERSMITH: I also want to say that we're not trying to change the system in the sense of having no regulations -- just a different kind of regulation. And one thing that's changed over the years since the Common Rule and the other bases of these protections were put in place is that clinical oversight is much better than it used to be. We have a lot of quality reviews, and that sort of thing. So I think that has changed enough that it can be the center of regulation here.

 

SEAN TUNIS: I do think there's a lot of room for improvement within the existing regulatory framework, although I agree that ultimately it will need to be revisited. But there are enough examples of institutional review boards and health systems that have very successfully integrated learning into their delivery that it's clearly possible, within the existing regulatory framework, to have this kind of work go forward. What we are seeing too much of, though, is folks who have to make up workarounds to pretend that what they're doing is not meant to produce generalizable knowledge, so that it doesn't come under the authority of the IRB. Those kinds of workarounds take a lot of resources and time, and I think they ultimately put patients at higher risk.

 

CLAUDIA GROSSMANN: Yeah. I would agree. I think that there are clear examples. And there's clear room to move and to operate in the current system.

 

But I think there are two fundamental concepts that, in the end, are going to need some sort of rethinking. The first is this bright line between research and care, with everything generalizable lumped into research. The second is the risk issue: not everything in that one research bucket is the same. So let's have a smarter way of oversight, one that allows for more efficiency and lets the resources and efforts of IRBs be dedicated to the activities that really are high risk, and so protects patients where they need it.

 

STEVE USDIN: So the idea there really is to be able to tie regulation to risk?

 

CLAUDIA GROSSMANN: Exactly.

 

STEVE USDIN: Well, that's this week's show. I'd like to thank Joel Kupersmith, Sean Tunis, and Claudia Grossmann. Remember to share your thoughts about today's show on Twitter. Join the conversation by using the hashtag #BioCenturyTV. I'm Steve Usdin. Thanks for watching.