Emotional Contagion -- Facebook's emotional research project: lessons for the biomedical community

Emotional Contagion

Transcript of BioCentury This Week TV Episode 203

 

GUESTS

 

Dr. Michelle Huckaby Lewis, Research Scholar, Berman Institute of Bioethics, Johns Hopkins University

 

Dr. Daniel Lieberman, Vice Chair for Clinical Affairs, Department of Psychiatry, George Washington University

Dr. Deborah Peel, Founder and Chair, Patient Privacy Rights

 

PRODUCTS, COMPANIES, INSTITUTIONS AND PEOPLE MENTIONED

 

Facebook, Inc. (NASDAQ: FB)

Electronic Privacy Information Center

National Security Agency

 

HOST

Steve Usdin, Senior Editor

 

SEGMENT 1

 

STEVE USDIN: Facebook's massive study of emotional contagion has provoked a huge backlash. Today we'll investigate the lessons for medical research. I'm Steve Usdin. Welcome to BioCentury This Week.

 

NARRATOR: Connecting patients, scientists, innovators, and policymakers to the future of medicine. BioCentury This Week.

 

STEVE USDIN: Facebook conducted an experiment on 689,000 of its customers. The social media giant adjusted the flow of positive and negative expressions in newsfeeds and then watched to see what happened. Facebook users who were unaware of the experiment responded: more positive news increased their expressions of positive emotions, and more negative news led to more negative expressions.

 

Facebook allowed one of its researchers along with two university scientists to publish their findings in a prestigious journal. The response was immediate and not what Facebook anticipated. Experienced researchers and ordinary citizens around the world were outraged.

 

Much of the concern focused on the lack of informed consent. People felt they'd been used as guinea pigs. Facebook says the terms of service agreement users clicked when they signed up was sufficient informed consent.

 

Some social scientists and technology experts have defended Facebook, warning that requiring individual informed consent could have a chilling effect on research. The Facebook fallout illustrates debates that will become increasingly common for medical researchers as electronic health records proliferate. Today, we'll hear from a health privacy advocate, a bioethicist, and the chair of an institutional review board that's responsible for oversight of human subjects research. Facebook declined to send a representative.

 

To start the discussion of the Facebook fallout, I'm joined by Dr. Deborah Peel, founder of Patient Privacy Rights, a nonprofit advocacy group, and a member of the board of the Electronic Privacy Information Center. Dr. Peel, advertisers do research all the time manipulating people's emotions to try to get them to buy different things. What's different about what Facebook did?

 

DR. DEBORAH PEEL: What's different about what Facebook did is they were actually trying to change people's moods. And if this had been a hospital or a research institution, they would have had to seek patient consent for such an experiment, and they would have had to present the proposal to an institutional review board to evaluate how much danger and risk there was in doing this research on people.

 

STEVE USDIN: So what Facebook claims is that people who sign up for Facebook click on the terms of service -- that little agreement you have to click on when you sign up for a new website or piece of software -- and that that somehow constituted informed consent. What would be your response to that?

 

DR. DEBORAH PEEL: Absolutely not. No, absolutely not. Of course, it's not informed consent. It's really an outrage, because people have no choice. If they want to use Facebook, they have to say yes no matter what they're asked.

 

STEVE USDIN: So what's the connection between what happened with Facebook and the backlash against that, and medical research, especially in an era where electronic health records are going to become more and more common?

 

DR. DEBORAH PEEL: That's a great question. Actually, there's a lot of similarity between what Facebook did and what is happening with our electronic health records. Essentially, in the healthcare system, the holders of our data -- electronic health record systems, hospitals, and doctors -- can and do use all of that information for research without our knowledge or permission today. They're doing the same thing.

 

STEVE USDIN: Well, they are and they aren't in one sense, because isn't there a critical distinction between observational research, where you're mining data that already exists, and an intervention, like what Facebook did, where they actually changed what people see -- or, in medicine, where you change the intervention that people get and see what the result is?

 

DR. DEBORAH PEEL: Well, actually, people feel very strongly about the use of personal information, health information, whether it's about their minds or their bodies without their knowledge. It feels like an intrusion. It feels like a betrayal. And it makes people not trust doctors and researchers. So there are some distinctions. But the problem really is being spied on, rather than being treated like a human being who has a right to know what's going on and who has a right to consent and participate in research.

 

STEVE USDIN: So we've done shows before where people talk about the creation of the so-called learning healthcare system. And it's predicated on the idea that patients provide their information, that it goes into databases, registries, and other kinds of systems, and that it is then used to improve medical care and medical product development going forward. What's wrong with that?

 

DR. DEBORAH PEEL: It's the same problem. If people were asked to consent and could understand what the learning healthcare system was, what research would be done, and which of their data would be used -- because maybe they would be uncomfortable with some parts of it being used -- they would say yes. The majority of the public, when they're asked and when they understand the research they're asked to participate in, say yes. But when the public doesn't know, it turns them against more than just research. My concern as a physician is that it makes them do things that put their health at risk, because they hide and omit information --

 

STEVE USDIN: Can you give me an example of that?

 

DR. DEBORAH PEEL: Yeah, sure. One in eight Americans today lies, omits information, refuses to take a test, or goes to another town to see a doctor in order to keep certain parts of their health information private. And millions more avoid or delay treatment for very sensitive and very serious conditions, like cancer, depression, and sexually transmitted diseases. So knowing that you don't control your own health information really makes people put their health and their lives at risk. And then you don't even get the accurate data that we all need and want for a learning healthcare system.

 

STEVE USDIN: So very quickly, are there steps that, you think, the government should take to better preserve privacy or is it something that industry and physicians have to do?

 

DR. DEBORAH PEEL: Everyone has to change this. As you know, Congress isn't going to pass anything, even though we need a strong law that puts people back in control of their health data. So we spend a lot of time talking to industry and talking to physicians, because we would never have accepted a paper medical record system that caused 40 to 50 million people a year to put their health or their lives at risk. We should not have to accept such a flawed system just because it's electronic. We can get all the benefits we want and prevent the harms if we just ask patients first.

 

STEVE USDIN: Well, thanks very much, Dr. Peel.

 

DR. DEBORAH PEEL: OK.

 

STEVE USDIN: We'll continue the discussion in just a moment with Michelle Huckaby Lewis, a bioethics researcher.

 

[MUSIC PLAYING]

 

NARRATOR: You're watching BioCentury This Week.

 

SEGMENT 2

 

STEVE USDIN: To discuss the ethical implications of research like Facebook's emotional contagion study, I'm joined by Michelle Huckaby Lewis, a research scholar at the Johns Hopkins University Berman Institute of Bioethics. So Dr. Lewis, what's the lesson for biomedical research from the fallout and the response to the Facebook study?

 

DR. MICHELLE HUCKABY LEWIS: I think one of the issues that's a really important lesson to be learned both for Facebook and for the biomedical community is that transparency is really key. As we're going forward, the collection of medical information and electronic health records offers unprecedented opportunities to improve human health through biomedical research. But transparency is going to be a really key part of that process.

 

STEVE USDIN: And so part of transparency, kind of the gateway to it, is this idea of informed consent. And again, do you think that clicking on a terms of service agreement really could be considered informed consent in the case of Facebook? Or, for electronic health records, if people just click something once in their life that says research can be conducted with their records?

 

DR. MICHELLE HUCKABY LEWIS: Absolutely not. I think consent may not always be necessary in all circumstances for all kinds of research. However, burying consent information in boilerplate language in a terms of use agreement, or in the paperwork for signing up for medical care, is not really meaningful informed consent from an ethical perspective.

 

STEVE USDIN: So it seems to me that there's a distinction between research where you conduct an intervention, where you change things, and where you're just observing things. And if you're taking the second category where you're observing, you're looking at mining data from electronic health records, if it's anonymized -- if you can't tell the names of the patients -- what's the risk there to the patients? Why would patients need to be informed?

 

DR. MICHELLE HUCKABY LEWIS: Well, again, it goes to respect for patients, respect for persons. And from a legal perspective, de-identified research is not governed by the same set of rules that governs research with identifiable subjects or patients.

 

But from a respect-for-persons standpoint -- or even from a public relations standpoint, building trust in the research enterprise and in healthcare systems -- at least informing patients about how their information may be used, even if consent isn't always necessary, and being open and transparent about it, will go a long way toward building trust in that process.

 

STEVE USDIN: The basic rules -- what's called the Common Rule -- covering human subjects research in the United States and in much of the world were created before the internet and before electronic health records. Do you think the regulatory regime for medical research needs to change to take into account electronic health records and social media?

 

DR. MICHELLE HUCKABY LEWIS: I think we are exploring uncharted territory. The old way of doing research, you'd do a chart review and go through paper charts -- a very time-consuming process. What's happening now with electronic health records and health information exchanges allows us to access information in a very different way.

 

And we need to think about what our processes are to govern that information, access to that information. I think it's really important to think about who's minding the store. And what I mean by that is who has access to the information, for what purposes, and who decides.

 

STEVE USDIN: So there's been kind of a small backlash, kind of a flurry of attention to what happened with Facebook. But it seems to me that we're kind of on the edge of something happening that could be much bigger, that could create a much more negative response with all the kind of electronic health records that researchers have access to today.

 

DR. MICHELLE HUCKABY LEWIS: Absolutely. I think for that very reason it's really important to think about how this information may be used in a careful, thoughtful, responsible way so that we don't have a huge problem and don't have a backlash that creates a lack of trust in healthcare systems.

 

And you have to remember, the health information exchanges, for example, those are created to improve patient care. And so we want to maintain the ability to do that and the opportunities to really help patient outcomes. And by doing that, we can maintain trust both in the physicians, the healthcare system, and in the research enterprise.

 

STEVE USDIN: Is there some way going forward you think that patients will be able to have more nuanced control over who gets access to what parts of their medical records? And would that help solve some of the problems that you're talking about?

 

DR. MICHELLE HUCKABY LEWIS: I think that there may be a role for that kind of system. Patients may not care, for example, if someone has access to information about their blood pressure, and so that kind of consent may not be necessary. But they may be concerned about other kinds of private health information. And so there may be different ways to give patients control over who has access to that information.

 

STEVE USDIN: Dr. Lewis, there are distinctions legally and to some extent ethically between different kinds of research, for example between research that leads to generalizable knowledge and research that's intended for quality improvement. And the requirements about informed consent and things are quite different, aren't they?

 

DR. MICHELLE HUCKABY LEWIS: So quality improvement is not considered research. Quality improvement processes and mechanisms that are intended simply to improve how a system works are viewed, from both a legal and an ethical perspective, as different from biomedical research that is intended to create generalizable knowledge.

 

Quality improvement activities in general don't require consent, whereas human subjects research does require informed consent. But there's a lot of uncertainty in the field as to where that demarcation line is.

 

STEVE USDIN: So does that make sense? If you're looking at it, for example, going back to the Facebook example, if they had done everything exactly the same as they did but they hadn't published it in a peer-reviewed journal, they kept it internal and they just said, well, we're doing this for quality improvement because we want to improve the way that we market to our customers, then ethically that would be different somehow?

 

DR. MICHELLE HUCKABY LEWIS: Well, first of all, nobody would have known about it outside of Facebook probably. But from a both ethics and legal perspective, it's a little bit different. If you're promoting what you're doing and you're touting it as, this is research and we want others to learn about what we've done, that makes what they've done more public. And the potential risk to participants is greater. And so it is a little bit different.

 

STEVE USDIN: I think, from the perspective of the people involved, what they were responding to was this notion that they were guinea pigs -- that experiments were being conducted on them without informing them and without their consent. And I don't think most people would make that distinction and say, oh, well, if it's not been published, then it's OK.

 

DR. MICHELLE HUCKABY LEWIS: Well, no, absolutely right. I think from the participant perspective, it is absolutely an issue of, you were doing something. You were manipulating me without my knowledge or consent.

 

STEVE USDIN: So how's this distinction going to be made going forward? Because it's kind of a routine thing. A lot of medical centers, when patients go there, they sign a waiver that says that their information can be used for quality improvement for the institutions. Is that the same kind of problem of blanket informed consent like the terms of service with Facebook?

 

DR. MICHELLE HUCKABY LEWIS: Well, there are limits on what can be done under QI -- under quality improvement -- activities. So there is a difference there. But you're right that hospitals and different health systems have the ability to use some information to help improve their systems, and that's internal. You're also right that the line -- where those distinctions fall -- is not always very clear.

 

STEVE USDIN: Well thanks very much. We're going to continue the discussion in a moment with the chair of an institutional review board that has to make tough decisions about the ethics of research every day.

 

NARRATOR: Now in its 22nd year, visit biocentury.com for the most in-depth biotech news and analysis. And visit biocenturytv.com for exclusive free content. Now back to BioCentury This Week.

 

SEGMENT 3

 

STEVE USDIN: To continue the conversation, I'm pleased to be joined by Dr. Daniel Lieberman, chairman of the George Washington University Institutional Review Board. Dr. Lieberman, institutional review boards -- IRBs -- review research proposals like the one Facebook would have submitted if its study had been done in an academic setting. How would you have approached an application to do the kind of research that Facebook did?

 

DR. DANIEL LIEBERMAN: The kind of work that IRBs do is based on the principles of the Belmont Report, which has three elements: respect for persons, beneficence, and justice. Respect for persons is where the whole idea of informed consent comes in. So I think the first thing the IRB would have wanted to look at very carefully is, does the study need informed consent? And if not, what is the justification?

 

STEVE USDIN: And typically if you're doing research that involves manipulating people's emotions, wouldn't you think that kind of obviously crosses the threshold of needing informed consent?

 

DR. DANIEL LIEBERMAN: I think so. A case can be made against requiring it if informing people of what you're doing would make the research impossible -- one can make a case for waiving informed consent.

 

But really what we look at most of all is, is this a prospective study going forward where we have an opportunity to interact with the subjects and get informed consent? If we can, we usually require it simply as an acknowledgement that people should have the right to decide whether or not they're going to be in a research study.

 

STEVE USDIN: And this idea that, well, if you inform people then you couldn't do the research. One way to look at that is to say, well, if you informed people they would refuse to do it. How does that justify not informing people? How could that possibly justify it?

 

DR. DANIEL LIEBERMAN: That's absolutely not justifiable. I think in this case, though, if people knew that they were in a study that was going to attempt to manipulate their emotions that might substantially interfere with the intervention and the results.

 

STEVE USDIN: I'm sure it would. But taking a step back and looking at not only what Facebook did but at biomedical research more generally, one of the things that IRBs have to do is to balance the potential harms against the potential benefits of any research. When you do that, do you do that at the level of the patient or at the level of society?

 

DR. DANIEL LIEBERMAN: You know, our primary responsibility is to the research participant. In many cases, this would be a patient. We do also have to think, though, at the level of society. For example, if a certain group is being studied, could the results of this study stigmatize the group or harm them in some other way? That becomes much more complicated, but we are obligated to think about those things.

 

STEVE USDIN: And that's on the harm side of it. When you're looking at the benefit side of it, does there have to be a potential for benefit for the individuals who are in the trial?

 

DR. DANIEL LIEBERMAN: No, there does not. Ideally there will be, but one can justify putting research participants at risk if it's going to lead to a substantial benefit to society. Many times, people who are suffering from illnesses want to give back.

 

They want somehow their experience to help other people. And they're very happy to enter these studies. We just have to make sure that they know that they should not expect any personal benefit.

 

STEVE USDIN: And then to make that contract -- the kind of implicit contract you've got with the patients -- real, don't you also have to have a commitment that the information's actually going to benefit people? That it's going to be published, it's going to be generalizable, and it's actually going to affect society and affect medicine?

 

DR. DANIEL LIEBERMAN: Yes, you do, and once again that gets into a very difficult situation of to what degree does the IRB have to evaluate the quality of the science that's being done? And we usually try to avoid that as much as possible. But when the risks that subjects are being placed under become significant, then we do have to ask that question, what is the benefit to society?

 

STEVE USDIN: And do you have to ask the question about, how is this research going to be disseminated? How's it going to be used to change things? Or does that go kind of beyond the remit of what an IRB typically looks at?

 

DR. DANIEL LIEBERMAN: Again, it depends on how severe the risk is. If somebody is doing a very low-risk study and there is potentially very little benefit, that's OK. Oftentimes studies have no risk whatsoever, and that's OK. But if you're giving somebody an experimental drug that has a potential for great harm, then there needs to be substantial benefit.

 

STEVE USDIN: We're going to continue this conversation. I want to talk more about some of the risks of fear of research when we come back. We'll be right back with Dr. Daniel Lieberman.

 

NARRATOR: Every month, BioCentury This Week will feature Profiles in Innovation, a special segment highlighting the stories of innovators whose work is improving lives and transforming the world of healthcare.

 

SEGMENT 4

 

STEVE USDIN: We're discussing the Facebook fallout with Dr. Daniel Lieberman, who is chairman of the George Washington University Institutional Review Board. So Dr. Lieberman, the Facebook study was called "Emotional Contagion." And one of the things I wonder is, is there an emotional contagion about this kind of research? Or is there a concern that things will go too far, and people will be afraid to participate in research -- will deny consent for research that actually could help them and could help society?

 

DR. DANIEL LIEBERMAN: Yes, I mean, this comes in the setting of a lot of concerns about privacy. The NSA wiretapping is all over the news. And I think that people are very sensitized to this, and it's not great that this is coming out the way it is.

 

STEVE USDIN: So there's a letter that a number of academics sent supporting Facebook and saying that there could be a chilling effect on social science research and other kinds of research if new restrictions on informed consent are imposed as a result of this. Do you think that's a serious concern?

 

DR. DANIEL LIEBERMAN: I think it is. The regulations we work under now were developed for biomedical research, where you give people experimental drugs and other potentially dangerous interventions. Already, a lot of people argue that this framework is not appropriate for social and behavioral research, in which the risks are much less. A lot of people believe that the answer is less regulation, not more.

 

STEVE USDIN: And is there a need for a clearer distinction between research that is observational in nature -- for example, mining electronic health records -- and research that's interventional, making a change to somebody's therapy or the way that you treat somebody and seeing what happens?

 

DR. DANIEL LIEBERMAN: You know, the way we look at it now is we split between minimal risk studies and greater than minimal risk studies. Minimal risk covers the kinds of risks you would encounter in everyday life. So, for example, one of the risks of this Facebook study is lowering people's mood. Well, if you're driving down the road and you hit three red lights in a row, your mood is probably going to go down more than if you're exposed to news feeds with negative words in them. So this would very much be considered a minimal risk study, and different standards should apply.

 

STEVE USDIN: There's another kind of distinction we talked about earlier today -- the distinction between generalizable research and quality improvement research. Under the ethical principles and the law as I understand them today, if Facebook hadn't published its research -- if it had just kept it internal but done everything else exactly the same -- it wouldn't have raised the same kind of red flags, because Facebook could have said, well, this is just to improve our quality. Is that a legitimate distinction?

 

DR. DANIEL LIEBERMAN: Well, it's certainly a real distinction. It's a distinction that we as an IRB have to face every day in deciding whether or not something is research and falls under our review. I think it's more practical, though, than logical. It would simply be impossible to regulate quality improvement -- to regulate the kinds of things people look at simply to improve their own business. That doesn't necessarily mean, though, that those kinds of things are less risky for the people being studied.

 

STEVE USDIN: So don't there have to be some kind of ethical principles, even if there isn't going to be the same intensity of oversight over what companies do? The same kind of principles still have to apply, don't they?

 

DR. DANIEL LIEBERMAN: I think there certainly should be voluntary principles that companies should follow. And when companies violate those principles, it should be made known to customers so that the companies can be punished by the market rather than by regulators.

 

STEVE USDIN: Thanks very much. That's today's show. I'd like to thank my guests, Deborah Peel, Michelle Huckaby Lewis, and Dan Lieberman. Remember to share your thoughts about today's show on Twitter. Join the conversation by using the hashtag #BioCenturyTV. I'm Steve Usdin. Thanks for watching.