Chief science and digital officer at the Duke Clinical Research Institute Eric Perakslis, PhD, tackles issues in health data privacy in the attention economy, telehealth in the pandemic age, and more. This episode is part of the Health IT series by the AMA-MSS Committee on Health Information Technology, hosted by Shivani Bhatnagar, medical student at the Texas College of Osteopathic Medicine.
Listen on the go to the full episode on Apple Podcasts, Spotify or anywhere podcasts are available.
Bhatnagar: Hello and welcome to Making the Rounds, a podcast by the American Medical Association. Today's episode is part of our health IT series from the AMA-MSS Committee on Health Information Technology. My name is Shivani Bhatnagar, I'm a medical student at the Texas College of Osteopathic Medicine, and I'll be your host for today. We are delighted to introduce Dr. Eric Perakslis, a leader in health informatics research and development and the current chief science and digital officer at the Duke Clinical Research Institute. Welcome, Dr. Perakslis. To start us off, for any listeners out there who aren't familiar with your work yet, could you give us an overview of what your involvement in health IT has been?
Perakslis: Sure. Yeah. I started off as an engineer before I went ahead and got into science, and I've basically been working on technology in medicine since about 1985, when, after one year of college, I got an internship at a local hospital west of Boston and actually put in the first pulse oximeters that summer. They didn't exist before that; it was all arterial blood gases and all the things that go with that on the anesthesia stations. Since then, I've spent almost 20 years in biopharma and several years at the FDA. I helped start the Department of Biomedical Informatics at Harvard Medical School, and now I'm at Duke. So, I'm kind of someone who does all things technology, but my specialty has to do with patient safety and patient protection. For example, I made several trips during the 2014-2015 Ebola outbreak in Sierra Leone, developing one of the first EHRs ever deployed in a hot zone, as an example of the type of thing I'm interested in.
Bhatnagar: Wow. That's definitely a very exciting resume. And what are some current projects that you're working on?
Perakslis: Currently, at Duke, I'm really trying to figure out and deliver the promise of digitization of clinical trials. To me, it's never about the technology. To me, it's about equity. It's about people being able to be in trials regardless of where they live. It's about people being able to be in trials regardless of the type of insurance they have. It's about people being in trials regardless of whether they've got to get to soccer practice later that day and still have to get their chemo done. So, I do think that digitization brings a promise of pushing clinical trials out into the populations, a lot like the way care is delivered. We have this somewhat artificial situation in the U.S., in that the FDA regulates the tools that doctors use, but it doesn't regulate the practice of medicine. And so, we do have this big gap between biomedical product development and medical practice.
Bhatnagar: Yeah. I'm sure those patients are really glad to have you in their corner and fighting for them to be eligible for these types of trials that you are talking about. You have had almost two decades of experience in this field across multiple roles as you illustrated earlier. What do you feel like is the impact of your work? And if you don't mind sharing, are there any challenges that you've experienced along the way?
Perakslis: Sure. I think the impact of my work has been enabling people to think big. Back in 2007, when I was at Johnson & Johnson, clinical genomics almost wasn't really a thing yet. It was taking hold in rare diseases. It was taking hold in niche medical centers with really bright geneticists and clinicians, but it wasn't impactful everywhere. And so, I designed a system called tranSMART. That really was the first multidimensional clinical omics data warehouse. And then we gave it away. We open sourced it, and now it's being used at 200 to 300 institutions around the world to do biomedical research. So, I always say the best thing I ever did, I gave away. That's probably my proudest impact.
In fact, for the large NIH long COVID studies that were just funded, they actually selected that technology platform to run them. And so, for me, it's about dreaming big, doing big and then letting the impact follow the work. I know that sounds kind of cliché, but at this stage, at 55, the only reason I'm working is because I get to do really cool things with great people.
Bhatnagar: Yeah, that's definitely a great one. I want to keep us moving forward with this. Recently, with the pandemic: how do you feel COVID has affected your work and the projects you have?
Perakslis: Well, first of all, I think it's been difficult for everybody, and I think we've all struggled with work. It's harder to be an employee. It's harder to be a boss. It's harder to keep your projects on track, right? That said, it has given a boost to health technologies, right, wrong or indifferent, with telehealth obviously being the one people talk about. The pandemic has really pushed telehealth forward and expanded its horizons. But as somebody who specializes in health security and health safety, I also realize that telehealth isn't a great option for people who are being hurt at home. It's not a great option for people who don't want everybody they live with necessarily knowing about their doctor's appointment. And so, I think the digital divide has also come into stark relief. To me, there's a force and a counterforce. There's a benefit and a risk. And I do think COVID has moved technologies ahead, but we have to be careful about what's getting left behind or what might be lost when that happens. I'm not exactly sure how to keep everybody safe.
Bhatnagar: Yeah, I'm really glad that you brought that up, because that's definitely not something we think about when it comes to the convenience we associate with telehealth. Now, you recently had a piece in the New England Journal of Medicine titled "HIPAA and the Leak of De-identified EHR Data." In it, you discuss how sharing de-identified patient data has led to the growth of multibillion-dollar health data aggregation companies. Could you summarize what those companies are doing with this data?
Perakslis: Well, it's really two things. If you look at the way the internet works, I always think it's a lot like the oceans, where there are five currents that drive everything, right? We used to learn about this technology by surfing the web, which was a thing when I was back in college, right? But the forces on the web are not always good. Two of the largest forces on the web are the surveillance economy and the attention economy, and unfortunately, patient data is actually fueling both of those. So, if you trace clicks on apps and companies that are advertising their products on Facebook, and you actually look at what happens when people click on those and follow them through, you realize that people are ending up on all types of lists. Some lists are simply trying to market stuff to you; some lists could be used by insurers or other people to make inaccurate judgments about you.
And so, when we think of privacy, especially in medicine, people think of HIPAA, and they think, “Well, if you take these 18 identifiers off, you've de-identified it.” My issue with that is that you're then saying some data never has to be private, and I don't know that that's true in medicine, right? I mean, privacy in medicine is built upon the ethical pillar of autonomy and self-determination. And I think more than anything, what people don't realize is that when your information, your viewing history, the things you click on and all your internet activity are being used, what people are actually doing is selling themselves. They are the product when people are getting paid for clicks. And so, I think it's one thing to educate patients about this; it's another thing for health care institutions to be leaking this data into this entire ecosystem.
And I work with lots of patient groups. On one hand, you've got a breast cancer support group on Facebook trying to support each other through all the different parts of treatment; then you realize that photos of mastectomy scars and things like that are being scraped and used for pornography. It's not okay. These people should have a safe place to have that conversation, especially because it's probably somebody who's alone, with nobody to talk to in their house, who goes on Facebook one night looking for a support group. So, I think there are real harms in some of these practices of leaking people's data and putting people on more lists than they need to be on. And most of them aren't illegal.
Bhatnagar: Wow. Thank you for explaining that. It sounds like this is a very complicated issue with a largely negative impact. That said, do you feel like this trend has any positive contributions, or could it positively contribute to any developments within health IT moving forward?
Perakslis: If I'm being honest, not so much. Like I said, I'm a big believer in open source and open data. I've built systems and given them away. But IRBs don't regulate research on de-identified data, so there's a lot of research going on that's just kind of shoddy. Even if it's not ethically bad, methodologically it's not good. And so, on one hand it's easy to say more data is going to be a public good. At the same time, we have well-established, well-understood and entrenched ways to do that research today, right? With consent. So, for example, I actually think there are really interesting things we should be doing with data that's consented, and I don't think we've pushed those envelopes yet of asking what we could do with people. It's as simple as saying, “Would you be interested in being part of this study? Can I share your data?” and they click yes or no, or opt out.
But I do think there's a lot that can be done. And the folks who are really excited about what can be done with people's data when people don't know about it? Usually, from my experience advising a lot of these companies, they underestimate data science needs, they underestimate data alignment, they underestimate bias, they underestimate all the things that can go wrong. I've literally had people see a great study come out in JAMA and say, "Why couldn't we have just done that retrospectively with an EHR?" They don't understand blinding. So, is there a lot that could be done? Yes. The other thing to remember is that there are ad tech capabilities out there that can go to a census block or a ZIP code and come up with 30,000 to 40,000 data elements on the people living there, from the way data is shared around. Your supermarket loyalty card, your over-the-counter buying at CVS. Everything is shared.
And I don't know that we need to pour medical data into that if we can help it. So, I'm a big believer in data sharing, a big believer in open research, all those things. I just think there are ethical ways to do it. And what I've observed in counseling and being asked to consult with these companies, frankly, is a lot of willful ignorance, where they know they should probably get an IRB, but nobody's making them, so they're not going to. And to me, it's like, "Just get the IRB." Look at some of those big scandals: if they had just gone and gotten an IRB, maybe that would have been a really interesting, important study.
Come on. I mean, why not just do the right thing and be open about it? Because again, I actually want people's ideas to come forward, so I want them to do quality research. One of the things we saw with COVID was an unprecedented initial period of retractions by major medical journals, and a lot of those had to do with aggregated data that hadn't been checked carefully enough about what it was, and it was misaligned or something like that.
Bhatnagar: Yeah, it sounds like "why not just do the right thing" is an all-too-prevalent question and frustration shared across many aspects of health care, and it looks like health IT is not excluded from that. So, when it comes to finding solutions to the negative aspects of the data leakage we're talking about, is it really as simple as getting better consent and IRB approval?
Perakslis: I mean, I don't know. I actually think we need to evolve what we think privacy is. Consent works for some things; it doesn't make a lot of sense in other contexts. I am a big fan of GINA, the Genetic Information Nondiscrimination Act. People sometimes talk about that like it's a privacy act. It's actually not; it's a nondiscrimination act. It doesn't say you have to keep people's information a certain way. What it says is you can't deny employment. I actually think we need more of that. You know what I mean? We need more nondiscrimination; we need actual penalties. So, one of the things we say in that piece is: why have only one or two states made re-identification illegal? Why can't it be illegal?
I mean, the good people aren't going to do the wrong thing, but it's the idea that they're just avoiding regulation at all costs. And there's been this musical-chairs thing that's gone on. For example, one of the debates I watched play out was, "Well, our de-identification is flawless, so we shouldn't have to offer patients in our studies anything like credit counseling if anything goes bad." And I'm like, "Okay, so if it's flawless, why don't you just offer it? You'll never have to pay it." "Well, we don't offer it because it's flawless." You know what? I walk out saying it must not be flawless, because if it was really flawless, you would have no liability at all in offering protections to subjects in this research. It's that type of thing that's just icky to me. In this day and age, we should be able to do better.
And we saw it, right? One of the things we saw with COVID, even if you take the misinformation away, which is a big thing in another part of my research, is the trust issue. Trust in medicine is down, trust in science is down, and people didn't answer the phone when it was an 800 number with a contact tracer calling. Pretty much nobody answered the phone. And if I were a delivery driver for Walmart and a positive test meant I would get put on furlough without pay, I wouldn't have answered the phone either. We didn't put in place the type of wraparound supports we needed to make those policies work, so that people could afford to be compliant with them.
So, I just think we have to remember that people are smarter than we think they are. It's funny. I was in one meeting with a patient advocate I work with, who's a pillar in the Latina community and just an amazing advocate. And I heard someone say, "Well, your community wasn't really online." She said, "No, they were online. They just weren't answering. They're completely online. Don't make that assumption." So, I just think there are always benefits and risks, right? I've seen fistfights break out in low- and middle-income countries when you're handing out $2 mosquito nets, over what this person got when that person didn't get any. Everything you do that's an intervention in health has an impact. We just have to understand the counterweight, and we can do amazing research and still respect the autonomy of the subjects of that research.
Bhatnagar: What a complex issue. I really hope that we see some progress in this area sometime soon. On a similar note, on Twitter you tackle some pretty huge topics within health IT, like the ones we just talked about: everything from ethical patient data use and privacy, to health equity in research, to artificial intelligence, in addition to telehealth and tele-research. Does it ever get overwhelming to take on such expansive topics? And what advice do you have for medical students or others early in their careers who are interested in tackling these issues alongside you?
Perakslis: Yeah, it's a great question. I mean, first of all, I'm old, so I've been around a long time and have had a chance to work on lots of things. That's one thing. If you look at those topics, the way that I interact with them is that there are some very common themes across them, and it's really those underlying common themes that make them interesting to me. I'm a polymath; I've always been in technology. I was doing molecular biology in the mid-nineties before I ever went to work anywhere else. And so, artificial intelligence is just new types of algorithms, machine learning, et cetera, right? And telehealth is just a different way to deliver care and its intersection with technology.
One of the things, it's really interesting to ask people to give you a definition of digital health; you get all kinds of answers, like the consulting-world definitions. I always say, "Look, it's the interaction of medicine and the internet, and we should just respect that, because it could be all things medicine and the internet. The good, the bad and the ugly." We know bad stuff goes on online, but we also know good stuff goes on online, right? So, taking that approach, telehealth obviously would look a certain way. And most health IT, like EHRs and all these things connected to patient portals, is connected to the internet. So, for me, internet connectivity is the underlying theme.
And also, again, there's the patient security and privacy but also user experience theme, because people really want to use these tools. I think we have to listen to that. People know a session on Facebook is like smoking three cigarettes, and they still do it anyway. They know it's not great for them, and they're going to do it. So, it's just a matter of: how do you get the benefit out of that? I like the idea that patients with a new diagnosis can find people like themselves on Facebook. That's a great thing, but we have to understand that there are a lot of bad actors, and can we actually control that? The challenge with the internet is that a lot of the bad acting is the business model. That's the issue, right? Like Google ads and things like that. You look up something on Google, then you go check the weather the next day, and what do you know? There's the luggage you looked at, dancing in the margins. It's like you're being watched, which is okay; people just have to understand it.
Bhatnagar: Yeah. These gaps in understanding are unfortunately prevalent, and that's something we're hoping this podcast series can help with as well. So, thank you for taking the time to explain all of that. Now, pivoting a little bit, you've had the opportunity to work for the government, private companies and major academic centers. What are the major differences? How do your experiences with these different employers compare?
Perakslis: It's a great question. I think that government and academia are similar, and part of that is because a lot of academia is federally funded; businesses, I think, are wildly different. I went from being a senior vice president at Johnson & Johnson to being the CIO and the first chief data scientist at the FDA, and it didn't feel that different, because the FDA is 14,000 people, and the part of J&J I was in was less than 10,000. So, it went from one large, complex organization into another large, complex organization, with opposite missions in a lot of ways. There are things that are true about large and small environments. Academia, again, is shaped by the way it's funded and the way its reward systems work. So, I guess a more thoughtful answer would be that the things that make them similar or dissimilar have to do with how their reward systems are constructed.
And all the politics kind of follows from that, from the incentives. But I like all of it. I've actually viewed those roles like rotations, in a way, although I did spend 13 years at J&J. But I grew so fast: I started off as a project manager and was promoted to VP in six years and to SVP in about eight. So, I view myself as an alumnus of J&J, having had such a great career there. They're all different, but I like all of them and still get to contribute to all of them. There are definitely people who would probably like one more than another, though. The large, extroverted types probably like that really big, complex corporate environment: lots of people to keep track of, lots of travel, lots of face time. The more introverted, really deep kind of thinker might like a startup a little bit better, just because it's a small team. Nobody cares what I'm doing; I'm the only person doing it. It's really interesting.
Things like Myers-Briggs and those types of assessments are actually really important to understand. But at the end of the day, I think they all contribute. And the thing I've learned is that, for me, the mission of the FDA was ridiculously motivating. Every day, as crazy as it was, I knew why I was there. Sometimes in a large corporation that can get forgotten pretty easily. I could be like Chandler with the WENUS on Friends; he couldn't describe his job, and that's sometimes what corporate can feel like: I can't even describe my job. But working with a lot of clinicians coming out of Duke and out of Harvard, people are experimenting more now with less traditional career paths than I think they did 30 years or so ago. And I think that's great.
I think it's a great chance for people to take, because I do think the barriers between the industries come down with the right type of cross-pollination. It really requires somebody who knows both sides to be able to lower a drawbridge on something and say it's okay. It's funny, I never went into cybersecurity because I had an interest in it. I went into it because I wanted to give technology away for free, and I wanted to understand the risks in doing that. So, for me, it was totally selfish: there were certain things I wanted to do, and security was the reason everybody was going to tell me to stop, so I became an expert in it. I built my expertise in that as a way to be able to give away a system that's now being used for free around the world.
Bhatnagar: Thank you. That was a really insightful reflection on everything that you've been involved in. And I'm sure that this will really help anyone who's looking at building their career or exploring these different avenues to accomplish similar goals. In a similar vein, what do you think health IT will look like in the next 10, 15 years, especially as my generation of medical students will start entering the workforce?
Perakslis: I actually hope that health IT becomes more useful at the point of care. It was great developmentally, but I find few people who think of the status quo with the massive EHR companies as anything anybody loves, whether you ask patients, clinicians or organizations. I'd love to see more appropriate point-of-care capabilities that actually enable clinician workflow in patient care, as opposed to just being good billing systems. So, I actually think we should almost redo it, but I don't think we'll get $14 billion like we had with ARRA, the American Recovery and Reinvestment Act. But I do think we should be thinking that it's not an Epic-Cerner world five years from now, and that EHRs really do exist for clinicians and patients, as opposed to being billing systems, which is what they were built to do.
Bhatnagar: Yeah, I'm sure that's a change we'd all like to see someday; let's just see how long it takes to get there. All right, as we're wrapping things up, do you have any social media handles or other channels where people can connect with you and follow your work?
Perakslis: I mean, I'm on Twitter and I'm on LinkedIn; anything personal I keep private. But those are the two places where people find me, and I really keep both of them for professional reasons, because I have people who worked with me 10 years ago who will want a reference, and they'll find me on LinkedIn. I think that's great, because I'd want to help them. So, I don't mind keeping that stuff out there; I actually think it's a good thing.
Bhatnagar: Great. Well, everyone, that's all for today. Thank you for listening, and thank you for your time today, Dr. Perakslis.
Perakslis: You’re welcome. Thank you.
Bhatnagar: This has been Making the Rounds, a podcast by the American Medical Association. You can subscribe to Making the Rounds and other great AMA podcasts wherever you listen to podcasts, or you can visit ama-assn.org/podcasts. Thank you for listening.
Disclaimer: The viewpoints expressed in this podcast are those of the participants and do not necessarily reflect the views and policies of the AMA.