Episode 52: Practical Applications of AI in Healthcare with Joseph Zabinski, PhD of OM1

Subscribe on Apple Podcasts, Spotify, Buzzsprout, or follow the podcast on LinkedIn for new episode drops.

In this episode you’ll discover:

  • AI is for optimization and not replacement

  • The two hurdles to implementing AI in healthcare

  • The benefit of LLMs in healthcare

  • Using AI to improve diagnosis and treatment

Keep scrolling for a transcript of this episode.

Key Takeaways

  • Every time there’s a new technology, there’s a lot of fear surrounding it. With AI, the focus should be on helping people understand that these advances are meant to optimize work, not replace people.

  • Conservatism is one of the biggest hurdles to implementing AI in healthcare, because the field must be sure the tool causes no harm: when queried for information that may be critical to treatment, it has to return accurate results. The other hurdle is integrating AI tools into clinical workflows.

  • LLMs are very good at extracting and summarizing information from messy datasets. A practical application is saving clinicians’ time by summarizing physicians’ notes.

  • AI can be used as a tool to identify subtypes of diseases within large categories, leveraging patient data to improve diagnosis and treatment.


Learn more from Carrie and Rebecca: 

Healthcare insights (monthly email) | Telehealth/Virtual Care Mgmt Update (biweekly LinkedIn update)

Website | Carrie on LinkedIn | Rebecca on LinkedIn | NGL on LinkedIn

 
 
We’re not trying to replace the humans involved in the process. There are many, many things they do that AI will never do as well as them. In my opinion, we’re augmenting and helping, just like other tools would.
— Joseph Zabinski
 

Read the transcript

Announcer (00:01):

You are listening to Decoding Healthcare Innovation with Carrie Nixon and Rebecca Gwilt, a podcast for novel and disruptive healthcare business leaders seeking to transform how we receive and experience healthcare.

Rebecca Gwilt (00:17):

Welcome back to Decoding Healthcare Innovation. I'm your co-host, Rebecca Gwilt, co-founder and partner of Nixon Gwilt Law, where we help digital health companies navigate law and policy to build great businesses. Today I'm delighted to share the pod with Dr. Joseph Zabinski, managing director of AI and Personalized Medicine at OM1. We're going to talk about everybody's favorite current topic, AI and its practical applications in healthcare. There have been about 1.6 trillion articles and discussions on the topic, but there's no better way to dig into it than with somebody who spends their whole life trying to optimize it. So welcome to the pod, Joseph.

Joseph Zabinski (00:58):

Thank you. Delighted to be here. Thank you for the invitation.

Rebecca Gwilt (01:01):

You're very welcome. First of all, congrats on making Healthcare Innovations' 40 under 40 for 2023.

Joseph Zabinski (01:08):

Thank you. Yes, I was honored.

Rebecca Gwilt (01:12):

They mentioned you were picked because of your efforts to bring useful AI to real-world healthcare, especially for pharmaceutical and medical device companies and providers. I'm curious, how did you get into this space? Maybe you can briefly talk me through your journey. I'm guessing that you weren't just discovering ChatGPT and large language models a year ago, like the rest of us.

Joseph Zabinski (01:34):

No, actually the past year has been sort of a nice validation of the work that we've been doing for longer than a year, because now the world is beginning to say, oh, there's some interesting stuff with this AI that's been sort of hyped but intangible for a while. So yeah, to give the quick story, I got into this field because I was always interested in doing useful things with math and optimization and technology, things like that. I studied those things in school, found my way to healthcare with those tools because it gives us the opportunity to do something impactful and useful for people, and I have spent the better part of the past decade working at that intersection of technology and healthcare. I did my doctoral work in data and health, got started in industry as a consultant, sort of figuring out some of the earlier applications of AI in the pharmaceutical world. And then most recently, for the past five years or so, I've been at OM1, kind of building from the ground up with large data sets, applying and developing AI against those data sets to answer useful questions. So it's been a journey. I always say I would've majored in data science if you could have when I went to college, but I'm too old. But now people can, which is great. So yeah, glad that there's lots of interest here.

Rebecca Gwilt (02:53):

Yeah, I mean, in my research I hear a lot of very action-oriented words, which I love: useful, impactful, actionable. This is what I'm hearing all the time from folks in healthcare, that it's great to have a lot of data and to see what it is and to buy it and combine it with other data, but what's really going to be helpful to people who have very little time and way more patients to treat than they have time for is those useful, impactful, actionable items. Given your work so far, I think we're sort of at the tip of the iceberg now, but what do you think is the greatest use case right now, or maybe some examples of really wonderful use cases right now, for AI specifically in delivering personalized healthcare?

Joseph Zabinski (03:40):

Yeah, it's a great question. So we're kind of spoiled for choice nowadays, because as you mentioned, the data have been getting bigger and better for the past five, ten years, and the AI tools have also, more recently, been catching up and letting us do all sorts of cool things. And sometimes the challenge is, where do we start, because there are so many options of cool things we could do. Some of my favorite ones: one of them is around the notion of patient identification. We talk about it as patient finding for undiagnosed or misdiagnosed patients. The short story is, most people will begin to experience a symptom of a disease and then have some delay until they get diagnosed. It could be a week until you go see your doctor and get an answer. It could be a few months in some cases, it could be years, and sometimes that's because the condition isn't too serious.

(04:30):

But in other cases, it's because it's a rare genetic disease that confounds some of the physicians that you work with, or it's just something that gets misdiagnosed, and then you're sort of in a place of treatment that's not really being effective and could be helped elsewhere. So in that application, just as an example, AI is very good at taking massive cohorts of data and saying, what's the common pattern, the common theme for patients from that instance of first symptom until diagnosis, and then how can we catch people earlier on the path? That's one that works in the real world. We do a lot of it. It's one of my favorite applications. I'll give you one other one, just because I think it underpins a lot of the development in our industry, and that's the increasing use of AI in clinical trials. This is something that's been hyped for quite a long time, but I think we're getting to a point where, exactly as you say, some of the practical impact is happening, not just the theoretical impact; something is actually becoming real. One area being, how do we look for people who, for example, could benefit from participation in a clinical trial? How do we surface them, make sure their physicians are aware of a trial, make sure they understand what's going on, and have the opportunity to consent if they choose to and participate? This is a hugely unsolved problem in the trial world, but again, that notion of massive data sets, AI being able to say what's going on at the cohort level and what that means for the person in front of me, can help surface those patients.

Rebecca Gwilt (05:57):

Yeah. Someone told me recently, or maybe not too recently, but it stuck in my head, that medical schools teach a core of maybe 200 diseases and there are over 10,000 recorded diseases. And so this notion of solution finding, patient finding, I imagine that this technology just unlocks that in a way that even the best doctor in the world can't. I know every time there's a new technology, there's a lot of fear around it, but those kinds of examples always remind me that this isn't about replacing, this is about optimizing.

Joseph Zabinski (06:40):

Exactly. Yeah. That's a super important point you make, because first of all, I absolutely agree a hundred percent: the best doctor in the world is the one who knows as much as they can and does what they can with that knowledge for their patient. That doesn't mean they know everything, right? My wife is a primary care provider and is constantly trying to keep abreast of, exactly as you say, all these different things people can have, how they interact, and so on. But a crucial point for adoption of all this is to emphasize that, yeah, we're not trying to replace the humans involved in the process. There are many, many things they do that AI will never do as well as them, in my opinion. We're augmenting and helping, just like other tools would.

Rebecca Gwilt (07:21):

Yeah, absolutely. Absolutely. So I always ask this and I always forget the answer, so forgive me for asking again, how do you explain to non-technology people the difference between AI and LLMs?

Joseph Zabinski (07:37):

Yeah, I would say, and everybody has a little bit of definitional freedom here, but AI is the umbrella term. AI can pretty much mean whatever you want it to mean; that's kind of the dirty secret of this area, at least within a pretty broad remit. LLMs, I'd certainly say, fit underneath that umbrella. LLMs, large language models, are one of these more recently public, I guess, tools that we have in our toolbox, along with machine learning, natural language processing, and the other terms that you might hear. All of those, I would say, fall reasonably under the umbrella of AI.

Rebecca Gwilt (08:14):

Okay, got it. So you said before there aren't a lot of mature real-world clinical applications using LLMs yet. What do you think is the hurdle here, and follow-up question, how do we hurdle the hurdle?

Joseph Zabinski (08:29):

How do we get over it? Yeah, so LLMs are pretty new with respect to the broad public awareness and wrestling with some of these questions. I don't think it's incorrect to have in our minds the ChatGPT timeline, which, as of today, means it went public about 10 or 11 months ago, I think. Of course, folks have been working on these things for longer than that, but LLMs behave differently in important ways from other types of AI modeling we might do. And so I think, especially in healthcare, there's always a conservatism that comes from making sure these things don't do bad things. We may have all had the experience of playing around with ChatGPT and having it lie to us in ways that can seem funny when you're playing with a toy, but can be quite dangerous if you're actually querying medical record data and asking it to surface something potentially important about a patient. So I think that's one reason. The other thing is that the integration of AI tools into clinical workflows is a bigger unsolved problem. There are some examples of it being done really well, but certainly with LLMs as well, work needs to be done to put things into the pathways we have, sometimes to redesign pathways, in ways that don't try to fight against the people elements of this but work with them.

Rebecca Gwilt (09:52):

Can you give an example of how it's being used currently? An example of a use case of how a piece of LLM technology is being used in the clinical space.

Joseph Zabinski (10:09):

So in the clinical space, that goes all the way from, you walk down the street to your primary care provider's office on the corner and see what technology's there, to an academic medical center that has all sorts of research collaborations; there's a broad spectrum of what's being used in real practice. But with LLMs, for example, I think they're very good at helping to extract and summarize information from sometimes quite messy data sets, quite large and messy data sets, in a way that's flexible and somewhat intuitive, more intuitive than other methods. So I'll give you an example of something we do in-house. We at OM1 look at physician notes, clinical narratives where the physician has said, a patient came in today, here's what happened with that patient, de-identified, of course. A lot of times, if you have a human read that note, you could sort of interview that person about what the content was. Not necessarily, was this word in there, but what was the sense of what was going on? That ability to extract and synthesize context and content, in a way that's better than just mechanically trying to say whether something is or is not present in the information, is something LLMs are emergently good at. They're not perfect, but they're probably better at it than other technologies I've seen. And I think that is going to help in the clinical workflow pretty soon. This ability to summarize for clinicians quickly is going to be useful.

Rebecca Gwilt (11:48):

I mean, I've been, frustrated isn't the right word, but I'm eager to see more creative applications than I've seen so far. So right now I have a great chatbot that's better than the last chatbot, and certainly LLMs are great at simulating that kind of conversation, but for the mature applications of that technology, the potential is so much more than that. But alas, I think I'll just have to wait and see what comes through.

Joseph Zabinski (12:23):

We'll get there. I think you're absolutely right. It's harder to invent the new paradigm of applications than it is to incrementally innovate on the ones that we have already. But I often say with AI, at least in healthcare, we want to be using the tools where they're appropriate rather than taking the tool and trying to jam something to fit it. And oftentimes, I've found that appropriateness comes from asking questions where help is most needed. So if there's a clinical area where we would say, it would be really amazing if X, then we would next ask, can LLMs help with that? I'll give you one totally speculative example that I'm making up on the fly, but it could be interesting. We don't have a great way of summarizing and synthesizing multimodal data, kind of like what I was just describing with the clinical narrative. But what if you could ask an LLM, tell me what's true about this patient's medical history and all their imaging data and their genetics, and it could just eat all of that and tell me what the answer is? That would be pretty cool. I don't even know if it's super realistic, but perhaps that's the kind of transformative thing you're talking about.

Rebecca Gwilt (13:37):

Yeah. Well, and I think it's not just about the technology. I think it's about upskilling the people who are going to be using this technology. So for instance, one way that I use it outside of the healthcare context, one of the things I often do is say, ask me 15 questions to elicit the information you need to get to an x, y, z answer. And you can imagine that in the clinical context, you could say, what questions should I ask this patient, or, ask me as many questions as you need to so that I can give you the context you need to make an x, y, z evaluation. And so I wonder, the LLM technology itself is going to evolve from a technological perspective, but the use of that technology has to evolve as well. And I imagine that's a challenge. I'm sure that's overwhelming for folks in the healthcare sector who are already up against the wall with so many changes.

Joseph Zabinski (14:54):

I think they will slowly but surely figure out how to do better things with this, but we shouldn't. You're right. We shouldn't forget that they're trying to do their day jobs along the way too, right? Yeah.

Rebecca Gwilt (15:07):

Okay. So your company recently launched PhenOM, a patented AI-powered platform for personalized medicine. Tell me how that came about. What is the thing you're solving for?

Joseph Zabinski (15:22):

So the deep core of OM1 as a data company is our OM1 real-world data cloud, which is this huge data asset, many hundreds of millions of data points and patients, where we can really look at longitudinal health histories and study what happens across individuals and also cohorts over time. So if you're talking about a group of patients in rheumatology or cardiovascular disease, you can see what they did, what treatments they took, how they responded, which types of physicians they saw. And all that helps to inform things about treatment effectiveness, risk, all these kinds of questions. I think a lot of times AI applications have said, let's take data like that and let's try to predict stuff that will happen. Let's predict if a patient's going to have a heart attack or if they're going to respond well to a particular treatment.

(16:19):

Those are good questions to ask, but we found that there was one level deeper we could go, which is to say, can we use AI to create not just point answers to those kinds of questions, but a more permanent, generalizable representation of the patient, which we think of as a digital phenotype, like a digital twin. It's similar to a genetic genotype, or even a blood sample: you can read a bunch of things out of it. We said, can we create a representation of a patient that we can read lots of things out of? So phenOM is essentially that. It's an engine that we built to take longitudinal histories and develop these phenotypic profiles or fingerprints, and then, when we do that for a group of patients with a characteristic like a rare disease, we can look at new patients and say, does their profile, their fingerprint, look similar to the reference we have on file? And if so, maybe they're at elevated risk of that undiagnosed condition. But we can also ask other questions about them. What are their chances of improvement under this particular treatment path? Whatever it may be. That ability to say we have a permanent representation and we can ask multiple questions, I think, is different from other AI uses, and that's why we put phenOM into place.

Rebecca Gwilt (17:37):

Okay, so let me say this back to you, because my brain exploded a little bit. So we're going to create this digital phenotype. So I'm imagining a hologram, like, walking through space that has certain...

Joseph Zabinski (17:51):

That's a good starting point. Yeah.

Rebecca Gwilt (17:53):

Physical, and like a Star Trek gal. So, physical and genetic markers, et cetera. Are you able to almost, I can imagine, in a clinical trial space, they would love to do this, right? Like, a hypothetical person, and then they test things on that person and see, given what the data says about the reaction, or, well, the likely reaction to whatever that treatment is, and gather data that would actually be helpful in the real world. Is that what you mean by asking it questions? Can you say more about that?

Joseph Zabinski (18:27):

Yeah, absolutely. That is one of the kinds of things you can do with this approach. What we'll say is, we have a, I'll call it a phenotypic representation of what patients with a certain characteristic look like, and you might say, what would happen if we had a patient who had that profile and who also had another one, another disease or comorbidity they might have? That can be used to study subtypes of disease, which is often a question in development, like you described: can we target smaller groups within a large disease category by using an understanding of how people differ within the same large condition? You can also use this kind of technology to say, are there aspects of a patient's profile that help us to call them out or identify them more quickly? The fully next-generation trial context you're referencing, where we have sort of a holographic patient and don't need to worry about testing people in real life anymore, is harder to do in the current moment, but at some point I think we will get there. We'll be able to perturb and vary these phenotypes enough to say, if we do this, what happens over there? And then you've got sort of the future you're describing.

Rebecca Gwilt (19:54):

Yeah, it's very, very cool. Well, what I hear most often is, we're working on integrating AI into our solution. It sounds like AI is the base of the whole concept and company for you all. One question that's sort of not off topic for me, but possibly off topic for you: how are you finding the challenge of sourcing the data that you need to develop the kinds of insights you want to provide?

Joseph Zabinski (20:28):

Yeah, so we're blessed at OM1 on the AI side that the folks in our business on the data side have done such good work in sourcing lots and lots of really quite rich and deep data that we can use to figure this stuff out from an AI perspective. And we also have a number of really great partnerships with clinical groups. For example, we partner with the American Academy of Dermatology to get data and really understand what's going on with a clinical lens on it, not just sort of an abstract database lens. I will say the data challenge that I run into more is when we want to generalize these things, when we want to take something we've learned from our large, deep, rich datasets and put it into a smaller dataset, put it into a health system. How do we understand what's sufficient there? What's necessary? If the data aren't clean, what do we have to do to clean them up? If they have all sorts of unique local data, what do we do about that? I think interoperability obviously has been a topic of conversation for a very long time in healthcare. I do think, at a minimum, these kinds of things will help force more and more translation among data contexts, because the value will just get bigger and bigger.

Rebecca Gwilt (21:41):

Okay. This is my last question. So my guess is that the evolution of capabilities and technology in the AI, LLM, NLP, machine learning space...

Joseph Zabinski (21:56):

All the things, all the acronyms.

Rebecca Gwilt (21:58):

I used to work for the government, so I speak fluent acronym. My guess is that the speed at which this will evolve and change is going to be sort of in hyperdrive over the coming months. What do you think this conversation would sound like if we were having it one year in the future from now?

Joseph Zabinski (22:23):

That's a great question. I think a year from now, we are going to look at certain of these applications, particularly with LLMs, and they're just going to be second nature to us all. I saw this referred to recently as autocomplete for everything. We're all familiar with this kind of technical function of autocomplete, which would've sounded futuristic and bizarre at some point in our past lives. I mean, it hasn't been around for that long, and now it's just second nature. I think that will be true a year from now, but I think we'll be talking about a lot of the same core problems, because they're not going to be solved in an instant.

Rebecca Gwilt (22:59):

Yeah. Yeah. Yeah. Well, I really, really do appreciate your focus on the problem, right? The problem, the impact, and I am excited to see what PhenOM does. I really appreciate your time and talent today, Joseph. If folks want to hear more of these kinds of insights or they're interested in exploring a business relationship with OM1, what's the best way for them to get in touch with you?

Joseph Zabinski (23:27):

Sure. Check out our website. We have lots of other materials on there. Our phenOM website there has me on it, and folks are always welcome to contact us through that. My email is Jayzabinski, my last name, @om1.com. If anyone wants to reach out directly, always happy to talk more. I know. Well, I just get excited about talking, so there you go.

Rebecca Gwilt (23:54):

You did. Okay. Well, thank you so much. For you listeners out there, I'm Rebecca Gwilt. I hope you enjoyed today's discussion on the clinical applications of AI with Dr. Joseph Zabinski of OM1. If you haven't already, please subscribe to Decoding Healthcare Innovation and follow us on LinkedIn. And as always, you can check out the links and resources in the show notes and find out more about our work with Healthcare Innovators at nixongwiltlaw.com. See you next time.

Announcer (24:25):

Thank you for listening to Decoding Healthcare Innovation. If you like the show, please subscribe, rate, and review on Apple Podcasts, Spotify, or wherever you get your podcasts. If you'd like to find out more about Carrie, me, or Nixon Gwilt Law, go to nixongwiltlaw.com or click the links in the show notes.