Podcast

Episode 1: Paul Howard – The Developing Role of AI in Healthcare

March 7, 2025

In the first episode of On Background, we talk to Paul Howard, former senior advisor to the commissioner of the FDA, about the developing role of AI in healthcare.

speaker-0 (00:01)
This is On Background, a deep dive behind the scenes with health policy insiders. Please note, everyone on this podcast is representing themselves. No one is speaking on behalf of any corporate, academic, or governmental entity. We’re just nerds talking about healthcare. With Steve Prunty, Matt Stoll, and our mystery health policy expert, JJ.

Welcome back and joining us today is Paul Howard. He is the former senior advisor to the commissioner at the US FDA and is now executive director of policy and patient experience innovation at Amicus Therapeutics. Paul, thanks for joining us.

speaker-2 (00:37)
Thanks for having me. Glad to be here.

speaker-3 (00:41)
We’re thrilled to have you here. I know you and I have worked together on some other policy work in the past, and we’re just going to kick this off. It’s purely conversational, and we’ll see how we go.

speaker-2 (00:56)
Yeah. And I’ll just clarify, I’m here speaking on my own recognizance, not representing Amicus Therapeutics. So you’re getting the pure Paul experience today.

speaker-3 (01:07)
Fabulous. Yeah. And I’m glad you didn’t go with “I’m just Paul,” because there is the Barbie reference there. We’ll just let that be. There’s no song and dance required yet with this program.

speaker-2 (01:22)
I’m gonna sneak in an Oppenheimer reference later, but just wait for it.

speaker-3 (01:26)
Yeah, thank you, thank-

speaker-0 (01:27)
Once we get sponsors, we’ll introduce the dancer teens. But yeah, that’s down the road.

speaker-3 (01:32)
And I’ll just put that disclaimer in too: I’m coming as Steve Ferranti, private citizen, not representing the University of Minnesota or anything else that I’ve done as a consultant or for a client, et cetera. But let’s just hop into this thing. So Paul, you had a front-row seat, basically, working at the FDA as a senior advisor to the commissioner, after wanting to do a lot of cool things with health policy. And frankly, correct me if I’m wrong, that was your first time really in government.

Right. So we’ll get to the whole regret stage of your experiences later. But one big question we want to throw at you: I know you worked on health AI when you were there, among other things you were engaged in. And back in 2019 or 2020, that wasn’t such a hot topic, but it certainly is the topic of the moment now. Talk a little bit about how that even came up, and how it’s working for you now.

speaker-2 (02:29)
Sure. I’ll say this: Scott Gottlieb was the commissioner when I was there, certainly one of the most innovative people ever to work in government. One of the things the agency was starting to grapple with at the time was the application of healthcare AI and how to develop a regulatory framework. At the time, and still now, the Center for Devices and Radiological Health was kind of leading the push there. And so I got involved through some friends and colleagues at what was then the INFORMED initiative at the FDA, which was trying to understand how the agency could use big data and analytics internally, and how to use those tools to grapple with regulatory submissions across its portfolio of regulated products. That’s how I got interested in it, started thinking about it, and began trying to understand how regulation was going to interact with this really fast-moving field.

To be honest, I actually co-published an op-ed with my friend Sean Khozin, when we were both at the agency, talking about what we thought the future of, for instance, wearable technology and EHRs would be. It looks like we were a few years too early, but you could connect the dots. You could see where the technology was going. You could see the consumerization of wearable tools and sensors. You could see the algorithms getting really strong.

And you could see how companies were beginning to incorporate these tools into regulatory submissions. So everything was on the table, but it was not yet assembled in the way that it is today. This is also probably something you and Matt are very familiar with: there’s really a hockey-stick trajectory to these kinds of technologies, where it’s no, no, no, and then yes, finally, it’s taking off. And I think we’re at that stage today with, of course, ChatGPT and everything else that’s going on.

speaker-3 (04:18)
Yeah, that’s a really good point. And ChatGPT, I mean, I think probably no one really thought AI was doing anything useful, even though AI was in the background checking our grammar for at least 10 years or so. Then, when it’s actually now able to write a short story for us, sometimes with more colorful language than we’d like, it’s an interesting tool. One thing I want to put in context here, to level-set for general audiences: when AI is being used in healthcare, it still isn’t a standalone system. It’s not really replacing anyone. In other words, an AI is not left on its own to produce a diagnosis entirely without a clinician by its side. Is that fair? Correct?

speaker-2 (05:00)
No, that’s correct. And it also goes to the rules of the road that regulators and developers are trying to build up around AI for explainability, so that whatever AI tool you’re using, of which there are many, you understand why it’s making a recommendation or why it’s suggesting a certain decision. A lot of tools, if they’re just used to present information that could otherwise be understood by the physician or the consumer, are going to be lightly regulated or not regulated at all. But once you start getting into tools that change or influence a medical decision, that’s where the regulations come into play. And that’s where you have to start talking about things like explainability, transparency, human in the loop, and a whole set of best practices that we’re starting to understand about how you incorporate these systems into decision-making processes around clinical care, or into decisions about clinical trial designs, recruitment, or designing biomarkers for trials, so that you know they’re not going to run off the rails, or, if there are problems, that you can detect and correct them.
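The distinction described above, between tools that merely present information and tools that influence a medical decision, maps onto a common decision-support pattern: every model output carries an explanation, and nothing is acted on without clinician sign-off. A minimal sketch in Python; the risk model, feature names, weights, and threshold are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    suggestion: str
    risk_score: float
    # Explainability: per-feature contributions so the clinician can see
    # *why* the model suggests this, not just what it suggests.
    contributions: dict

def risk_model(patient: dict) -> Recommendation:
    # Toy additive risk score; a real model would be trained and validated.
    weights = {"age_over_65": 0.3, "prior_event": 0.4, "abnormal_ecg": 0.2}
    contributions = {k: weights[k] for k, v in patient.items() if k in weights and v}
    score = sum(contributions.values())
    suggestion = "order cardiology consult" if score >= 0.5 else "routine follow-up"
    return Recommendation(suggestion, score, contributions)

def apply_with_human_in_loop(rec: Recommendation, clinician_approves) -> str:
    # The model never acts on its own: the recommendation, score, and
    # explanation are presented, and the clinician makes the final call.
    if clinician_approves(rec):
        return rec.suggestion
    return "clinician override: recommendation declined"

rec = risk_model({"age_over_65": True, "prior_event": True, "abnormal_ecg": False})
print(rec.suggestion, rec.contributions)
decision = apply_with_human_in_loop(rec, clinician_approves=lambda r: r.risk_score >= 0.5)
```

The design point is that the `contributions` field travels with every suggestion, which is the transparency requirement, and the final action passes through `apply_with_human_in_loop`, which is the human-in-the-loop requirement.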

speaker-3 (06:05)
Okay, makes sense.

speaker-0 (06:07)
So, a quick question. One thing that folks have commented on, if you map AI’s development against the hype cycle, is a lack of really great killer applications. Where do you see those emerging in healthcare?

speaker-2 (06:22)
That’s a great question. I think what we’ve seen early on, and these are still tools that are coming online, is that it’s going to be a lot in decision-support tools. If we think about the physician today, they are drowning underneath a sea of requirements for testing, engagement with their patients, and maintaining budgets if they’re in capitated plans. There’s just so much cognitive burden on the physician today that the win is anything that helps them pay less attention to being a data-entry clerk and more attention to actually managing the patient in front of them, so that the physician can use more of her bandwidth and brain power addressing chronic diseases or applying compassion to the person sitting in front of her, as opposed to what I think a lot of physicians feel they’re doing now, which is just inputting data for someone else to make a decision about whether or not it’s going to get covered.

speaker-3 (07:22)
I guess the killer app thing intrigues me a lot, because I can imagine, you know, ChatGPT kind of grabbed everybody’s attention, and the diagnostic pieces are there. Where do you think there might be that moment for consumers, when they see it really impacting them directly? Is it going to be like telemedicine, where a super-smart system does almost a triaging job, but you’re not even sure that it’s actually a person you’re talking to? Or is that just a scary prospect in itself?

speaker-2 (07:56)
I think that’s a great question. And maybe you could help answer it, Steve, because what people think is that AI, or machine learning, is new. Companies have been using this technology for 10-plus years in a lot of different industries. It’s how Netflix makes predictions about what you want to watch next. It’s how Amazon suggests products and services for you to use, or how American Express catches potential fraud on your card.

In terms of the killer app, it might be that these systems operate with greater success invisibly. It’s the things that you don’t see happen. Maybe you get the correct test recommended by your physician in a more timely way. If it helps the system be smoother and more efficient, so that you spend less time waiting in a room, or you get more personalized service, I think those are the low-hanging fruit that we know are still huge headaches in healthcare compared to other industries. Those might be the killer apps, even though they’re the back-office functions that people don’t pay enough attention to. But that’s also where there’s a whole lot of friction in the healthcare system that adds costs for everyone.

speaker-0 (09:09)
A question on government policy. Even today, there were Senate confirmation hearings for the HHS Secretary. How has the US government’s policy evolved over the last few years as we start trying to tackle these issues?

speaker-2 (09:26)
Yeah, I think it’s still evolving. We have yet to see where this goes in the second Trump administration, but between Trump’s first term starting in 2017 and then the Biden administration, I think there was a lot of overlap in how both administrations, at least initially, were approaching AI regulation. I think we’re maybe going to see a little bit of a different tack here, at least from the perspective of how people like Elon Musk are getting involved in the process. But in the last few years, things have just taken off so fast. I mean, it’s hard to believe that ChatGPT has only been here for two years now, basically. So I think what people are trying to do is assess risks around automation, and around how we’re asking these systems to manage large amounts of information and deliver more predictive recommendations, whether to a physician, a regulator, or a patient or consumer, that are really actionable.

And the challenge, I think, is around things like hallucinations: improving the reliability of the algorithm. I think we’ve made some strides there lately, but we’re still grappling with what happens once an LLM starts reasoning incorrectly. One analogy I saw in an article recently: it’s like a three-year-old when it gets an idea in their head. It just doubles down on wrong. So those are the kinds of problems we’re going to have to solve for.

And I’ll say one thing that I think is important: having test beds. Places where you have regulators at the table, innovators at the table, people with domain knowledge at the table, expert clinicians and researchers, and of course patients at the table, to ask: what are we trying to achieve? How are we going to go about doing it in a way that respects privacy, protects patient and physician autonomy, and still delivers value to the system? And how do we do that in a way that’s transparent to everyone at the table about how you’re moving the dial, what outcome you’re getting, and how it improves on the status quo? I think that’s the bottom line: find a problem, demonstrate that you can fix it in a repeatable and scalable way, and still be able to drop in there and fix it when it goes off the rails. Because, as we know, something is going to go wrong. Sometimes it is going to go off the rails. How do you have a dashboard that helps you identify when something goes wrong, and then step in and fix it?

speaker-0 (12:00)
It’s funny, because I was working with a couple of neurointensivists many moons ago, and what they wanted to see out of AI mirrors exactly what you’re talking about. It was: tell me when a patient’s going off a cliff, so that I can manage a larger patient base more effectively. Or tell me when they’re about to go off a cliff, so that I can intervene before the cliff shows up and do more with less.

speaker-2 (12:28)
Yeah. Getting back to the killer app conversation, we’re not there yet, but if you look at, let’s just say, pharmaceuticals, if you look at drug development, it’s a field where there’s basically a 90-percent-plus failure rate, depending on the therapeutic area you’re looking at. We’re talking about 10 years; we’re talking about two and a half billion dollars at least. And the reason we have so many problems is that our understanding of human biology is just not where it needs to be. If Amazon knows your zip code, it can get a product to you in 24 hours. Healthcare needs that kind of ability to understand disease biology. For some cancers, a 20% response rate is really, really good.

Getting that higher and higher, being able to individualize treatments, and then being able to put the right patient in the right trial at the right time: that’s the golden app. The killer app is being really predictive in your ability to match a drug to a phenotype, or a phenotype-genotype, that’s going to be a high responder, or to keep a person who’s going to have a bad effect out of the trial. All of our systems right now are episodic, I think, as you pointed out. They depend on one-off, very brief interactions with a patient. So a lot of the data that we need to make really predictive decisions just isn’t in the system yet. I think that’s where AI can really help.

speaker-0 (13:52)
Almost juicing personalized medicine, to a degree.

speaker-2 (13:56)
It is. If we look at wearables or other tools, it’s about developing technology that’s unobtrusive, that’s reliable, that generates 24/7 data. When you join that with an EHR record and with a diagnostic test or scan, and you have it over time, then you can start to say that person is their own control. And I can see where a person’s going off the rails, or maybe they just had an infant that kept them up all night, and that’s why their biometrics are so off today.

speaker-3 (14:30)
So Paul, I want to ask you a question, just to have you reiterate this. When you talked about failure rate, you said 90%. Ninety percent fail. Did I hear that right?

speaker-2 (14:38)
Yeah,

I mean, if you look at everything that goes into a trial, from phase one, which is basically safety testing, through phase two, when you start to look at not just safety but also efficacy, and then phase three, which is generally a randomized controlled trial to demonstrate efficacy: again, depending on the indication, somewhere upwards of 88 percent of things are going to fail. And they fail for all sorts of reasons: because we didn’t have the right endpoint; because the data we got in a phase two trial looked better than it really was in a phase three trial; because we didn’t have a really good natural history of the population; because we didn’t enroll the right cohort of patients; or for reasons we just don’t understand.

The analogy I used when I was at the agency was: if we built airports the way we do clinical trials, every time you flew someone from Newark to Minneapolis, you’d land the plane, tear the airport up, and start over again the next time you had to fly someone out there. That’s a lot of what we do. It’s one-off. It’s not iterative. You’re not really building a consistent knowledge base that would allow you to develop really sophisticated disease models, models that would let you leverage what you’re doing on the pharmacology side with what you’re learning from the patient side.
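The way per-phase attrition compounds into that headline failure rate is simple multiplication. A quick sketch; the per-phase success rates here are illustrative assumptions, not figures quoted in the episode:

```python
# Probability that a candidate entering phase 1 eventually succeeds is the
# product of the per-phase success probabilities. The rates below are
# illustrative assumptions; real rates vary widely by indication.
phase_success = {
    "phase_1": 0.60,  # safety
    "phase_2": 0.30,  # safety plus an efficacy signal
    "phase_3": 0.60,  # randomized controlled efficacy trial
}

overall_success = 1.0
for phase, p in phase_success.items():
    overall_success *= p

overall_failure = 1.0 - overall_success
print(f"overall success: {overall_success:.1%}")
print(f"overall failure: {overall_failure:.1%}")
```

With these assumed rates, roughly 89% of candidates entering phase 1 fail before approval, which is how modest per-phase odds compound into the 88-to-90-percent range discussed above.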

speaker-3 (15:53)
So I’m going to throw a question at you on Stargate, because I personally think that there should have been a concluding episode where we could have put Stargate Atlantis back together with the original show and unified it with the movies. I think, Paul, you would probably agree with that, having watched those shows as thoroughly as I did in our youth, right?

speaker-2 (16:13)
Yeah, although they’re very distant memories now, sadly. So, there’s been a lot of talk about this. If we’d had this conversation a week or two ago, I think it might have been a different conversation, because the assumption going forward was that large language models like ChatGPT are enormously data-intensive and enormously expensive to train. And when you’re trying to do inference at scale, especially with the latest chain-of-thought techniques that GPT and other models are using, they’re expensive to run. Then a Chinese startup by the name of DeepSeek basically came out, I think in the last week, and put out an open-source model demonstrating a new approach to architecture that made the model a lot less expensive to train and run. And for healthcare purposes and competitive purposes, this is actually good news.

Because you want models that are inexpensive, that don’t use as much energy, and that you can design and specialize really quickly. Your ability to iterate and scale models like those, without having to worry about running hundreds of the most advanced chipsets from Nvidia or whoever, is actually good news for the people who buy and use the models, which is of course 99% of people, as opposed to just the small group of companies that are building hardware. And so I think this could be a moment like when computing moved from room-sized or house-sized machines down to laptops. That could be the kind of pivot for AI innovation.

speaker-0 (17:54)
And I’ll be honest, my understanding of AI is incomplete. If you took the techniques that DeepSeek used to make such an efficient model and applied them to the ones that OpenAI and Google and other companies are coming up with, would it just make them even higher-horsepower than they are today?

speaker-2 (18:12)
I think that’s exactly the right inference to make. The thing you have to keep in mind is that all these models are learning from each other. Once you put a model out, Meta is learning from OpenAI, OpenAI is learning from Meta. They wouldn’t put it this way, but let me put it this way: every competitor is learning from the models that others put out in public. Once you have a model, you’re using it to train your next model, and that means your competitors are probably doing the same thing with your model.

So there’s a real software-like dynamic, and very understandably so, where everyone is reverse-engineering everyone else’s discoveries and then using them to improve their own product and have faster development cycles. That’s what I think is happening in real time, and we’re just watching it happen. And it’s really impressive that DeepSeek has put all this out as open source. There’s also that competition going on in the AI field between people who are developing closed, private models and people who are putting it all out open source. I think that’s a really, really interesting dynamic, and we’re going to see which model turns out to be better for which application. But absolutely.

speaker-3 (19:23)
So Paul, first I want to thank you, because once upon a time you taught one of my classes here at the Medical Industry Leadership Institute, and students still love what you did. We’re going to try to bring you back as a guest speaker somehow. But you reversed Moore’s law for the students, if you recall. And I’m going to throw that back at you now, because you gave the perfect illustration of how computers will learn and learn and learn, which goes to Moore’s law in terms of computing power, but yet you used

speaker-2 (19:38)
Yes.

speaker-3 (19:52)
the opposite of Moore’s law for medical innovation. Do you want to expand on that a little bit for the viewers at home?

speaker-2 (19:58)
Yeah. And let me give credit where credit is due: that’s a term I can’t take credit for. Jack Scannell wrote an article for Nature Reviews Drug Discovery back in 2012, I think it was “Diagnosing the decline in pharmaceutical R&D efficiency,” and he coined the term Eroom’s law, Moore’s law spelled backwards, because in other industries, when you apply technology, costs go down. What Jack noted was that as we were dumping more and more technological inputs into drug development, costs were going up and productivity was actually going down. So he coined that term. I think the situation has reversed itself a little bit in recent years. We’re getting better, especially in particular fields like cancer, which illustrate the precision medicine point I’m trying to get at. Back in 2005, we thought that lung cancer was large cell or small cell. Now we know that there are dozens of different cancer gene mutations just for lung cancer that are driving the disease.

And you have to have essentially tailored treatments for all of those quote-unquote subcategories of lung cancer. So we’re even getting away from thinking of cancer in general as something that’s defined by organs. We’re defining cancers based on genetics; we’re defining them based on proteins that are or aren’t expressed on the surface of cells. Our ability to classify these diseases and drive drug development against them is what we really need. Oncology is at the forefront, and we’re trying to take what works there and apply it to other indications. I mean, Matt, you must see this with medical technology applications, right? You spoke a little before we were live about how having a real understanding of where the value chain is in healthcare is really hard. People just think you can walk in and disrupt it or work around it, and you really can’t. It’s really difficult to do that.

So if we want to talk about healthcare and AI, I think we have to identify the places where there’s a lot of friction right now. By reducing that friction, you’re improving someone’s margin by 5%, by 10%. Those would be killer apps for healthcare, or for drug development in particular: if we could improve our productivity in trials by 5 or 10 percent, or the success rate of the phase two to phase three transition by 5 or 10 percent, or get to market six months earlier, right? That is a huge advancement. So when people talk about killer apps, the reply I’ll offer is: Amazon started out selling books, right? You don’t have to fix everything. You just need to find, like Toyota did, a niche where you can introduce a new product at an advantage over an existing provider. And that, I think, is where someone’s going to figure it out.

speaker-3 (22:52)
I want to go back to the Stargate thing for a second, because as I watched the announcement of Stargate, the first thing that was brought up in the press conference, with everyone there in the White House, was that healthcare is going to be the big thing. It’s going to make all these changes; that’s the big selling feature. And yet, and tell me if this is right, Paul, because this is where my skepticism comes in: to make the large language model work in healthcare, the patient-record story that was told goes, we know all this stuff about this person, it’s there, we can match the exact treatment because we can see other people who are similar, and it’ll be great. Except all the data’s locked down. Right? I mean, I teach a course in health IT. Matt probably still remembers taking that course from me, 20 years ago maybe. And you know the great thing about teaching that course? I don’t have to change my slides, because they basically say “in the future.” And the future isn’t here.

Because the data is so siloed. One Epic installation for hospital EMRs is one Epic installation. And even though there is interoperability, and FHIR, and all these things, if the institutions themselves don’t want to make data totally free-flowing beyond their own corporate intranets, to connect with all the other systems, it’s just not going to happen. Am I wrong?

speaker-2 (24:18)
No. I mean, look, this is something that I think you probably grappled with when you were in government too. There were CMS’s interoperability rules, right, that they put out for EHR records? I think we were there in 2018, 2019, and I think the penalties under those rules just got rolled out last year. So there’s a long lag. And people have incentives.

The way I put it is this: the challenge with healthcare is that all healthcare is delivered locally, right? For any individual provider or hospital system, the incentive is not to make it easier for you to go somewhere else. And when someone’s really sick, they don’t want to travel. They want care that’s as convenient as possible, as close to where they live or work as possible. So driving scalability in healthcare is very different from driving scalability in banking and finance, where every node in the finance cycle has a real incentive to make information about your cash and your purchasing available to you 24 hours a day, seven days a week. They want you to interact more. They want you to be more comfortable moving through the system, because they take a little piece of the action every time you touch the system.

So the incentives are just very different in healthcare. Although I will say, again, thinking about where the test beds are and where the gains are going to be had: I’d look at the VA. Look at populations where you’ve already got a very sticky system, or look at integrated health systems where they own the insurance, the clinics, and the hospitals. They’ve got a real incentive to optimize outcomes, keep costs down, and keep you in network. Those are the places I would probably start thinking about it. It’s going to be a matter of figuring out how to move the book to the person, like Amazon in terms of its original aspirations: get a proof of concept, deliver a product, make the customer happy.

And the people to watch here: Tempus AI actually just put out an app last week, I think called Olivia. As I understand it, and I haven’t talked to them about this, what I think it’s trying to do is allow a patient to upload all of their medical information into an AI system and help the patient develop essentially a dashboard for understanding the flow of their health state over time. That includes their diagnostic scans, their tests, their patient notes from the EHR encounter, and it then uses all of that to help them have a smarter conversation with their doctor. For the average healthy person, probably not terribly interesting. For someone who’s got cancer or a serious chronic disease, that’s potentially a really powerful application, because it does allow you to query, drive discussions, drive decision-making, and potentially share that information in a really tractable way. So again, those might be the systems to look at first to drive the friction down.

speaker-0 (27:29)
Are the EMR companies positioning themselves to go after these? You would think they’d be in perfect shape to do this.

speaker-2 (27:38)
What do you think, Steve?

speaker-3 (27:40)
I don’t think so, because their profits are fine the way they are. But one thing, Paul, that gave me hope was that you’re right about the VA, or about a Kaiser Permanente, where it’s a totally closed system and they have Epic, or at least Kaiser has Epic as their install, right? And I’m not going to pick on Epic in particular, but they’re operating something like a Bloomberg terminal model. The Bloomberg terminal could easily run on any Chromebook or laptop; it doesn’t need dedicated hardware. But there’s value in that hardware switch for the vendor that’s making a profit on it, and I think that still applies to Epic. They don’t really have an incentive yet, until someone shows them that without their data going into the stream, the money you could make from a large language model for medical care will not be realized, and then they get pressure from others.

So I think we’ll see. For example, if Kaiser suddenly corners the market for data feeding in, which makes sense, since it’s on the West Coast and there’s a lot of AI on the West Coast, then that could change everything, right? Then there’d be more pressure on other, more recalcitrant health systems, or providers, I should say, that don’t want to share their data, to say: well, we’re losing out, because our patients can’t plug into the models to get the full advantage. Does that make sense or no?

speaker-2 (29:03)
No, I think that’s a good point, in the sense that there needs to be a monetization play for the EHR vendors in this space, exactly as you said. They need to have a way to say: you know what, I’ve developed a better model for predicting who’s going to have a heart attack six months before they have an event, and I’m going to use that to drive the ability to enroll an enriched trial for a pharmaceutical company, right? I want the high-risk patients in my trial so I get a really quick readout on whether or not my product is effective. So yeah, I think they need to see that.

And that, again, gets back to the test bed concept of getting multiple stakeholders at the table who can effectively check each other’s work, probably in pre-competitive spaces where government and regulators are at the table saying this is what good looks like, which allows it to be transferable and scalable. I think a lot of the systems we’re developing today are getting to the point where we can do that. And there are companies working on that, not to plug any of them in particular, but like Owkin, that are developing federated learning systems, among others. So you don’t have to make your data available to everyone in the system, but people can query the data; people can start to look for associations. The ability to drive hypothesis testing, which we talked about earlier, would be really important. And then you can think about a way of developing licensing and monetization around that data: whatever tool you need to say, I’m developing a profile, and if you have really high-quality data, and I can verify it’s high-quality data that lets me develop a better biomarker or faster trial enrollment, then we can talk about sharing the gains from that efficiency.
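For readers unfamiliar with the federated learning approach mentioned above: each institution keeps its raw records on-site and shares only model parameters, which a central server averages. A minimal federated-averaging sketch; the toy data, sites, and learning rates are invented for illustration and say nothing about any particular vendor’s system:

```python
import numpy as np

# Each "hospital" holds its own data locally; only model weights leave the site.
# Toy task: fit a linear model y = w . x by local gradient steps, then average.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_local_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

hospitals = [make_local_data(200) for _ in range(3)]  # three sites, data never pooled

def local_update(w, X, y, lr=0.1, steps=20):
    # Plain gradient descent on mean squared error, run entirely on-site.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(5):  # federated rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in hospitals]
    w_global = np.mean(local_ws, axis=0)  # server averages weights only

print(w_global)  # converges toward true_w without any site sharing raw records
```

The design point is that `local_update` runs where the data lives; only the weight vectors cross institutional boundaries, which is what makes the “query without handing over the data” model described above possible. Production systems add secure aggregation and formal privacy protections on top of this skeleton.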

speaker-3 (30:50)
So, one thing I want to close on, a few questions we want to ask all our guests. After doing government service, you made it through. I’m sure there’s the exhilaration, I remember the exhilaration of going in, and then the catharsis of leaving. But what you take away from it, I suppose, is some lessons learned and how you approach things. Everybody has that. What would you say your best positive takeaways from the whole experience were?

speaker-2 (31:17)
I had a lot of opinions about the FDA before I went to work there. It is an agency that works incredibly hard. The people there are working much harder for much less pay than they would in a private sector job. They’re really trying to do the best that they can with the information they have available to them. And, like referees in the next Chiefs game, they’re going to be accused of not getting it right all the time, no matter what they do.

They’re either not going to get credit when they do the right thing, because someone’s going to say, you know, pharma did it, or they’re going to take a lot of flack when people perceive that they did something wrong. So, you know, like all of us, they’re laboring under uncertainty, and they’re working in a really highly pressurized environment, but I think they do a really good job. The challenge they’re facing now with these technologies like AI and wearables is they can’t compete with us, with pharma, for the best talent, for the latest engineering, ML engineers, honestly. You know,

pharma can’t compete with Facebook, Amazon, Netflix, and Google either. So there’s a real divide between the ability of government to be at the table and drive the conversation. So what I took away from it is we need regulations that are really agile, that move at the speed of the science, and that agencies like the FDA need their own machine learning stack. And when I was there, I worked with

Amy Abernethy, who was there, who later went over to Verily. But that was one of her jobs at the agency that I think she did really well: create an infrastructure for data and data modernization at the agency, which I think is basically, you know, job one of the agency in this environment. I know a lot of former colleagues who are still there who are really working on that very hard. But I think that government has got to learn how to apply these tools. I think there are a lot of really important applications for knowledge management

and having real-time response, being able to look in real time at extremely large databases to try and track rare adverse events and determine whether or not they’re signal or noise. I think that’s one of the things I learned the most: 21st century government is not going to look like government did 20 or 30, or maybe even 15, years ago.
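One common way safety teams separate signal from noise in large adverse-event databases is a disproportionality screen such as the proportional reporting ratio (PRR): does this event show up more often for this drug than for everything else in the database? A minimal sketch, with invented counts:

```python
# Toy proportional reporting ratio (PRR), a standard first-pass screen in
# pharmacovigilance. The counts below are hypothetical, for illustration only.

def prr(a, b, c, d):
    """
    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug
    c: reports of the event for all other drugs
    d: reports of all other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 of 1,000 reports for our drug mention the event,
# versus 50 of 100,000 reports for everything else in the database.
score = prr(20, 980, 50, 99950)
print(round(score, 1))  # 40.0, well above the conventional PRR > 2 screen
```

A high ratio doesn’t prove causation; it flags a drug–event pair for the kind of expert review the FDA does at scale, which is exactly where faster querying of very large databases pays off.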

speaker-3 (33:31)
Very wise. I guess one thing, since we try to ask this question of folks, as there is a transition time: if you had any one piece of advice for the future incoming FDA commissioner, what might that be?

speaker-2 (33:46)
Wow. I will just take some lessons learned, I think, and maybe he wouldn’t say it this way, but I’ll say it this way. What Scott accomplished when he was FDA commissioner, well, this was his second or third go-around with the agency. He had a deep knowledge of the agency, he had a lot of deep relationships. I’d say one of the most important things for a commissioner to do when you land is go deep. Really get to know the staff, really get to know the centers, be highly visible, spend a lot of your time talking and being seen,

and being seen to be a champion for the agency’s priorities, and be an advocate for the really innovative things that the staff are trying to do. Because then you raise morale. You are an advocate for your agency on the Hill when you’re trying to ask for more money, and who isn’t going to Congress to ask for more money or more flexibilities? All of those things. So I’d say the time that you spend really

building credibility and cachet inside the agency is going to make you more effective on the Hill. It’s going to make you more effective in public. It’s going to make you a better counselor to the president when you’re giving him or her advice. All those things are going to make you a more effective commissioner, a more effective advocate for public health, at a time when the technology is moving really, really fast. And, you know, as I think we were saying before we went onto the podcast, you want to move fast in healthcare, but you don’t want to break anything. You don’t want to get anyone hurt.

You don’t want to have a bad outcome. So your ability to kind of dial in and engage and be a strong advocate, but do so in a way that builds credibility of all your key stakeholders on the Hill, at the White House and in industry is really important.

speaker-3 (35:23)
Awesome. Matthew, any last thoughts to conclude, or are we…?

speaker-0 (35:28)
we’re

good. Paul, thanks for your time today. Really appreciate it.

speaker-2 (35:32)
My pleasure.

speaker-3 (35:56)
as I was down the shore in the summers, we could at least lament over those things.

speaker-2 (35:59)
I

mean, you know, you’re from Philadelphia. Philadelphia is practically, you know, Jersey by honorable mention. Yeah.

speaker-0 (36:05)
I mean, you have to fly into Newark. Like, yeah, it’s basically the same. Yeah.

speaker-3 (36:09)
We don’t… Newark’s always an option.

speaker-0 (36:13)
I mean, it is an option. It’s not a good option, but it is an option. Yeah. See, I’m from Pittsburgh.

speaker-2 (36:18)
They’ve got two new terminals there. They’re actually pretty good, I’ve got to say. All right.

speaker-0 (36:22)
Fair

enough. It’s been a while. Yeah. And I’m from Pittsburgh. I’m on the other side of the state. So.

speaker-2 (36:26)
Yeah.

speaker-3 (36:28)
You realize, Matt, it took me 50 years to get to Pittsburgh.

speaker-2 (36:32)
Ha ha ha.

speaker-0 (36:34)
I haven’t been to Philly in, yeah, 30. So I guess it’s only fair.

speaker-3 (36:38)
Those Appalachians

speaker-0 (36:41)
Oh yeah, it’s a

big barrier. It kind of keeps everybody walled off.

speaker-2 (36:45)
I’ve heard that Pittsburgh is a really great city that’s really had a renaissance in the last 15, 20 years or something.

speaker-0 (36:52)
It really has. I was talking, my parents are still there, all my dad’s family is still there. And he mentioned something about it being one of the top food destinations in the country now, which stunned me. You know, when, I think it was Google that first figured out, or Uber, I’m sorry, that CMU had a fantastic robotics department, and they dropped an office there and then looted it. And then Google and Apple followed suit, and Pitt’s got great engineering. Folks were kind of figuring out that there was a lot of

really great stuff there for a fraction of the cost.

speaker-2 (37:23)
Yeah, you’ve got Highmark there, right? You’ve got a couple of big anchor tenants: PPG, US Steel. It’s up there on the list of the top 10 best small cities in America.

speaker-0 (37:34)
Yeah, it’s still an affordable place to live at the same time. So it’s really nice.

speaker-3 (37:41)
And if the Steelers ever do get a chance to win the Super Bowl, they won’t grease the light poles. So that’s the bonus there.

speaker-0 (37:48)
They’re not as heartbreaking as the Vikings, but some years, you’re like… yeah.

speaker-2 (37:53)
But they’re not in the rut that the Jets and the Giants have been stuck in for a very, very long time. We have to envy Pittsburgh’s success.

speaker-0 (37:58)
Excellent point.

speaker-2 (38:33)
You know, before we go, the one thing I will also say: I think it was you who said it, Matt, about the ability to have smaller, cheaper models. I think for the last week or so we’ve seen Wall Street panic over this, right? Yeah. And Nvidia lost like $250 billion of market cap or something like that. But I think you’re right. You read between the lines, and you know, I mean, it’s going to erode some of Nvidia’s advantages in terms of its competitive…

the moats that it’s built around its technology. But anything that allows you to do more with the infrastructure you’ve got is eventually going to be additive to all of those companies. I think the market’s going to recover.

speaker-0 (39:11)
I thought, okay, so if you’ve got a more efficient model and you throw at it the kind of horsepower that Nvidia is developing, or constantly developing, then what can you do with that? Yeah, just, you know, build a bigger engine based on those principles.

speaker-2 (39:22)
Right.

Yep. Yep. And then, like I said, I think the killer app is going to be being able to deploy lots of cheap models all over the place and, you know, run them without having to build new data centers first. Yeah.

speaker-0 (39:40)
massive buildouts.

speaker-3 (39:45)
My question’s going to be power consumption, because you’re actually right. If you can deploy it off much smaller form-factor machines (there are my IT nerd proclivities coming in), you don’t have to recommission Three Mile Island at that point.

speaker-2 (39:57)
Note to Microsoft. Yeah. I mean, it’s really…

speaker-3 (40:01)
I attribute our hair loss, Paul, by the way, to Three Mile Island. I would have had a full head, because my brother, my brother had moved away by the time that the plume came out, you know, and he has a beautiful full head of hair, so-

speaker-2 (40:10)
I

mean, there’s a whole other really interesting conversation I would love to have with somebody who sits around and thinks about this. Again, the analogy I’ve used for AI in healthcare is: remember when we started getting very bad desktop computers in like 1991, ’92, right around that time? And we were like, the IT productivity revolution is coming and it’s coming and it’s coming. And companies were just dumping money into IT.

And it didn’t really hit until the latter part of the decade, because it’s not just that you have a tool, but did you figure out how to use the tool? And did you reallocate all of your processes to maximize the use of the tool? And I still think that’s where we are for AI, where just dropping something into your existing business process is not necessarily going to give you optimal advantage. You’ve got to figure out how to, you know, like a lot of companies are doing:

Hey, how do I actually make all my data talk to each other? How do I allow my people to visualize the data and ask and query the data and do it in really effective ways? And then once I have an insight, how do I drive it really quickly?

speaker-3 (41:18)
That is the final word.

speaker-2 (41:21)
That’s the word.
