A lot of academic researchers, journalists, NGOs, and even a few tech firms are working on the issue of disinformation. Some people, especially on the political right, oppose this work and have given this disparate group the ominous collective nickname of the "disinformation-industrial complex," as if it were a monolith devoted single-mindedly to censoring unpopular voices.
The fact is, this is no monolith. The fragmented nature of the fight against disinformation weakens the effort, and that’s what Phil Howard and Sheldon Himelfarb want to solve. Phil and Sheldon are the co-founders of the International Panel on the Information Environment (IPIE), which was born to bring together the world’s best scientific minds on the topic of information integrity and democracy. Phil is a professor at the University of Oxford, a global authority on technology and public policy, and author of 10 books and over 100 papers. Sheldon, in addition to being the IPIE’s executive director, is also the CEO of PeaceTech Lab, which has won global praise for equipping peacemakers with tech and data tools.
Sheldon, Phil and I discuss why citizens need to upskill their news literacy; whether social media or governments are the most toxic players in the ecosystem; the scarcity of data on disinformation solutions; where the trends are pointing and what it would take to turn them around.
This episode was produced by Tom Platts.
Eric Schurenberg (00:02.856)
Sheldon and Phil, welcome to In Reality.
Sheldon Himelfarb (00:06.526)
Nice to be here, thanks.
Phil Howard (00:07.67)
Thanks for having us.
Eric Schurenberg (00:09)
It’s great to have you. Now, the work of the IPIE is to aggregate knowledge about the global information environment. Tell me more about what your goal is and what success is going to look like, and then I’d like to get into a little bit about what brought you together around this particular approach to the topic. But, Sheldon, why don’t you start?
Sheldon Himelfarb (00:36.926)
Well, Eric, I’ll toss to Phil on metrics of success, but I do think it’s worthwhile to start with me on where we began, because this journey for the IPIE actually began two years ago, at the Nobel Prize Summit in 2021, when we were seeing things around the world that I had never seen before.

From my perspective in the peace-building business (I was the CEO of the PeaceTech Lab), having worked in conflict zones for the better part of 30 years, I know the first thing that goes out the window in conflict is truth.
But what we’d never seen before was the ability of an individual sitting in Virginia, for example, on their laptop, to inject themselves into a tribal dispute in South Sudan with hate speech. And that was actually leading, because we were doing the correlations, to people getting dragged off of buses and killed. Now, what’s unprecedented about that is that it’s an individual. It’s not an institution, it’s not a government. It’s an individual able to cause such hell and havoc on the other side of the planet with the speed of electrons. And I was seeing that happen in a number of different conflict contexts.

So I asked a couple of the brightest people I knew, like Phil Howard, who had been doing some incredible research on mis- and disinformation in the 2016 presidential elections, trying to analyze foreign influences in the US elections. Phil, along with Vint Cerf, one of the fathers of the internet, Tawakkol Karman, the Nobel Peace Prize winner, and a couple of others. We came together to discuss this unprecedented situation. And there was general agreement at that time that we needed to do a lot more to understand this problem than the drumbeat we were seeing among regulators: that it’s all about content moderation, all about getting Facebook or Meta to pull things down faster. It’s not. It was much more complex than that, but we really didn’t know the dimensions of the problem. We really needed to apply a scientific approach. And that’s where Phil and I launched this idea of bringing together the best science and research in the world to try to understand the magnitude of the problem. People were talking about it, but it felt at the time, Eric, like when all you have is a hammer, every problem looks like a nail; that’s what the content moderation debate felt like. And there was very little good science out there.
And so we were talking about taking an IPCC-like approach, which is to bring together the best research in the world. It was a good analogy for us because, in the early days of climate change research, the same thing was going on: lots of different silos of information, but very little effort to pull it all together, until the IPCC did that. And that’s what we started to discuss back in 2021. And then we started to build the IPIE as an actual organization. Let me turn it over to Phil now to talk about how we think about the IPIE going forward and what success looks like.
Eric Schurenberg (04:55.712)
Well, Phil, in particular I’d ask you to address where you saw, at the very birth of the IPIE, the real gaps in knowledge, and where you think the next steps in directing research should go.
Phil Howard (05:11.51)
Great question, Eric. I think there were gaps in our knowledge, but there were also some tantalizing things we did know. There are some effects we do know about. We know what cognitive biases there are in the human mind: we don’t like contradictory information; we tend to vote for the same party we voted for last time. There’s a bunch of cognitive biases we do know about.
Independent researchers and journalists were producing good evidence about how social media was taking advantage of those cognitive biases, for a little profiteering, to deliver messages on behalf of hostile foreign governments. And it was pretty clear that it was not enough for us to simply spot an information operation and try to call the world’s attention to it.
There were a couple of regulators around the world that were over-regulating, going too far on political speech. There were technology platforms that weren’t implementing the community norm guidelines they had written for themselves. So when Sheldon and I and others had that fabulous set of conversations, in my mind it was about trying to figure out how to surface scientific consensus, to express ourselves with a unified voice. Think back to the way astronomy used to be done: lots of independent astronomers with their little telescopes in their backyards, working from their universities, trying to study phenomena at the far end of the galaxy. Those phenomena are just as complex as the problems of misinformation and algorithmic bias. But astronomy has come a long way. The telescopes got bigger, people collaborated, they built bus-sized telescopes and launched them into space. And we’re at that very early, field-defining stage where we need the research community to come together, start to build some instruments that work together, and start looking in the same direction.
Eric Schurenberg (07:20.78)
As you’re both well aware, the research itself has become a little bit politicized; even the term misinformation has become politicized, and delving into the question has been effectively vilified by some parties as censorship of ideas that differ from elitist norms.
What do you do to reassure audiences that the IPIE is not just another attempt to quash ideas that don’t resonate with the elites?
Phil Howard (07:56.714)
Absolutely. We’re not about evaluating particular chunks of content. You wouldn’t want to put a bunch of scientists on evaluating truth claims at scale across an entire social media platform, so that’s not something we can help with. What we can help with is identifying those moments when something that might be junk gets artificially spun out to 100,000 people the night before an election. That kind of infrastructural interference is what we can look at.
We can audit an algorithm to see if it’s purposefully designed to manipulate some outcome in a way that’s against the law or against our values. That’s the stuff we can help with. So the mission of the IPIE is to stay closer to those computer science, engineering, and public policy impact kinds of questions, where we can get evidence and where we do know how to have an influence.
Sheldon Himelfarb (08:51.99)
Eric, I’d add another factor there. What you’re really asking is: how can the IPIE set itself up as, or become, the trusted science advisor that is desperately needed for this work? In addition to everything Phil was saying, we are an international organization, very diverse, right now sitting at close to 300 research scientists from about 60 countries. So there’s a certain diversity there, and we have to set very high standards, of course, for the research. But you can be sure that it will not, it cannot, represent a single pole of the discussion. It’s going to be very much about the science.
Eric Schurenberg (09:48.852)
Okay, all right. Well, thanks for that description. From my observation of the research on this issue, it seems pretty clear that those in the academic world, or those like me in the media world, are pretty good at admiring the problem, not so good at finding solutions. Your own IPIE study on countermeasures suggested that there is a limited amount of research on the effectiveness of the various interventions designed to counteract misinformation. Is this something where you think you can direct new research?
Phil Howard (10:34.166)
Absolutely. We will probably never have the budget to do all the research we want to do for policymakers, and, I should be clear, it’s very applied research that we’re looking at: there are particular outcomes we need to know about in real-world settings. But if we can identify some areas in which the national science foundations around the world, and the scientists they fund, should spend a little more time, I think that’s the way science works. Science is not a collection of facts; it’s a set of processes and a system of review. And in that last study you mentioned, we identified a dozen different possible policy interventions that might work, and there was only enough research on two of them to actually say anything with confidence. The rest could plausibly work, but it’s going to take a lot of research to verify what exactly success looks like for those outcomes.
Eric Schurenberg (11:35.072)
One of those techniques where there was significant research was content moderation. And that may be in part because, as Sheldon discussed earlier, it was the hammer wielded by the social media platforms. It seemed like a logical way to quash misinformation: counteract it, beat it down, correct it. And yet content moderation, or fact-checking, is not only under attack by free speech absolutists as just another way of suppressing unpopular ideas; it also has the problem of being behind the curve. Fact-checking by its nature cannot move as fast as misinformation. It is always a game of whack-a-mole. So I’m curious why that was one of the very few applied techniques that got support from the researchers you queried.
Phil Howard (12:50.538)
First of all, I’d reframe it: it’s not really that the researchers voted on this stuff, so it’s not so much that the researchers supported one option or another. It’s that this is where the evidence lies, so this was…
We scooped up some 4,000 papers. Only a handful actually had rigorous evidence about the causal patterns we wanted to test. And then we were able to use a technique called meta-analysis, where you take other people’s data and put it all together. So that’s where the evidence was; that’s the direction the evidence was taking us. And what counts as content flagging, which was the key feature of the report, varies from platform to platform. This is the challenge, right? We’ve been able to identify some evidence, and we have some serious recommendations to make, but it’s going to be up to the platforms to figure out what flagging looks like for TikTok, what it’s going to look like for Twitter/X, what it’s going to look like for Facebook. The thing we can say is that if they don’t do it at all, users are not going to trust the platform, and they’re not going to be exposed to high-quality, professional, independent news and information.
There are going to be spin-off effects for trust in other institutions. So we know what happens if they don’t do it at all, and we have a sense of what happens if they make the effort.
Eric Schurenberg (14:20.524)
Unfortunately, the platforms don’t seem to be moving in that direction; quite the opposite. X, formerly known as Twitter, has famously gotten rid of most of its content moderation, if not all of it. Facebook has also really cut back on its trust and safety department. Are you concerned about that? And do you think your message can resonate with the platforms about the potential to further pollute their information environments?
Phil Howard (14:55.838)
Yes, I think it’s very concerning, and your instinct is right that we should worry about what the platforms think, and we should try to get them to behave better and implement policies that make sense. But I do think we’re past the point of industry self-regulation. So an important audience for IPIE outputs is actually the designers, who might make different design decisions, but it’s also the regulators, who are starting to step up a bit: the European Commission, the Canadians, and bodies in the US that might make some key decisions in the next few years. They also need evidence, right? We don’t want them over-regulating, but we also don’t want them doing nothing. So the key intervention for us is to get them the evidence they need to make the light-touch guidelines that would actually improve public life.
Sheldon Himelfarb (15:55.162)
And there is real value there, Eric, in having many, many research scientists on this; I don’t think anybody would dispute it. We’ve seen how the firepower, the technical knowledge, of the platforms enables them to basically lay the table, to set the parameters of the debate. When they come up in front of Congress, they really run circles around the legislators. There is strength in numbers: bringing to bear many, many research scientists working on these things is going to mean the recommendations have to be taken seriously, and we are also going to work very hard to put those recommendations in the hands of policymakers and regulators in a timely fashion.
We’ve learned a lot from the experience of the IPCC, which produced great science, but it took 25 years before it turned into great policy. I would also just say, Eric, with respect to your earlier question about content moderation and other techniques, fact-checking, content labeling, that we are very aware the technology is also rapidly changing. I mean…
There is no doubt that artificial intelligence has the ability both to turbocharge the problem and to be used for the solution: its ability to discern patterns, to identify anomalies, to do a little bit of forward prediction. All of those things are going to be key parts of the technology solution to this issue, and that requires the best minds in the world, not just inside the platforms but in the research community, rowing in the same direction to try to understand it. When we’re told by the platforms that they’re not going to self-regulate, that we can’t do this, the ability to say, no, we know you can do this, because it has been shown in this lab, is really important.
Eric Schurenberg (18:15.056)
All right. Thank you. That helps. I’d like to delve a little deeper into the free speech issue. Free speech has been a cudgel wielded by purveyors of false information and conspiracy theories to quash the kind of research you’re doing, or to push back on it. How do you draw the line between free speech, which is obviously deeply embedded in the political culture of this country, and the kind of work that would be needed to make the information environment reliable, so that people know what information they can trust? How does free speech enter into the work you’re doing?
Sheldon Himelfarb (19:03.554)
I think the guardrails for our work are: how do you ensure a healthy information environment? How do you limit the potential dangers so that we can promote social progress? And right now we are seeing this problem impeding progress on virtually every other social issue. Whether it’s climate misinformation, whether it’s health misinformation (we all know how many hundreds of thousands of lives were lost to vaccine misinformation), whether it’s in conflict, you name it: there’s not an area of social policy that is not being impacted by this. So of course, as Phil was saying earlier, we’re not going to be defining what is truth here, or what is free expression. It’s about being able to identify, in an evidence-based fashion, the harms being caused and how to offset them.
Eric Schurenberg (20:16.92)
Okay. Phil, would you like to add anything there?
Phil Howard (20:20.306)
I think Sheldon said it well. For speech to stay healthy and to remain free, you need some mechanism for identifying the ways it can be interfered with: taken down, messed up for people. And you can’t get a bunch of scientists to evaluate individual truth claims on a tweet-by-tweet basis. That wouldn’t make sense. We wouldn’t want that.
But the large-scale, infrastructural interference, the unusual profiteering that sometimes occurs around particular information operations, the cases where there’s an entire government behind an information operation: that’s the kind of stuff that degrades political speech, and that’s where we can have a positive, constructive role in actually protecting our speech rights.
Eric Schurenberg (21:16.288)
It was interesting, among the many fascinating data points that came up in the research was the difference in attitudes among the researchers that you surveyed in autocracies versus those in liberal democracies and what they saw as the biggest threats to a healthy information environment. Would you quickly review those and explain where the differences might arise?
Phil Howard (21:43.882)
Well, in part, we think the differences arise because people reporting from those authoritarian regimes may have been a little bit shy about saying exactly what it is they’re studying locally. But essentially, there are different concerns in different parts of the world, and there are several different internets. In some places, particularly in the global North (the US, North America, Europe), the research community is very interested in trying to beat back climate science misinformation. That’s one of the issues that popped for that part of our research community. In other parts of the world, it’s election interference; in others, it’s neighboring governments or even their own politicians. In fact, one of the consistent findings across countries was that people often flag their own politicians as sources of manipulation. Elections are moments when big money flows right into political campaigning. A lot of it may be above board, free and fair under the campaign rules of the country; some of it is not, and that’s the flow of money that pays for information operations that knock a sensible public policy conversation out into the long grass.
Eric Schurenberg (23:29.92)
It strikes me that in liberal democracies, the global North as you mentioned, Phil, people have been left to fend for themselves in deciding what to believe in media. This is a change from 30 years or so ago. And while there is some effort underway, at least in the US, to help people become better at distinguishing falsehood from truth and trustworthy from untrustworthy information, it seems to me that a lot of that work, at least the organized work, is focused on students. Which is great, but it does mean it will take a generation for that group of educated news consumers to become part of the electorate. Students don’t vote. How would you see media literacy being promulgated? Are there any recommendations you have for getting media literacy into the thought stream of voting audiences?
Phil Howard (24:37.218)
Well, Eric, I like this question because in a sense you’re setting us up for success. This is one of the things we don’t actually know yet. There is some good evidence from some countries about what a good media literacy or digital civics curriculum can look like, but it wasn’t one of the effects that popped in the study we were talking about earlier on constructive policy interventions. So there are good reasons to think that digital civics, digital literacy, is an important part of modern civic education and modern civic behavior, but I think we need to do a little more work to figure out what the elements of such a program are. I do think your instinct is right, that getting people in the high schools to think about all the different institutions they inhabit is going to be healthy.
In some countries education is managed centrally, federally, and in others it’s managed in provinces and states. So there’s a fair amount of variability in what different high school students will get by way of civics, especially around the US. One of the things we do know, though, is that if civics education and digital literacy are going to be successful, they’re a whole-life learning thing. You can’t just serve it up to the high school students. As Sheldon points out, there are so many different new platforms; we don’t know what their affordances are going to be five years from now. The technology people use is going to be different. So trying to stay ahead of those technology innovations with digital literacy programs is going to be a big challenge, but I hope we can meet again in a year and you can ask me that again, because I think we’ll have some answers.
Sheldon Himelfarb (26:31.338)
Eric, I think you teed up something very interesting and very complex that is certainly going to be an area of focus for the IPIE, and that is the generational differences here. You’re exactly right: media literacy programs are largely focused at the school level, and the voting population is highly polarized, very difficult to reach or to move in any order of magnitude. But there are also, as Phil was hinting at, lots of things we can learn by looking around the world. In Finland, they have had a hugely successful effort at media literacy, because they have a neighbor constantly trying to interfere in their online discourse. So we can learn a lot from what the Finns have done to create a body politic that is very educated about this. But I do think these generational differences are key. And Phil and I talk about this a lot, because I’ve got daughters, he’s got sons, and they’re young.
Their view of what privacy is, their view of how they trust the online space, full stop, is completely different. Their expectations are different. And I’ll just leave you with one nice little research vignette, Eric. I think it was about 10 or 11 years ago that Intel did a very interesting study, and they found that, on average, people over 25 rang the doorbell with their forefinger, and people under 25 rang the doorbell with their thumb. Why? Text messaging: the dominant digit was actually changing for that next generation. Now, if that doesn’t speak volumes about how to wrestle with this complex issue across generations, I don’t know what does.
Eric Schurenberg (28:57.78)
That is an amazing data point, Sheldon. Thank you. One of the things that occurred to me as I listened to you talk about the high integrity of your research, and the fact that you have assembled this entourage of leading researchers across the field, is that the level of trust in science has also been declining.
As has the level of trust in any institution that aspires to get at the truth. You could certainly say the same of media, or the judicial system, or government. In the end, for your research to be persuasive to the kinds of policymakers you would like to reach, how do you overcome that question of trust?
Phil Howard (29:55.702)
Yes, that’s a good one. That’s a tough one. I think in some sense the audience for the IPIE’s outputs is not everybody. We certainly want content that is accessible and useful for policymakers. But just as the IPCC, early on, was about defining a field, setting some standards and definitions, figuring out what the science was, getting the oceanographers to talk to the soil scientists…
That’s more what the IPIE is about, so that we can develop a shared understanding of what the problem is and how to address it. Those bonds of trust across the disciplines are united by the scientific method: the notion that we can gather lots of evidence, study it purposefully, and reproduce the results if we need to. Those are just some of the basic ideas of how science works.
And so I think the group that is the audience, other scientists and policymakers, for the most part does accept some of those basic assumptions: that you can make generalizable knowledge if you design a study well and get high-quality data.
Eric Schurenberg (31:31.98)
Okay, all right, that’s good. Yeah, that is a key factor: you’re not speaking to a vast audience, but to a particularly targeted audience that generally buys into what Jonathan Rauch describes as the constitution of knowledge, a belief in the process of discovering factuality. One of the more disheartening findings in your research was that most of the experts thought the information environment was likely to deteriorate. Why did they think that?
Sheldon Himelfarb (32:09.974)
I think what we learned, again, is that context matters; where people were matters. And maybe what’s so interesting is that whether you were one of those researchers living in an autocracy or working on autocracies, or one of those researchers working on democracies, a majority felt the information environment was likely to deteriorate.
And we weren’t asking why they thought that, so anything I say would just be my gut feeling, knowing the research community. But I think it is a combination of that problem you just put your finger on, Eric: trust in science. When you see trust in science eroding and new technologies coming online that can turbocharge the information problems, it’s hard to be optimistic about next year. Which is different, by the way, from whether they would be optimistic about five years from now or the longer term. A question we’re wrestling with is how you manage an information environment that is unprecedented in human history. And we don’t have that answer right now. We don’t have the clarity the IPCC reached after 30 years of work, to say that when temperature goes up a degree and a half, this much of the world goes underwater. We have a lot of work to do before we get to that point.
That is one of the things we are going to be working incredibly hard at in the IPIE: to give people a sense that yes, we actually can manage the global information environment so that it is healthy and supportive of social progress.
Eric Schurenberg (34:26.908)
Well, that sounds like an optimistic perspective. And of course, you would not even have undertaken the IPIE unless you thought you could make a positive difference. The challenges, though, are immense. Artificial intelligence has presented itself dramatically in the past six months as a factor that will, as you put it, Sheldon, turbocharge the ability of bad actors to spread misinformation at lightning speed. The technology is evolving rapidly, and we barely understand even what we have with GPT-4. Given all that, I’m glad to hear that you are optimistic. What do you see on the horizon that would support that view, in contrast perhaps to the researchers who were, in the short term, pretty worried? Looking out long term, what do you see happening to bring the ship back upright?
Phil Howard (35:32.942)
I think there are some exciting initiatives coming from several parts of the world that would put light-touch public policy oversight over what kinds of content go out at scale, rapidly. There’s what’s called a media bargaining code: the Canadian bargaining code, the Australian bargaining code; Brazil and Indonesia are considering bargaining codes. These are mechanisms by which some of the revenue that used to go to independent journalism, and now goes to the tech firms, would get redirected back into public interest journalism. And there is research suggesting that countries with a public service broadcasting tradition are slightly more inoculated against junk news than countries without one. It’s not that everybody listens to the public broadcaster. The reason for this connection is that the public broadcasters (the CBC in Canada, PBS and NPR, the BBC) create a culture of professional news production that involves fact-checking, double-checking sources, editorial oversight, and distance from political appointees in the newsroom, and those journalistic values leak over into the commercial newsrooms. That’s the theory, anyway. So I think there are a couple of regulators starting to flex their muscle and play a leadership role here, and there are these opportunities that I think are going to create more content that raises the level of public conversation. The IPIE still needs to do its work, of course. We still need to be able to demonstrate that, for the most part, the technology firms are grading their own homework.
And that’s got to stop. We need ways of auditing these algorithms in the public interest. If your platform is circulating large amounts of misinformation about climate change, then you’ve got to work on ways to stop that. There are some issue areas in which leadership from the social media platforms would improve public life.
Eric Schurenberg (38:08.788)
Yes. Sheldon, maybe I’ll give you the chance to bring us home with the reasons that you, in particular, see for optimism in the medium term.
Sheldon Himelfarb (38:25.186)
Yeah, Eric, I think there are a couple of real positive indicators amid all the fear that we all have, and I do not want to minimize either the complexity or the danger in the information environment right now. I truly believe this is an existential problem for us, because of how it’s preventing progress on so many other issues. We have to figure out how to manage it. But that said…
The fact that the IPIE attracted 300 research scientists to our midst in a relatively short amount of time, nine months, tells you about both the concern and the eagerness to work together to make the environment better. That’s one thing. Second thing: you talked about the technology and the fears we all have about AI. It’s AI this year; next year it’s going to be quantum computing.
Last year it was social media, and the year before that it was big data. Technology changes all the time and creates all kinds of problems and opportunities. But what we’re seeing right now is a very interesting pause: people coming together and taking a breath before racing headlong into AI, trying to ask good, hard questions about the unintended consequences of a new technology. That’s pretty encouraging, and it has not happened the way it should in the past. And, as I said earlier, I do know the technology itself has lots of potential for helping us with this problem: its ability to discern patterns, identify anomalies, and so forth.
Sheldon Himelfarb (40:19.798)
Taking a breath will give us time to think hard about how we can harness its power. How can we amplify the power of the technology for social good over its power to do harm? Look around the world: I see peace tech projects emerging. The European Union created the global peace tech hub last year in Italy.
I see it in a number of countries’ portfolios. The Nordic countries have very interesting peace tech programs within their foreign and development ministries. So I do think we are learning from our past transgressions. It’s just as important that we think hard about amplifying the power of the technology for social good, and I think that’s happening.
Eric Schurenberg (41:16.528)
All right, well, that is a great place to leave it. Certainly the first step in solving a problem is recognizing it and working together on it, and you at the IPIE are in the vanguard of making that happen. So thank you for the work you do, and thank you for your time. This has been a great conversation.
Phil Howard (41:36.554)
Wonderful. Thank you.
Sheldon Himelfarb (41:37.39)
Thanks for the great questions, Eric.
Created & produced by: Podcast Partners / Published: Oct 27 2023