Spotting tech-driven disinformation isn’t getting easier
“Misinformation” and “disinformation” are often lumped together. They’re not the same, but they are very much connected.
Say you hear that Christmas falls on Dec. 23 this year. If the person who told you believed it was true, that’s misinformation.
But when it’s spread with the intent to deceive, that’s disinformation, which can easily be amplified unwittingly by the folks in the first group.
Audio and video generated by artificial intelligence are everywhere this election season. So before you click Share, know that the tech used to create that convincing but often false content is getting a lot better a lot faster than you might think.
Marketplace’s Lily Jamali spoke with longtime misinformation researcher Joan Donovan, now a journalism professor at Boston University, to learn more.
The following is an edited transcript of their conversation.
Joan Donovan: When it comes to the technology itself, what it shows is that we can realistically depict any politician as having said or done something that never happened. And we now have the example of the robocall in New Hampshire. It was already illegal to impersonate someone’s voice, especially a politician’s, through robocalling, but we’ve had to amend the regulation to also cover impersonation by AI.
Lily Jamali: And is this just a lot more prolific now than it’s ever been because the tech has gotten so good?
Donovan: I don’t know. You know, one of the things my research is starting to look at related to deepfakes is, is it the case that it’s just a small number of actors producing a prolific amount of deepfake video and deepfake audio? You know, it’s certainly gotten easier for people to sign up with different companies that offer AI impersonation as a service. There are about a dozen of those companies now. And it’s getting harder and harder to know the difference. And I think that as we start to investigate and understand its effects on politics, we have to be very attuned to the fact that impersonation of political candidates is illegal and can be disastrous, even if it’s just a prank.
Jamali: We saw this wave of tech sector layoffs, and they continue even as we speak. Among the people laid off were a lot of content moderators on major platforms, people losing their jobs in those corners of the tech universe. But some of these companies have promised to do more to crack down on election misinformation. So I wonder what you make of their efforts so far, and whether you can give any specifics, because we’re seeing a lot of promises, but I’m not sure we’re seeing any results.
Donovan: We’ve been waiting a decade for AI moderation to arrive. It remains to be seen what these tech companies are going to do. There have been some pledges to remove AI content that confuses voters about candidates. They have pledged to label AI-generated content. But really what we need is a board that will provide oversight and come up with penalties if people are able to scale disinformation into the mainstream. I often think about this as a problem of true costs, which is to say, how much money does it cost the journalism industry to clean up after a large disinformation event? And how much work has to go into those investigations in order to get at least some version of the truth back into the hands of the public? And so I think that social media companies really have to do more to ensure that timely, accurate local knowledge is part of the streams of information people engage with, because otherwise, in that void, disinformation tends to proliferate.
Jamali: Are you seeing signs that AI technology has gotten to a place where it can detect and suss out misinformation and disinformation?
Donovan: No. You know, every once in a while, I’ll get an email with a tip that says point your disinformation laser at this topic, and I’m just like, that’s not how it works. I mean, one of the things that I think we’re going to have to reckon with is that human intelligence comes at a premium. And it doesn’t seem to me that there’s a groundswell of customers who want poorly written essays and more and more fantastical images. I think it’s a neat trick, but at the same time, it comes at such a cost to being able to surface truthful information, and truth is really a human process. Truth has always been a human process. We might invent instruments like the thermometer that tell us when water boils, but when AI large language models are trained on, you know, a decade of Reddit data and the corpus of Wikipedia, they don’t really have any parameters for the truth. It would be interesting if they actually built expert systems that were really good at giving us facts, but that’s not what’s happening here. They’re in a race for artificial general intelligence. And what they’re running into is the fact that human speech can be very confusing. So I think it’s going to take some major time for us to really reckon with the fact that these systems have no relationship to the truth.
Jamali: But no doubt people are using them. I mean, the tech companies trying to draw as much attention as they can to their AI chatbots are sharing data saying a lot of us are using them, tens of millions of us, and anecdotally, I would say that bears out as well.
Donovan: I’m not too sure, because I don’t know if that use is just because these are new products on the market and people are trying to figure out how to make them effective. What we do have a lot of data on is the follies, right? The things that have gone bad, the things that are broken about these technologies. And then we have a lot of hype, potentially even hype that is more about technological dystopias of the future, this idea that AI will learn how to commandeer and run itself, and that humans could then be destroyed. These kinds of fears also play into our idea that these technologies are incredibly powerful, when in fact they’re iterative, the result of years and years of people being online. And so I don’t know. If, a year from now, people are still using these products and the companies have come up with a business model that makes sense, that’ll be one thing. But it does remind me of the early days of social media, when they were building tools in search of a consumer. And right now it’s hard to tell who those consumers are going to be and for what purposes they’re going to use AI. But we do know, when it comes to elections and disinformation, that at this rate you are not going to need a human being coming up with massive amounts of disinformation when you can have AI iterate on a particular pet theory or idea that could potentially disrupt an election.
There’s a lot of time between now and Election Day, Nov. 5. Plenty of time for more misleading info to get out there, whether by accident or on purpose.
That’s why we’re launching a series called “Decoding Democracy,” where we’ll explore election disinformation, tech advances that have made it more convincing, and tips on navigating all of this.
Our first episode is out Tuesday — that’s Super Tuesday for the party primaries — on our YouTube channel.