The Political Turing Test
AI suffuses American politics, but interviews with campaign professionals show it doesn't always work the way you might expect
We are entering a new era in which we may not be able to tell the difference between real political communication and content created by artificial intelligence. Right now, we can still pass a Political Turing Test, intuitively discerning what is real from what is fake. But the leap forward in campaign technology is testing those boundaries, and over the next few cycles we will get closer to failing that test.
When we take a technological leap forward, the largest campaigns usually benefit first. They have the most money and the elite expertise to make sense of new technology and incorporate it into their campaigns. It usually takes two or three cycles for campaigns up and down the ballot to benefit, because the technology needs time to scale for costs to drop far enough to be affordable for everyone.
The release of OpenAI’s ChatGPT generative AI tool was different. Everyone got the best new technology at the same time, and the cost was either free or extremely affordable. Moreover, there were plenty of alternatives to ChatGPT, and they were just as cheap. Everyone got the new toys at once, and the experimentation that usually takes cycles was compressed into one.
What I found in my interviews for the second edition of Modern Political Campaigns: How Professionalism, Technology, and Speed Have Revolutionized Elections was that the benefits of generative AI accrued mostly down-ballot. The top consultants tried the technology but found it ineffective because voters rejected it; smaller campaigns found it useful in other ways.
There were robocalls in a generated version of “Joe Biden’s” voice that incited not only backlash but criminal convictions. Ron DeSantis’s campaign generated Donald Trump’s voice, and the response was so swift and brutal that the ad can no longer be found online. The RNC generated video of Joe Biden and Kamala Harris that was creepy enough it wasn’t featured for long.
State and local campaign teams I spoke with for the new AI chapter of the book found the new tools enormously useful. Some were using them to help grow their firms by using their successful pitches as a base for new business. Others were helping to upskill newbies to come up with better first drafts of social content and speeches. These drafts then led to deeper conversations within campaign teams on strategy and fit, instead of remedial editing on missed words, commas, or typos.
Beyond the basics, successful down-ballot teams used generative AI to draft quick, informed responses to attacks. This kind of research usually takes teams days to accomplish, and in the rough-and-tumble world of campaigns there is rarely enough time to do due diligence. Now campaigns had the benefit of quick-turn research that would’ve been cost-prohibitive in previous cycles.
What does this all mean? The new wave of generative AI tools is not the society-ending weapon of mass political destruction the doomers envision. Our collective bias toward authenticity has neutered that strategy. Nor is it the solution for every campaign need. Early academic experiments have shown that using AI in content and disclosing it creates a boomerang effect, reducing trust in the candidate who sponsors it.
The best practice I’ve seen is using generative AI as a drafting board, but with adult supervision. Content produced by these new services is clean but uninteresting, without the kind of creativity and context that moves voters.
But it’s a starting point. The technology will get better over the next two mini-cycles — two years, then two more to 2028. By then, we expect to see models that can do more reasoning, in addition to pulling together research and content that is more interesting to the campaign audience. Moreover, we’ll see additional use cases in planning, organizing, and get-out-the-vote efforts.
AI will be everywhere and nowhere specific.
Much like in our lives now, I find use cases all over the place, from replacing Google almost entirely (I’d really love to have it as my default search engine) to doing basic research on subjects that come out of nowhere, like tariffs. That’s a subject for another piece, but it’s clear that the entire world is running Threepio’s “Let the Wookiee win” strategy.
But it’s painfully obvious that some writers, even here on Substack, are using generative AI as a crutch, or to create content while they are more actively engaged in other pursuits. I think this undervalues the relationship we have with our readers. As I once counseled a fellow author on his book, readers are here for our voices, not someone else’s. This might seem elementary, but every time we outsource our creativity to AI, it chips away at our relationship with our audience.
This is the same challenge political candidates face with this amazing new technology. How much of the humanity of our campaigns will be resilient enough to survive the ease of use and creation that generative AI offers? The bottom line is this: I trust voters to sniff out what’s real and what’s fake. But, admittedly, that line is getting blurrier. As the models improve, we might find we can’t reliably pass a Political Turing Test to discern what is real and what is fake.
And that’s why no matter how close we get to generalized artificial intelligence we need to retain adult supervision. That’s the responsibility of the humans on the campaign: to ensure that our elections retain our collective and suboptimal humanity, with all the personal tics, weirdness, and inspiration that makes non-algorithmic democracy so much better than the alternatives.
Michael Cohen is the author of the book Modern Political Campaigns, president of Cohen Research Group, and a 30-year veteran of the polling industry. He writes The Level regularly for 24sight News, analyzing polling and campaign trends with a keen eye and level-headed approach.