AI, Politics, Ethics and a bit of Plato - Nevada Museum of Art

I recently had the pleasure of speaking at the Nevada Museum of Art in Reno about AI, ethics and politics (with a bit of Plato). My written remarks are below.

Thank you for the kind introduction and invitation to join you this evening.

When Caitlin asked if I would be interested in giving a talk about AI and politics at the Nevada Museum of Art I jumped at the chance. In part because as a college professor, sometime pundit and former political consultant, I’m always happy to tell other people how clever I am and in part to see the Sea Dragons of Nevada exhibit, which is amazing.

I have a canned set of analyses and anecdotes about AI, politics and ethics. Some of those I’ll share tonight. But Caitlin’s invitation forced me to rethink my position. I’m still going to lean on old examples - I think they still make sense - but talking about politics in a museum forces different thinking than talking about politics to the Washington Post.

Before getting to that thinking I want to define some terms and explain where I think we are. You may already know much of this, but it’s good to get everyone on the same page. If I’m repeating what you know, do the alphabet on the roof of your mouth or something until I get to an interesting bit.

In his 1946 essay Politics and the English Language, George Orwell complained about political words that are “abused” - his word. He wrote:

The word Fascism has now no meaning except in so far as it signifies ‘something not desirable’. The words democracy, socialism, freedom, patriotic, realistic, justice, have each of them several different meanings which cannot be reconciled with one another. In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides…Words of this kind are often used in a consciously dishonest way. That is, the person who uses them has his own private definition, but allows his hearer to think he means something quite different. 

AI falls into the same bucket. AI sounds cool and smart, so companies that want to sell you things that sound cool and smart call those things AI. AI purists note that many of the things are just old-school FAQs, appliances with sensors that order new filters from Amazon, diagnostic devices for cars, and so on. Not really AI, but cool and a bit creepy.

As an aside, in the same essay Orwell wrote that “In our time it is broadly true that political writing is bad writing,” “political speech and writing are largely in defense of the indefensible,” and “political language…is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.” Again, 1946.

Back to AI.

Broadly speaking, there are two types of artificial intelligence: predictive and generative.

Predictive AI predicts things. In the words of George Lawton of TechTarget, “Predictive AI uses patterns in historical data to forecast future outcomes or classify future events. It provides actionable insights and aids in decision-making and strategy formulation.” Or, to get meta - not Meta as in Facebook, but meta as in sounding like a clever academic - I asked Gemini, Google’s AI tool, for a definition of what it does. According to AI on AI, “Predictive AI is a type of artificial intelligence that uses historical data and machine learning algorithms to identify patterns to forecast future events or behaviors.”

Generative AI on the other hand generates things. TechTarget’s Lawton explains, “Generative AI focuses on creating new and original content, such as images, text and other media, by learning from existing data patterns.” Google’s Gemini explains, “Generative AI is a type of artificial intelligence that focuses on creating new, original content. Unlike predictive AI, which analyzes existing data to forecast future outcomes, generative AI learns the underlying patterns and structure of that data and then generates new data that has similar characteristics.”

Both are important. We tend to focus on generative AI, and I’ll do that in a moment, but it’s worth noting that predictive AI is here, and has been for a while.

A lot of public relations firms use predictive AI. One of the first was PRophet, which uses predictive AI “…to predict media interest, surface and rank the top 100+ journalists to target and pitch your story,” and “…sources contact information including email and Twitter handle for journalists and podcasts, when available.”

A firm called Resonate says it provides, “Comprehensive solutions for building, modeling, and sizing any voting audience in real time, enabling you to identify, understand, and target voters for winning outcomes.”

According to Jeff Berkowitz, writing for the Center for Strategic and International Studies in 2020, in 2008 “The Obama campaign was at the forefront of bringing advanced data analytics and targeted advertising into the political sphere through [machine learning], creating ‘sophisticated analytic models that personalized social and e-mail messaging using data generated by social-media activity.’”

Predictive AI in politics, and in communication in general, does what strategic communication and political campaign staff have always done - try to figure out what’s going to make decision makers vote for candidates or buy snacks. While the tools are new and the targeting can be more precise than ever, the idea is at least as old as Aristotle who urged in The Rhetoric that “The orator has therefore to guess the subjects on which his hearers really hold views already, and what those views are, and then must express, as general truths, these same views on the same subjects.”

The conversations about ethics in predictive AI are often about data privacy. Bots are sorting through all our stuff, from browser history, to what we watch on Netflix, to where we live, and much more. Companies and campaigns use these predictions to sell us gadgets and candidates.

Hyper-targeted ads may be more effective than targeting based on human guesswork, but they feel intrusive. Bespoke campaigns increase individual voter interest, but potentially at the expense of a shared democratic conversation. If everyone votes for their own separate reason, if we all live in our own bot-constructed political bubbles, democracy becomes impossible because there is no shared conversation or shared experience.

There’s a lot more to say about this, but that’s not the point of this evening, so I’m going to leave it for now and turn to the new thing that’s freaking people out more - generative AI. The generative AI in politics toothpaste isn’t just out of the tube, it’s going to happy hour and taking selfies with the candidate.

The best campaigns use predictive AI to inform generative AI, just as they use research to inform strategy and tactics. The more a campaign can learn about voters - where they spend their time, especially online, what news they consume, languages they speak, and so on - the more precisely it can craft its messages. Research informs communication. Theory informs practice. Campaigns tend to have research teams, communications teams, fundraising teams, and grassroots teams. They all talk, but they are all different. It’s historically difficult to coordinate information and messaging, to integrate all of the elements of a campaign. One of the things that AI allows is better coordination and integration of information. This coordination makes campaigns more efficient because less information is wasted or lost. We'll see a bit of this in some of my examples in a moment.

There is a raft of companies using AI to draft emails, lobbying material, talking points, and more. An organization called Campaigns & Elections, basically a business-to-business publisher, conference organizer, and so forth for the political campaign industry, has a section of its website devoted to AI, along with pages on field organizing, campaign finance, industry news and other typical topics.

Their annual awards include several AI categories; I’m one of the judges this year. I spent part of my morning reading award submissions from firms doing some really interesting work with both predictive and generative AI. You know something has arrived when you can get a plaque for it.

Political campaign companies using generative AI include Quiller, which was founded by Democratic email fundraiser Mike Nellis. In his, or its, or their words - the pronouns around AI get tricky - “Quiller is an AI copilot designed for mission-driven organizations to create high-impact content. From fundraising emails to direct mail, Quiller moves you from concept to final draft instantly, freeing up your time so you can better support your communities.”

Examples of AI’s use in politics abound. A few include:

A firm called AmpAI by Peerly uses AI to “facilitate meaningful dialogue among stakeholders.” According to its website, the firm can: “Summon the AI Legion: AMP the Power of 1,000 Volunteers! AMP sorts opt-outs, organizes, evaluates, responds, arranges subsequent follow-ups, and instantly enriches your database with vital insights!”

According to Quiller, a candidate for mayor of Bowling Green, Kentucky named Patti Minter used the generative AI tool to help with fundraising emails for a month. Over that time: “Fundraising efficiency skyrocketed 1,750% — from $8.33 per minute to $56.47 per minute. Email open rates jumped 15% higher than previous benchmarks. Clickthrough rates hit 2.5 times the industry average.”

In 2024, three State House candidates in Lancaster, Pennsylvania, used AI to help craft answers to questions from a local newspaper.

A Democratic candidate for the US House in Pennsylvania used an AI chatbot to call voters. Voters who answered the phone heard, “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’ run for Congress.” The bot then answered policy questions and otherwise “talked” to voters about the candidate. You can’t see it here, but talked is in quotation marks. Like pronouns, verbs are weird with AI.

The bot was developed by a company called Civox. According to its website, “Civox isn't just about reaching more; it's about connecting better and more deeply. Offering scalable, automated solutions, Civox brings natural, meaningful conversations to the forefront, coupled with detailed analytics for every call made.” I want to pause here for a moment. The website says its chatbot provides natural and meaningful conversations that connect deeply with voters. The AI tool, a complicated computer program, is natural and its interaction with voters is deep and meaningful. The bot, Ashley, said it volunteered. It, a computer program, chose to take time from its busy day to talk to voters. It wasn’t programmed, or told, or switched on. The bot considered its options, decided to put off listening to the newest episode of SmartLess or whatever podcast bots listen to, opted to skip the PTA meeting or planning its bot kid’s birthday party, and thought to itself, “this election is important enough that those things can wait, little bot Suzy will thank me when she’s older.”

According to Wired magazine, two AI bots ran for office last year, one in the UK and one in Wyoming. VIC ran for mayor of Cheyenne. VIC was actually a co-candidate with an actual person named Victor, who said he would use AI to help run the state’s capital city. VIC and Victor ran into trouble with OpenAI, which built ChatGPT, and went on to lose the election, leading one outlet to declare “Cheyenne, Wyoming Elects Human Mayor.” It’s worth noting that he lost by a lot, 11,036 to 327.

In 2023 the Republican National Committee slammed then-President Biden with an AI generated ad. The ad disclosed it was AI generated and pictured a dystopian future, so no attempt was made to mislead voters with deepfake, shallow fake, head fake, or really fake anything. My guess is they bragged about using AI to get press attention to Republicans and away from Biden. It was a clever campaign tactic, not a devious plot.

Other ways firms are using AI include a Democratic firm whose text-based AI interacts with voters using the same language and tone the voter uses, identifies how the opposition is describing issues, writes counterarguments and places those arguments online in places where voters will encounter them, and more.

I asked Google’s Gemini for other examples, but it wouldn’t give them to me and encouraged me to use Google search instead, which says weird things about how Google feels about itself - issues presumably to be worked out in bot therapy. If you’d like more examples, I encourage you to Google them. These examples are in the news because they’re newsworthy, by definition the exception rather than the rule. In most places, AI is mostly doing boring things. It’s drafting press releases and emails, it’s writing drafts of position papers and backgrounders, that sort of thing. In 2024, there was much more news about AI taking over politics than there was actual generative AI use in politics. It was there, and there will be more of it, but doomsaying about deep fakes convincing us of outlandish things by and large didn’t happen. Voters believed outlandish things, many still do, but we don’t need AI for that. We are perfectly capable of being daft all on our own without the aid of machines; we don’t need the help of anti-performance-enhancing AI drugs, thank you very much.

Everyone here is nodding at voters falling for foolishness, and I’m guessing most of you mean someone else. Media scholars refer to this as the “third-person effect.” I’m not dumb enough to fall for those attack ads and nonsense, my neighbor is the one you have to worry about. Of course your neighbor is saying the same thing about you.

Nevertheless, there has been a flurry of attention given to AI. The fact that I’m here is proof of that. So, thank you for your flurry of interest.

Policymakers are among the many responding to that interest. Nevada is one of many states that have considered legislation to regulate the use of AI in campaigns and elsewhere. SB 199, introduced a couple of weeks ago by State Senator Dina Neal, “would create a framework to regulate artificial intelligence companies operating in the Silver State” and “would require companies that offer AI as a service to register with the Attorney General's Bureau of Consumer Protection.” In addition, at the request of Nevada's Secretary of State, the legislature is considering a bill requiring that the phrase “This image has been manipulated” be the largest text on a mailer that uses “synthetic media” to create “a fundamentally different understanding” of the edited content - red eye fixes are OK, adding people is not. Similar requirements address newspaper, radio and TV ads.

Nevada’s legislature didn’t meet last year, and in 2023 two limited bills failed.

One of the reasons people are paying attention to AI is that it can make up really compelling nonsense. On The New Yorker’s podcast, Joshua Rothman said that ChatGPT is “not trying to be right, it’s just trying to be plausible.” As New York Times columnist Zeynep Tufekci put it, “ChatGPT sometimes gave highly plausible answers that were flat-out wrong.” In other words, generative AI can produce “not what is really right, but what is likely to seem right in the eyes of the mass of people who are going to pass judgment: not what is really good or fine but what will seem so.” That last bit is from Phaedrus, answering Socrates’ question about what makes a speech good in Plato’s dialogue of the same name.

As promised, name checking Plato. He made a pretty good name for himself by, among other things, writing witty repartee between Socrates and whatever poor soul tried to match wits with the master. In Plato’s hands, Socrates was Columbo, asking “just one thing puzzles me…” In Protagoras, Socrates compares knowledge to the food of the soul and says those who are good speakers but can’t define the nature of justice can do real harm by poisoning the soul. Elsewhere in the Phaedrus, Socrates warns that someone who has no idea what a horse is could sell a mule to someone claiming it was a great steed and could carry the buyer to glory in battle, only to have the would-be hero be killed almost immediately. In another dialogue, Socrates asks a sophist if someone who didn’t know anything about medicine but was a great speaker would be more persuasive when it came to medical advice than a poor speaker who knew what he was talking about. The sophist agreed. I’ll just leave that one there.

Google’s Gemini, ChatGPT, Microsoft’s Copilot and the rest are forcing us to ask again the same questions we have been asking, and failing to answer, for thousands of years: what is the relationship between truth, fact, rhetoric and persuasion?

Reno isn’t Athens, and AI isn’t Gorgias wandering the ancient world selling rhetorical tips. Things move faster than ever, the world is more interconnected today, and the stakes feel incredibly high. But the basic question at the core of AI is the same basic question that Socrates asked the sophists.

Seen in this light, the AI question changes a bit. Nonsense and lies have been a part of politics since basically forever. The Roman orator and teacher Quintilian complained about “hack advocates” in about 95 CE. All that Orwell I quoted a few minutes ago is from 80 years ago.

As with AI in politics, examples of fake images and lies driving politics abound. Here are a few highlights:

In 1782, Ben Franklin invented an entire supplement to a newspaper and added stuff he made up about the British to drive public opinion during peace negotiations to end the American Revolution.

In 1898 the newspaper magnate William Randolph Hearst wanted the US to help the Cubans win independence from Spain. The US battleship Maine sank in Havana harbor. Hearst sent someone to cover the sinking and stoke anti-Spanish sentiment, reportedly saying, “you furnish the pictures, I’ll provide the war.”

Even TV shows about Washington are faked, with the exception of Veep, which rings alarmingly true. For example, House of Cards, an iconic show about American politics in the first part of the 21st Century, was often shot in Baltimore, about 45 minutes north of Washington.

Because I’m at a museum in the great American west, two other examples seem worth noting. Both Ansel Adams and Albert Bierstadt were, shall we say, somewhat casual in their relationship to the facts.

Bierstadt was a leading member of the Hudson River School of painters active in the mid-19th century. One of his most famous paintings is The Rocky Mountains, Lander’s Peak. This and other paintings helped inspire America’s commitment to the west, to westward expansion, and the creation of the National Parks. To quote The Art Story, “Despite its documentarian roots, however, the painting is a composite. In order to convey the awe-inspiring vastness and possibility of the American West…Bierstadt depicts an ideal landscape rather than the actual view of Lander's Peak.” As historian Anne F. Hyde explained, the work portrayed "the West as Americans hoped it would be." The point, for Bierstadt, was to express the sublime, something that for Edmund Burke overwhelmed the senses. The point wasn’t accurate representation, the point was somehow capturing a feeling that was beyond accurate or inaccurate, a transcendent awe.

Similarly the great American photographer Ansel Adams helped define the American west. His images show dramatic and untouched landscapes that seem both timeless and out of time. They are also not entirely true. To quote photographer Robin Lubbock, “In the darkroom, once Adams had developed his negatives, he would return to them over and over again to create the image he'd envisioned.” The photograph was of what Adams saw and felt rather than entirely what his camera captured. He showed us his image, not the image.

Upstairs you have a remarkable exhibit showing what some of the undersea life that once swam where we’re sitting looked like. It offers “striking examples of paleoart, revealing how artists and scientists have long worked together to imagine the world’s prehistoric marine creatures.” That last bit is from the museum’s website. According to Nevada Today, put out by UNR, one of the exhibit’s designers shook a bag of dried beans in front of a microphone to simulate the sound of a school of fish. The sound is faked but the impact on visitors is real. According to Google’s Gemini, AI is increasingly used in archeology, both for predicting where to dig and for creating three-dimensional models. Rather than fret about fakes, the museum rightly brags about using science to inform creativity to foster understanding. The scientists whose work led to the exhibit increasingly use AI to help us understand life.

The landscapes Bierstadt and Adams portrayed capture the sublime, a beauty unexplainable and beyond reason. Their images inspire conservation and respect for our astonishing landscapes. In this way, they are “true” but not wholly accurate. They present not what is really right or true, but that which would seem so. Deepfakes from the easel and the darkroom.

There is one more twist before we get to what I think we’re really talking about. 

We’re here tonight to talk about AI generated words and images in politics. We’re concerned that what we read and see is not what it seems and that the author is not who it appears to be. I just spent a few minutes on the former - words and images in the service of politics have long had a casual relationship with the truth. Sort of a political situationship. Now let’s linger on the latter, the question of the author. 

One complaint about AI generated words and images is that they’re fake. Another is that they are AI generated. A sheriff in Philadelphia got in hot water last year for posting AI generated positive news stories about herself. The fake stories were attributed to real outlets; they were meant to show the embattled sheriff was getting positive press. That this is yet another Pennsylvania example is entirely coincidental - there was also one from Wyoming, a lot of love to go around.

I learned about this example from a reporter for the Philadelphia Inquirer, who let me know the sheriff was posting made-up stories and asked what I thought. I said obviously people shouldn’t make up news, and noted that the articles sounded like they were written by ChatGPT. The reporter called the sheriff and she confessed. The original story was about an elected official lying about support. The bigger story was that she used AI to generate the content of the lies. But why does that matter? Why does AI make the lie worse?

Virtually no political candidate does all of their own writing, producing and editing. Rafts of interns, staff and consultants write emails, letters, speeches and ads. The State of the Union Address is typically assembled by a small team with the chief speechwriter in the lead, and of course the President with the final pen. The odds that Trump or Harris wrote all those annoying texts last fall approach zero. That letter you got from US Rep. Mark Amodei’s office thanking you for your input was likely drafted by a staff assistant or legislative assistant, then approved by the Congressman’s legislative director or chief of staff, and sent out without the Congressman ever knowing about it. I’m not picking on Mr. Amodei, he just happens to be the guy from Reno. Over my career I’ve written, approved, signed and sent countless letters on behalf of my boss.

The slides you’re hopefully finding interesting and helpful were put together by a student named Alana Beasley, who I hired to help with speeches like this. Other members of my team in the School of Media and Public Affairs write emails from me and sign documents I never see.

That candidates, elected officials, administrators, and others have teams generating content isn’t surprising. There are entire firms like West Wing Writers and the Washington Writers Network that write speeches for political, corporate and academic leaders. These students, interns, staff, freelancers, and firms do what ChatGPT does. Hopefully they do it better, but the idea is the same: the person delivering the message was not the full author of the message. Yet I have never been asked to give a talk about the ethics of interns writing first drafts of press releases, or legislative correspondents writing letters thanking constituents for their input on Scottish wool imports.

The art world has a somewhat more complicated relationship with the identity of the artist. 

American pop art star Jeff Koons doesn’t make any of his work. I’ve walked past Puppy in front of the Guggenheim in Bilbao hundreds of times and I’ve never once seen Koons watering the flowers. Think what you will of Koons or his approach, but he isn’t the first. Andy Warhol had his Factory in which armies of assistants worked on his screenprints. Again, think what you will of Warhol, but his studio was literally called a factory. One of the most important artists of the 20th Century, Marcel Duchamp, famously declared manufactured objects like a snow shovel and a bottle drying rack art and gave them the label “readymades.”

In one case, Duchamp sent a ball of twine between two metal plates to his patron, Walter Arensberg, and instructed Arensberg to unscrew the plates, put something in the middle of the ball of twine, and screw the plates back on. The piece, called “With Hidden Noise,” was conceived of by Duchamp but completed by Arensberg. Duchamp never learned what Arensberg hid, calling it ‘A Readymade with a secret noise. Listen to it. I will never know whether it is a diamond or a coin.’ Koons, Warhol and Duchamp have their fans and detractors - I’m obsessed with Duchamp. But they are hardly alone. Artists have long worked with assistants who finished, polished, carved, and framed. Most are uncredited, and most art viewers don’t care.

This is where I typically leave my talks about AI, ethics and politics. I suggest that AI is the newest thing to raise an old question about the relationship between truth, persuasion, images, and rhetoric. I say we’re not concerned with generated fakes, but with fakes of any flavor. Except in those instances, like Bierstadt and Adams, when we’re not. Or when we would rather overlook the ethical lapses of writers like Ben Franklin. Rather than rant at the machine, we should again wrestle with the ethics of representation and creation. 

But as I said at the beginning, this isn’t my typical rant. I’m in a museum. I just name checked the guy who pulled one of the great art pranks in history by submitting a urinal to an art show under an assumed name and then storming out when the show’s jury, on which he served, didn’t accept it. In thinking through this conversation, it occurred to me that AI is different not because of what it does, which is largely what’s been done forever, but because of what it is. Or, more precisely, because of what it isn’t. It isn’t sentient, and doesn’t want to be. It’s not Pinocchio, who only wants to be a real boy, or Rachael in Blade Runner, who doesn’t know she’s a replicant. ChatGPT doesn’t care any more than a hammer cares.

The question generative AI raises is not a machine finding a soul, but a soul relying on a machine.

A 2012 show at the New Museum called Ghosts in the Machine explored the ways in which people have projected human-like characteristics onto machines, which have then come to appear more like humans. The House candidate in Pennsylvania who used a bot to call voters called it Ashley, not Bot. We all want our Waze apps to talk to us in pleasant-sounding voices. Alexa has a calming voice.

In addition to being a great album by The Police, the ghost in the machine refers to a philosophical conception of the mind and body being separate things. The ghost, the mind, and the machine, the body, operate separately. Computer programmers sometimes refer to the ghost in the machine when software is buggy, or seems to have a mind of its own. In these cases an inanimate object appears to take on human behavior. 

Think of HAL declaring, “I’m sorry, Dave. I’m afraid I can’t do that.” In all of these cases, there is a machine that becomes, or appears to become, human. There is a difference, a divide or gap, that is leapt. Pinocchio becomes a real boy; there’s a difference that the puppet bridges. That difference is what matters. HAL turns against the humans. The machine and Dave are different.

Carrying this a bit further, maybe the problem is that the machine doesn’t care. Unlike Pinocchio and Rachael, it doesn’t want to be human. It’s entirely indifferent; worse, it isn’t even interested enough to rise to the level of ambivalence. The Lincoln automaton at Disneyland doesn’t dream of one day going to Gettysburg, it doesn’t dream of anything. There are no androids dreaming of electric sheep, only computer programs we like to imagine would like to be us.

Maybe we aren’t bothered by speechwriters, or interns, or Jeff Koons’ gardeners because we know there are at least people involved. Someone typed something or pruned something. Someone put something in the middle of a ball of twine. There was always a ghost in the machine. Maybe we’re bothered because when the machine becomes more like the ghost, the ghost matters a little less. If AI is more, then what am I? I can’t write as quickly, my grammar is much worse, I can’t draw, and I need to do things like eat and sleep. If Gemini can do everything I can do better - I won’t punish you by singing, but you can hum to yourselves - then what good am I? If AI doesn’t care, should I?

A colleague in the School of Media and Public Affairs suggested another way into this conversation. He dropped by my office last week and foolishly asked what I was working on, so I told him. He suggested one way in which AI is different is that it again raises French philosopher Michel Foucault’s question, “What Is an Author?” and his compatriot Roland Barthes’ essay, “The Death of the Author.” Foucault, Barthes and others point out that at some point the idea of an author came to matter. Foucault for example was interested in “the singular relationship that holds between an author and a text, the manner in which a text apparently points to this figure who is outside and precedes it.” Again, someone is doing something. There is a person, a sentient being, a ghost that haunts the machine.

Maybe our concern with AI is that the question of an author, of a ghost, is moot. As Gemini put it, “generative AI learns the underlying patterns and structure of that data and then generates new data that has similar characteristics.” What it learns is fed back into the machine, which then finds and generates more content. The writing, painting, and sculpture are simultaneously created and creator. Hammers making hammers. And, even more troubling, the hammer doesn’t care. To the hammer, there is nothing special about sentience. In all the dystopian novels and movies about AI, the problem starts when the robot thinks it knows better, or reasons that people are expendable, that Dave is a threat to the mission. These at least have the hopeful reminder that at some level people matter, that there are machines and there are ghosts, and that the difference between them matters. AI takes even that away from us.

Politics may be the most human thing we do. Aristotle said that “man is a political animal”; we are together by nature. We are set apart from other creatures because we have language. Being together in a polis or polity, talking and debating and ranting, is what makes us human. If AI can do politics, then what do humans have left? Or if AI doesn’t care, if AI works on politics like it works on driving directions and recipes, does it belong in politics at all? If politics is the machine, what is to become of us mere ghosts?

All of which leaves us where, exactly? A bunch of years ago a student turned in a paper that she described as a walk in the woods. It was an interesting walk, but my concern was that the paper didn’t end in a clearing. I’ve been wandering in the woods for a while now, here’s my attempt at where that walk leads.

Both predictive and generative AI are here to stay. 

Predictive AI has been with us for a while. Mostly it uses big data sets to predict which voters will respond to what messages, to identify donors and connect them to issues they care about, and otherwise learn what gets who to do what. Ethical issues in predictive AI include concerns about data privacy and microtargeting to such an extreme that everyone experiences their own political campaign and we lose a shared debate on which democracy relies. 

This evening I’ve focused on generative AI. In addition to what I highlighted, ethicists raise concerns about bias - since AI pulls from what’s there, and what’s there tends to have been written in English by white guys, the content reflects those world views. Racial stereotypes are baked into the system by definition; stereotypes are popular conceptions, mis- or otherwise, and AI repeats what’s popular. The AI generated material gets fed back into the system, and reinforces the biases as it pulls information out again.

At the start of the semester I ask students in my political communication ethics class to ask any generative AI tool to write a 500 word essay about ethics, politics and AI. I then ask them to write a 500 word response, and turn both in. A young African American woman asked ChatGPT to write an essay in the voice of a Black girl. The bot said her favorite sport was basketball and her favorite food was fried chicken. It also used African American vernacular.

A few other ethical questions to ponder:

What if the AI generated campaign volunteer spoke Spanish? Which accent would it use? Which vernacular? Would it matter if the bot were programmed by a white guy? Now imagine the AI generated bot was an image, a visual campaign spokesperson. A smart campaign would use predictive AI to figure out the race, age, gender, and ethnicity of the viewer and shape the bot to look and sound like the voter. Is that basically electronic blackface?

Examples like this abound. And again, while AI accentuates or highlights them, the fundamental questions are old. Should you pretend to be something you’re not? Should you put on an accent to sound more local or authentic? If not, is it OK to name check local landmarks or say “we love you Cleveland, I mean Reno!”? If so, what’s the difference?

These and other concerns are all in addition to what I’ve highlighted this evening. The concerns I’ve raised are those we often see in the media about deep fakes and misinformation. My argument is that those concerns are real, but they’re not really about AI. The same goes for concerns about targeting, privacy, bias, and the rest. AI exacerbates problems; it’s the lung-busting coughing fit version of a tickle in your throat. But the problems, the ethical questions, are very old. AI is the extreme case of what we take as the norm.

What’s new are speed and reach. More people can generate more stuff faster and spread it more widely faster than ever before. Our politics have always been full of gunk, generative AI makes our politics gunkier.

My bigger concern is over how we respond to what we imagine. Popular media have focused our attention on faked pictures and invented news. That focus has two effects apart from the reality of the computer generated fictions. The first is that people can say real things were AI generated, as Trump did when he falsely claimed that pictures of crowds at Vice President Harris’ rallies were faked. With apologies to U2, it is easy to say that fact is fiction when TV is reality.

A related concern I have is that claims that everything could be fake mean people have no reason to believe anything. People assume politicians lie. AI makes it easier to lie, and the media are telling us AI is always lying to us, so why believe anything? In the words of Warren Zevon, because apparently I’m now quoting musicians, “the skies are full of miracles, and half of them are lies. Are you real or not? It’s a fine line.” Or, to sort of return to Orwell, if everything can be pure wind, why bother with the appearance of solidity at all?

There are some upsides to AI in politics that are worth noting. Campaigns that can’t afford to hire political professionals can use generative AI to produce good drafts of speeches and ads, and to raise money. People who aren't billionaires or who don't hang out with billionaires will have an easier time competing for elected office. 

If AI is doing basic research and writing, all of which has to be checked because it really does make stuff up, then the candidate has more time to talk to voters. Less time sorting through spreadsheets and writing texts means more time talking to voters.

Another side effect of AI is that it could bring more people into campaigns; it could lead to more actual people talking to actual voters in person. If we don't believe anything we read or hear online and on TV or streaming services, and if all the gunk and noise increases our demand or craving for authenticity, then campaigns will get more authentic. They will find more people to knock on doors and talk to voters where they are - both metaphorically by relying on the oceans of data and AI generated talking points, and literally by standing on their doorsteps.

So it's not all doom and gloom. I mean, it's mostly doom and gloom, but that's pretty much always been the case. 

As you encounter AI, as you will and are, I encourage you to ask what's really alarming. Is it that it's AI? That AI does bad things better than mere people? Or is it that we are still not sure how to answer the questions Plato asked 2500 years ago?