Silicon Valley’s new AI generates election-meddling worries

When Contra Costa County’s elections staff met with local police and an FBI agent to plan defenses and responses to voting-related threats for the 2024 election year, an unusual new risk had been added to the mix: Silicon Valley’s blockbuster product, generative artificial intelligence.

In one mock scenario, a news report highlighted a problem at a local polling station, seemingly in an attempt to keep people from voting. But the news report was fake, created by a bad actor using AI to sow misinformation. Elections officials and law enforcement officers hashed out how to stop the threat by investigating who was behind the fake report and issuing accurate information to the public.

Now, on the eve of next week’s Super Tuesday primaries, AI-risk discussions are occurring in elections departments around the Bay Area and across the country, especially after a faked version of President Joe Biden’s voice was used in a January robocall to deter voting in New Hampshire’s primary. California Attorney General Rob Bonta joined other state AGs in condemning the AI meddling, which Bonta said had potential to damage “the integrity of our voting process.”

Less than three weeks after news of the fake-Biden robocalls broke in January, the U.S. Federal Communications Commission made it illegal to use AI-generated voices for unsolicited robocalls, with agency chairwoman Jessica Rosenworcel citing use of the technology by “bad actors” to “misinform voters” as well as to commit extortion and imitate celebrities.

In Santa Clara County, elections officials are plugged into information-sharing networks with agencies around the country, and are tracking the potential for the new AI technology to affect elections here, said assistant registrar of voters Matt Moreles. He and his colleagues worry little about AI-enabled hacking of voting systems or alteration of results, because defenses are robust. But they fret more about use of AI-generated materials to deceive voters.


“It’s just about spreading misinformation and confusion,” Moreles said.

Artificial intelligence, after creeping into everyday life via apps such as Apple’s Siri bot and assisted-driving technologies, suddenly burst into prominence with the 2022 public release of San Francisco startup OpenAI’s ChatGPT generative AI bot. Other companies soon followed with products that allow realistic generation of text, sound and imagery in response to user prompts.

The explosive growth has raised a range of concerns: copyright infringement by companies hoovering up online data to "train" their software, replacement of human workers by AI, students cheating on exams, and people spreading fake material as propaganda or political misinformation.

“Misinformation is definitely something to worry about in this election cycle,” said UC Berkeley political science professor Susan Hyde. Election deception is not new — efforts to discourage voting have taken place for decades, Hyde said. But AI can be used to spread false information faster and wider than was possible in years past.

“We should watch out for foreign interference — that’s been around for a while,” Hyde said. “We should worry about partisan actors ranging from the local to the national.”

AI provides new tools for seeding the voting population with convincing, election-related falsehoods that can ripple through social and family networks where people may believe false information because the source is close to them, Hyde said. Misinformation that attacks the legitimacy of elections can lead people to conclude that U.S. democracy is a sham, and they may become more receptive to “cult-of-personality” candidates and the hyper-partisan view that “we must win at all costs,” Hyde said.


Marci Andino, a senior director at the Center for Internet Security, said she expected AI-aided interference in this year’s elections, peaking as the November general election nears.

The federal Cybersecurity and Infrastructure Security Agency warns the technology could be used to spread false voting information by text, email, social media channels or publications. “AI tools could be used to make audio or video files impersonating election officials that spread incorrect information to the public about the security or integrity of the elections process,” the agency said in a bulletin about 2024 election security. “AI-generated content, such as compromising deep-fake videos, could be used to harass, impersonate, or delegitimize election officials.”

Convincing but false election results could be generated and used to manipulate public opinion, the agency advised. Systems, too, could be compromised, if voice-cloning is used to impersonate election-office staff and get access to “sensitive election administration or security information,” the agency warned. Or AI could create “a fake video of an election vendor making a false statement that calls the security of election technologies into question,” the agency said.


Chief among the worries of AI consultant Reuven Cohen is the use of generative AI to manufacture “apathy as a weapon” by persuading people not to vote.

“It’s actually easier to make someone do nothing than do something,” said Toronto-based Cohen, who advises Fortune 500 companies.

Newly released software allows cheap, easy generation of realistic videos, and election meddlers can buy data from the dark web allowing them to target people according to demographics, buying habits, or psychological profiles, Cohen said.

“It’s a thousand times difference between where we were in the last election and where we are today in terms of raw ability to do this,” Cohen said. “The ease of access is the part that’s concerning.”

Reliable information is key to preventing AI-driven damage to elections, officials said. They urged members of the public to seek out government elections websites and official social media channels, call local elections offices, and rely on credible news outlets to confirm or reject claims arriving from other sources.

The news isn’t all bad. No evidence so far exists that AI-boosted propaganda could affect the outcome of an election, said Georgetown University researcher Josh Goldstein.
