Fake robocalls. Doctored videos. Why Facebook is being urged to fix its election problem.
As the nation heads into the 2024 presidential election, the independent body that reviews Meta’s content moderation decisions is urging the tech giant to overhaul its policy on manipulated videos to encompass fake or distorted clips that can mislead voters and tamper with elections.
The test case was a doctored video of President Joe Biden that appeared on Facebook last May.
Meta currently bans video clips that have been digitally created or altered with generative artificial intelligence to make it appear as if people said something they did not. But the policy doesn't address cruder clips, so-called "cheap fakes" made with basic editing tools, nor does it cover clips that show someone doing something they did not do.
The Oversight Board upheld Meta's decision to allow the Biden video to remain on Facebook but called on Meta to crack down on all doctored content, regardless of how it was created or altered. It also recommended that Meta clearly define the aim of its policy to encompass election interference.
Of particular concern is faked audio, which the board said is “one of the most potent forms of electoral disinformation we’re seeing around the world.”
In January, a fake robocall used Biden's voice to encourage New Hampshire voters to skip the primary. The robocall was artificially generated and is being probed by the New Hampshire Attorney General's Office as an attempt at voter suppression. It had no effect on the outcome of the primary – Biden won in a landslide – but it illustrated how generative AI could be used to influence an election, critics say.
“As it stands, the policy makes little sense,” Oversight Board Co-Chair Michael McConnell said in a statement. “It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.”
Meta did not say whether it would follow the Oversight Board’s guidance. A spokesman said the company was reviewing the recommendations and would respond publicly within 60 days.
Even if Meta makes changes to its manipulated media policy, observers say there's no guarantee it will put enough money and resources into enforcing the changes.
“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”
Meta defended its election integrity policies.
“We have around 40,000 people globally working on safety and security, and protecting the 2024 elections is one of our top priorities," the company said in a statement. "Our integrity efforts continue to lead the industry and with each election we incorporate the lessons we’ve learned to help stay ahead of emerging threats.”
In the first AI election, 'a tsunami of disinformation'
The stakes are not just high in the United States. In 2024, more people will have a chance to vote than in any previous election, increasing the likelihood that AI will play a role at the ballot box. And that's raising concerns.
With rapid advances in technology and too little oversight from the government or private sector, election experts have been bracing for the malicious use of deepfakes in the 2024 presidential contest. Virtually anyone can now create or digitally alter images and clips in realistic ways to deceive voters.
Like other technology companies, Meta has made pledges to curb the harms of generative AI. Yet, even as the technology grows more sophisticated, powerful and ubiquitous, there are still very few rules governing its use.
In the case of the doctored video, the original footage showed Biden accompanying his granddaughter for her first time voting in October 2022. Biden placed an "I voted" sticker near her neckline as she instructed, then kissed her on the cheek. But the looped version made it seem as if Biden were repeatedly touching her chest. The caption labeled Biden a "sick pedophile."
Meta left the video up, saying it did not violate its rules because it was not altered using AI and did not show Biden saying something he did not say. The company made a similar decision in 2019 over a clip that was slowed down to make then-House Speaker Nancy Pelosi appear drunk, even as Democrats fumed.
Biden’s 2024 campaign has set up a deepfake task force to respond to misleading AI-generated falsehoods and propaganda.
“There is going to be a tsunami of disinformation in 2024. We are already seeing it, and it is going to get much worse,” said Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution. “People are anticipating that this will be a close election, and anything that shifts 50,000 votes in three or four states could be decisive.”
How Facebook and other social media platforms police faked content
What’s alarming to West is the tepid response from social media platforms that host this content.
Rather than strengthening protections, Meta and other major technology companies have loosened their misinformation policies and laid off staffers charged with policing lies and propaganda since the 2020 election, West said.
Meta also now allows political ads to question the legitimacy of the 2020 U.S. presidential election. It does not allow ads that question the legitimacy of current or upcoming elections.
“So at a time when fake videos are becoming rampant, their capacity to deal with it is quite limited,” West said.
When policing fake election content, social media platforms can take it down, slap warning labels on it or demote it.
To ensure the policy is "proportionate," the Oversight Board recommended that Meta stop removing manipulated media when there is no other policy violation and instead apply a label warning the content has been significantly altered and may be misleading.
It also discouraged Meta from demoting content that fact-checkers identify as altered or fake without informing users or providing an appeals process.
“Political speech must be unwaveringly protected. This sometimes includes claims that are disputed and even false, but not demonstrably harmful,” McConnell said.
Facebook not doing enough to protect elections, critics charge
Hany Farid, a UC Berkeley professor who specializes in deepfakes and disinformation, gets daily inquiries about fake images on the internet, from Biden in military fatigues in the Situation Room to Trump with pedophile Jeffrey Epstein. He says the use of warning labels for this kind of malicious content is "cowardly."
While the warning labels provide cover to Facebook, the average person doesn’t care about the label or ignores it, he said. Most of the time those labels are not added until a video has gotten millions of views. What’s more, anyone can then take that video and post it somewhere else without the label.
According to Farid, Facebook, whose algorithms serve up content that stirs strong emotions, has been on the wrong side of this issue for the last 15 years.
“It’s hard to take Facebook seriously when they say we have these policies and it’s clear those policies are in place to maximize their profits,” he said.
Election experts call for deepfake regulations
Any efforts by social media companies to rein in doctored or AI-generated content should be paired with thoughtful standards crafted by regulators and policymakers, says Daniel Weiner, director of the Brennan Center’s Elections and Government Program.
While AI-generated depictions of Biden are quickly debunked, what about a local candidate for city council or the school board?
Last year, Sen. Richard Blumenthal, D-Conn., opened a Senate Judiciary Committee hearing on the potential dangers of deepfakes by playing an AI-generated recording that mimicked his voice and read a ChatGPT-generated script.
“The latest advances in AI technology, more than anything else, has reinforced the need to strengthen fundamental guardrails for our political system,” Weiner said. “These problems existed before. They would exist if every deepfake disappeared tomorrow. And, a lot of times, the solutions aren’t AI-specific. They are about the need for a broader strengthening of democracy.”