AI is destabilizing ‘the concept of truth itself’ in 2024 election
Story by Pranshu Verma, Gerrit De Vynck
Experts in artificial intelligence have long warned that AI-generated content could muddy the waters of perceived reality. Weeks into a pivotal election year, AI confusion is on the rise.

Politicians around the globe have been swatting away potentially damning pieces of evidence — grainy video footage of hotel trysts, voice recordings criticizing political opponents — by dismissing them as AI-generated fakes. At the same time, AI deepfakes are being used to spread misinformation.
On Monday, the New Hampshire Justice Department said it was investigating robocalls featuring what appeared to be an AI-generated voice that sounded like President Biden telling voters to skip the Tuesday primary — the first notable use of AI for voter suppression this campaign cycle.
Last month, former president Donald Trump dismissed an ad on Fox News featuring video of his well-documented public gaffes — including his struggle to pronounce the word “anonymous” in Montana and his visit to the California town of “Pleasure,” a.k.a. Paradise, both in 2018 — claiming the footage was generated by AI.
“The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do,” Trump wrote on Truth Social. “FoxNews shouldn’t run these ads.”
The Lincoln Project, a political action committee formed by moderate Republicans to oppose Trump, swiftly denied the claim; the ad featured incidents during Trump’s presidency that were widely covered at the time and witnessed in real life by many independent observers.
Still, AI creates a “liar’s dividend,” said Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation. “When you actually do catch a police officer or politician saying something awful, they have plausible deniability” in the age of AI.
AI “destabilizes the concept of truth itself,” added Libby Lange, an analyst at the misinformation tracking organization Graphika. “If everything could be fake, and if everyone’s claiming everything is fake or manipulated in some way, there’s really no sense of ground truth. Politically motivated actors, especially, can take whatever interpretation they choose.”
Trump is not alone in seizing this advantage. Around the world, AI is becoming a common scapegoat for politicians trying to fend off damaging allegations.
Late last year, a grainy video surfaced of a ruling-party Taiwanese politician entering a hotel with a woman, suggesting he was having an affair. Commentators and other politicians quickly came to his defense, saying the footage was AI-generated — though it remains unclear whether it actually was.
In April, a 26-second voice recording was leaked in which a politician in the southern Indian state of Tamil Nadu appeared to accuse his own party of illegally amassing $3.6 billion, according to reporting by Rest of World. The politician denied the recording’s veracity, calling it “machine generated”; experts have said they are unsure whether the audio is real or fake.
AI companies have generally said their tools shouldn’t be used in political campaigns now, but enforcement has been spotty. On Friday, OpenAI banned a developer from using its tools after the developer built a bot mimicking long-shot Democratic presidential candidate Dean Phillips. Phillips’s campaign had supported the bot, but after The Washington Post reported on it, OpenAI deemed that it broke rules against use of its tech for campaigns.
AI-related confusion is also swirling beyond politics. Last week, social media users began circulating an audio clip they claimed was a Baltimore County, Md., school principal on a racist tirade against Jewish people and Black students. The union that represents the principal has said the audio is AI-generated.
Several signs do point to that conclusion, including the uniform cadence of the speech and indications of splicing, said Farid, who analyzed the audio. But without knowing where it came from or in what context it was recorded, he said, it’s impossible to say for sure.
On social media, commenters overwhelmingly seem to believe the audio is real, and the school district says it has launched an investigation. A request for comment to the principal through his union was not returned.
These claims hold weight because AI deepfakes are more common now and better at replicating a person’s voice and appearance. Deepfakes regularly go viral on X, Facebook and other social platforms. Meanwhile, the tools and methods to identify an AI-created piece of media are not keeping up with rapid advances in AI’s ability to generate such content.
Fake images of Trump have, in fact, gone viral multiple times. Early this month, actor Mark Ruffalo posted AI images of Trump with teenage girls, claiming the images showed the former president on a private plane owned by convicted sex offender Jeffrey Epstein. Ruffalo later apologized.
Trump, who has spent weeks railing against AI on Truth Social, posted about the incident, saying, “This is A.I., and it is very dangerous for our Country!”
Rising concern over AI’s impact on politics and the world economy was a major theme at the conference of world leaders and CEOs in Davos, Switzerland, last week. In her remarks opening the conference, Swiss President Viola Amherd called AI-generated propaganda and lies “a real threat” to world stability, “especially today when the rapid development of artificial intelligence contributes to the increasing credibility of such fake news.”
Tech and social media companies say they are looking into creating systems to automatically check and moderate AI-generated content purporting to be real, but have yet to do so. Meanwhile, only experts possess the tech and expertise to analyze a piece of media and determine whether it’s real or fake.
That leaves too few people capable of truth-squadding content that can now be generated with easy-to-use AI tools available to almost anyone.
“You don’t have to be a computer scientist. You don’t have to be able to code,” Farid said. “There’s no barrier to entry anymore.”
Aviv Ovadya, an expert on AI’s impact on democracy and an affiliate at Harvard University’s Berkman Klein Center, said the general public is far more aware of AI deepfakes now compared with five years ago. As politicians see others evade criticism by claiming evidence released against them is AI, more people will make that claim.
“There’s a contagion effect,” he said, noting a similar rise in politicians falsely calling an election rigged.
Ovadya said technology companies have the tools to regulate the problem: They could watermark audio to create a digital fingerprint or join a coalition meant to prevent the spreading of misleading information online by developing technical standards that establish the origins of media content. Most importantly, he said, they could tweak their algorithms so they don’t promote sensational but potentially false content.
So far, he said, tech companies have mostly failed to take action to safeguard the public’s perception of reality.
“As long as the incentives continue to be engagement-driven sensationalism, and really conflict,” he said, “those are the kinds of content — whether deepfake or not — that’s going to be surfaced.”
Drew Harwell and Nitasha Tiku contributed to this report.
Source: The Washington Post. Editor: Li Mengyao