[Case Study]
Seeing Is No Longer Believing | [Foreign Affairs]: Photos Can Be Doctored, Videos Can Be Doctored, and So Can Livestreams
This issue's introduction: Face unlock, pay-by-face shopping, and face-recognition animal-filter mini-games are already commonplace in everyday life. To some extent this points to one conclusion: artificial intelligence (AI) has become very good at dealing with human faces.
If recognition was the first thing AI did with faces, what is the second? Judging from all the signs, there is only one answer: swapping faces. Of course, AI will not actually perform plastic surgery on anyone (at least not yet); what it can do is swap faces in video. Some readers may already have seen the short clip that once flooded social feeds.
The woman in the video (more precisely, her face) is Gal Gadot, the star of Wonder Woman. But she did not, of course, appear in any shameful adult film; someone used deep learning to graft Gal Gadot's face onto the body of the original actress. This goes back to December 2017, when an internet user calling himself DeepFakes applied deep-learning techniques to swap the face of an adult-film actress with that of Wonder Woman star Gal Gadot. The technique caused a sensation at the time, and a recent foreign research report estimates that DeepFakes-style technology could influence, and even threaten, the next U.S. presidential election.
According to The Next Web, researchers believe that with artificial intelligence, plus a large amount of voice training data collected in advance, one can produce audio recordings and videos convincing enough to pass as real; within about five years the technology is expected to mature to the point of fooling untrained viewers. As AI matures, the public is finding it ever harder to judge the authenticity of fake news and fake videos online. For example, the American actor Jordan Peele and the digital news site BuzzFeed used FakeApp, the AI face-swapping tool that had recently gone viral, to jointly produce a public-service-style video of Obama talking about fake news, realistic enough to be hard to tell apart from the real thing.
[Video screenshot]
According to The Verge, the clip was made not only with FakeApp, the AI face-swapping tool used earlier by the American internet user Deepfakes, but also with Adobe's visual-effects software After Effects; combining the two, the creators successfully replaced Jordan Peele's face with Obama's. AI is blossoming worldwide, and the technology is being put to use across industries. According to CNET, the company Naughty America plans to use AI to offer users customized face-swap videos, focused mainly on adult films, and this AI-driven face swapping has set off a wave of excitement in the industry.
Reportedly, the AI technology Naughty America is using can do more than replace faces. Users can appear in a scene alongside their favorite actress or actor, place themselves in sexual scenarios impossible in real life, or even put themselves and their partner into the same scene together.
For now, the Naughty America team plans to work with outside AI researchers to produce such videos, although some foreign social networking sites have already banned face-swapped pornography, and the U.S. Defense Advanced Research Projects Agency (DARPA) is researching methods for detecting deepfake videos.

The user going by the ID DeepFakes kept sharing celebrity face-swap clips made with AI on Reddit; virtually every A-list Hollywood actress got the treatment. Feeling a little excited? In the future, if you want a clip of some star, you can simply make it yourself, or even swap your own face in to play opposite them; every fantasy can come true.

But what if it were the face of your relative or friend that got swapped in? What if the suspect's face in crime-scene footage were replaced with yours? What if, without your knowledge, criminals sent your family a kidnapping video with your face in it? Once we can no longer trust our own eyes, the weight of the resulting chaos and crime will far outweigh that little bit of illicit "benefit."

The terrifying thing about face swapping is how simple the AI is.

Back to DeepFakes, the user who made those face-swapped clips of actresses. Not only is he a seasoned hand, he is also an enthusiastic, tech-minded good Samaritan: he released his results for free and patiently shared tutorials on how he makes face-swap videos, along with the deep-learning code he wrote and the relevant datasets. His message, presumably: stop asking me for videos of this or that celebrity, go make them yourselves.
Of course, he does not focus only on actresses; the image above is from his tutorial on swapping Nicolas Cage's face onto Trump. According to what he has shared, making a celebrity face-swap video is very simple. Take the Gal Gadot video as an example: he first collects videos and photos of Gal Gadot from every angle on Google, YouTube, and various online image galleries, assembling a material library large enough for a deep-learning face-replacement task.

He then uses machine-vision models available for TensorFlow to learn the facial features, contours, movements, and mouth shapes of the actress in the original film, and has the model search the library for the images and clips it judges suitable for each angle and expression, swapping them into the original video.

Admittedly, his videos still show flaws in many details and are not entirely natural. But at a glance they can already pass, and the quality keeps improving. The real problem lurking here is that face-swapping video with open-source AI frameworks is not too complex or too cutting-edge; it is far too simple and far too easy!
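To make the "material library" step above concrete, here is a hedged sketch, in Python with OpenCV, of how collected images might be turned into uniform face crops for training. The folder names, the 256x256 crop size, and the choice of OpenCV's stock Haar-cascade detector are illustrative assumptions, not details taken from DeepFakes' own tutorial.

```python
# Minimal sketch of the dataset-preparation step: detect and crop faces from
# collected images so they can be fed to a face-swap model.
# Folder names and crop size are assumptions, not from the article.
import os
import cv2

SRC_DIR = "collected_images"   # images scraped from Google/YouTube (assumed)
OUT_DIR = "face_crops"
os.makedirs(OUT_DIR, exist_ok=True)

# OpenCV's bundled frontal-face Haar cascade; a serious pipeline would likely
# use a stronger detector plus landmark-based alignment.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:
        continue  # skip files that are not readable images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        crop = cv2.resize(img[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(os.path.join(OUT_DIR, f"{name}_{i}.png"), crop)
```

A real pipeline would typically add landmark-based alignment so that eyes and mouth land in consistent positions across crops; the point here is only how little machinery the data-collection step requires.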
There is no real technical difficulty here: know the basics of TensorFlow, have a graphics card that is not too feeble, and you can put one together in a matter of hours (a minimal skeleton of the recipe appears at the end of this passage). Even someone with no programming background can follow the tutorial step by step, gather enough material, and produce a face-swap video on their own.

Imagine an enemy of yours who wants to frame you: all they need to do is collect your photos and selfies, and they can splice you into any criminal or filthy video at will, then spread it all over your social circles. How chilling would that be? It is like guns being bought and sold freely, with no vetting, no oversight, and at a low price.

As the underlying technology of machine vision matures, video face swapping will inevitably keep spreading along three lines:

1. Near-zero barriers to use. The datasets, source code, and frameworks for face swapping can already be found easily by anyone who cares to look, and as the technology matures this trend will likely only intensify.

2. Easy to package as a tool. Because the technique is not complicated, the odds of it being productized are high. In other words, bad actors could wrap it in an app: buy it, add the required video and images of the person to be swapped in, and a face-swap video is generated automatically, a truly zero-threshold workflow.

3. Ever-greater deceptiveness. Some AI practitioners note that DeepFakes' videos went through only a rudimentary learning-and-replacement process, with no retouching or detail work, and still reached a high level of polish. Add refinement with generative adversarial networks, and videos that are hard to tell from the real thing are probably within reach.

In short, once we learned that photos could be Photoshopped, videos stopped being trustworthy either. And it does not stop at video.
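As a rough measure of how little code the basic recipe involves, below is a hedged skeleton of the shared-encoder, two-decoder autoencoder commonly associated with DeepFakes-style face swapping: one encoder learns features shared by both faces, each identity gets its own decoder, and decoding person B's encoding with person A's decoder renders A's face with B's expression. Layer sizes, the 64x64 crop size, and all variable names are illustrative assumptions, not the original code.

```python
# A hedged skeleton, not the DeepFakes code: shared encoder, one decoder per
# identity, each autoencoder trained to reconstruct one person's face crops.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = (64, 64, 3)  # assumed crop size

def make_encoder():
    inp = layers.Input(shape=IMG)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)
    return Model(inp, z, name="shared_encoder")

def make_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = make_encoder()
decoder_a, decoder_b = make_decoder("decoder_a"), make_decoder("decoder_b")

# Two autoencoders that share the encoder; each trains on one person's faces.
auto_a = Model(encoder.input, decoder_a(encoder.output))
auto_b = Model(encoder.input, decoder_b(encoder.output))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")
# auto_a.fit(faces_a, faces_a, ...); auto_b.fit(faces_b, faces_b, ...)
# Swap at inference time: decoder_a(encoder(face_of_b)) renders person A's
# face with person B's pose and expression.
```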
A storm is coming: the next stop is livestreaming plus face swapping

Early last year, a team at the University of Erlangen-Nuremberg in Germany unveiled an application, the now-famous Face2Face. What it can do is track your face through a camera so that the person in a video speaks along with you.

Thanks to its precise capture and real-time performance, Face2Face caused an uproar from the day it appeared. Under its demo video, countless commenters worried that the technology would abet online fraud, kidnapping, and extortion, and asked how terrifying it would be if the person on the other end of a video call were not actually the person you know.

For now, Face2Face is a closed system; users can only try out the characters it provides. But after more than a year of development, face capture and replacement in livestreaming has improved dramatically. We can already see backgrounds and props replaced in real time on streaming platforms, and AI-driven face replacement in live video is just around the corner.

In parallel, AI voiceprint recognition and voice synthesis are advancing by leaps and bounds; Adobe, for example, has released new voice-synthesis technology over the past two years. For an ordinary person to use AI to change their voice the way Conan does with his bow-tie gadget is no longer particularly difficult.

With AI, face swapping and voice swapping in livestreams are leaping forward together. What will the consequences be? A two-headed streamer? Trump joining your stream live from the Oval Office? A popular young heartthrob kneeling to sing "Conquer" to you on camera? No problem, all of it is possible. Excited? You and the platform may both be happy; the heartthrob will not be.

Look at it from another angle: what if the same technology were used in video calls? What if the video call you receive from a family member or friend, probing for your private information or asking to borrow money, later turns out to have been painstakingly faked by a stranger? If one person can fully impersonate another, will anyone still be happy?

To open your phone or computer and find that nothing is real would be maddening.

AI face swapping is not hard, and given its many use cases and enormous entertainment value, its arrival will be difficult to stop. What should really give us a headache, then, are the legal problems and ethical traps buried inside it.

It is fairly safe to say that many livestreaming and video platforms, at home and abroad, are developing live face-swapping technology, and some solutions are already quite mature. Imagine a face-swapped superstar streaming all night, saying things calculated to titillate: wouldn't the gifts flood in until the platform buckled, even if users knew full well it was fake?

Legitimate platforms probably would not dare, and would use such technology with great restraint. But what if a third-party plug-in could do it? Or what about unregulated underground or semi-underground streaming platforms? Profit and curiosity can drive people to do all sorts of things, and once the technical barrier falls, the flood of legal problems could well burst the dam.
The ethical trap hidden here is that the right to one's likeness may become more complicated than ever before. Celebrity or ordinary person, hardly anyone wants someone else to "wear" their face on a livestream.
But the problem is: how do you prove that the face being worn is yours? For that matter, how do you prove that you are you? Likeness rights, as we understand them, cover images and videos of you that were actually shot. Does an AI model built from your facial data still fall within your likeness rights?

Harder still, you have no way to prove that an AI-built facial model is directly related to you. Deep-learning training happens on an invisible back end; the maker can simply claim the face was imagined, or built from someone who merely resembles you. And if the model's face has one more mole than yours, does that mean it is no longer you?

There are plenty of thornier ethical cases. Does a person hold the likeness rights of a deceased relative? If one family member wants to use AI to recreate a dead relative and hold video calls with them, while another insists this is unlawful, who gets the final say?

And these are just the basic ethical and legal conflicts. Beyond them lies a whole range of illegal schemes that AI face swapping makes possible: fraud, blackmail, framing, and so on. In short, AI face swapping today can be summed up in three sentences: one, it is certain to take off; two, it is certain to create chaos; three, how to regulate it, nobody quite knows.
Oh, and finally, a word on how to keep others from making an AI face-swap video of you: do not post too many selfies.
Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics
By Robert Chesney and Danielle Citron
Editor's note: The translated passages are for reference only. The article shared in this issue is taken from the Foreign Affairs website; it is a little long and a little difficult, so we searched out a good deal of related material and compiled the longer introduction above.
A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event. At a time when partisans can barely agree on facts, such persuasiveness might seem as if it could bring a welcome clarity. Audio and video recordings allow people to become firsthand witnesses of an event, sparing them the need to decide whether to trust someone else's account of it. And thanks to smartphones, which make it easy to capture audio and video content, and social media platforms, which allow that content to be shared and consumed, people today can rely on their own eyes and ears to an unprecedented degree.

Compared with text, photos are more immediate; far more immediate than photos are audio and video. In today's information age, the sheer volume of audio and video that people take in with their own eyes and ears is unprecedented.

Therein lies a great danger. Imagine a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran. Or an audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq. Or a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement. Now imagine that these recordings could be faked using tools available to almost anyone with a laptop and access to the Internet—and that the resulting fakes are so convincing that they are impossible to distinguish from the real thing.

A great deal of the audio and video we see can be fake.

Advances in digital technology could soon make this nightmare a reality. Thanks to the rise of "deepfakes"—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did. Worse, the means to create deepfakes are likely to proliferate quickly, producing an ever-widening circle of actors capable of deploying them for political purposes. Disinformation is an ancient art, of course, and one with a renewed relevance today. But as deepfake technology develops and spreads, the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields.

Technological progress has also brought a flood of disinformation.

DAWN OF THE DEEPFAKES

Deepfakes are the product of recent advances in a form of artificial intelligence known as "deep learning," in which sets of algorithms called "neural networks" learn to infer rules and replicate patterns by sifting through large data sets. (Google, for instance, has used this technique to develop powerful image-classification algorithms for its search engine.) Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in "generative adversarial networks," or GANs. In a GAN, one algorithm, the "generator," creates content modeled on source data (for instance, making artificial images of cats from a database of real cat pictures), while a second algorithm, the "discriminator," tries to spot the artificial content (pick out the fake cat images). Since each algorithm is constantly training against the other, such pairings can lead to rapid improvement, allowing GANs to produce highly realistic yet fake audio and video content.
Using generative adversarial networks, one can produce audio and video that is extremely lifelike yet entirely fake.

Editor's note: A generative adversarial network (GAN) is a deep-learning model and one of the most promising approaches of recent years to unsupervised learning over complex distributions. The framework contains (at least) two modules, a generative model and a discriminative model, which produce remarkably good output by learning through a game played against each other. The method was proposed by Ian Goodfellow and colleagues in 2014. The generator network takes random samples from a latent space as input, and its output must imitate the real samples in the training set as closely as possible. The discriminator network takes as input either real samples or the generator's output, and aims to distinguish the generator's output from the real samples as well as it can, while the generator tries its best to fool the discriminator. The two networks contend with each other and continually adjust their parameters, the ultimate goal being that the discriminator can no longer tell whether the generator's output is real. GANs are most often used to generate convincingly realistic images; the approach has also been used to generate video, 3D object models, and more.

This technology has the potential to proliferate widely. Commercial and even free deepfake services have already appeared in the open market, and versions with alarmingly few safeguards are likely to emerge on the black market. The spread of these services will lower the barriers to entry, meaning that soon, the only practical constraint on one's ability to produce a deepfake will be access to training materials—that is, audio and video of the person to be modeled—to feed the GAN. The capacity to create professional-grade forgeries will come within reach of nearly anyone with sufficient interest and the knowledge of where to go for help.

The technology is easy to obtain; the only practical constraint is access to the "training material" needed to feed the GAN.
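To ground the generator-versus-discriminator contest described above, here is a minimal GAN sketch in TensorFlow/Keras. It assumes 28x28 grayscale training images (a toy dataset such as MNIST) rather than faces, and is meant only to mirror the adversarial training loop, not any production deepfake system; layer sizes and names are illustrative assumptions.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator that is
# simultaneously learning to tell real images from generated ones.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

LATENT = 100

generator = Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(7 * 7 * 64, activation="relu"),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid"),
])  # maps random noise to a 28x28 "fake" image

discriminator = Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1),  # real-vs-fake logit
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        # Discriminator: label real as 1 and fake as 0. Generator: fool it.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return d_loss, g_loss
```

Each call to train_step nudges the discriminator toward telling real from fake and the generator toward fooling it, which is exactly the contest the editor's note describes.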
Deepfakes have a number of worthy applications. Modified audio or video of a historical figure, for example, could be created for the purpose of educating children. One company even claims that it can use the technology to restore speech to individuals who have lost their voice to disease. But deepfakes can and will be used for darker purposes, as well. Users have already employed deepfake technology to insert people's faces into pornography without their consent or knowledge, and the growing ease of making fake audio and video content will create ample opportunities for blackmail, intimidation, and sabotage. The most frightening applications of deepfake technology, however, may well be in the realms of politics and international affairs. There, deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections.

The technology has great application value in many fields. But technology is a double-edged sword: it can be put to good use, and it can be put to evil use.

Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction. For much of the twentieth century, magazines, newspapers, and television broadcasters managed the flow of information to the public. Journalists established rigorous professional standards to control the quality of news, and the relatively small number of mass media outlets meant that only a limited number of individuals and organizations could distribute information widely. Over the last decade, however, more and more people have begun to get their information from social media platforms, such as Facebook and Twitter, which depend on a vast array of users to generate relatively unfiltered content. Users tend to curate their experiences so that they mostly encounter perspectives they already agree with (a tendency heightened by the platforms' algorithms), turning their social media feeds into echo chambers. These platforms are also susceptible to so-called information cascades, whereby people pass along information shared by others without bothering to check if it is true, making it appear more credible in the process. The end result is that falsehoods can spread faster than ever before.

These dynamics will make social media fertile ground for circulating deepfakes, with potentially explosive implications for politics. Russia's attempt to influence the 2016 U.S. presidential election—spreading divisive and politically inflammatory messages on Facebook and Twitter—already demonstrated how easily disinformation can be injected into the social media bloodstream. The deepfakes of tomorrow will be more vivid and realistic and thus more shareable than the fake news of 2016. And because people are especially prone to sharing negative and novel information, the more salacious the deepfakes, the better.

DEMOCRATIZING FRAUD

The use of fraud, forgery, and other forms of deception to influence politics is nothing new, of course. When the USS Maine exploded in Havana Harbor in 1898, American tabloids used misleading accounts of the incident to incite the public toward war with Spain. The anti-Semitic tract Protocols of the Elders of Zion, which described a fictional Jewish conspiracy, circulated widely during the first half of the twentieth century. More recently, technologies such as Photoshop have made doctoring images as easy as forging text. What makes deepfakes unprecedented is their combination of quality, applicability to persuasive formats such as audio and video, and resistance to detection.
And as deepfake technology spreads, an ever-increasing number of actors will be able to convincingly manipulate audio and video content in a way that once was restricted to Hollywood studios or the most well-funded intelligence agencies.
Deepfakes will be particularly useful to nonstate actors, such as insurgent groups and terrorist organizations, which have historically lacked the resources to make and disseminate fraudulent yet credible audio or video content. These groups will be able to depict their adversaries—including government officials—spouting inflammatory words or engaging in provocative actions, with the specific content carefully chosen to maximize the galvanizing impact on their target audiences. An affiliate of the Islamic State (or ISIS), for instance, could create a video depicting a U.S. soldier shooting civilians or discussing a plan to bomb a mosque, thereby aiding the terrorist group's recruitment. Such videos will be especially difficult to debunk in cases where the target audience already distrusts the person shown in the deepfake. States can and no doubt will make parallel use of deepfakes to undermine their nonstate opponents.

Deepfakes will also exacerbate the disinformation wars that increasingly disrupt domestic politics in the United States and elsewhere. In 2016, Russia's state-sponsored disinformation operations were remarkably successful in deepening existing social cleavages in the United States. To cite just one example, fake Russian accounts on social media claiming to be affiliated with the Black Lives Matter movement shared inflammatory content purposely designed to stoke racial tensions. Next time, instead of tweets and Facebook posts, such disinformation could come in the form of a fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.

Perhaps the most acute threat associated with deepfakes is the possibility that a well-timed forgery could tip an election. In May 2017, Moscow attempted something along these lines. On the eve of the French election, Russian hackers tried to undermine the presidential campaign of Emmanuel Macron by releasing a cache of stolen documents, many of them doctored. That effort failed for a number of reasons, including the relatively boring nature of the documents and the effects of a French media law that prohibits election coverage in the 44 hours immediately before a vote. But in most countries, most of the time, there is no media blackout, and the nature of deepfakes means that damaging content can be guaranteed to be salacious or worse. A convincing video in which Macron appeared to admit to corruption, released on social media only 24 hours before the election, could have spread like wildfire and proved impossible to debunk in time.

Deepfakes may also erode democracy in other, less direct ways. The problem is not just that deepfakes can be used to stoke social and ideological divisions. They can create a "liar's dividend": as people become more aware of the existence of deepfakes, public figures caught in genuine recordings of misbehavior will find it easier to cast doubt on the evidence against them. (If deepfakes were prevalent during the 2016 U.S. presidential election, imagine how much easier it would have been for Donald Trump to have disputed the authenticity of the infamous audio tape in which he brags about groping women.) More broadly, as the public becomes sensitized to the threat of deepfakes, it may become less inclined to trust news in general. And journalists, for their part, may become more wary about relying on, let alone publishing, audio or video of fast-breaking events for fear that the evidence will turn out to have been faked.
DEEP FIX

There is no silver bullet for countering deepfakes. There are several legal and technological approaches—some already existing, others likely to emerge—that can help mitigate the threat. But none will overcome the problem altogether. Instead of full solutions, the rise of deepfakes calls for resilience.

Three technological approaches deserve special attention. The first relates to forensic technology, or the detection of forgeries through technical means. Just as researchers are putting a great deal of time and effort into creating credible fakes, so, too, are they developing methods of enhanced detection. In June 2018, computer scientists at Dartmouth and the University at Albany, SUNY, announced that they had created a program that detects deepfakes by looking for abnormal patterns of eyelid movement when the subject of a video blinks. In the deepfakes arms race, however, such advances serve only to inform the next wave of innovation. In the future, GANs will be fed training videos that include examples of normal blinking. And even if extremely capable detection algorithms emerge, the speed with which deepfakes can circulate on social media will make debunking them an uphill battle. By the time the forensic alarm bell rings, the damage may already be done.

A second technological remedy involves authenticating content before it ever spreads—an approach sometimes referred to as a "digital provenance" solution. Companies such as Truepic are developing ways to digitally watermark audio, photo, and video content at the moment of its creation, using metadata that can be logged immutably on a distributed ledger, or blockchain. In other words, one could effectively stamp content with a record of authenticity that could be used later as a reference to compare to suspected fakes.

In theory, digital provenance solutions are an ideal fix. In practice, they face two big obstacles. First, they would need to be ubiquitously deployed in the vast array of devices that capture content, including laptops and smartphones. Second, their use would need to be made a precondition for uploading content to the most popular digital platforms, such as Facebook, Twitter, and YouTube. Neither condition is likely to be met. Device makers, absent some legal or regulatory obligation, will not adopt digital authentication until they know it is affordable, in demand, and unlikely to interfere with the performance of their products. And few social media platforms will want to block people from uploading unauthenticated content, especially when the first one to do so will risk losing market share to less rigorous competitors.
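As a rough illustration of the "digital provenance" idea just described, the sketch below fingerprints a clip at capture time, appends the fingerprint to a log, and later checks a suspect copy against that log. The file names and the JSON log are assumptions; a real deployment of the Truepic sort would sign each record and anchor it to a distributed ledger, and would need perceptual rather than exact hashes to survive re-encoding.

```python
# Hedged sketch of digital provenance: hash media at capture time, keep the
# hash in an append-only log, and later check whether a suspect copy matches.
import hashlib
import json
import time

LOG = "provenance_log.jsonl"  # stand-in for an immutable ledger

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register(path: str) -> None:
    record = {"file": path, "sha256": fingerprint(path), "captured_at": time.time()}
    with open(LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def verify(path: str) -> bool:
    digest = fingerprint(path)
    with open(LOG) as f:
        return any(json.loads(line)["sha256"] == digest for line in f)

# register("clip_from_camera.mp4")   # at capture time (hypothetical file name)
# verify("clip_seen_on_social.mp4")  # later: True only for a byte-identical copy
```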
A third, more speculative technological approach involves what has been called "authenticated alibi services," which might soon begin emerging from the private sector. Consider that deepfakes are especially dangerous to high-profile individuals, such as politicians and celebrities, with valuable but fragile reputations. To protect themselves against deepfakes, some of these individuals may choose to engage in enhanced forms of "lifelogging"—the practice of recording nearly every aspect of one's life—in order to prove where they were and what they were saying or doing at any given time.

Companies might begin offering bundles of alibi services, including wearables to make lifelogging convenient, storage to cope with the vast amount of resulting data, and credible authentication of those data. These bundles could even include partnerships with major news and social media platforms, which would enable rapid confirmation or debunking of content.

Such logging would be deeply invasive, and many people would want nothing to do with it. But in addition to the high-profile individuals who choose to adopt lifelogging to protect themselves, some employers might begin insisting on it for certain categories of employees, much as police departments increasingly require officers to use body cameras. And even if only a relatively small number of people took up intensive lifelogging, they would produce vast repositories of data in which the rest of us would find ourselves inadvertently caught, creating a massive peer-to-peer surveillance network for constantly recording our activities.

LAYING DOWN THE LAW

If these technological fixes have limited upsides, what about legal remedies? Depending on the circumstances, making or sharing a deepfake could constitute defamation, fraud, or misappropriation of a person's likeness, among other civil and criminal violations. In theory, one could close any remaining gaps by criminalizing (or attaching civil liability to) specific acts—for instance, creating a deepfake of a real person with the intent to deceive a viewer or listener and with the expectation that this deception would cause some specific kind of harm. But it could be hard to make these claims or charges stick in practice. To begin with, it will likely prove very difficult to attribute the creation of a deepfake to a particular person or group. And even if perpetrators are identified, they may be beyond a court's reach, as in the case of foreign individuals or governments.

Another legal solution could involve incentivizing social media platforms to do more to identify and remove deepfakes or fraudulent content more generally. Under current U.S. law, the companies that own these platforms are largely immune from liability for the content they host, thanks to Section 230 of the Communications Decency Act of 1996. Congress could modify this immunity, perhaps by amending Section 230 to make companies liable for harmful and fraudulent information distributed through their platforms unless they have made reasonable efforts to detect and remove it. Other countries have used a similar approach for a different problem: in 2017, for instance, Germany passed a law imposing stiff fines on social media companies that failed to remove racist or threatening content within 24 hours of it being reported. Yet this approach would bring challenges of its own. Most notably, it could lead to excessive censorship.
Companies anxious to avoid legal liability would likely err on the side of policing content too aggressively, and users themselves might begin to self-censor in order to avoid the risk of having their content suppressed. It is far from obvious that the notional benefits of improved fraud protection would justify these costs to free expression. Such a system would also run the risk of insulating incumbent platforms, which have the resources to police content and pay for legal battles, against competition from smaller firms.

LIVING WITH LIES

But although deepfakes are dangerous, they will not necessarily be disastrous. Detection will improve, prosecutors and plaintiffs will occasionally win legal victories against the creators of harmful fakes, and the major social media platforms will gradually get better at flagging and removing fraudulent content. And digital provenance solutions could, if widely adopted, provide a more durable fix at some point in the future.

In the meantime, democratic societies will have to learn resilience. On the one hand, this will mean accepting that audio and video content cannot be taken at face value; on the other, it will mean fighting the descent into a post-truth world, in which citizens retreat to their private information bubbles and regard as fact only that which flatters their own beliefs. In short, democracies will have to accept an uncomfortable truth: in order to survive the threat of deepfakes, they are going to have to learn how to live with lies.
Source: WeChat official account "我与我们的世界" (Me and Our World)
Editor: Ma Xiaoqing