
1.  Danielle Citron and Robert Chesney explore the social and cultural contexts of deep fake technologies.  According to Citron and Chesney, why does misinformation spread so quickly on social media?  How might deep fake technologies be useful and how might they cause harm? (cite specifics from the article).  What potential legal solutions do they propose and why might they be difficult to enforce? (select one example from the article to discuss).

2.  According to Cass Sunstein's piece, does the government have the right to regulate deep fakes?  Why or why not?

Use ONLY the resources attached.

Boston University School of Law

Scholarly Commons at Boston University School of Law

Faculty Scholarship

12-2019

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security

Danielle K. Citron Boston University School of Law

Robert Chesney University of Texas

Follow this and additional works at: https://scholarship.law.bu.edu/faculty_scholarship

Part of the First Amendment Commons, Internet Law Commons, and the Privacy Law Commons

Recommended Citation: Danielle K. Citron & Robert Chesney, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 California Law Review 1753 (2019). Available at: https://scholarship.law.bu.edu/faculty_scholarship/640

This Article is brought to you for free and open access by Scholarly Commons at Boston University School of Law. It has been accepted for inclusion in Faculty Scholarship by an authorized administrator of Scholarly Commons at Boston University School of Law. For more information, please contact [email protected]


Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security

Bobby Chesney* and Danielle Citron**

Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors.

DOI: https://doi.org/10.15779/Z38RV0D15J Copyright © 2019 California Law Review, Inc. California Law Review, Inc. (CLR) is a California nonprofit corporation. CLR and the authors are solely responsible for the content of their publications. * James Baker Chair, University of Texas School of Law; co-founder of Lawfare. ** Professor of Law, Boston University School of Law; Vice President, Cyber Civil Rights Initiative; Affiliate Fellow, Yale Information Society Project; Affiliate Scholar, Stanford Center on Internet and Society. We thank Benjamin Wittes, Quinta Jurecic, Marc Blitz, Jennifer Finney Boylan, Chris Bregler, Rebecca Crootof, Jeanmarie Fenrich, Mary Anne Franks, Nathaniel Gleicher, Patrick Gray, Yasmin Green, Klon Kitchen, Woodrow Hartzog, Herb Lin, Helen Norton, Suzanne Nossel, Andreas Schou, and Jessica Silbey for helpful suggestions. We are grateful to Susan McCarty, Samuel Morse, Jessica Burgard, and Alex Holland for research assistance. We had the great fortune of getting feedback from audiences at the PEN Board of Trustees meeting; Heritage Foundation; Yale Information Society Project; University of California, Hastings College of the Law; Northeastern School of Journalism 2019 symposium on AI, Media, and the Threat to Democracy; and the University of Maryland School of Law’s Trust and Truth Decay symposium. We appreciate the Deans who generously supported this research: Dean Ward Farnsworth of the University of Texas School of Law, and Dean Donald Tobin and Associate Dean Mike Pappas of the University of Maryland Carey School of Law. We are grateful to the editors of the California Law Review, especially Erik Kundu, Alex Copper, Yesenia Flores, Faye Hipsman, Gus Tupper, and Brady Williams, for their superb editing and advice.



While deep-fake technology will bring certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.

Introduction
I. Technological Foundations of the Deep-Fakes Problem
   A. Emergent Technology for Robust Deep Fakes
   B. Diffusion of Deep-Fake Technology
   C. Fueling the Fire
II. Costs and Benefits
   A. Beneficial Uses of Deep-Fake Technology
      1. Education
      2. Art
      3. Autonomy
   B. Harmful Uses of Deep-Fake Technology
      1. Harm to Individuals or Organizations
         a. Exploitation
         b. Sabotage
      2. Harm to Society
         a. Distortion of Democratic Discourse
         b. Manipulation of Elections
         c. Eroding Trust in Institutions
         d. Exacerbating Social Divisions
         e. Undermining Public Safety
         f. Undermining Diplomacy
         g. Jeopardizing National Security
         h. Undermining Journalism
         i. The Liar’s Dividend: Beware the Cry of Deep-Fake News
III. What Can Be Done? Evaluating Technical, Legal, and Market Responses
   A. Technological Solutions
   B. Legal Solutions
      1. Problems with an Outright Ban
      2. Specific Categories of Civil Liability
         a. Threshold Obstacles
         b. Suing the Creators of Deep Fakes
         c. Suing the Platforms
      3. Specific Categories of Criminal Liability
   C. Administrative Agency Solutions
      1. The FTC
      2. The FCC
      3. The FEC
   D. Coercive Responses
      1. Military Responses
      2. Covert Action
      3. Sanctions
   E. Market Solutions
      1. Immutable Life Logs as an Alibi Service
      2. Speech Policies of Platforms
Conclusion

INTRODUCTION

Through the magic of social media, it all went viral: a vivid photograph, an inflammatory fake version, an animation expanding on the fake, posts debunking the fakes, and stories trying to make sense of the situation.1 It was both a sign of the times and a cautionary tale about the challenges ahead.

The episode centered on Emma González, a student who survived the horrific shooting at Marjory Stoneman Douglas High School in Parkland, Florida, in February 2018. In the aftermath of the shooting, a number of the students emerged as potent voices in the national debate over gun control. Emma, in particular, gained prominence thanks to the closing speech she delivered during the “March for Our Lives” protest in Washington, D.C., as well as a contemporaneous article she wrote for Teen Vogue.2 Fatefully, the Teen Vogue piece incorporated a video entitled “This Is Why We March,” including a visually arresting sequence in which Emma rips up a large sheet displaying a bullseye target.

1. Alex Horton, A Fake Photo of Emma González Went Viral on the Far Right, Where Parkland Teens are Villains, WASH. POST (Mar. 26, 2018), https://www.washingtonpost.com/news/the-intersect/wp/2018/03/25/a-fake-photo-of-emma-gonzalez-went-viral-on-the-far-right-where-parkland-teens-are-villains/?utm_term=.0b0f8655530d [https://perma.cc/6NDJ-WADV]. 2. Florida Student Emma Gonzalez [sic] to Lawmakers and Gun Advocates: ‘We call BS’, CNN (Feb. 17, 2018), https://www.cnn.com/2018/02/17/us/florida-student-emma-gonzalez-speech/index.html [https://perma.cc/ZE3B-MVPD]; Emma González, Emma González on Why This Generation Needs Gun Control, TEEN VOGUE (Mar. 23, 2018), https://www.teenvogue.com/story/emma-gonzalez-parkland-gun-control-cover?mbid=social_twitter [https://perma.cc/P8TQ-P2ZR].

A powerful still image of Emma ripping up the bullseye target began to circulate on the Internet. But soon someone generated a fake version, in which the torn sheet is not a bullseye, but rather a copy of the Constitution of the United States. While on some level the fake image might be construed as artistic fiction highlighting the inconsistency of gun control with the Second Amendment, the fake was not framed that way. Instead, it was depicted as a true image of Emma González ripping up the Constitution.

The image soon went viral. A fake of the video also appeared, though it was more obvious that it had been manipulated. Still, the video circulated widely, thanks in part to actor Adam Baldwin circulating it to a quarter million followers on Twitter (along with the disturbing hashtag #Vorwärts—the German word for “forward,” a reference to neo-Nazis’ nod to the word’s role in a Hitler Youth anthem).3

Several factors combined to limit the harm from this fakery. First, the genuine image already was in wide circulation and available at its original source. This made it fast and easy to fact-check the fakes. Second, the intense national attention associated with the post-Parkland gun control debate and, especially, the role of students like Emma in that debate, ensured that journalists paid attention to the issue, spending time and effort to debunk the fakes. Third, the fakes were of poor quality (though audiences inclined to believe their message might disregard the red flags).

Even with those constraints, though, many believed the fakes, and harm ensued. Our national dialogue on gun control has suffered some degree of distortion; Emma has likely suffered some degree of anguish over the episode; and other Parkland victims likely felt maligned and discredited. Falsified imagery, in short, has already exacted significant costs for individuals and society. But the situation is about to get much worse, as this Article shows.

3. See Horton, supra note 1.

Technologies for altering images, video, or audio (or even creating them from scratch) in ways that are highly realistic and difficult to detect are maturing rapidly. As they ripen and diffuse, the problems illustrated by the Emma González episode will expand and generate significant policy and legal challenges. Imagine a deep fake video, released the day before an election, making it appear that a candidate for office has made an inflammatory statement. Or what if, in the wake of the Trump-Putin tête-à-tête at Helsinki in 2018, someone circulated a deep fake audio recording that seemed to portray President Trump as promising not to take any action should Russia interfere with certain NATO allies? Screenwriters are already building such prospects into their plotlines.4 The real world will not lag far behind.

Pornographers have been early adopters of the technology, interposing the faces of celebrities into sex videos. This has given rise to the label “deep fake” for such digitized impersonations. We use that label here more broadly, as shorthand for the full range of hyper-realistic digital falsification of images, video, and audio.

This full range will entail, sooner rather than later, a disturbing array of malicious uses. We are by no means the first to observe that deep fakes will migrate far beyond the pornography context, with great potential for harm.5 We do, however, provide the first comprehensive survey of these harms and potential responses to them. We break new ground by giving early warning regarding the powerful incentives that deep fakes produce for privacy-destructive solutions.

4. See, e.g., Vindu Goel & Sheera Frenkel, In India Election, False Posts and Hate Speech Flummox Facebook, N.Y. TIMES (Apr. 1, 2019), https://www.nytimes.com/2019/04/01/technology/india-elections-facebook.html [https://perma.cc/B9CP-MPPK] (describing the deluge of fake and manipulated videos and images circulated in the lead up to elections in India); Homeland: Like Bad at Things (Showtime television broadcast Mar. 4, 2018), https://www.sho.com/homeland/season/7/episode/4/like-bad-at-things [https://perma.cc/25XK-NN3Y]; Taken: Verum Nocet (NBC television broadcast Mar. 30, 2018), https://www.nbc.com/taken/video/verum-nocet/3688929 [https://perma.cc/CVP2-PNXZ] (depicting a deep-fake video in which a character appears to recite song lyrics); The Good Fight: Day 408 (CBS television broadcast Mar. 4, 2018) (depicting fake audio purporting to be President Trump); The Good Fight: Day 464 (CBS television broadcast Apr. 29, 2018) (featuring a deep-fake video of the alleged “golden shower” incident involving President Trump). 5. See, e.g., Samantha Cole, We Are Truly Fucked: Everyone is Making AI-Generated Fake Porn Now, VICE: MOTHERBOARD (Jan. 24, 2018), https://motherboard.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley [https://perma.cc/V9NT-CBW8] (“[T]echnology[] allows anyone with sufficient raw footage to work with to convincingly place any face in any video.”); see also @BuzzFeed, You Won’t Believe What Obama Says in This Video, TWITTER (Apr. 17, 2018, 8:00 AM), https://twitter.com/BuzzFeed/status/986257991799222272 [https://perma.cc/C38K-B377] (“We’re entering an era in which our enemies can make anyone say anything at any point in time.”); Tim Mak, All Things Considered: Technologies to Create Fake Audio and Video Are Quickly Evolving, NAT’L PUB. RADIO (Apr. 2, 2018), https://www.npr.org/2018/04/02/598916380/technologies-to-create-fake-audio-and-video-are-quickly-evolving [https://perma.cc/NY23-YVQD] (discussing deep-fake videos created for political reasons and misinformation campaigns); Julian Sanchez (@normative), TWITTER (Jan. 24, 2018, 12:26 PM) (“The prospect of any Internet rando being able to swap anyone’s face into porn is incredibly creepy. But my first thought is that we have not even scratched the surface of how bad ‘fake news’ is going to get.”).

This Article unfolds as follows. Part I begins with a description of the technological innovations pushing deep fakes into the realm of hyper-realism and making them increasingly difficult to debunk. It then discusses the amplifying power of social media and the confounding influence of cognitive biases.

Part II surveys the benefits and the costs of deep fakes. The upsides of deep fakes include artistic exploration and educative contributions. The downsides of deep fakes, however, are as varied as they are costly. Some harms are suffered by individuals or groups, such as when deep fakes are deployed to exploit or sabotage individual identities and corporate opportunities. Others impact society more broadly, such as distortion of policy debates, manipulation of elections, erosion of trust in institutions, exacerbation of social divisions, damage to national security, and disruption of international relations. And, in what we call the “liar’s dividend,” deep fakes make it easier for liars to avoid accountability for things that are in fact true.

Part III turns to the question of remedies. We survey an array of existing or potential solutions involving civil and criminal liability, agency regulation, and “active measures” in special contexts like armed conflict and covert action. We also discuss technology-driven market responses, including not just the promotion of debunking technologies, but also the prospect of an alibi service, such as privacy-destructive life logging. We find, in the end, that there are no silver-bullet solutions. Thus, we couple our recommendations with warnings to the public, policymakers, and educators.

I. TECHNOLOGICAL FOUNDATIONS OF THE DEEP-FAKES PROBLEM

Digital impersonation is increasingly realistic and convincing. Deep-fake technology is the cutting-edge of that trend. It leverages machine-learning algorithms to insert faces and voices into video and audio recordings of actual people and enables the creation of realistic impersonations out of digital whole cloth.6 The end result is realistic-looking video or audio making it appear that someone said or did something. Although deep fakes can be created with the consent of people being featured, more often they will be created without it. This Part describes the technology and the forces ensuring its diffusion, virality, and entrenchment.

6. See Cole, supra note 5.



A. Emergent Technology for Robust Deep Fakes

Doctored imagery is neither new nor rare. Innocuous doctoring of images— such as tweaks to lighting or the application of a filter to improve image quality—is ubiquitous. Tools like Photoshop enable images to be tweaked in both superficial and substantive ways.7 The field of digital forensics has been grappling with the challenge of detecting digital alterations for some time.8 Generally, forensic techniques are automated and thus less dependent on the human eye to spot discrepancies.9 While the detection of doctored audio and video was once fairly straightforward,10 the emergence of generative technology capitalizing on machine learning promises to shift this balance. It will enable the production of altered (or even wholly invented) images, videos, and audios that are more realistic and more difficult to debunk than they have been in the past. This technology often involves the use of a “neural network” for machine learning. The neural network begins as a kind of tabula rasa featuring a nodal network controlled by a set of numerical standards set at random.11 Much as experience refines the brain’s neural nodes, examples train the neural network system.12 If the network processes a broad array of training examples, it should be able to create increasingly accurate models.13 It is through this process that neural networks categorize audio, video, or images and generate realistic impersonations or alterations.14

7. See, e.g., Stan Horaczek, Spot Faked Photos Using Digital Forensic Techniques, POPULAR SCIENCE (July 21, 2017), https://www.popsci.com/use-photo-forensics-to-spot-faked-images [https://perma.cc/G72B-VLF2] (depicting and discussing a series of manipulated photographs). 8. Doctored images have been prevalent since the advent of the photography. See PHOTO TAMPERING THROUGHOUT HISTORY, http://pth.izitru.com [https://perma.cc/5QSZ-NULR]. The gallery was curated by FourandSix Technologies, Inc. 9. See Tiffanie Wen, The Hidden Signs That Can Reveal a Fake Photo, BBC FUTURE (June 30, 2017), http://www.bbc.com/future/story/20170629-the-hidden-signs-that-can-reveal-if-a-photo-is- fake [https://perma.cc/W9NX-XGKJ]. IZITRU.COM was a project spearheaded by Dartmouth’s Dr. Hany Farid. It allowed users to upload photos to determine if they were fakes. The service was aimed at “legions of citizen journalists who want[ed] to dispel doubts that what they [were] posting [wa]s real.” Rick Gladstone, Photos Trusted but Verified, N.Y. TIMES (May 7, 2014), https://lens.blogs.nytimes.com/2014/05/07/photos-trusted-but-verified [https://perma.cc/7A73-URKP]. 10. See Steven Melendez, How DARPA‘s Fighting Deepfakes, FAST COMPANY (Apr. 4, 2018), https://www.fastcompany.com/40551971/can-new-forensic-tech-win-war-on-ai-generated-fake- images [https://perma.cc/9A8L-LFTQ]. 11. Larry Hardesty, Explained: Neural Networks, MIT NEWS (Apr. 14, 2017), http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414 [https://perma.cc/VTA6- 4Z2D]. 12. Natalie Wolchover, New Theory Cracks Open the Black Box of Deep Neural Networks, WIRED (Oct. 8, 2017), https://www.wired.com/story/new-theory-deep-learning [https://perma.cc/UEL5-69ND]. 13. Will Knight, Meet the Fake Celebrities Dreamed Up By AI, MIT TECH. REV. (Oct. 31, 2017), https://www.technologyreview.com/the-download/609290/meet-the-fake-celebrities-dreamed- up-by-ai [https://perma.cc/D3A3-JFY4]. 14. Will Knight, Real or Fake? AI is Making it Very Hard to Know, MIT TECH. REV. (May 1, 2017), https://www.technologyreview.com/s/604270/real-or-fake-ai-is-making-it-very-hard-to-know [https://perma.cc/3MQN-A4VH].
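The training dynamic just described, in which a network begins with randomly set numerical parameters and is refined by repeated exposure to examples, can be made concrete with a short script. The sketch below is purely illustrative and is not drawn from the article or from any deep-fake system; the two-feature toy data, the single-layer model, and the learning rate are hypothetical stand-ins for the far larger networks and datasets the authors discuss.

import numpy as np

rng = np.random.default_rng(0)

# Toy "training examples": two clusters standing in for feature vectors of two categories.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)), rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Tabula rasa: the model's numerical parameters start out random.
w = rng.normal(size=2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each pass over the examples nudges the parameters toward better predictions,
# much as the article describes experience refining the network's nodes.
for epoch in range(200):
    p = sigmoid(X @ w + b)            # current predictions on the training examples
    grad_w = X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy after refinement: {accuracy:.2f}")

Real deep-fake models apply the same refine-by-example loop to networks with millions of parameters trained on images, video frames, or audio.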



To take a prominent example, researchers at the University of Washington have created a neural network tool that alters videos so speakers say something different from what they originally said.15 They demonstrated the technology with a video of former President Barack Obama (for whom plentiful video footage was available to train the network) that made it appear that he said things that he had not.16

By itself, the emergence of machine learning through neural network methods would portend a significant increase in the capacity to create false images, videos, and audio. But the story does not end there. Enter “generative adversarial networks,” otherwise known as GANs. The GAN approach, invented by Google researcher Ian Goodfellow, brings two neural networks to bear simultaneously.17 One network, known as the generator, draws on a dataset to produce a sample that mimics the dataset.18 The other network, the discriminator, assesses the degree to which the generator succeeded.19 In an iterative fashion, the assessments from the discriminator inform the assessments of the generator. The result far exceeds the speed, scale, and nuance of what human reviewers could achieve.20 Growing sophistication of the GAN approach is sure to lead to the production of increasingly convincing deep fakes.21

15. SUPASORN SUWAJANAKORN ET AL., SYNTHESIZING OBAMA: LEARNING LIP SYNC FROM AUDIO, 36 ACM TRANSACTIONS ON GRAPHICS, no. 4, art. 95 (July 2017), http://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf [https://perma.cc/7DCY-XK58]; James Vincent, New AI Research Makes It Easier to Create Fake Footage of Someone Speaking, VERGE (July 12, 2017), https://www.theverge.com/2017/7/12/15957844/ai-fake-video-audio-speech-obama [https://perma.cc/3SKP-EKGT]. 16. Charles Q. Choi, AI Creates Fake Obama, IEEE SPECTRUM (July 12, 2017), https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-creates-fake-obama [https://perma.cc/M6GP-TNZ4]; see also Joon Son Chung et al., You Said That? (July 18, 2017) (British Machine Vision conference paper), https://arxiv.org/abs/1705.02966 [https://perma.cc/6NAH-MAYL]. 17. See Ian J. Goodfellow et al., Generative Adversarial Nets (June 10, 2014) (Neural Information Processing Systems conference paper), https://arxiv.org/abs/1406.2661 [https://perma.cc/97SH-H7DD] (introducing the GAN approach); see also Tero Karras et al., Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018, at 1-2 (Apr. 2018) (conference paper), http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of/karras2018iclr-paper.pdf [https://perma.cc/RSK2-NBAE] (explaining neural networks in the GAN approach). 18. Karras, supra note 17, at 1. 19. Id. 20. Id. at 2. 21. Consider research conducted at Nvidia. Karras, supra note 17, at 2 (explaining a novel approach that begins training cycles with low-resolution images and gradually shifts to higher-resolution images, producing better and much quicker results). The New York Times recently profiled the Nvidia team’s work. See Cade Metz & Keith Collins, How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos, N.Y. TIMES (Jan. 2, 2018), https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html [https://perma.cc/6DLQ-RDWD]. For further illustrations of the GAN approach, see Martin Arjovsky et al., Wasserstein GAN (Dec. 6, 2017) (unpublished manuscript) (on file with California Law Review); Chris Donahue et al., Semantically Decomposing the Latent Spaces of Generative Adversarial Networks, ICLR 2018 (Feb. 22, 2018) (conference paper) (on file with California Law Review), https://github.com/chrisdonahue/sdgan; Phillip Isola et al., Image-to-Image Translation with Conditional Adversarial Nets (Nov. 26, 2018) (unpublished manuscript) (on file with California Law Review); Alec Radford et al., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (Jan. 7, 2016) (unpublished manuscript) (on file with California Law Review).
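The generator-discriminator loop can likewise be sketched in code. The example below is a minimal illustration in PyTorch, not the authors' code or any production deep-fake pipeline; the one-dimensional "real" dataset (samples from a normal distribution), the tiny network sizes, and the training length are assumptions made for brevity. One network produces samples meant to mimic the dataset, the other scores how genuine they look, and each round of scores feeds back into the generator.

import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator tries to mimic: samples from a normal distribution N(4, 1.25).
def real_batch(n=128):
    return 4.0 + 1.25 * torch.randn(n, 1)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(128, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(128, 1)) + \
             bce(discriminator(fake), torch.zeros(128, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator: the iterative feedback
    #    in which the discriminator's assessments inform the generator.
    fake = generator(torch.randn(128, 8))
    g_loss = bce(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
print(f"generated mean ~ {samples.mean().item():.2f}, std ~ {samples.std().item():.2f} "
      f"(real data: mean 4.00, std 1.25)")

Image-scale GANs of the kind cited in notes 17 and 21 apply this same adversarial loop to convolutional networks and high-resolution photographs.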



The same is true with respect to generating convincing audio fakes. In the past, the primary method of generating audio entailed the creation of a large database of sound fragments from a source, which would then be combined and reordered to generate simulated speech. New approaches promise greater sophistication, including Google DeepMind’s “Wavenet” model,22 Baidu’s DeepVoice,23 and GAN models.24 Startup Lyrebird has posted short audio clips simulating Barack Obama, Donald Trump, and Hillary Clinton discussing its technology with admiration.25
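For readers unfamiliar with the older fragment-database method described above, the following toy script illustrates the idea: collect sound snippets attributed to a source, then reorder and stitch them into an utterance the source never produced. It is a hedged sketch only; the "fragments" here are synthetic tones rather than recorded speech, and real concatenative systems index thousands of phoneme-level units and smooth the joins between them.

import numpy as np
import wave

RATE = 16000

def fragment(freq, dur=0.3):
    """Stand-in for a recorded snippet harvested from the source speaker."""
    t = np.linspace(0, dur, int(RATE * dur), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

# "Database" of fragments keyed by unit (here, one whole word per fragment).
database = {
    "I": fragment(440),
    "never": fragment(220),
    "said": fragment(330),
    "that": fragment(550),
}

# Reorder and concatenate units to fabricate an utterance the source never spoke.
utterance = ["I", "never", "said", "that"]
signal = np.concatenate([database[word] for word in utterance])

# Write the stitched result to a playable mono 16-bit WAV file.
with wave.open("fabricated.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())

print("wrote fabricated.wav from reordered fragments")

Neural approaches such as WaveNet instead model the waveform directly, which is part of why the article expects the newer generation of audio fakes to be more convincing and harder to debunk.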

In comparison to private and academic efforts to develop deep-fake technology, less is currently known about governmental research.26 Given the possible utility of deep-fake techniques for various government purposes— including the need to defend against hostile uses—it is a safe bet that state actors

