In a media environment saturated with fake news, “synthetic media” technology has disturbing implications. 

Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”

Matt Turek, a program manager at the Defense Advanced Research Projects Agency, predicts that, when it comes to images and video, we will arrive at a new, lower “trust point.” “I’ve heard people talk about how we might land at a ‘zero trust’ model, where by default you believe nothing. That could be a difficult thing to recover from,” he says. 

Read the full story, “In the Age of A.I., Is Seeing Still Believing?” here. 
