#6 - Deepfakes
Fake news, filter bubbles and where to next for critical thinking
James Pember · Sep 10, 2019
Let me paint you a picture. A video of Obama endorsing Trump for the 2020 presidency starts going viral on social media. Within minutes it’s shared all over the globe, racking up thousands of retweets and likes. It’s everywhere. Even your mum is talking about it. Crazy, right?
Well, back in 2018, a video of Obama did go viral - one in which Obama called Trump a “dipshit”. It turned out it was a deepfake created by Jordan Peele.
Deepfake refers to a technique that, put simply, can be used to manipulate media with the goal of spreading a mistruth. Peele was using the technology to warn against it - to raise awareness of this burgeoning technology that could create even more division in these already shaky fake-news times.
In another case of uncanny deepfakery, someone online created a video a few months ago by taking an existing clip of Bill Hader on David Letterman’s show, impersonating Tom Cruise and Seth Rogen. The deepfake manipulated the original footage, morphing Hader into Cruise and Rogen as he slipped into each impersonation.
Imagine where this technology could be in 2, 3 or 5 years.
Now, yes, it’s fairly unlikely that The New York Times or CNN will fall for these kinds of videos. They (and many others like them) build their reputations on being skeptical, on fact-checking, on being critical of the source and just generally doing their research.
However, as we’ve all witnessed over the past few years, it doesn’t have to make the New York Times for it to be adopted as truth, at least by large portions of the population, especially on social media.
If the “Fake News” era has taught us anything - it’s that far more people than we think really do believe what they read on the internet.
Videos of Obama calling Trump a dipshit, or Bill Hader turning into Tom Cruise may be comical, but imagine if these tools were to be used “against us”, extra ammunition in the war of disinformation.
In a time when many are worried that our democracy is wobbling (see Cambridge Analytica/2016) and there is more political polarisation than ever - this does seem a pressing problem.
Unfortunately for us, these deepfakes are increasingly hard to spot. As the technology continues to improve, they present a serious challenge for our society, our media organisations and the technology platforms that enable content to spread like wildfire across the globe.
I’m clearly not going out on a limb here, but the opportunity surrounding technology for verifying media content and fighting against the spread of deepfakes is surely going to be a big one.
Just imagine what the world’s biggest media companies would pay for insurance against publishing something that turned out to be fake. Imagine if the technology platforms could solve their reputation problem by blocking, or at least warning against, the spread of clearly tampered-with media.
It’s clearly valuable to a company like Facebook too, but more on that below.
The News Provenance Project
The R&D team inside The New York Times recently launched an initiative they call the News Provenance Project. Its aim? To fight disinformation and the spread of fake news across the world’s media networks.
Their first concept is a verification layer for photos. This article has so far focused very much on video, but of course photos are much, much easier to manipulate.
The project will focus on using blockchain (yes, I know) to keep track of the flow of an image across the internet. The basic idea: the better a user understands the provenance (a fancy word for history) of an image, the easier it is for them to judge its validity.
“By experimenting with publishing photos on a blockchain, we might in theory provide audiences with a way to determine the source of a photo, or whether it had been edited after it was published.” - Project lead Sasha Koren, writing on Medium
Now, exactly how you present that much metadata to a user in a UX-friendly manner is another question - but I think the core product hypothesis is sound.
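To make the provenance idea concrete, here’s a minimal sketch of what “record at publish time, verify later” could look like. This is my own illustration, not the News Provenance Project’s actual design: a plain dictionary stands in for the blockchain, and the function names are hypothetical. The point it demonstrates is that a content hash pins down an exact image, so any edit after publication breaks the match.

```python
import hashlib
import time

# Hypothetical sketch of a provenance ledger. A real system would
# distribute this record (e.g. on a blockchain) so no single party
# could rewrite history; here a dict stands in for that shared store.

def fingerprint(image_bytes: bytes) -> str:
    """Content hash of the image; any pixel-level edit changes it."""
    return hashlib.sha256(image_bytes).hexdigest()

ledger = {}  # fingerprint -> provenance record

def publish(image_bytes: bytes, source: str) -> str:
    """Record who published this exact image, and when."""
    fp = fingerprint(image_bytes)
    ledger[fp] = {"source": source, "published_at": time.time()}
    return fp

def verify(image_bytes: bytes):
    """Return the provenance record, or None if the image is unknown
    (never published, or altered since publication)."""
    return ledger.get(fingerprint(image_bytes))

original = b"...raw image bytes..."
publish(original, source="nytimes.com")

print(verify(original) is not None)            # known image: record found
print(verify(original + b"edit") is None)      # altered image: no record
```

Note the obvious limitation, which the UX question above hints at: a hash only tells you an image was or wasn’t registered - surfacing that signal in a way sceptical readers will actually trust is the hard part.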
Side note: Back in 2017, Spotify acquired a blockchain startup called Mediachain, which aimed to solve the same problem. Attaching rights and provenance to media content like music, to fight plagiarism or unauthorised sampling, makes total sense for Spotify.
However, whilst the aim is noble, there are of course some gaps in the product conceptually - and many are skeptical.
As Joshua Benton notes, an important thing to remember is that those consuming media we might consider “fake news” are often not open to questioning its validity. In other words, we can do everything possible to surface media provenance, but it will amount to nothing if people ignore it anyway.
In addition, as many have noted, fighting the rise of deepfake technology with more technology may be a long-winded route to the same conclusion: ultimately, the challenges deepfakes present have more to do with media literacy, critical thinking and questioning what you see on the internet than with anything else.
Facebook and Deepfake Detection
Facebook, too, has a pretty big incentive to try to solve this problem. Apart from the fact that Zuck himself has been targeted by deepfakes, the company is constantly criticised for driving political and social division, and its role in our democracy has landed it in front of the US Congress.
It was reported the other day that Facebook and Microsoft are investing $10 million in a Deepfake Detection Challenge. They will release a public dataset and work with the community to build tools for identifying tampered, fake content.
The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.
It’s not entirely clear how the challenge will work yet, but Facebook claim it will include leaderboards as well as grants and awards to spur on participating engineers.
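Since the challenge mechanics aren’t public yet, here’s only a toy illustration of what “detecting when AI has altered a video” means in practice. Everything here is assumed: `fake_probability` is a stand-in for a trained per-frame classifier (the hard part a real entry would build), while the aggregation step shows the simpler idea that a video gets flagged when enough of its individual frames look suspicious.

```python
# Toy sketch of video-level deepfake scoring. The per-frame model is
# a hypothetical stub; only the aggregation logic is being illustrated.

def fake_probability(frame) -> float:
    # Stand-in for a trained classifier scoring each frame in [0, 1].
    return frame["model_score"]

def is_deepfake(frames, frame_threshold=0.8, video_threshold=0.3) -> bool:
    """Flag the video if the share of suspicious frames is high enough.
    Manipulated clips often contain only some doctored frames, so we
    count frames over a threshold rather than averaging everything."""
    suspicious = sum(1 for f in frames if fake_probability(f) > frame_threshold)
    return suspicious / len(frames) > video_threshold

real_clip = [{"model_score": s} for s in (0.10, 0.20, 0.15, 0.90, 0.10)]
faked_clip = [{"model_score": s} for s in (0.95, 0.90, 0.20, 0.85, 0.90)]

print(is_deepfake(real_clip))   # False: 1 of 5 frames suspicious
print(is_deepfake(faked_clip))  # True: 4 of 5 frames suspicious
```

The thresholds are arbitrary here; tuning that trade-off between false alarms and missed fakes is presumably exactly what a leaderboard would measure.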
The “long tail” of Deepfakes
What happens when creating a deepfake is as easy as creating an Instagram story?
Zao went viral on social media a few weeks ago. It’s a Chinese deepfake app that lets you replace the faces in video clips with your own - in other words, you can put yourself in almost any scene from any movie.
Now, the geopolitical implications of the Obamas and the Trumps being “deepfaked” are certainly big, but what happens when everyone is “deepfaking” each other? Maybe it will even become a verb, like “Snapped” or “Tweeted”.
Schoolyard bullying and revenge porn are both very real potential outcomes of this sort of technology. And it’s already happening.
Deepfake technology isn’t going anywhere, and the opportunity to help fight it is right there in front of us. It won’t be an easy problem to solve, and it probably won’t be solved by one team, but if there was ever a problem to go after for high impact and high reward, this may be it.
Until next time,