We live at a moment when the concept of truth is increasingly difficult to pin down. Thousands of people regularly fall for pranks pulled off by Twitter accounts impersonating celebrities or news organizations, and faked screenshots of social media posts can be produced with only a little know-how.
Even more ominous are deepfakes: videos in which machine learning is used to create something both uncannily realistic and uncannily false. Deepfakes are often used to create bizarre juxtapositions, like the video of Bill Hader morphing into Tom Cruise that went viral earlier this week. That video’s creators, Ctrl Shift Face, have embraced the bizarre side of this technology: they’re also behind videos that, say, insert Bill Murray into Full Metal Jacket.
But what happens when someone uses this technology for malice rather than mischief? A convincing enough deepfake could upset an election, destroy a company from within, or cause a major diplomatic incident.
It’s something that technologist Hao Li is deeply concerned about. Among other things, Li developed the technology that allowed new Furious 7 scenes featuring Paul Walker’s character to be shot after Walker’s death; he works regularly with the film industry, scanning actors’ faces to ease reshoots and digital effects.
In a conversation with MIT Technology Review, Li raised his concerns about the harmful potential of deepfakes. Before the article even begins, readers are treated to a deepfake in which author Will Knight is transformed into Elon Musk.
The article also explores some of his work on detecting manipulated media, including this head-spinning detail:
Earlier this year, Matt Turek, DARPA program manager for MediFor, asked Li to demonstrate his fakes to the MediFor researchers. This led to a collaboration with Hany Farid, a professor at UC Berkeley and one of the world’s foremost authorities on digital forensics. The pair are now engaged in a digital game of cat-and-mouse, with Li developing deepfakes for Farid to catch, and then refining them to evade detection.
Li tells Knight that, regarding deepfakes, “We’re sitting in front of a problem.” The same technology that can delight us might also make our world a more sinister place. It’s one of the paradoxes of technological advancement — and it shows no signs of slowing down.