
Frankly ... Posthuman
Deepfakes Technology and Our Identity Crisis

Did it really happen? Individuals can lose their identity to deepfakes faster than they think. | Photo (detail): Christian Gertenbach © Unsplash

Anyone who posts pictures of themselves on the Internet can easily become the target of deepfakes. Computers are already able to lift faces from photos and insert them into fake news or fake porn videos. Liwen Qin worries about the new possibilities of this technology.

By Liwen Qin

Have you watched the 2016 film Rogue One: A Star Wars Story? If you have, do you remember the characters Grand Moff Tarkin and the young Princess Leia? Their performances are convincing, but they are not “real”. Filmmakers used face-swapping and video-synthesis technology to recreate models of the original actors’ faces and superimposed them onto stand-in actors. Hardly anyone could spot the trick. Today this technology is posing challenges that human society has yet to solve.

Computers are always learning to fake better

The technology that uses artificial intelligence to generate images and voices is termed “deepfakes”, a combination of “deep learning” and “fakes”. In recent years, this magic trick has become widely used in the entertainment industry and in pornography. Now it is even more advanced: starting from a person’s facial gestures, images or videos, the artificial intelligence can synthesize a completely new model of that person. It can even build a facial model from a single image, though the result is not yet very convincing. Deepfake software is developed on open-source platforms and is being refined and improved every day.

Fake nude images of women and revenge porn

People with a lot of public visual data, such as politicians and celebrities, are the easiest targets of deepfake technology. Their faces can be inserted into fake news or fake pornographic videos. And as more and more people post selfies and short videos of themselves online, the technology can be used with malicious intent against them too. Such cases are popping up with increasing regularity. The US state of Virginia recently had to expand its revenge-porn law to include deepfakes, to stop resentful people from using them to hurt their former spouses or lovers. An app called DeepNude was also recently posted online to create fake nude images of women.

Does this presidential candidate really support the death penalty?

It is worrisome enough that individuals can lose their identity to deepfakes. What is even more worrisome is the possibility of deepfakes being employed to manufacture fake news. Used deliberately to settle political scores, they could shake the very foundation of democracy and the mechanisms by which societies form consensus. Has this presidential candidate really given a speech in private supporting the death penalty? No one can prove whether it happened or not.

Unfortunately, there is still no technology that can reliably detect deepfakes, because the artificial intelligence behind them learns very fast. As soon as fake videos are found to look unnatural in some way, for instance because a face does not blink often enough, an algorithm is employed to improve that very feature.

While scientists and lawmakers struggle to find preventive measures against the abuse of deepfakes, individuals can help by raising awareness of this technology. “Seeing is believing” is no longer a reliable guideline. We need to develop more sophisticated methods of recognizing the truth.
 

“Frankly …”

On an alternating basis each week, our “Frankly …” column series is written by Liwen Qin, Maximilian Buddenbohm, Dominic Otiang’a and Gerasimos Bekas. In “Frankly … posthuman”, Liwen Qin takes a look at technical advances and how they affect our lives and our society: in the car, in the office, and at the supermarket checkout.
