Deepfake

Digital technologies are proving that we can’t completely rely on “our own eyes” and “our own ears”. With the development of neural networks, the first applications for “mapping” one speaker onto another have appeared, in both video and audio. In practice this means, for example, that former president Obama can appear to say things that were actually said by Trump, Zeman or even al-Baghdadi. You see a video of someone who looks and sounds like Obama, but he is saying things he would probably never say in real life. This is why the term “deepfake” has come to be used for such manipulation of reality. 

Simply put, a deepfake is a digital object in which a person is inserted into an existing image or modified into a form that doesn’t correspond to what actually happened. What does this mean? Imagine a video of the Governor of the Czech National Bank in which he announces a cut in interest rates, a statement the currency and stock markets will react to. If you modified the video so that he announces a rate increase instead, and got it into wide “circulation”, you could profit enormously by buying and selling accordingly. People are used to the idea that when someone talks like John, looks like John and moves like John, it must be John. That may be true, but it doesn’t have to be. The problem with deepfakes is that we can suddenly no longer rely on our own eyes or our common sense.  

Deepfakes are made with the help of neural networks, which are able to analyze an image and then manipulate it accordingly. You can see an example of how this works at the This Person Does Not Exist project: a generative network trained on photographs of many real people synthesizes a brand-new face each time, a face belonging to a person who has never existed.  
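To make the idea concrete, here is a minimal sketch of a generator network in PyTorch. This is not the actual model behind This Person Does Not Exist (that project uses a much larger architecture called StyleGAN); the class name, layer sizes and image resolution below are illustrative assumptions. The point is the principle: once trained on real photos, the network turns any random noise vector into a new image, so every fresh noise vector yields a face no camera ever captured.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy GAN generator: maps a random latent vector to a 64x64 RGB image.
    Illustrative only -- real face generators such as StyleGAN are far larger."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3),
            nn.Tanh(),  # squashes pixel values into [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Reshape the flat output into an image tensor (batch, channels, H, W).
        return self.net(z).view(-1, 3, 64, 64)

generator = TinyGenerator()
z = torch.randn(1, 128)    # a new random "identity"
fake_face = generator(z)   # after training, a face that never existed
print(fake_face.shape)     # torch.Size([1, 3, 64, 64])
```

In a real system, training pits this generator against a second network (a discriminator) that tries to tell real photos from generated ones, which is what gradually forces the generated faces to look authentic.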

The most common ways of misusing deepfakes are the creation of hoaxes and manipulative news reports, but also personal revenge: it’s relatively easy to use a neural network to take the body of an adult-film actress or actor and replace the head with that of someone you don’t like, or with a celebrity who would never act in such a film. All of these examples show that deepfakes are a method of manipulating reality. They can pass lies off as truth, and the manipulation is fairly difficult to detect.  

At present, this is one of the strongest and most poorly identifiable manipulative techniques in the online environment. Adobe, for example, has claimed that for its algorithm to master someone’s voice, it needs only about a 20-minute recording. In other words, if a person has ever been recorded speaking for twenty minutes in total (it doesn’t have to be one continuous speech), it’s possible to put just about anything into that person’s mouth. A similar tool is Descript, which you can try out for free.  
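Neither Adobe’s demonstration nor Descript exposes a programming interface in the form below; the sketch only illustrates the workflow such tools automate. `VoiceCloner`, `fit_speaker` and `synthesize` are hypothetical names invented for this example, not a real library.

```python
# Hypothetical interface -- no such library exists; this only shows the workflow.

class VoiceCloner:
    """Sketch of a voice-cloning pipeline of the kind Adobe demonstrated
    and Descript offers (assumed structure, not a real API)."""

    def fit_speaker(self, recordings: list[str]) -> None:
        # In a real system: extract a speaker "fingerprint" (embedding)
        # from roughly 20 minutes of audio, which need not be one
        # continuous speech.
        self.recordings = recordings

    def synthesize(self, text: str) -> bytes:
        # In a real system: run a text-to-speech model conditioned on the
        # learned voice and return the waveform.
        return b""  # placeholder

cloner = VoiceCloner()
cloner.fit_speaker(["interview.wav", "lecture.wav", "podcast.wav"])  # ~20 min total
audio = cloner.synthesize("A sentence the speaker never actually said.")
```

The unsettling part is the asymmetry: collecting twenty minutes of a public figure’s speech is trivial, while proving that a resulting clip is synthetic is not.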

Completely false photographs of people can also be created (examples are also available). In addition to these audiovisual tools, there are bots that operate on social networks. They can be used to buy likes on Instagram or fans on Facebook. Such algorithmic accounts infiltrate friend networks in a similar way, and they can take part in spreading hard-to-recognize advertisements or political marketing.  


Have a look at this deepfake GIF of actress Amy Adams and Nicolas Cage. Amy Cage? Source: Wikiwand.com - Deepfake. 

This is why we talk today about a certain renaissance of classical media, which digitally sign and encode their videos to guarantee quality and authenticity against manipulators. Offering a simple recipe for revealing a deepfake is extremely difficult; nonetheless, here are a few points to think about (a small sketch of one automated check follows the list):  

  • Try creating a deepfake yourself – you’ll learn how it’s done and perhaps develop a better eye for what looks suspicious.  
  • Have you seen this speech before? If the scene looks familiar but you can’t place the speech, that is grounds for suspicion.  
  • Be more skeptical of information on social media – social networks are a key channel through which such material spreads.  
  • Try to verify the information on official websites. Politicians, for example, have their own websites, and if you find nothing there about an important speech, there is some risk that it isn’t authentic.  
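As one example of an automated check, the sketch below uses the OpenCV library to measure how often a face detector “loses” the face between consecutive video frames. The underlying heuristic is an assumption, not a proven detector: many deepfakes show flicker around the synthesized face region, so unstable detections can be a weak warning sign. The filename is made up, and real forensic tools are far more sophisticated.

```python
import cv2

# Standard frontal-face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_flicker_rate(video_path: str) -> float:
    """Fraction of frames where the number of detected faces changes
    compared with the previous frame."""
    cap = cv2.VideoCapture(video_path)
    changes, frames, prev_count = 0, 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        count = len(detector.detectMultiScale(gray, 1.1, 5))
        if prev_count is not None and count != prev_count:
            changes += 1
        prev_count = count
        frames += 1
    cap.release()
    return changes / frames if frames else 0.0

# An unusually high flicker rate on an interview-style video
# may warrant a closer look; it proves nothing by itself.
print(face_flicker_rate("suspect_clip.mp4"))
```

A result like this should only ever prompt further verification (for example against the official sources mentioned above), never serve as proof on its own.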
