Deepfakes - what are they and why should we care?

You may or may not have heard about deepfakes already. They are fake audio or video files showing situations or conversations that never actually happened, something like a photoshopped picture, but potentially much more dangerous. But how can a fake video be dangerous, and why should we care?

What exactly is a deepfake?

Deepfakes are files manipulated using artificial intelligence and machine learning, specifically deep learning – which is where the name deepfake comes from. These technologies are used to replace one person in a video with another, while the image and sound are processed to look almost identical to the original. The result is highly realistic footage that can be very hard to distinguish from a genuine recording.

Deepfakes can be created in several different ways, but the typical approach starts with a base video, and the software is then fed many photos of the person who is to replace the one currently in the footage. By seeing the face from many angles and perspectives, the program gradually learns to simulate it realistically, reconstructing facial expressions and movements to match the new person. There are programs available for this purpose, as well as companies that will create a deepfake to order. In theory anyone could try making one, but with today's technology a believable deepfake still requires considerable skill and a powerful computer.
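
For readers curious about the mechanics, here is a minimal, heavily simplified sketch of the idea behind classic face-swap deepfakes: a shared encoder learns features common to both faces, a separate decoder per person reconstructs each face, and the "swap" comes from encoding person A but decoding with person B's decoder. This is an illustrative assumption written in PyTorch with toy layer sizes and random data, not any of the actual programs mentioned above.

```python
# Toy sketch of the shared-encoder / two-decoder face-swap idea.
# All sizes, names, and data here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512),
            nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64),
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()    # shared: learns features common to both faces
decoder_a = Decoder()  # learns to reconstruct person A
decoder_b = Decoder()  # learns to reconstruct person B

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

# Stand-ins for batches of aligned 64x64 face crops of each person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training uses thousands of images and steps
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a frame of person A, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Real tools use convolutional networks, careful face detection and alignment, and far more data and training time than this sketch, which is part of why skill and powerful hardware still matter.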

Why is this important?

As AI technologies become more and more advanced, so does their potential for misuse. This matters all the more because video recordings are one of the media we have tended to trust more than text or photos. That is partly because video-manipulating technologies are relatively new, so we are not used to doubting videos or verifying their credibility. Photos can be photoshopped and altered, but until recently videos were not susceptible to such realistic manipulation. As humans, we are taught not to believe everything we hear, but to trust our eyes and believe things when we see them. So what happens when the things we see are no longer necessarily true either?

Another issue is that creating deepfakes is currently not, in itself, considered a crime or even wrongdoing. Most deepfakes are created for entertainment, and deepfake software can be freely downloaded from open sources on the internet. And while most people would use it to put themselves into their favorite movie, cast themselves or their best friends as actors or politicians and have a good laugh, the same tools can also be used to spread fake news and manipulate the public.

As these technologies improve and fake videos become even more realistic, which they almost certainly will, they could easily be used to interfere with elections and other political processes, to push fake news and disinformation onto TV channels or social media in order to create tensions, and to support fraud and many other criminal activities. They can also be used to shame and hurt people through fake humiliating videos, slander, even pornography, or whatever other situation someone decides to create.

Online safety and vigilance

With the rise of deepfakes, as well as other technologies that can be used to spread disinformation or fool the public, it’s important to stay vigilant and be careful about the sources we use to educate and inform ourselves. Finding trustworthy media, learning to question things that don’t make sense, and doing additional research can all help keep us safe.

With deepfakes there are currently still signs that help us notice them, such as unrealistic facial expressions, blurred or overly smooth patches, unnatural lighting, and audio that is not perfectly synchronized with the video. But as these technologies improve, it will get harder and harder for our eyes to detect fake videos, which is why it’s crucial to rely on sources of information we trust rather than on random social media posts and videos.
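
As a toy illustration of one of those cues (my own sketch, not a tool mentioned in this article), the snippet below uses OpenCV to measure how sharp each frame of a video is via the variance of the Laplacian. Unusually blurry or over-smoothed stretches are one possible hint of manipulation, though real detectors are far more sophisticated and no single cue is proof of anything.

```python
# Rough per-frame sharpness check; a heuristic illustration only.
import cv2

def frame_sharpness(video_path: str) -> list[float]:
    """Return the variance of the Laplacian for each frame (higher = sharper)."""
    scores = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    return scores

# Usage with a hypothetical file; sudden drops in sharpness can be worth a closer look.
# scores = frame_sharpness("clip.mp4")
# print(sum(scores) / max(1, len(scores)))
```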

Startups and even large companies are already investing in the detection of deepfakes through the digital footprints they leave behind, and more programs will surely be built for this purpose. The further development of AI depends closely on ethical and moral practice, as well as on learning to detect and stop those who try to misuse it. It is up to all of us to steer these technologies in the right direction, use them for good, and educate everyday users to be careful and not believe everything they hear or see online. Protecting truth and freedom of information is something that should concern us all.

Resources