Will we be able to trust text and images in the future? Deepfakes are being used not just for innocent fun, but also to influence voters in the world’s most powerful countries.
At the Norwegian research institute SINTEF, Hollywood star Brad Pitt recently opened a conference on digital security. Well, actually, no, he didn’t.
Researcher Viggo Tellefsen Wivestad transformed himself into Brad Pitt in record time. (Illustration courtesy of the researcher.)
Wivestad began his talk on deepfakes with himself on video, but as Brad Pitt, complete with the star’s characteristic sexy voice: “Deepfake. Scary stuff, right?” And that is precisely the researcher’s message.
“Deepfake will become a growing threat to us as both private individuals and employees, and to society at large. The technology is still in its infancy. Artificial intelligence is opening up unimaginable opportunities, and the fakes are becoming harder and harder to detect.
The word “deep” in deepfake comes from “deep learning”, a field of artificial intelligence. In practice, deepfake means any use of AI to create fake media content.
We have seen many examples of this in the US election campaign. Even those who have used the technology to make deepfake videos themselves have issued warnings.
Very credible hoaxes
“Cloning someone’s voice is easy, as long as you have a recording to use as a basis. The same goes for still images. Deepfake videos are more advanced, but there are already very credible fake videos around,” says Wivestad.
“Microsoft has a technology that produces fake videos based on just a single still image. You decide what the person should say, and the lips and face move in a natural way.
Face swapping is a simple matter
“Everyone” thinks it is fun to use the filters on Snapchat, which can give us rabbit ears and funny faces. However, when the technology is used to swap people’s faces in photos and videos, it can be anything but fun.
“In South Korea, students posting pornographic videos with the faces of fellow students swapped in has become a major problem. In six out of ten cases, the cyberbullying involves minors. Pop stars and actors around the world have been exposed to the same abuse. More and more apps offer face-swap technology. It becomes even more advanced when, simply by writing a descriptive text about a well-known person, you can use artificial intelligence to generate both their voice and moving images of them.
Embraced by criminals
Deepfake and AI are powerful tools in the hands of criminals. Viggo Tellefsen Wivestad tells of a Hong Kong finance worker who was tricked into transferring USD 25 million after a video conference with someone he thought was the company’s CFO. The man was initially unsure, but he was convinced when he recognised several colleagues in the same meeting, who agreed with the CFO’s instruction to execute a fast-track money transfer.
“The problem was that everyone he ‘met’ during the video conference was a deepfake. The scammers had cloned both the faces and the voices of the man’s colleagues. This was a sophisticated operation, but increasingly powerful digital tools are making advanced fraud ever easier to commit. You can get anyone to say anything, at any time.
Damaging society
Wivestad is afraid that deepfakes could weaken trust in society.
“During the US election campaign, Biden’s voice was cloned and used in robocalls in which the president appeared to ask people not to vote. Later, Trump reposted fake photos that appeared to show Taylor Swift fans supporting Trump.
When trust is weakened, it is easy to sow uncertainty. Trump claimed that news photos of the crowd at a Harris rally were generated by AI, despite overwhelming evidence to the contrary.
“We also have what is known as the ‘Liar’s Dividend’ – the fact that it is easier for people to claim that real events are fake when they don’t like what the images depict. Trump’s claim about the crowd at the Harris rally is a famous example.
“Has anyone noticed that Kamala CHEATED at the airport? There was nobody at the plane, and she ‘A.I.’d’ it, and showed a massive ‘crowd’ of so-called followers, BUT THEY DIDN’T EXIST!” Trump declared on Truth Social.
“Insurance companies are seeing an increase in the number of fraudulent claims they receive. This is particularly worrying given that we know that fake news spreads ten times faster than the truth,” says the SINTEF researcher.
Good tips
Wivestad offers some helpful tips for detecting deepfake videos and images – with the caveat that his advice will quickly become outdated.
- Look closely at the details in an image or video. Is there anything that doesn’t seem quite right – the number of fingers, an unusual shape to something?
- What about shadows and reflections? It can be easier to detect fakes by looking at things in the background. Is there any text in the picture, and does it make sense?
- Do the images come from a credible source? Does the image stir up strong feelings, and could someone have malicious intentions in spreading it? (A simple automated first check is sketched below.)
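For technically minded readers, the source-credibility tip lends itself to a quick automated first pass. The sketch below is an editorial illustration, not part of Wivestad’s advice: it uses the Python imaging library Pillow to list an image’s EXIF metadata, since many AI-generated images carry none. Metadata can be stripped or forged, so this is a weak signal at best, and the file name is hypothetical.

```python
# Minimal provenance heuristic, NOT a deepfake detector:
# many AI-generated images are saved without camera EXIF metadata,
# so its absence is a weak warning sign. Metadata can also be forged
# or stripped, so its presence proves nothing either.
from PIL import Image, ExifTags

def quick_exif_check(path: str) -> None:
    """Print any EXIF metadata; flag files that carry none."""
    img = Image.open(path)
    exif = img.getexif()
    if not exif:
        print(f"{path}: no EXIF metadata – treat the source with extra care.")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag ids to readable names
        print(f"{tag}: {value}")

quick_exif_check("suspect_photo.jpg")  # hypothetical file name
```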
“As regards the human voice, we have the advantage that Norwegian is a small language, especially in the case of dialects. It is easier to create deepfakes from Jonas Gahr Støre’s Eastern Norway dialect than from Erna Solberg’s Bergen dialect,” says the SINTEF researcher.
Create codewords
If you have any doubts about whether the person you are talking to in a video conference is real, Wivestad suggests agreeing on a codeword in advance, or asking about something that the fraudsters will probably not know. If you are put under time pressure, or the messages stir up strong emotions, you should think twice and verify who you are really talking to.
“You can still avoid being fooled if you practise critical thinking and keep up to date on media literacy and netiquette. Even if deepfakes become perfect one day, they will still have some weaknesses: just because someone pretends to be someone you know does not mean that the fraudster has in-depth knowledge of your relationship with that person.
“Someone who is looking to defraud you will normally need to initiate the contact themselves, so if you are in any doubt, end the conversation and contact the person via the channels you would normally use. It is also not possible to deepfake reality, so you might want to consider having sensitive conversations face-to-face.
Tech giants are working on the problem
“The general population, companies, institutions and authorities alike need to improve their understanding and awareness of the threats that deepfakes pose. The more we know, the easier they are to detect,” says Viggo Tellefsen Wivestad.
“Companies like Microsoft, Google and OpenAI are working on the problem. However, there is a race between the technology for detecting deepfakes and the techniques for getting around the security barriers. It is not the tools themselves that will define our future, but how we regulate and use them.
“We need institutions and organisations, journalists and researchers that we can trust. For research institutions such as SINTEF, credibility is crucial. I think we are becoming more important than ever,” says the researcher.