Deepfake is threatening trust in society


20/02/2025 SINTEF

Will we be able to trust text and images in the future? Deepfake is being used not just for innocent fun, but also to influence voters in the world’s most powerful countries.

At the Norwegian science institute SINTEF, Hollywood star Brad Pitt recently opened a conference on digital security. Well, actually, no, he didn’t.

Researcher Viggo Tellefsen Wivestad transformed himself into Brad Pitt in record time. Illustration courtesy of the researcher.

Wivestad began his talk on deepfake with himself on video, but as Brad, with his characteristic sexy voice: “Deepfake. Scary stuff, right?” And that is precisely the researcher’s message.

“Deepfake will become a growing threat to us as private individuals, as employees, and to society at large. The technology is still in its infancy. Artificial intelligence is opening up unimaginable opportunities, and deepfakes are becoming harder and harder to detect.

The word “deep” in deepfake comes from “deep learning”, a field of artificial intelligence. In practice, deepfake means any use of AI to create fake media content.

We have seen many examples of this in the US election campaign. Even those who have made deepfake videos themselves have issued warnings.

Very credible hoaxes

“Cloning someone’s voice is easy, as long as you have a recording to use as a basis. The same goes for still images. Deepfake videos are more advanced, but very credible fake videos already exist,” says Wivestad.

“Microsoft has a technology that produces fake videos based on just a single still image. You decide what the person should say, and the lips and face move in a natural way.

Swapping faces is a simple matter

“Everyone” thinks it is fun to use the filters on Snapchat, which can give us rabbit ears or a comical face. However, when the technology is used to swap people’s faces in photos and videos, it can be anything but fun.

“In South Korea, students superimposing fellow students’ faces onto pornographic videos has become a major problem. In six out of ten cases, the cyberbullying involves minors. Pop stars and actors around the world have been targeted in the same way. More and more apps offer face-swap technology. It becomes even more advanced when, simply by writing a descriptive text about a well-known person, you can use artificial intelligence to generate both their voice and moving images of them.

Embraced by criminals

Deepfake and AI are powerful tools in the hands of criminals. Viggo Tellefsen Wivestad tells of a Hong Kong financier who was tricked into transferring USD 25 million after a video conference with someone he thought was the CFO. The man was unsure, but he became convinced when he recognised several colleagues in the same meeting, who agreed with the CFO’s suggestion to execute a fast-track money transfer.

“The problem was that everyone he ‘met’ during the video conference was a deepfake. The scammers had cloned both the faces and the voices of the man’s colleagues. This was a sophisticated operation, but increasingly powerful digital tools are making advanced fraud easier to commit. You can get anyone to say anything, at any time.

Damaging society

Wivestad is afraid deepfake could weaken confidence in society.

“During the US election campaign, Biden’s voice was cloned and used in robocalls where the president asked people not to vote. Later, Trump reposted fake photos apparently showing Taylor Swift fans who were Trump supporters.

When trust is weakened, it is easy to create uncertainty. Trump claimed that news photos of the crowd at a Harris rally were generated by AI, despite overwhelming evidence to the contrary.

“We also have what is known as the ‘liar’s dividend’ – it becomes easier to claim that real events are fake when people don’t like what they depict. One famous example is Trump’s claim that Kamala Harris had exaggerated the size of the crowd at her rally.

“Has anyone noticed that Kamala CHEATED at the airport? There was nobody at the plane, and she ‘A.I.’d’ it, and showed a massive ‘crowd’ of so-called followers, BUT THEY DIDN’T EXIST!” Trump wrote on Truth Social.

“Insurance companies are seeing an increase in the number of fraudulent claims they receive. This is particularly worrying given that we know that fake news spreads ten times faster than the truth,” says the SINTEF researcher.

Good tips

Wivestad offers some helpful tips for detecting deepfake videos and images – with the caveat that his advice will quickly become outdated.

  • Look closely at the details in an image or video. Is there anything that doesn’t seem quite right – the number of fingers, an unusual shape to something?
  • What about shadows and reflections? It can be easier to detect fakes by looking at things in the background. Is there any text in the picture, and does it make sense?
  • Do the images come from a credible source? Does the image stir up strong feelings? Could someone have malicious intentions in using it?

“As regards the human voice, we have the advantage that Norwegian is a small language, especially when it comes to dialects. It is easier to create deepfakes of Jonas Gahr Støre’s Eastern Norway dialect than of Erna Solberg’s Bergen dialect,” says the SINTEF researcher.

Create codewords

If you have any doubts about whether the person you are talking to in a video conference is real, Wivestad suggests having an agreed codeword, or asking about something that the fraudsters will probably not know. If you are put under time pressure or the messages stir up strong emotions, you may want to think twice and check whether the person you are talking to is real.

“You can still avoid being fooled if you maintain critical thinking, media literacy and netiquette. Even if deepfakes become perfect one day, they will still have weaknesses: just because someone pretends to be someone you know does not mean the fraudster has in-depth knowledge of your relationship with that person.

“Someone who is looking to defraud you will normally need to initiate the contact themselves, so if you are in any doubt, end the conversation and contact the person via the channels you would normally use. It is also not possible to deepfake reality, so you might want to consider having sensitive conversations face-to-face.

Tech giants are working on the problem

“The general population, companies, institutions and authorities alike all need to improve their understanding and awareness of the threats that deepfake poses. The more we know, the easier it is to detect,” says Viggo Tellefsen Wivestad.

“Companies like Microsoft, Google and OpenAI are working on the problem. However, there is a race between technology for detecting deepfake and for finding ways around the security barriers. It is not the tools themselves that define our future, but how we regulate and use them.

“We need institutions and organisations, journalists and researchers that we can trust. For research institutions such as SINTEF, credibility is crucial. I think we are becoming more important than ever,” says the researcher.

