New deepfake video effort discovered

David Strom 29 Jun 2022

The mayor-to-mayor video calls are a warning to us all not to accept things without some proper vetting.

Since I wrote about the creation and weaponization of deepfake videos back in October 2020, the situation has worsened. Earlier this month, several European mayors received video calls from someone claiming to be Vitali Klitschko, the mayor of Kyiv. These calls turned out to be impersonations staged by tricksters. The mayor of Berlin, Franziska Giffey, was one recipient; she told reporters that the person on her call looked and sounded like Klitschko, but Klitschko himself was not actually participating. When Berlin authorities checked with their ambassador, they were told that Klitschko had not placed the call. Reporters have since uncovered fake calls to other mayors around Europe.

Were these calls deepfakes? 

Hard to say for sure. One German news report claims the images were merely copied, not computer-generated. But no matter what techniques were used, deepfakes are clearly getting better and more pernicious. The FBI recently issued this warning, saying that deepfakes are being used in applications and interviews for remote work positions.

It isn’t clear why someone would go to all the trouble of staging these fake calls, other than to waste a lot of time. Our blog post from 2020 mentions other motives, such as criminal intent or revenge, but neither seems to fit here. And sadly, such fakes can be relatively easy to pull off, especially on a video call with someone who doesn’t know you all that well or isn’t paying enough attention to properly vet the call.

In late 2021, Shelly Palmer cataloged the various algorithms that can be used to produce these videos, and this more recent list adds a few others. This how-to video walks through the steps involved, not that you should try it yourself. Hany Farid, a professor at the University of California, Berkeley, has developed a tool that can help determine whether a video of Zelensky is real or fake.
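To make the detection side a bit more concrete, here is a minimal sketch of how a frame-level screening pipeline might be wired up. It is only an illustration: the score_frame() classifier is a hypothetical placeholder, not Farid's tool or any specific product, and the sampling rate and threshold are arbitrary.

```python
# Minimal sketch of frame-level deepfake screening, assuming some pretrained
# per-frame classifier exists. score_frame() is a hypothetical placeholder.
import cv2  # OpenCV, used here only to decode video frames


def score_frame(frame) -> float:
    """Hypothetical classifier: return a 0..1 'likely fake' score for one frame."""
    raise NotImplementedError("Plug in a real detector model here.")


def screen_video(path: str, sample_every: int = 30) -> float:
    """Average the fake-likelihood over sampled frames of a video file."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0


# Usage idea: flag anything above an arbitrary threshold for human review.
# if screen_video("suspect_call_recording.mp4") > 0.7:
#     print("Send this clip to a human analyst before trusting it.")
```

The point of the sketch is the workflow, not the model: automated scoring is only a triage step, and anything borderline still needs a human to vet it.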

One blog post that I read on the ethics of “synthetic media” (the term the people who write deepfake algorithms use for their work product to make it sound more legitimate) compared the deepfake world with the introduction of the Kodak camera some 130 years ago. Back then, folks worried that newbie photographers would manipulate images and that photos could no longer be trusted to show the literal, “real” state of the world. Those doomsday scenarios didn’t materialize, and, as you know, we now all walk around with digital cameras that have multiple lenses and built-in effects filters that were previously found only on higher-end pro gear.

As I mentioned back in 2020, social media isn’t helping the situation: what used to take days or weeks to reach viral notice now gets amplified within hours or even minutes. Sadly, the social platforms haven’t taken any leadership position or gotten more proactive about detecting and removing deepfakes.

The mayor-to-mayor video calls are a warning to us all not to accept things without proper vetting, especially when it comes to well-known (or semi-well-known) individuals. Distrust and verify before taking any action.

Michal Pechoucek, Chief Technology Officer at Avast, says the following on the topic: "I believe the future of preventing deepfakes will be a future where videos and other content are verified and certified on the internet. This can work as people will have digital wallets with data stored on them, verifying their identity. When uploading video content showing people and faces, it will only be trustworthy if verified. In an ideal world, platforms such as YouTube, Instagram, and TikTok will allow, or even require the verification of images and videos, so deepfakes will be a thing of the past. The same applies to video call applications like Microsoft Teams and Zoom, which should provide means of verification of people in live calls.

"That said, the issue of deepfakes will certainly get worse first before it gets better; while there are indications that the Klitschko video might actually not have been a deepfake, top politicians getting tricked for half an hour with a faked video shows that future deepfake video call attempts actually may be successful. This prominent example is scary, yet might also have an educational effect for many, to be careful with any video content, as even if it seems live, it may be manipulated."
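To make the certification idea Pechoucek describes a bit more concrete, here is a minimal sketch of signing and verifying a content hash with a creator's key. The key handling is only an illustration of the concept: the "identity wallet" tie-in is an assumption, and nothing here reflects any platform's actual API.

```python
# Minimal sketch of content certification: the creator signs a hash of the
# content with a key tied to their verified identity, and a platform checks
# the signature before labeling the upload as authentic.
# Assumes the 'cryptography' package is installed; the wallet linkage is hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the raw bytes of a video file; in practice you would hash the file itself.
video_bytes = b"raw video bytes would go here"
digest = hashlib.sha256(video_bytes).digest()

# Creator side: sign the digest with a key from their (hypothetical) identity wallet.
creator_key = Ed25519PrivateKey.generate()
signature = creator_key.sign(digest)

# Platform side: verify the signature against the creator's public key before
# marking the clip as verified.
try:
    creator_key.public_key().verify(signature, digest)
    print("Signature matches: content is unchanged since the creator signed it.")
except InvalidSignature:
    print("Signature check failed: treat this clip as unverified.")
```

A scheme like this only proves that the signed bytes haven't changed and that the signer holds the key; binding that key to a real, verified identity is the harder problem the platforms would have to solve.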
