Press
20. 7. 2023

Michal Srnec: Excited about AI? Beware of the perfect deepfake

In what ways and why should people be wary of artificial intelligence (AI)?

Looking at the evolution of cybersecurity, it is clear that the field has changed by leaps and bounds since its early days. New forms of artificial intelligence, however, have evolved right along with it.

Fifteen years ago, companies feared viruses; corporate measures focused on securing networks and systems that were often isolated from the outside world. Back then, targeted attacks were carried out one-to-one – a hacker or group of hackers would “program something” and execute an attack on a specific victim. Ordinary people, and even IT specialists, paid little attention to the topic of cybersecurity.

Since then, technology has evolved, computing power has grown, the internet has spread to virtually every device around us, and many businesses now exist only online. Attacks today are widespread, organised and largely automated.

What does AI abuse look like?

Today, the world of online security and attacks is fundamentally different: AI has appeared on the scene, bringing benefits but also risks in every area. People should therefore guard their sense of “reality”. So what does AI misuse look like in practice? Everyone is probably familiar with classic spam and phishing messages. Now imagine these dangerous but still ordinary emails or text messages becoming comprehensive campaigns that attack every level of human perception, with the sole aim of gaining complete trust. A little scary, perhaps, but so far we are still talking “only” about improvements to existing deception mechanisms.

What is the real threat?

Deepfakes and fake news. A deepfake is fabricated content created using artificial intelligence. It can reproduce the authentic facial expressions, movements or voice nuances of real people, allowing near-perfect manipulation of any multimedia content.

What are the consequences?

Deepfake videos with sensitive content can be used to blackmail a person, damage their reputation or influence public opinion. Deepfake voice clips, meanwhile, can be used to forge voice authorisation and carry out fraudulent online banking transactions. Fabricated multimedia of all kinds is used to spread disinformation and manipulate the entire information space, including news websites and television programmes. Political leaders, candidates and public figures are also targets for manipulation and the dissemination of false information: fake videos often create the illusion that a person has said something they never actually said. In this way, public opinion – and with it electoral decisions and political campaigns – can be influenced.

SOURCE: FORBES