CLOUDY podcast | #29 How the EU is Taming AI

The twenty-ninth episode of the CLOUDY podcast explores how the European Union is regulating—and plans to regulate—artificial intelligence. How is the AI landscape changing, why does regulation matter, and what does it mean for everyday users? We discuss what the AI Act is, what is and isn’t ethical, and which regulatory authorities exist. Do we even need regulation in this space? Hear this and more in a conversation between Andrej Kratochvíl and Peter Bakonyi, Senior Data Scientist at Aliter Technologies.

Why has AI regulation become such an important topic?

Artificial intelligence is essentially a black box—we can’t see inside it or fully understand how it works. We know that if we feed it some input, we’ll get some output. But we don’t know what happens in between, and that creates distrust and fear for some people and organizations about where their data goes and what happens to it.

That’s why the EU is preparing legislation—the so-called AI Act.

Are we at risk of losing critical thinking and the ability to distinguish reality?

Yes—and that’s exactly why the proposed regulation pushes for transparency and verifiability. Systems should be able to cite sources or flag that information may not be entirely accurate—AI can hallucinate (i.e., make things up).

Today, AI is the first stop for many people when searching for information. We need to get used to verifying what we read. Sometimes one quick Google search can save us from problems. If users stop fact-checking, they lose vigilance and critical thinking.

What is the main goal of the EU’s AI Act?

The core goal of the AI Act is to protect individuals’ privacy within the EU. It aims to prevent misuse of AI—for example, manipulation, systematic opinion-shaping, surveillance, or the social “scoring” of people. Every technology has two sides; this is about minimizing the risks.

An everyday user can be at risk, especially when entering sensitive or private data into AI tools. Data is a valuable commodity: by combining data points, third parties can build profiles about us and influence us in targeted ways.

Examples include more precise ad targeting, a higher risk of fraud, and more sophisticated phishing techniques.

How does the AI Act categorize AI systems by risk?

Unacceptable risk: e.g., social scoring of citizens, mass biometric surveillance.

High risk: decisions in banking (loans, credit scoring), use in critical infrastructure, law-enforcement and defense deployments—any context with a major impact on people where they are outside the “decision loop.”

Limited risk: e.g., chatbots—if users apply critical thinking and transparency is ensured, the risk is lower.

Minimal risk: explainable models whose behavior can be documented and tested (e.g., with methods like LIME).
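
The four tiers above can be thought of as a simple lookup from use case to risk level. The following is a toy sketch only, using the examples from this article; the names and mapping are illustrative, and real classification under the AI Act is a legal assessment, not code:

```python
# Toy sketch (illustrative only): the AI Act's four risk tiers,
# keyed by the example use cases listed in this article.
RISK_TIERS = {
    "social scoring": "unacceptable",
    "mass biometric surveillance": "unacceptable",
    "credit scoring": "high",
    "critical infrastructure": "high",
    "chatbot": "limited",
    "explainable documented model": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Return the illustrative risk tier for a use case, if listed."""
    return RISK_TIERS.get(use_case, "unclassified")
```

For instance, `risk_tier("credit scoring")` returns `"high"`, matching the banking example above, while an unknown use case falls through to `"unclassified"` rather than a guessed tier.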

An example of what the regulation protects citizens from: an interconnected network of city cameras that not only issues fines but also lowers a "social score." This is exactly the kind of system the AI Act prohibits.

Traffic enforcement AI speed cameras are considered high-risk and must have strict safeguards to prevent misuse.

When using AI, the “human-in-the-loop” rule applies—i.e., people must be able to interact with the AI and switch it off; humans should retain control. A typical example is an autonomous vehicle, where you can disengage the autopilot and drive yourself.
In short: I decide whether I drive or leave it to AI.
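
The human-in-the-loop rule can be sketched as a small control wrapper. This is a hypothetical illustration (all class and method names are invented here, not from any real autopilot API): the AI decides by default, but the human can disengage it at any time and their action then takes precedence.

```python
# Toy sketch (hypothetical names): human-in-the-loop control.
# The AI drives by default; the human can switch it off and take over.

class HumanInTheLoopController:
    def __init__(self, ai_policy):
        self.ai_policy = ai_policy      # callable: observation -> action
        self.human_override = False     # True once the human takes over

    def disengage_ai(self):
        """The human switches the AI off (the 'off switch' the rule requires)."""
        self.human_override = True

    def decide(self, observation, human_action=None):
        # Once the human has taken over, their action always wins.
        if self.human_override:
            return human_action
        return self.ai_policy(observation)
```

In the autonomous-vehicle example: while the autopilot is engaged, `decide()` returns the AI's action; after `disengage_ai()`, the driver's own input is used instead.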

What differences in AI regulation can we see around the world?

The key regions here are China, Europe, and the United States.

China: AI is regulated by the state; only what a state commission deems appropriate is deployed, in the interest of the state.

United States: AI is more often used to drive innovation—so it’s more business-oriented. Regulation and ethical frameworks exist but are enforced less strictly.

Europe: The focus is primarily on protecting individual privacy. It’s not just about fines; it’s about ensuring we aren’t reduced to a database entry on the basis of which a system decides someone is more likely to commit a crime and begins monitoring them automatically. Put simply, the most permissive approach in this area is likely in the U.S.

You can listen to the full podcast on Spotify or Apple Podcasts, or watch it on YouTube.
