Can we trust artificial intelligence - what's real and what's fiction?
Trusted AI must meet several criteria. It must be lawful, complying with all applicable legal and regulatory requirements. It must also respect and adhere to ethical principles. And it must not discriminate: the system cannot behave differently towards different races, genders, or age groups and, in general, cannot favor some groups over others.
Security is another critical aspect of an AI system. Both the data and the system itself must be protected well enough that they cannot be misused or compromised. Transparency must also be clearly established, so that we know why and how the system makes its decisions. Finally, responsibility must be clearly assigned: if AI forms part of a larger system, for example one controlling a production line, it must be clear who answers for its decisions. A similar example is a hospital diagnostic system that uses artificial intelligence to evaluate whether a patient has a tumor. The entire system must be trustworthy, otherwise the consequences can be disastrous.
During the development and training of artificial intelligence systems, metrics are used that indicate the accuracy and effectiveness of the model. However, it is often forgotten where the system is ultimately deployed and for whom it actually works. Even if the results are accurate and credible, a system that is not secure, that is not explainable (we cannot say why it does what it does), that discriminates against certain groups, or that is used in an unethical setting cannot be considered trustworthy.
In practice, this means that if we want to build a quality product using artificial intelligence, we must first think about whether we meet the above points. If not, we need to reevaluate the very usefulness and suitability of the AI product.

From the perspective of an ordinary person, we most often encounter generative artificial intelligence tools in these areas:
Text generation
The principle of operation is based on input: for the so-called large language models, the input is text from the user. The best-known examples are ChatGPT, Claude, Gemini, Mistral, and DeepSeek. The reliability of these models' answers is often questionable. Sometimes they generate exactly the correct answer we need, but often they invent one of their own (they hallucinate). Such an answer may sound convincing, yet it is simply a fabrication. For this reason, these services have added internet-search functions, so that users can access the underlying sources and verify them. And each of us should do exactly that.
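The text-in, text-out principle described above also applies when these models are used programmatically. A minimal sketch follows, assuming an OpenAI-style chat-completions request format (the widely used "model" plus role/content "messages" convention); the model name is a placeholder, not a real identifier, and no network call is made here:

```python
import json

def build_chat_request(user_text: str, model: str = "example-model") -> str:
    """Build the JSON body for an OpenAI-style chat request.

    The only real input the model receives is plain text from the
    user, wrapped in a list of role/content messages. The model name
    used here is a placeholder.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_text},
        ],
    }
    return json.dumps(payload)

# The user's question is ordinary text -- nothing more.
request_body = build_chat_request("Who wrote the novel Dune?")
```

Whatever comes back from such a request is generated text, not verified fact, which is why the article's advice to cross-check sources applies regardless of how the model is accessed.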
Image generation
The generation and editing of images has been widespread for several years. In the beginning, it meant manual editing in dedicated software. In the last two years, however, artificial intelligence has taken over this role in the public sphere with tools like Flux, Fooocus, and many others. How can this be abused? By generating entire photos, or just replacing their backgrounds. Sharing such manipulated content on social networks can then provoke unpleasant comments or spread misinformation.
Voice generation
Converting text to audio is not new in itself. What is new are tools that need only a few sentences of recording to realistically imitate and replicate a voice, which can lead to fraud: they can usually imitate anyone they have a sample of. Defense is more difficult in this case, and people who have not yet encountered AI's capabilities are particularly vulnerable. Because the timbre of the voice is familiar to them, they do not realize that the imitated person's unusual phrasing is a warning sign.
Video generation
Last year, OpenAI introduced video generation. With a ChatGPT subscription, we can try the Sora tool. This technology generates a video from text alone: similar to communicating with a chatbot, but instead of a question we enter a description of the desired video, and the tool generates it for us. Content generated this way is still relatively easy to detect, because some of the generated details look artificial.
Overall, the rule is double-checking and common sense: generated content is all around us, even if we often don't realize it. That's why it's important to always think critically about what we see and read.
A sample of the evolution of video generation:
SOURCE: Pravda