Artificial Intelligence: The Story of a Revolution That Is Just Beginning
- Press

Computers thus reached universities and technical enthusiasts, who encountered the new technology not only from a theoretical and mathematical point of view but also from a practical one: they could try out their own first programs, and calculations that had previously taken a very long time now took incomparably less.
As the computing power and availability of computers grew, so did the complexity of programs, experiments, and the first attempts at functional artificial intelligence. Groundbreaking, practically applicable advances in AI came in the modern era, especially after 1995, in the form of regression and classification. Regression meant that we could train a mathematical model on collected data so that it could, for example, estimate real-estate prices from houses already sold in a neighbourhood, using their price, house size, and lot size. Classification worked similarly, initially in binary form: given a flower's parameters such as colour, size, and the length of its blossom and leaves, the model could determine with some probability whether it was the flower we were looking for.
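The real-estate example above can be sketched in a few lines of Python: a linear model is fitted to past sales and then estimates the price of an unseen house. All figures here are invented for illustration, not real market data.

```python
# A minimal sketch of price regression on made-up data (illustrative only).
import numpy as np

# Each row: [house size in m^2, lot size in m^2]; prices in thousands of euros.
X = np.array([[80.0, 300.0], [120.0, 450.0], [100.0, 400.0], [150.0, 600.0]])
y = np.array([160.0, 250.0, 210.0, 320.0])

# Fit a linear model: price = w1*house + w2*lot + b, via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(house, lot):
    """Estimate the price of a house from its size and lot size."""
    return coef[0] * house + coef[1] * lot + coef[2]

print(round(predict(110, 420), 1))  # estimate for an unseen house
```

The model here is deliberately tiny; real systems use far more sold houses and more features, but the principle of learning weights from past data is the same.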
Later, this method was extended to multiclass classification: a trained mathematical model could assign a particular flower to the appropriate class (type/name) according to its parameters. In today's terms, these approaches are referred to as "shallow models", because they can be applied without intensive computational effort; their needs are covered by an ordinary desktop or home computer.
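A very simple form of multiclass classification can be sketched as a nearest-centroid model: each class is summarized by the average of its known examples, and a new flower is assigned to the closest class. The flower names and measurements below are invented for illustration.

```python
# A toy multiclass classifier: assign a flower to the nearest class centroid.
# Class names and feature values are invented for illustration.
import numpy as np

# Training data: [petal length, petal width] for known examples of each class.
samples = {
    "daisy": np.array([[1.2, 0.3], [1.4, 0.4], [1.1, 0.2]]),
    "tulip": np.array([[3.0, 1.0], [3.3, 1.2], [2.8, 0.9]]),
    "iris":  np.array([[5.0, 1.8], [5.5, 2.0], [4.8, 1.7]]),
}
# "Learning" here is just averaging each class's examples into a centroid.
centroids = {name: pts.mean(axis=0) for name, pts in samples.items()}

def classify(features):
    """Return the class whose centroid is closest to the given flower."""
    x = np.asarray(features, dtype=float)
    return min(centroids, key=lambda name: np.linalg.norm(x - centroids[name]))

print(classify([3.1, 1.1]))  # closest to the "tulip" centroid
```

This is exactly the kind of shallow model the text describes: it trains in microseconds and runs comfortably on any home computer.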
Around 2010, neural networks came to the fore. They are more versatile and still underpin many AI methods today. As the potential applications of AI grew, it began to be used in areas such as image processing and classification, prediction, projection, image generation, and large language models.
Examples of AI applications today:
Image processing involves breaking an image down into pixels and looking for features based on how the pixels connect. To identify a car in an image, for example, the model needs to recognize the car's body by its typical silhouette.
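The pixel-level idea can be illustrated with one of the simplest features: an edge, i.e. a sharp brightness jump between neighbouring pixels. The tiny "image" below is a hand-made array standing in for a real photograph.

```python
# A minimal sketch of pixel-level feature extraction: finding vertical edges
# (brightness jumps between horizontal neighbours) in a tiny grayscale image.
import numpy as np

# 4x6 "image": dark background (0) with a bright rectangle (1) in the middle.
img = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=float)

# Horizontal gradient: difference between each pixel and its left neighbour.
grad = np.abs(np.diff(img, axis=1))
edges = grad > 0.5  # True where brightness changes sharply

print(int(edges.sum()))  # number of edge pixels found
```

Chains of such edges trace out object outlines; detecting a car's silhouette is, at heart, this operation scaled up with learned filters instead of a hand-written gradient.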
Prediction is used, for example, to anticipate machine failures on a production line. We track historical failures and sensor readings; if the current data starts to resemble the state before the last failure, the model signals that the machine needs maintenance to prevent a breakdown.
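The "resembles the state before the last failure" check can be sketched as a simple distance comparison between the current sensor readings and the readings recorded just before a past failure. All sensor values and the threshold below are invented for illustration.

```python
# Toy predictive-maintenance check: compare the latest sensor readings with
# the readings recorded just before a past failure (all values invented).
import numpy as np

# Four consecutive temperature readings before the last failure: a rising drift.
before_failure = np.array([71.0, 74.0, 78.0, 83.0])
# The four most recent readings from the same sensor.
current = np.array([70.0, 73.5, 77.0, 82.0])

# If the current pattern is close to the pre-failure pattern, flag maintenance.
distance = np.linalg.norm(current - before_failure)
needs_maintenance = distance < 3.0  # illustrative threshold

print(bool(needs_maintenance))
```

Production systems learn such patterns from many failures and many sensors at once, but the core idea is this comparison of "now" against "just before it broke".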
Projection means, for example, estimating the number of vehicles on a busy road at a certain time. Based on historical counts of cars at different times of day, the model can predict the traffic load at, say, 4:00 pm, when we leave work.
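In its simplest form, such a projection is just an average of historical counts for the hour in question. The observation data below are made up for illustration.

```python
# A toy traffic projection: estimate load at a given hour from historical
# averages. The counts below are invented for illustration.
from collections import defaultdict

# (hour of day, number of cars) observed on previous days.
history = [(8, 900), (8, 950), (12, 400), (12, 420), (16, 1100), (16, 1180)]

# Group the observed counts by hour.
by_hour = defaultdict(list)
for hour, cars in history:
    by_hour[hour].append(cars)

def projected_traffic(hour):
    """Project the traffic load at a given hour as the historical average."""
    counts = by_hour[hour]
    return sum(counts) / len(counts)

print(projected_traffic(16))  # expected load when leaving work at 4:00 pm
```

Real traffic models also account for weekdays, weather, and holidays, but each refinement is still a more detailed version of this averaging over history.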
Image generation is a popular way of creating art and content for the internet. It uses an approach called diffusion, popularized by models such as Stable Diffusion. During training, the model works with a large number of images and their descriptions; noise is gradually added to the images so that the model learns how an image is built up and can generate one according to the task. Generation runs in reverse: the model "assembles" the final image out of noise.
Large language models are currently the most widely used form of artificial intelligence among the general public. These are massive models that require a great deal of computational power. They are trained on the question-and-answer principle: text is converted into numerical vectors, and the similarity between the question and candidate text drives the generation of the answer.
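The "text as numerical vectors" idea can be sketched with cosine similarity: the candidate closest in direction to the question vector wins. The vectors below are hand-made stand-ins, not real embeddings produced by a language model.

```python
# A minimal sketch of vector similarity, the idea behind matching text in
# language models. The vectors are hand-made stand-ins, not real embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 for same direction, near 0 for unrelated."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

question = [0.9, 0.1, 0.2]  # imagined vector for "will it rain tomorrow?"
candidates = {
    "about weather": [0.85, 0.15, 0.25],
    "about cooking": [0.10, 0.90, 0.30],
}

best = max(candidates, key=lambda k: cosine(question, candidates[k]))
print(best)
```

Real models embed text into thousands of dimensions and generate replies token by token, but comparing vectors by similarity is the geometric intuition behind it.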
Chatbots are an integral part of society today, above all OpenAI's GPT models. However, OpenAI does not hold a monopoly on artificial intelligence. There are also models such as Gemini, Claude, Mistral, DeepSeek, Salamander and many others; some of them are freely available and can be installed by anyone, whether at home or in industry.
The disadvantage of deploying these models is their high computational cost, especially in terms of graphics computing power. The models come in a range of sizes, from smaller ones that a powerful gaming PC can handle to enormous models that require graphics cards costing tens of thousands of euros. As a general rule, the bigger the model, the more capable it is. The main problem with language models used to be that their information was not up-to-date: a model was trained only on data up to a certain point and had no access to anything newer. This has been alleviated by adding features for searching the internet and uploading files, so that the chatbot also has access to current information.
Currently, the latest trend in artificial intelligence is so-called agents. In simple terms, these are chatbots that can independently call various services while fulfilling a request. Imagine we are planning a holiday: we give the chatbot our requirements for a destination, and the agent searches until it finds an offer matching our criteria. It then books it for us and arranges everything we need. The potential here is huge, and we can expect great things from this technology.
SOURCE: Pravda