Trend: Security processes are becoming digital, but they must be controlled by people
- Press

Modern information technologies are experiencing a huge boom, with artificial intelligence (AI) becoming one of the key elements of their development. It is no longer just about automating routine activities. Artificial intelligence penetrates deep into the infrastructure of organizations, analyzes user behavior, identifies threats in real time, and responds to security incidents before they escalate to critical proportions.
AI in cybersecurity
In the field of cybersecurity, AI elements are not as new as in other industries. Various statistical or regression models have been part of defense strategies for more than a decade, and their application is still expanding. AI is used mainly where large amounts of data need to be processed. These systems help not only in searching for specific anomalies or suspicious cases, but also in filtering out most of the less important events. This relieves security analysts, who can focus on truly critical cases.
This is confirmed by Michal Srnec, an expert at Aliter Technologies: "Artificial intelligence is beneficial where it is necessary to process a lot of data and analyze it, looking for correlations, patterns or even anomalies. This typically happens when analyzing user behavior, networks or correlations of security reports. However, it is important to remember that AI may not point out everything suspicious. Its added value is that it filters out many cases that analysts most likely do not need to deal with."
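The filtering idea Srnec describes can be illustrated with a minimal sketch: flag only events whose per-user frequency deviates strongly from the baseline, so analysts see a short list instead of the full log. The field names, threshold, and simple z-score statistic are illustrative assumptions, not a description of any actual Aliter Technologies pipeline.

```python
# Illustrative event filter: keep events from users whose activity
# is a statistical outlier; route the rest away from analysts.
from collections import Counter
from statistics import mean, stdev

def filter_anomalies(events, threshold=3.0):
    """Split events into (flagged, routine) by per-user count z-score."""
    counts = Counter(e["user"] for e in events)
    values = list(counts.values())
    mu = mean(values)
    sigma = stdev(values) if len(values) > 1 else 0.0
    # A user is suspicious if their event count sits far above the mean.
    suspicious = {u for u, c in counts.items()
                  if sigma and (c - mu) / sigma > threshold}
    flagged = [e for e in events if e["user"] in suspicious]
    routine = [e for e in events if e["user"] not in suspicious]
    return flagged, routine
```

Real deployments use far richer models (behavioral baselines, correlation across data sources), but the division of labor is the same: the machine narrows the haystack, the analyst inspects what remains.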
Prevention and Detection
AI elements are present in almost all types of solutions, whether for prevention, detection, or response to security incidents. The use of state-of-the-art tools is becoming a necessity, because attackers use the same or similar technologies themselves.
Implementing these solutions requires a thorough approach. It is not enough to simply integrate available tools. A critical aspect is trust in the models used, along with their transparency and security. Companies that implement AI into their security infrastructure must have a detailed overview of where and how the model works, what data it was trained on, and how it is secured against misuse.
"In various forms, it can be said that AI elements are present in almost all of our solutions, whether we are talking about preventive or reactive security measures. As for implementation, we cautiously use tools that are available or offered by reputable manufacturers," explains M. Srnec.
Critical Infrastructure
Separate and isolated AI systems play a crucial role in security strategies. In the context of critical infrastructure protection, they ensure control over data flows and increase the overall resilience of systems. One of the main reasons why isolated systems are preferred is their ability to adapt.
Organizations that have full control over the system can flexibly respond to new threats and minimize the risk of penetration through standard external connections. This is important when processing data of a sensitive nature, for example, in defense security systems or in the government sector. “Isolated and separated AI systems allow for a higher level of security from a data flow perspective. Separate AI systems over which we have full control and the ability to adapt them to our current requirements are, in our opinion, crucial,” agrees M. Srnec.
AI is not a panacea
Although it may seem that automation thanks to AI can replace humans, this is not yet the case in the field of security. AI can effectively process a huge amount of information, but final decisions, especially those with the potential to affect critical areas or even a person’s life, must remain in human hands.
Development is heading towards so-called decision support systems, which assist people in making decisions. They can suggest the most effective course of action, but they lack the autonomy to act on it. Such an arrangement combines the power of machine processing with the ethical and legal framework of human decision-making. “Partial automation is an inevitable step into the future, but the main decision-making must remain with humans,” adds M. Srnec.
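The decision-support pattern described above can be sketched in a few lines: the system ranks response options, but nothing executes without an explicit human decision. The option names, scores, and scoring formula here are purely hypothetical illustrations of the pattern, not a real product's API.

```python
# Hypothetical decision-support sketch: the machine suggests and ranks,
# a human must approve before anything is executed.
from dataclasses import dataclass

@dataclass
class ResponseOption:
    action: str
    risk_reduction: float   # estimated benefit, 0..1 (illustrative)
    disruption: float       # estimated operational cost, 0..1 (illustrative)

def rank_options(options):
    """Order options by estimated net benefit; the machine only suggests."""
    return sorted(options,
                  key=lambda o: o.risk_reduction - o.disruption,
                  reverse=True)

def execute(option, human_approved):
    """Refuse to act autonomously: a human decision is mandatory."""
    if not human_approved:
        raise PermissionError("Human approval required before acting")
    return f"executing: {option.action}"
```

The design choice is the point: the approval gate lives in the execution path itself, so even a confident ranking cannot bypass the human.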
Cooperation between AI and military systems is also becoming a reality. Drones with autonomous decision-making, environmental adaptation, or independent mission execution are already being deployed on battlefields. Their development also draws on experience from recent or ongoing conflicts, which provide valuable insights into their effectiveness and limits.
Ethics and Regulatory Frameworks
As autonomous systems develop, there is increasing pressure to create ethical and legal frameworks. It is not just about preventing misuse, but also about society’s ability to accept that a machine may eventually make decisions about physical action against people. Regulations such as the AI Act are a first step, but compliance with them is questionable – especially in the context of armed conflict.
The technological complexity of autonomous weapons is not as great as it might seem. The real problem lies in how society can morally cope with the idea that machines will initiate lethal actions without human control. “Any system in which AI is applied should meet all the requirements of the AI Act and other regulations to prevent their misuse,” the expert emphasizes.
The conflict in Ukraine shows that AI will be used more and more often. Its decision-making speed and reaction times far exceed those of humans.
“The combination of AI with advanced or even simple weapons systems creates a barrier that is hard to overcome – though not a technical one, as such a combination is not difficult. The question is rather whether society can morally come to terms with the fact that AI will trigger a process of physical restraint, including the elimination of people by machines, whether on human decision or by fully autonomous weapons systems,” warns Imrich Petruf, security consultant at Aliter Technologies. He sees the introduction of regulatory frameworks as necessary, but considers their practical enforcement in real combat questionable. “There is no need to have any illusions that all states would comply with such regulations,” adds I. Petruf.
Certifications as a guarantee of trust
- Trustworthiness: The trustworthiness of technologies and their manufacturers is a major concern in the defense and security sector.
- Protection against poor quality: Globalization brings the risk of unverified quality. Certifications serve as a tool that helps prevent technical failures and accidents.
- Durability guarantee: The American military standards MIL-STD, also recognized within NATO, define minimum requirements for the durability of materials and technologies.
- Product testing: Third parties test products for electromagnetic compatibility, vibration, and resistance to dust, water, or salt mist.
- Reliability: Certified devices can be used in life-threatening conditions, such as combat zones or rescue operations.
- Lifecycle standards: Certification applies not only to products, but also to production and control processes throughout the technology’s entire life cycle.
- ISO and AQAP as a guarantee of processes: Production is subject to ISO standards and the NATO quality standard AQAP, which ensure systematic quality control and compliance.
- Audits and controls: Certificates are subject to regular, rigorous audits that verify continued compliance with requirements.
SOURCE: TREND