In February 2025, the European Commission adopted non-binding guidelines clarifying the concept of an artificial intelligence system as defined in Article 3(1) of the AI Act. The guidelines are intended to facilitate the application of the Act's provisions and to provide a clearer framework for identifying and understanding the different types of AI systems, which is particularly important for those who develop or use AI-based systems in their business operations.

Definition of an Artificial Intelligence System
The AI Act defines an artificial intelligence system as “a machine-based system designed to operate with varying levels of autonomy” that “may exhibit adaptiveness after deployment.” Such systems infer, from the input they receive, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Which systems does the definition apply to?
The guidelines emphasize that the definition covers a wide range of systems, and whether a particular software system qualifies as an AI system under the new rules depends on its specific architecture and functionality. The guidelines highlight seven elements to consider when assessing a system: (i) it is machine-based; (ii) it operates with varying levels of autonomy; (iii) it may exhibit adaptiveness after deployment; (iv) it operates for explicit or implicit objectives; (v) it infers how to generate outputs from the input it receives; (vi) it generates outputs such as predictions, content, recommendations, or decisions; and (vii) those outputs can influence physical or virtual environments.
In addition to these key elements, the guidelines emphasize the need to distinguish between two phases of a system’s lifecycle: the pre-deployment phase and the post-deployment phase. Not all seven elements need to be present in both phases, which allows some flexibility in applying the rules.
Regulation of high-risk and other systems
Beyond the definition itself, the guidelines make clear that not all AI systems are subject to the same regulatory obligations. Only systems used in specific high-risk contexts, such as the areas listed in Annexes I and III of the AI Act, are subject to the full set of prescribed obligations, including strict oversight. Companies developing AI systems capable of operating with limited or no human oversight, i.e., those with a high degree of operational autonomy, should therefore approach risk assessment and the implementation of appropriate safeguards with particular care.
***
Just as it is important to understand what AI systems are and when they qualify as high-risk, it is equally important to note that not every software system with complex algorithms is an AI system within the meaning of the AI Act. The guidelines state, for example, that simple prediction systems and basic data processing fall outside the definition of an AI system.
If you have any questions or need further information, our office is here to provide legal assistance in navigating the new regulatory framework for the responsible use and development of artificial intelligence.