Artificial intelligence (AI) has long been in use across various sectors of society, but it has mostly operated under the hood, out of sight. For many, the consequences of AI are most visible in the recommendation algorithms of social media platforms, which aim to keep us glued to our screens even when doing so no longer serves our well-being.
AI is not disappearing; on the contrary, it is spreading to new areas. AI assistants will become coworkers for many human employees. Precisely for this reason, organisations from businesses to primary schools must ensure their staff’s AI literacy.
For instance, a police officer who does not understand that sensitive information should not be entered into ChatGPT might ask the AI to draft a crime report from case details to ease their workload. And when work efficiency improves, colleagues begin to follow suit. This has already happened in the Netherlands.
Thanks to the EU’s AI regulation, in a few years we as individuals will be able to trust that self-driving cars on the roads are not dangerous; that AI-generated news summaries are carefully produced; that talking toys do not contain foreign propaganda; and that social media algorithms do not incite self-harm, depression, or violence.
Not to worry if your staff training is still in progress. The AI literacy requirement should be approached like the competence requirements of data protection legislation: if employees handle personal data, they must understand what they can and cannot do. Similarly, if employees use or develop AI tools in their work, they must understand how to act in each situation. It is the employer’s responsibility to train the staff before something goes wrong.
Developers and deployers of AI systems must be particularly careful when a system carries a risk of harm. High-risk AI systems must be developed and documented carefully to minimise risks to human well-being.
We all surely understand that if AI drives a car, makes cancer diagnoses, evaluates students’ exam responses, or makes employment-related decisions, the AI models used should be created with care. This is precisely the goal of the EU’s AI regulation. It is the world’s first law aimed at curbing the misuse of AI, and it will come into effect gradually during 2025–2026.
The first obligations of the AI regulation came into effect on 2 February 2025. They concern AI literacy in general and the total ban on certain dangerous AI systems in the EU. The prohibited systems include biometric profiling, social scoring, and emotion recognition in work and school environments.
Want to learn more?
Sitra is launching an open, free online course on the AI regulation. This third part of the Basics of the Data Economy course series is, like its predecessors, practical and aimed at both individuals and professionals. The course will be released on 12 March 2025 on the Basics of the Data Economy website.
Additionally, Sitra has funded the development of a regulatory technology tool. The tool makes the AI regulation easier to understand and lets organisations quickly assess whether the regulation affects their operations and what the implications might be.
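To illustrate what such an applicability assessment might look like in practice, here is a minimal sketch of a first-pass questionnaire that maps an organisation’s AI use case to the regulation’s broad risk tiers. This is a hypothetical example only: the question set, tier names, and mappings are simplified assumptions made for this sketch, not the internals of the Sitra-funded tool and not legal advice.

```python
# Hypothetical first-pass AI regulation applicability check.
# The tiers (prohibited / high-risk / limited-minimal) mirror the
# regulation's broad structure, but the questions and mappings below
# are simplified assumptions for illustration, not legal advice.

from dataclasses import dataclass


@dataclass
class AIUseCase:
    description: str
    used_in_eu: bool                           # does the system reach EU users?
    social_scoring: bool                       # general-purpose scoring of people
    emotion_recognition_at_work_or_school: bool
    decides_on_employment_or_education: bool   # hiring, grading, admissions
    safety_critical: bool                      # e.g. driving, medical diagnosis


def classify(use_case: AIUseCase) -> str:
    """Map a use case to an indicative risk tier (simplified)."""
    if not use_case.used_in_eu:
        return "out of scope: no EU exposure"
    if use_case.social_scoring or use_case.emotion_recognition_at_work_or_school:
        return "prohibited practice: must be discontinued"
    if use_case.decides_on_employment_or_education or use_case.safety_critical:
        return "high-risk: documentation, risk management and oversight duties"
    return "limited/minimal risk: transparency and AI literacy duties"


if __name__ == "__main__":
    cv_screener = AIUseCase(
        description="AI tool that ranks job applicants' CVs",
        used_in_eu=True,
        social_scoring=False,
        emotion_recognition_at_work_or_school=False,
        decides_on_employment_or_education=True,
        safety_critical=False,
    )
    print(classify(cv_screener))
    # -> high-risk: documentation, risk management and oversight duties
```

In practice, a check like this can only flag where a deeper legal review is needed; the actual risk categories and criteria come from the regulation itself.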