Rules for artificial intelligence models apply from August; according to experts, too late

Prague – The new rules of the Artificial Intelligence Act (AI Act), which came into effect on Saturday, will primarily affect providers of large AI models such as ChatGPT, Gemini, and Claude. According to experts who spoke to ČTK, the rules come too late, and until the last moment it was unclear how to interpret them. The AI Act was adopted in August of last year as the world's first comprehensive regulation of AI. It has been supplemented by a voluntary code that aims, among other things, to limit the generation of content that violates copyright and to introduce mechanisms for assessing risks before and after a model is placed on the market.
Even at the beginning of July, according to Štěpánka Havlíková, a lawyer at the Prague office of Dentons, it was far from clear how the AI Act's rules should be interpreted. "After a very intense debate, the Code of Practice for general-purpose AI models, whose publication is directly anticipated by the AI Act, was finally published. The code clarifies how to fulfil the obligations arising from the AI Act and will at the same time make it easier for providers to demonstrate compliance with the act's requirements," she added.
"The code is not legally binding, and its adoption is voluntary. However, it will make it significantly easier for providers to prove that they meet the AI Act's requirements on transparency, copyright, and safety," Havlíková noted. Providers who choose not to sign the code will have to demonstrate that they have taken appropriate, effective, and proportionate measures to ensure compliance with the obligations arising from the act.
According to Daniil Shakhovsky, co-owner of the technology project Lexicon Labs, voluntary codes are not enough. "Slow waiting makes us dependent on the models of large players. Those who do not have tools for verifying the authenticity and origin of content are already losing today. We need mandatory minimum standards and infrastructure that works even outside the oversight of platforms. Those who act win, not those who wait for instructions," he responded.
According to Lukáš Benzl, director of the Czech Association of AI, the publication of the code is an important step, but it comes relatively late. Providers of general-purpose AI models did not know until the last moment what exactly Europe would require of them in practice. "Such an approach complicates planning, slows down innovation, and undermines trust in the stability of the regulatory framework. It is time to reassess the pace and complexity of regulation, especially in light of the AI action plans of the USA and China. If the EU wants to become a global leader, it must lead not only in ambition but also in the quality and predictability of the process," Benzl added. (August 3)