How will the EU Artificial Intelligence Act affect ChatGPT?

Most people associate artificial intelligence with ChatGPT, a digital tool that has reshaped the world in a remarkably short time. For many, it has already become an indispensable part of both work and daily life. It is therefore especially important to consider how the new Artificial Intelligence Act (AI Act) – the first law of its kind in the world, adopted by the EU – will affect this tool, which is classified as a general-purpose AI (GPAI) system. Although the regulation has faced its share of criticism, the fact remains that it was approved by both the Council and the European Parliament, making it a new regulatory reality. Since Serbia, like other candidate countries, will be required to adopt the Act before joining the Union, the issue warrants closer examination to address potential concerns among citizens who rely on ChatGPT.

Regulation of limited-risk systems

In a previous analysis, it was noted that the Union has chosen a nuanced approach to regulating artificial intelligence, classifying AI systems according to their level of risk. These fall into the categories of “unacceptable risk”, “high risk”, “limited risk”, and “minimal risk”. For ChatGPT users, the reassuring news is that this tool falls into the “limited risk” category. Unlike systems posing unacceptable risk, which are prohibited, and high-risk systems, which face significant restrictions, providers of GPAI systems deemed limited risk must primarily comply with stricter transparency requirements. In practice, this means ensuring that users are clearly informed they are interacting with AI, while AI-generated content – such as audio, images, or video – must be appropriately labelled. In an era dominated by disinformation, such precautionary measures are increasingly important.

Nevertheless, the obligations for providers of GPAI systems extend well beyond content labelling. OpenAI, as the provider of ChatGPT, will be required to: 1) prepare and maintain up-to-date technical documentation; 2) publish a summary of the data used to train the model, following a predefined template; 3) establish and publicly release a copyright policy that ensures respect for content owners; and 4) cooperate with the newly established AI Office of the European Commission, which is tasked with implementing the Act and supporting providers. These obligations are closely linked to the AI Code of Practice. Although not legally binding, the Code serves as a guide to acceptable AI conduct. OpenAI, along with Google and IBM, is already among its signatories. In practice, adherence to the Code may serve as evidence of compliance with the rules and act as a mitigating factor in cases of (unintentional) breaches.

Because ChatGPT has been available on the EU market since 2022 – that is, before the AI Act entered into force on 1 August 2024 – a special transitional regime applies to it. On the one hand, for all GPAI models placed on the market after 2 August 2025, the obligations take effect immediately, while from 2 August 2026 the Commission will also gain powers of supervision and the authority to impose fines. On the other hand, OpenAI has until 2 August 2027 to bring ChatGPT fully into compliance with the new rules. This approach gives providers of existing models additional time to introduce the necessary mechanisms for transparency, copyright protection, safety and other standards set out in the Act, while ensuring that no system remains beyond regulatory reach in the long term.

Stricter rules around the corner

Although it seems that OpenAI will not have to worry too much about regulatory obstacles, it is important to note that the rules can become stricter in exceptional circumstances. This may happen if ChatGPT is deemed to pose a “systemic risk.” Such a designation may arise in two ways: either automatically – if the cumulative compute used to train the model exceeds the threshold of 10²⁵ floating-point operations (FLOPs) – or by decision of the European Commission, which can, on its own initiative or on the recommendation of a scientific panel, determine an equivalent impact even where the threshold is not crossed. The risk associated with such models stems from the assessment that they could significantly affect the EU market or cause serious harm to health, safety, fundamental rights, or society as a whole. The most serious of these risks involves the potential misuse of AI in developing chemical or biological weapons.
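To get a feel for the scale of that threshold, the minimal sketch below applies the common rule of thumb from the scaling-law literature that training a model costs roughly six FLOPs per parameter per training token. Both that approximation and the model figures used here are illustrative assumptions, not part of the Act, which counts the actual cumulative compute used in training.

```python
# Back-of-the-envelope check against the AI Act's systemic-risk threshold.
# The "6 * parameters * training tokens" formula is a rough approximation
# from the scaling-law literature, not the Act's own methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under the Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per model parameter per training token."""
    return 6 * parameters * training_tokens


# Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(parameters=1e12, training_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```

On these assumptions the estimate comes to 6 × 10²⁵ FLOPs, so such a model would cross the threshold and be presumed systemically risky by default.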

In such cases, the duties of providers multiply. First, once the threshold is crossed, OpenAI must report it to the Commission without delay, even during the training process, after which the Commission will consider measures to address the risks involved. Second, providers are required to actively assess and mitigate systemic risks through continuous model evaluation, monitoring, documentation and reporting of serious incidents, and by ensuring robust cybersecurity for both the model and its physical infrastructure. In the most serious scenario, the model may be withdrawn from the market, and fines of up to 3% of global annual turnover or €15 million – whichever is higher – may be imposed. Consequently, if future models such as GPT-6 exceed the 10²⁵ FLOPS threshold during training, they will automatically be classified as systemically risky. Conversely, if AI development follows a regulated path, ChatGPT will remain free to operate in the European market, including in candidate countries. In the meantime, all providers will remain under the close supervision of the AI Office.
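As a quick illustration of the “whichever is higher” formula, the snippet below compares the two fine caps for two hypothetical turnover figures; the amounts are invented for illustration only.

```python
# Illustration of the "whichever is higher" fine cap for GPAI providers:
# up to 3% of worldwide annual turnover or EUR 15 million.

def max_gpai_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of 3% of turnover or EUR 15m."""
    return max(0.03 * annual_turnover_eur, 15_000_000)


# For a smaller provider the flat EUR 15 million floor applies;
# for a large one the 3% share dominates.
for turnover in (100_000_000, 2_000_000_000):
    print(f"Turnover EUR {turnover:,} -> max fine EUR {max_gpai_fine(turnover):,.0f}")
```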

Originally published on EUpravozato.