The EU AI Act’s transparency rules: A practical guide to Article 50

14 May 2026

What providers and deployers must do by August 2026

Article 50 of the EU AI Act may affect more organisations than almost any other provision. It imposes transparency obligations on providers and deployers of certain AI systems, requiring that users be informed when they are interacting with an AI system or when content is AI-generated. These obligations extend to providers and deployers of open-source AI systems, which are not exempt.

Transparency obligations are not limited to systems classified as “high-risk”: they apply to any AI system used in the four situations the Article covers. In practice, Article 50 is relevant to every business that uses generative AI to produce content. Here is what you need to know before the August 2026 deadline.

Summary

Article 50 of the EU AI Act introduces transparency obligations in four situations:

  1. when AI interacts directly with people,
  2. when AI generates synthetic content,
  3. when AI is used for emotion recognition or biometric categorisation, and
  4. when AI creates deepfakes or text published on matters of public interest.

These obligations apply to all AI systems used in the four situations set out in Article 50, not just to high-risk systems.

These obligations apply from 2 August 2026. The Commission has published draft Guidelines on the scope and application of Article 50, and a Code of Practice on AI-generated content is being developed to provide practical solutions on marking and labelling.

Rules for providers

Providers of chatbots, virtual assistants and other systems intended to interact with people must design them so that users are informed they are interacting with AI.

Providers of generative AI systems — producing text, images, audio, video — must mark outputs in a machine-readable format and ensure they are detectable as artificially generated or manipulated. A standardised EU label is being developed.

Rules for deployers

Deployers of emotion recognition or biometric categorisation systems must inform exposed individuals.

Deployers using AI to create deepfakes must disclose that the content has been artificially generated or manipulated. Deployers publishing AI-generated text with the purpose of informing the public on matters of public interest must disclose that the text is AI-generated, unless it has been subject to human review and editorial responsibility.

Why Article 50 matters

Much of the AI Act focuses on high-risk AI systems, including conformity assessments, technical documentation and CE marking requirements. But high-risk status only applies to a subset of AI uses.

Article 50 works differently. Its transparency obligations apply broadly, to any AI system used in the four situations it covers. An organisation with no high-risk AI may still have significant obligations under Article 50: for example, because it develops a customer-facing chatbot, deploys an AI tool that generates news content for publication, or relies on a system that produces deepfake imagery.

Our Compliance Checker data shows that transparency obligations are the second most common compliance trigger after AI literacy, affecting around 33% of all respondents (at the time of publication). For many organisations, Article 50 will be their primary compliance challenge.


The four transparency obligations

1. Informing people they are interacting with AI (Article 50(1))

When an AI system is intended to interact directly with people (such as chatbots, virtual assistants and automated phone systems), its provider must design and develop it so that users are informed they are interacting with an AI. The draft Guidelines confirm that AI agents fall within Article 50(1), and where the provider cannot reliably predict whether the agent will interact with a human, it should be designed to disclose its AI nature in every such situation.
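To make this concrete, a disclosure of this kind is usually wired into the conversation flow itself rather than tucked into a settings page. The following is a minimal sketch only, assuming a hypothetical chat backend: the `ChatSession` class and the disclosure wording are illustrative inventions, and whether any particular wording is legally adequate must be assessed separately.

```python
# Minimal sketch of an Article 50(1)-style disclosure in a chat flow.
# ChatSession and the disclosure wording are illustrative assumptions,
# not taken from any framework or from the Act itself.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self._generate(user_message)
        if not self.disclosed:
            # Disclose at the latest at the time of the first interaction,
            # before the user sees any substantive AI output.
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

    def _generate(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return "..."
```

The point of the flag is that the notice precedes or accompanies the very first exchange, rather than appearing only in terms of service.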

There is an exception where it is “obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect” that an AI is involved, but this exception should not be over-relied on. The draft Guidelines set out a two-step approach to establish this: first, assess the target audience; second, examine how reasonably well-informed, observant and circumspect an average member of that group is.

There is also an exception for AI systems that are lawfully authorised to detect, prevent, investigate or prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

2. Marking AI-generated synthetic content (Article 50(2))

Providers of AI systems (including general-purpose AI systems) that generate synthetic audio, image, video, or text must ensure that outputs are both marked in a machine-readable format and detectable as AI-generated. This is a technical obligation aimed at provenance: enabling detection tools to verify whether content is AI-generated.

The specific technical standards for this marking are being developed through the Code of Practice and complementary EU standardisation work. Providers should track these developments closely, as practical implementation details are still being finalised ahead of August 2026.
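To illustrate what “machine-readable marking” means in principle, the sketch below embeds a provenance note into a PNG's metadata using Pillow. This is illustrative only and would not by itself satisfy Article 50(2): plain metadata is trivially stripped, and the actual standards (robust watermarking, signed provenance, fingerprinting) are what the Code of Practice is expected to specify. The metadata keys are invented for the example.

```python
# Illustrative only: embeds and reads a provenance note in PNG text metadata.
# Not a compliant Article 50(2) marking; the metadata keys are invented here.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")     # hypothetical key
    meta.add_text("ai_generator", generator)  # hypothetical key
    img.save(out_path, pnginfo=meta)

def read_marking(path: str) -> dict:
    # Pillow exposes PNG text chunks via the .text attribute.
    return dict(Image.open(path).text)
```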

This obligation does not apply where the AI system performs only an assistive function for standard editing (e.g. grammar correction) or where it does not substantially alter the input data or its semantics. There is also a carve-out for systems authorised by law to detect, prevent, investigate or prosecute criminal offences.

3. Disclosing emotion recognition and biometric categorisation (Article 50(3))

When an AI system is used specifically to recognise people’s emotions or categorise them biometrically (for example to assess stress, sentiment, or demographic characteristics), deployers must inform the natural persons exposed to it. Any such processing must also comply with other applicable laws, most notably the General Data Protection Regulation.

This obligation is distinct from the prohibition on emotion recognition in workplaces and education institutions under Article 5, which is already in force. Outside of those settings, emotion recognition is generally permitted and Article 50(3) disclosure obligations apply. A carve-out applies for systems permitted by law to detect, prevent or investigate criminal offences.

4. Labelling deepfakes and AI-generated public interest text (Article 50(4))

This is the most scrutinised part of Article 50. It imposes obligations on deployers (in this context, often the publisher or content producer using AI systems) and covers two distinct cases:

Deepfakes. Deployers using AI to create deepfakes (defined in Article 3(60) as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear authentic or truthful) must disclose this. The draft Guidelines clarify that clearly fantastical or physically impossible content (e.g. dragons or humans flying unaided) falls outside the deepfake definition. The proposed approaches include persistent visual labels, opening disclaimers for video, and audible warnings for audio. Where deepfake content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the disclosure obligation is reduced: it is limited to disclosing the existence of the generated or manipulated content “in an appropriate manner that does not hamper the display or enjoyment of the work”.

However, this obligation does not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences.
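As a rough illustration of the “persistent visual label” approach mentioned above, the sketch below burns a visible label into an image with Pillow. The label text, size and placement are assumptions made for this example; the standardised EU label and the modality-specific rules will come from the Code of Practice.

```python
# Sketch of a visible label overlay for AI-generated imagery. Label text,
# size and placement are illustrative assumptions, not prescribed anywhere.
from PIL import Image, ImageDraw

def add_visible_ai_label(in_path: str, out_path: str, text: str = "AI") -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    box_w, box_h = max(48, w // 10), max(24, h // 20)
    # Solid box in the bottom-left corner so the label stays legible.
    draw.rectangle([0, h - box_h, box_w, h], fill="black")
    draw.text((8, h - box_h + 6), text, fill="white")
    img.save(out_path)
```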

AI-generated text published on matters of public interest. Where deployers publish AI-generated or manipulated text with the purpose of informing the public on matters of public interest, they must disclose that the text is artificially generated or manipulated. The trigger turns on the publisher’s purpose, not merely the subject matter. The obligation does not apply where the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication. Such checks must be substantive and not limited to superficial matters or cursory approval.


Details on the information to be provided

In all four cases, the notification about the use of AI must be provided at the latest at the time of the first interaction or exposure (in other words, before or during that first interaction or exposure). For example, for an AI chatbot the notification must come before or at the very beginning of the conversation; for an AI-generated audio or video clip, at the beginning of the clip. In some sensitive contexts a one-time disclosure may be insufficient and may have to be repeated.

It must be conveyed in a clear and distinguishable manner, conforming to applicable accessibility requirements. What this means in practice will vary significantly with the context, including the media type and use case. It is easier to describe what would not qualify as clear and distinguishable: a tiny snippet of text hidden in the footer of a website; a faint label on an image; a label flashing for only an instant in a video clip. Likewise, disclosures buried in T&Cs or vague labels will not meet the threshold.

The Commission’s Guidelines

The Commission has published draft Guidelines on Article 50 for stakeholder consultation, providing practical interpretation of the scope, definitions, exceptions and horizontal issues. A final version is expected ahead of the August 2026 deadline.

The Code of Practice on AI-generated content

The European Commission started work on a Code of Practice on Article 50 compliance in late 2025, involving providers, deployers, civil society, and technical experts. The Code is focused on Article 50(2) (marking and detection) and 50(4) (labelling). As of March 2026, a second draft has been published, with a final version expected by June 2026 — ahead of the August 2026 deadline.

Key elements being developed in the Code include:

  • A standardised EU label for AI-generated content — currently proposed as an “AI” visual label (localised as “KI” in German, “IA” in French, etc.)
  • A taxonomy distinguishing “fully AI-generated” content from “AI-assisted” content, with different disclosure requirements for each (see the sketch after this list)
  • Technical standards for watermarking, metadata, and provenance tools
  • Guidance on modality-specific labelling (persistent labels for video, visible labels for images, audible disclaimers for audio)
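To show how the proposed taxonomy might be operationalised in a content pipeline, here is a hypothetical encoding as a data structure. The category names mirror the draft's terminology, but everything else, including the mapping to disclosure duties, is a deliberate simplification for illustration.

```python
# Hypothetical encoding of the draft taxonomy; names and the disclosure
# mapping are illustrative simplifications, not the Code's actual rules.
from dataclasses import dataclass
from enum import Enum

class ContentCategory(Enum):
    FULLY_AI_GENERATED = "fully_ai_generated"  # produced by AI end to end
    AI_ASSISTED = "ai_assisted"                # human work with AI assistance

@dataclass
class ContentItem:
    item_id: str
    modality: str  # "text", "image", "audio" or "video"
    category: ContentCategory

def needs_full_label(item: ContentItem) -> bool:
    # Simplification: tie the strictest labelling to fully AI-generated content.
    return item.category is ContentCategory.FULLY_AI_GENERATED
```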

Although the Code of Practice is voluntary, it is expected to become the practical benchmark for regulatory assessment. Organisations that comply with the Code will be well-positioned to demonstrate Article 50 compliance.

What you should do now

If you provide a customer-facing chatbot or virtual assistant: review your current disclosure practices. Ensure users are informed they are interacting with AI at the start of every interaction. Review your interface design — the disclosure should be clear and not buried.

If you provide generative AI systems: begin assessing your technical capacity for machine-readable marking of outputs. Engage with the Code of Practice process and ensure your systems will produce compliant output markings at scale.

If you deploy emotion recognition or biometric categorisation systems: screen first for the Article 5 prohibitions, then design clear notice for exposed individuals and ensure processing meets other legal requirements.

If you publish AI-generated content: inventory your content production workflows. Identify where AI-generated content is published externally (website, social media, reports, marketing materials). For deepfake imagery, audio or video, plan disclosure from the outset. For text, assess whether it is being published with the purpose of informing the public on matters of public interest, and whether you can rely on the human-review and editorial-responsibility carve-out.

Key dates

Date               What applies
2 February 2025    Prohibitions on unacceptable AI practices (Article 5) in force
June 2026          Code of Practice on AI-generated content expected to be finalised
2 August 2026      Article 50 transparency obligations apply to providers and deployers

Practical checklist

All organisations

  • Map all AI systems in use across the organisation and identify those covered by Article 50 (a minimal inventory sketch follows this list).
  • Check if any exemptions or carve-outs may apply.
  • Monitor the Code of Practice on AI-generated content (final version expected June 2026) and the Commission’s Guidelines.
  • Require compliance from AI vendors contractually.
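The mapping exercise in the first item above can start as a simple structured inventory. Here is a minimal sketch; the field names are assumptions, and a real assessment needs legal review rather than a boolean flag.

```python
# Minimal sketch of an Article 50 screening record; the field names are
# illustrative assumptions and the screening logic is deliberately coarse.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    role: str                               # "provider" or "deployer"
    interacts_with_people: bool             # Art. 50(1)
    generates_synthetic_content: bool       # Art. 50(2)
    emotion_or_biometric: bool              # Art. 50(3)
    deepfake_or_public_interest_text: bool  # Art. 50(4)

    def article_50_trigger(self) -> bool:
        return (self.interacts_with_people
                or self.generates_synthetic_content
                or self.emotion_or_biometric
                or self.deepfake_or_public_interest_text)
```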

Providers

  • For systems that interact directly with people, ensure clear AI disclosure at the time of first interaction, in a manner that meets accessibility requirements.
  • For generative AI systems (including GPAI), implement machine-readable marking of synthetic outputs across all modalities (audio, image, video, text).

Deployers

  • For emotion recognition or biometric categorisation, screen each use case against the Article 5 prohibition before considering Article 50(3) disclosure.
  • For permitted uses, design notice mechanisms that inform exposed individuals in clear and distinguishable form.
  • For deepfake-capable systems, map all use cases producing image, audio or video content that may meet the “deepfake” definition and design appropriate disclosure.
  • Inventory text outputs published and identify those published with the purpose of informing the public on matters of public interest. For in-scope text, either implement disclosure or establish the process for human review and editorial responsibility to rely on the carve-out.

Browse Article 50 in the AI Act Explorer for the full legal text, or use our Compliance Checker to see which transparency obligations apply to your specific situation.

This post was published on 14 May 2026.
