On 8 May 2026, the European Commission opened a consultation on draft guidelines for the EU AI Act's Article 50 transparency obligations. Responses are due by 3 June 2026. The transparency rules are scheduled to apply from 2 August 2026.
For a Luxembourg SME, the practical decision is not whether to panic about the AI Act. It is who owns AI disclosure discipline this quarter. If a customer is talking to an AI system, if staff use a chatbot in a customer workflow, if marketing publishes AI-generated public-interest text, or if synthetic audio, image, or video could be mistaken for real media, the business needs a visible rule and an owner.
The useful move is a two-page AI transparency register: touchpoint, owner, audience exposed, disclosure wording, vendor check, and review date.
What changed
The Commission's consultation page says the draft guidelines are intended to help providers and deployers meet transparency requirements for AI systems under Article 50. The accompanying draft guidelines page describes them as practical guidance for competent authorities, providers, and deployers.
The draft guidelines themselves are careful about status. They are a draft working document for stakeholder consultation, and they do not prejudge the Commission's final decision. They also say the guidelines are non-binding, with authoritative interpretation ultimately for the Court of Justice of the European Union.
That caveat matters. This is not a moment for a small company to pretend it has final legal certainty. It is a moment to make the operational work visible enough that counsel, a DPO, or an external adviser can review it without rebuilding the facts from scratch.

The important split
There is a separate AI Act timeline story running at the same time. On 7 May 2026, the Council of the EU said Council and Parliament negotiators reached a provisional agreement that would delay application dates for high-risk AI rules: 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products. The Council also said that provisional agreement still needs endorsement, legal and linguistic revision, and formal adoption.
That context should not hide the Article 50 date. The Commission's consultation page says the transparency rules become applicable on 2 August 2026. The AI Act Service Desk timeline also lists Article 50 transparency rules starting to apply on 2 August 2026.
So the management question is narrow: even if some high-risk work moves later, what transparency housekeeping needs ownership before summer 2026?
What an SME should map
The draft guidelines summarise four transparency buckets.
First, providers of AI systems that directly interact with people are in the disclosure frame. For an SME deploying a bought tool, that still creates a practical vendor question: does the product make the AI interaction clear to the person using it?
Second, providers of systems that generate or manipulate synthetic image, video, audio, or text content are in the marking and detection frame. For an SME, the operational version is simpler: ask vendors what marking, labelling, metadata, or content-origin controls are available, then decide what the company will preserve when it publishes or reuses outputs.
Third, deployers of emotion recognition or biometric categorisation systems are in a separate information frame. Most SMEs should treat that as a high-friction category and seek qualified review before use, especially where staff, customers, recruitment, access control, or monitoring are involved.
Fourth, deployers have disclosure work around deep fakes and around AI-generated or manipulated text published to inform the public on matters of public interest, subject to defined exceptions and special regimes. This is directly relevant to marketing, thought leadership, PR, investor communication, and public website content.
The draft also says the information should be clear and distinguishable, provided by the first interaction or exposure, and aligned with accessibility requirements. In plain business terms: do not bury the AI notice where only a lawyer can find it.

The transparency register
The register does not need to start as a complex governance platform. A spreadsheet or two-page document is enough for the first pass.
Record the customer-facing chatbot. Record the website assistant. Record AI-written blog drafts and LinkedIn posts. Record AI image, audio, and video generation. Record customer-service templates. Record HR, recruitment, support, or operations tools that may expose people to an AI system or AI-generated output.
For each row, capture five fields: owner, audience exposed, AI role, disclosure wording, and review date. Add one vendor question where relevant: what does the tool provide for AI interaction notices, generated-content marking, human review, and export of labels or metadata?
That register gives leadership a basic control surface. It also makes it easier to separate low-risk productivity use from public or customer-facing use where disclosure and review matter more.
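For teams that prefer a structured file over a shared spreadsheet, the register described above can start as something as small as a CSV generated from a list of rows. The field names and the example row below are illustrative assumptions, not wording prescribed by the draft guidelines; the point is only that each touchpoint carries an owner, a disclosure, and a review date.

```python
# Minimal sketch of an AI transparency register as a CSV.
# Field names and example content are assumptions for illustration.
import csv
import io

FIELDS = [
    "touchpoint", "owner", "audience_exposed", "ai_role",
    "disclosure_wording", "vendor_check", "review_date",
]

rows = [
    {
        "touchpoint": "Website chatbot",
        "owner": "Head of Customer Service",
        "audience_exposed": "Customers and prospects",
        "ai_role": "Direct interaction with a person",
        "disclosure_wording": "You are chatting with an AI assistant.",
        "vendor_check": "Does the product show an AI notice at first interaction?",
        "review_date": "2026-07-01",
    },
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
register_csv = buf.getvalue()
print(register_csv)
```

A file like this is easy to hand to counsel or a DPO for review, and easy to diff when a touchpoint, owner, or disclosure wording changes.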
My opinion
SMEs should not wait for perfect guidance before doing the boring work. The practical risk is not that a 40-person firm lacks a complete AI Act programme in May. The practical risk is that nobody can say where AI touches customers, employees, or public content.
I would start with visibility, not policy theatre. A named owner, a short register, a few standard labels, and a documented human-review rule will beat a large unread policy for most SMEs this quarter.
Five moves for this month
- Name one owner for AI transparency across marketing, customer service, operations, IT, and HR.
- Inventory AI touchpoints where a person interacts with AI or is exposed to AI-generated content.
- Update vendor questions for chatbots, content tools, support tools, and media generation tools.
- Draft plain disclosure wording for customer chat, synthetic media, and AI-assisted public content.
- Send the register to counsel, a DPO, or a qualified adviser before relying on exceptions or final wording.
Caveats
This is a draft-guidelines story, not a final legal memo. The Commission consultation closes on 3 June 2026. The draft guidelines could change. The high-risk timeline context is based on a provisional Council and Parliament agreement that still needs formal adoption. GDPR, employment, consumer-protection, sector regulation, and professional rules may also affect the right disclosure approach.
The immediate business value is still clear: make AI use visible, boring, and owned before the August Article 50 date.