EU AI Act Compliance for UK Businesses
(TDM-EUAI)
Audience:
UK-based AI developers, deployers, compliance officers, legal teams, and business leaders.
Goal:
Equip participants with the knowledge and tools to ensure their AI systems comply with the EU AI Act.
Module 1: Introduction to the EU AI Act
Objectives:
- Understand the purpose and scope of the EU AI Act.
- Identify when and how it applies to UK businesses.
- Explore the risk-based classification of AI systems (prohibited, high-risk, limited-risk, minimal-risk); see the illustrative sketch after this list.
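Illustrative sketch (Python): a first-pass triage of an internal AI inventory against the Act's four risk tiers. The tier names follow the Act; the example use-cases and the triage() helper are placeholder assumptions for discussion, not a substitute for legal classification under Articles 5 and 6 and Annex III.

from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's risk tiers."""
    PROHIBITED = "prohibited"        # Article 5 practices, banned outright
    HIGH_RISK = "high-risk"          # Article 6 / Annex III systems
    LIMITED_RISK = "limited-risk"    # transparency obligations apply
    MINIMAL_RISK = "minimal-risk"    # no mandatory obligations

# Placeholder triage rules keyed on declared intended use; a real
# classification must be made against the Act's text, not a lookup table.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"recruitment screening", "credit scoring", "biometric identification"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def triage(intended_use: str) -> RiskTier:
    """First-pass triage of a system by its declared intended use."""
    use = intended_use.strip().lower()
    if use in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if use in TRANSPARENCY_USES:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

print(triage("recruitment screening"))  # RiskTier.HIGH_RISK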
Module 2: Key Compliance Requirements
Objectives:
- Learn transparency, data quality, and human oversight requirements.
- Understand obligations for high-risk AI systems (technical documentation, risk management, conformity assessments).
- Explore record-keeping and post-market monitoring obligations; a minimal logging sketch follows this list.
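Illustrative sketch (Python): the kind of minimal event log a high-risk system might keep to support record-keeping and post-market monitoring. The field names and the append_log() helper are assumptions for discussion; the records a given system must actually keep depend on its risk classification and its documented monitoring plan.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class InferenceLogEntry:
    """One record in a high-risk system's event log (illustrative fields only)."""
    system_id: str
    model_version: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    input_reference: str = ""             # pointer to the input data, not the raw data
    output_summary: str = ""
    human_reviewer: Optional[str] = None  # who exercised oversight, if anyone
    override_applied: bool = False        # was the system's output overruled?

def append_log(entry: InferenceLogEntry, path: str = "audit_log.jsonl") -> None:
    """Append one JSON line per event; retention follows your documented policy."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

append_log(InferenceLogEntry(system_id="cv-screener-01", model_version="2.3.1",
                             output_summary="candidate shortlisted", human_reviewer="j.smith"))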
Module 3: Prohibited Practices and Ethical Use
Objectives:
- Identify prohibited AI practices (e.g., subliminal manipulation, social scoring).
- Discuss ethical boundaries and reputational risk.
- Understand implications for marketing, HR, and financial services AI tools.
Module 4: Transparency and Labeling Requirements
Objectives:
- Understand labeling rules for AI-generated content (see the sketch after this list).
- Learn how to disclose AI interaction and data provenance.
- Review documentation and communication templates.
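Illustrative sketch (Python): attaching a plain-language disclosure and a basic provenance note to AI-generated content. The LabelledContent structure and the wording of the notice are hypothetical; the disclosure a deployment actually needs depends on the content type and channel, and machine-readable marking (e.g. C2PA-style provenance credentials) may also be expected.

from dataclasses import dataclass

AI_DISCLOSURE = "This content was generated with the assistance of an AI system."

@dataclass
class LabelledContent:
    body: str               # the generated text itself
    generator: str          # e.g. model name and version
    provenance_note: str    # high-level note on data sources / prompt origin

    def render(self) -> str:
        # Human-readable disclosure appended to the content; a production
        # system might also embed machine-readable provenance metadata.
        return f"{self.body}\n\n[{AI_DISCLOSURE} Generated by: {self.generator}. {self.provenance_note}]"

draft = LabelledContent(body="Quarterly market summary ...",
                        generator="acme-llm v1.4",
                        provenance_note="Based on internal sales data.")
print(draft.render())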
Module 5: Human Oversight and Governance
Objectives:
- Define human oversight roles and accountability models.
- Design governance frameworks aligned with EU expectations.
- Map responsibilities across the AI lifecycle (provider/developer → deployer).
Module 6: Implementation & Ongoing Compliance
Objectives:
- Build a compliance roadmap and timeline.
- Integrate AI Act compliance with existing GDPR and ISO frameworks.
- Learn about auditing, documentation, and incident reporting.
Optional Module 7: Sector-Specific Breakouts
Objectives:
- Work through sector-tailored compliance examples and risk-assessment exercises.

