The way we develop, deploy, and interact with AI is changing, fast. But with innovation comes regulation. The EU AI Act, the world’s first comprehensive AI legislation, is now in place and will soon shape how businesses manage risk, compliance, and trust in their AI systems.
Whether you are building your own models or using third-party AI tools, this regulation matters, and preparation starts now.
What is the EU AI Act?
Officially adopted in July 2024, the EU AI Act introduces a structured, risk-based approach to AI governance. The goal? Ensure that AI is safe, transparent, and respectful of fundamental rights.
AI systems are classified into four risk categories:
- Unacceptable risk – Banned entirely (e.g. social scoring, real-time biometric surveillance).
- High risk – Heavily regulated, covering use cases like recruitment, healthcare, law enforcement, and infrastructure.
- Limited risk – Subject to transparency requirements (e.g. chatbots, emotion recognition tools).
- Minimal risk – Low-concern AI (e.g. spam filters), with no specific legal obligations.
There are also added responsibilities for General-Purpose AI (GPAI) systems and foundation models, including transparency around training data and safeguards for highly capable models.
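To make the tiers easier to work with internally, here is a minimal, purely illustrative sketch in Python of how an organisation might label the systems in its AI register. The tier names mirror the Act’s categories, but the `RiskTier` enum and the example systems are hypothetical and are not drawn from the legislation or from any particular tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels mirroring the EU AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific legal obligations

# Hypothetical entries in an internal AI register
ai_register = {
    "cv-screening-tool": RiskTier.HIGH,           # recruitment use case
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}
```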
Does this affect you?
If your AI systems operate within the EU or serve EU users, the EU AI Act applies to you – even if your company is located outside the EU.
This applies to:
- AI developers and providers – companies that create AI systems and place them on the EU market.
- AI users/deployers – companies or organisations that use AI tools in their operations within the EU.
- AI importers and distributors – businesses that import or resell AI tools developed outside the EU.
Why this matters to your business
Beyond compliance, the EU AI Act sets the tone for global regulation. Failing to prepare now could lead to:
- Significant penalties – up to 7% of global annual turnover for serious breaches.
- Loss of trust – from clients, investors, and the public.
- Competitive risk – as customers and partners increasingly expect transparent, ethical AI.
Early compliance unlocks opportunity. By taking action now, companies can build a future-proof AI governance framework, align with other standards such as ISO 42001, and demonstrate leadership in responsible AI.
Timeline: when is this happening?
- Feb 2025: Ban on unacceptable-risk AI systems takes effect.
- Aug 2025: GPAI transparency and risk requirements begin.
- Aug 2026: High-risk system obligations enforced.
- Aug 2027: Full compliance required, including CE marking.
How your organisation can get ahead
Map your AI landscape
Audit where and how AI is used across your organisation, including externally sourced tools. Understand their purpose, data flows, and outcomes.
Assess risk levels
Classify each system using the EU AI Act’s risk tiers. If a system is high-risk, identify the gaps in documentation, oversight, and monitoring.
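As a purely illustrative sketch of what the mapping and risk-assessment steps above might look like in practice, the Python below models a single inventory record and a simple gap check for high-risk systems. Every field name and the `compliance_gaps` helper are assumptions made for illustration, not requirements taken from the Act or features of any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system (illustrative only)."""
    name: str
    purpose: str
    provider: str                  # internal team or third-party vendor
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    data_flows: list[str] = field(default_factory=list)
    has_technical_docs: bool = False
    has_human_oversight: bool = False
    has_monitoring: bool = False

def compliance_gaps(record: AISystemRecord) -> list[str]:
    """List missing controls for a high-risk system; empty for other tiers."""
    if record.risk_tier != "high":
        return []
    gaps = []
    if not record.has_technical_docs:
        gaps.append("technical documentation")
    if not record.has_human_oversight:
        gaps.append("human oversight")
    if not record.has_monitoring:
        gaps.append("ongoing monitoring")
    return gaps

# Example: a hypothetical recruitment screening tool that lacks monitoring
tool = AISystemRecord(
    name="cv-screening-tool",
    purpose="shortlist job applicants",
    provider="third-party vendor",
    risk_tier="high",
    data_flows=["applicant CVs", "HR system"],
    has_technical_docs=True,
    has_human_oversight=True,
)
print(compliance_gaps(tool))  # ['ongoing monitoring']
```

Even a lightweight record like this makes it easier to see which high-risk systems need attention ahead of the 2026 enforcement date.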
Build your AI governance framework
Establish clear ownership. Define roles, responsibilities, and escalation paths for AI-related decisions. Integrate human oversight, ethical principles, and ongoing model evaluation.
Prepare technical documentation
Especially for high-risk systems, you will need comprehensive records, from training datasets to performance metrics, security controls, and user guidance.
Upskill teams
AI literacy matters. Equip teams with the knowledge to identify risks, follow procedures, and stay compliant.
How we can help
The EU AI Act introduces complexity, but compliance does not have to be chaotic.
If you are already using Hicomply for ISO 27001, SOC 2, or GDPR frameworks, you know the value of structured, scalable compliance. That same logic applies to AI governance.
Hicomply can support you to:
- Map and manage AI-specific risks.
- Centralise documentation and audit logs.
- Automate compliance processes and evidence collection.
- Align AI governance with ISO 42001 and other emerging standards.
The EU AI Act is more than just a regulatory milestone; it is a catalyst for better, more transparent AI. Organisations that act now will be better placed to adapt, differentiate, and thrive in a world where ethical AI is not a nice-to-have but a legal necessity.
Ready to simplify your AI compliance journey? Book a demo with our team today to see how Hicomply can help.