EU AI Act Compliance for SMBs — What You Actually Need to Do Before August 2026
A practical step-by-step guide for 20-200 person European companies. No jargon, no consultant-speak — just what to actually do.

The EU AI Act entered into force in August 2024 and applies in phases. The prohibitions have been live since February 2025; the high-risk system requirements hit in August 2026. If you're running a 20-200 person European company and you're using AI in any capacity, here's what you actually need to do.
The honest framing first
78% of European companies say they're concerned about AI Act compliance. Roughly 15% have done anything about it.
This is partly fear of complexity, partly consultant-generated confusion, and partly genuine ambiguity in the regulation itself. Let's cut through it.
The AI Act is a risk-based regulation. Most SMBs won't be affected by the high-risk provisions — because most SMBs don't build or deploy high-risk AI systems. But you do need to understand where you sit.
Step 1: Classify your AI use
The AI Act creates four risk tiers:
Unacceptable risk — Prohibited outright. Includes social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and subliminal manipulation. If you're a normal business, you're not doing this.
High risk — Requires conformity assessment, registration, technical documentation, human oversight. Includes: AI in CV screening, AI used in creditworthiness assessment, AI in safety-critical systems.
Limited risk — Transparency obligations. If you're using AI to interact with customers (chatbots), you must disclose it's AI. Relatively simple to implement.
Minimal risk — No requirements. Includes most AI features in standard business software, spam filters, recommendation systems.
For most SMBs: You're in "limited risk" or "minimal risk." Your compliance checklist is short.
Step 2: For "limited risk" — disclose AI interactions
If you have:
- A chatbot on your website
- AI-generated content that could be mistaken for human-made
- Synthetic media ("deepfake"-style audio, image, or video)
You must:
- Tell users they're talking to AI (not a human)
- Label AI-generated content as AI-generated
How to implement: Add a disclosure notice to your chatbot UI. One sentence: "This assistant is AI-powered." Done.
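If your chatbot is a custom widget, the disclosure can be as little as a few lines. A minimal sketch, assuming a plain DOM-style container; `mountChatbot`, the disclosure wording, and the CSS class name are illustrative, not from any real widget library:

```typescript
// Hypothetical disclosure text — adjust wording to your product's voice.
const AI_DISCLOSURE = "This assistant is AI-powered. You are not talking to a human.";

// Illustrative mount function: prepend the disclosure so users see it
// before the first exchange, satisfying the transparency obligation.
function mountChatbot(container: { innerHTML: string }): void {
  container.innerHTML =
    `<p class="ai-disclosure">${AI_DISCLOSURE}</p>` + container.innerHTML;
}
```

If you use a third-party chat widget instead, most expose a header or welcome-message setting where the same sentence belongs.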
Step 3: If you use AI in hiring or creditworthiness — read this
CV screening tools, AI-assisted hiring decisions, and AI-based creditworthiness assessments are classified as high-risk. If you're a SaaS company with B2B customers in these sectors, and your product includes AI that touches these decisions — you need to take this seriously.
Required actions:
- Technical documentation of the AI system
- Conformity assessment (self-assessment is permitted for most cases)
- Registration in the EU database (planned for August 2026)
- Human oversight mechanisms
- Risk management system
Realistic timeline: Start Q1 2026 if you haven't already. August 2026 is not far.
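The required actions above are easier to manage if you track them per system. A sketch of what such a compliance record might look like; the field names are our suggestion, not an official template (the AI Act's Annex IV defines the actual documentation contents):

```typescript
// Hypothetical shape for tracking one high-risk system's compliance status.
interface HighRiskComplianceRecord {
  systemName: string;
  intendedPurpose: string;
  conformityAssessment: "self-assessment" | "notified-body";
  humanOversight: string;          // who can intervene, and how
  riskManagementReviewed: string;  // ISO date of last risk review
  euDatabaseRegistered: boolean;
}

// Example entry for a CV-screening feature (illustrative values).
const cvScreening: HighRiskComplianceRecord = {
  systemName: "CV ranking module",
  intendedPurpose: "Rank applicant CVs against a job description",
  conformityAssessment: "self-assessment",
  humanOversight: "Recruiter reviews every ranked shortlist before contact",
  riskManagementReviewed: "2026-01-15",
  euDatabaseRegistered: false,
};
```

A spreadsheet with the same columns works just as well; the point is that each high-risk system has one record answering each required action.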
Step 4: Vendor diligence
You're likely using AI provided by third parties (OpenAI, Anthropic, Microsoft, Google). The AI Act has obligations for both providers and deployers.
Your obligations as a deployer:
- Use AI systems in accordance with the provider's intended purpose
- Implement appropriate human oversight
- Document your use case
What to ask your AI vendors:
- Are you compliant with the EU AI Act?
- Do you have a transparency notice I can reference?
- What human oversight mechanisms do you recommend for my use case?
Step 5: Appoint someone responsible
The AI Act doesn't mandate a dedicated AI officer, but its obligations assume someone owns AI governance. This doesn't need to be a full-time role in a 20-person company. But it should be a named person with a defined scope.
Minimum viable: Your CTO or Head of Product owns AI Act compliance. They review new AI use cases before deployment, maintain a simple log of AI systems in use, and have read the relevant provisions of the regulation.
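The "simple log of AI systems in use" can literally be a list of records. A minimal sketch; the field names and example entries are our suggestion, nothing here is mandated by the regulation:

```typescript
// The four tiers from Step 1.
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

// One entry per AI system or AI-powered tool in use.
interface AiSystemEntry {
  name: string;
  vendor: string;
  useCase: string;
  riskTier: RiskTier;
  owner: string;       // the named responsible person
  reviewedOn: string;  // ISO date of last review
}

// Illustrative register for a small company.
const aiRegister: AiSystemEntry[] = [
  { name: "Website chatbot", vendor: "OpenAI", useCase: "Customer support FAQ",
    riskTier: "limited", owner: "Head of Product", reviewedOn: "2026-02-01" },
  { name: "Spam filter", vendor: "Google", useCase: "Inbound email filtering",
    riskTier: "minimal", owner: "CTO", reviewedOn: "2026-02-01" },
];

// Anything high-risk (or prohibited) should surface immediately for action.
const needsAction = aiRegister.filter(
  (e) => e.riskTier === "high" || e.riskTier === "unacceptable"
);
```

Reviewing this list whenever a new tool is adopted is most of what "AI governance" means at this company size.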
The short version
| Your situation | What to do |
|---|---|
| Customer-facing chatbot | Add "AI-powered" disclosure |
| AI features in B2B SaaS | Assess risk tier; if high-risk, begin documentation |
| Using AI in HR/hiring | Take high-risk compliance seriously, start now |
| Internal AI tools only | Document use, assign responsibility, light touch |
| No AI use at all | Assess vendors — they may be using AI |
One honest take
Most SMBs are more afraid of the AI Act than they need to be. The regulation is primarily targeted at AI developers and high-risk system deployers — not at companies using AI tools to run their business.
The real risk isn't compliance failure for most SMBs. It's being unprepared for clients and procurement teams who will increasingly ask "are you AI Act compliant?" as a procurement condition. Have an answer.
Need help mapping your AI use cases to the regulation? Let's talk.