Security & Compliance

EU AI Act — How We Handle It

Last updated: April 2026

The EU AI Act (Regulation 2024/1689) is the first comprehensive European regulation for artificial intelligence. As a company that advises on AI implementation and uses AI tools itself, we take our own compliance seriously.

Our Approach

We apply internally the same approach we recommend to our clients:

  1. Inventory all AI systems we use or deploy
  2. Classify each system by risk level (minimal, limited, high, unacceptable)
  3. Document usage, purpose and decisions involved
  4. Evaluate periodically for bias, reliability and drift
  5. Maintain human oversight for all systems that affect people or decisions
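The inventory and classification steps above can be sketched as a simple data model. This is an illustrative example only, not our actual tooling; all names and fields are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

# Risk levels as defined by the EU AI Act (Regulation 2024/1689)
class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_level: RiskLevel
    human_oversight: bool  # is every output reviewed by a person?

# Step 1 and 2: inventory and classify (entries are illustrative)
inventory = [
    AISystem("Claude (Anthropic)", "Text generation, code analysis",
             RiskLevel.LIMITED, human_oversight=True),
    AISystem("Langfuse", "LLM observability and logging",
             RiskLevel.MINIMAL, human_oversight=False),
]

# Step 5: flag any limited- or high-risk system lacking human oversight
gaps = [s.name for s in inventory
        if s.risk_level in (RiskLevel.LIMITED, RiskLevel.HIGH)
        and not s.human_oversight]
```

A periodic review (step 4) would then re-run this check against the current inventory and escalate any entries in `gaps`.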

AI Systems We Use

| System | Purpose | Risk level | Human oversight |
|---|---|---|---|
| Claude (Anthropic) | Text generation, code analysis, advisory support | Limited | Always; output reviewed before use |
| GPT-4o (OpenAI) | Text generation, summaries | Limited | Always |
| Gemini (Google) | Additional models via LiteLLM | Limited | Always |
| Langfuse | Observability and logging of LLM usage | Minimal | N/A |
| Semgrep | Static code analysis (SAST) | Minimal | Findings reviewed manually |

None of the systems we use fall under the high-risk or unacceptable-risk categories of the EU AI Act.

What We Do Not Do

  • No automated decision-making affecting individuals without human intervention
  • No use of biometric identification systems
  • No AI systems for social scoring or manipulation
  • No deployment of prohibited AI practices (Art. 5 AI Act)

Transparency With Clients

When we use AI tools while delivering an engagement:

  • We inform the client in the engagement confirmation
  • We do not process client data through AI models without explicit consent
  • We ensure any outputs are reviewed by a human expert

AI Act Readiness for Your Organisation

Does your organisation need help classifying AI systems or building a compliance roadmap? We help businesses navigate the EU AI Act with practical steps and documentation.

More about our AI Act Readiness service