
The AI Act’s specificity: why ML experts are mandatory, not optional

Most organizations preparing for AI Act compliance are missing something fundamental: they don’t have anyone who actually understands machine learning on their governance teams. This isn’t an oversight. It’s a disaster waiting to happen.

GDPR Was Complex. The AI Act Is Different.

GDPR required understanding data flows, implementing consent mechanisms, and ensuring proper security. It was genuinely complex, and organizations hired data protection officers and privacy engineers to handle it.

But GDPR’s complexity was manageable because it dealt with processes: where data goes, who accesses it, how long you keep it. These are things professionals can map and control.

The AI Act requires understanding the algorithms themselves. When it asks about “robustness against adversarial examples,” it’s not asking about process; it’s asking about algorithmic properties of your model. When it requires “appropriate accuracy metrics,” it’s not asking for a single percentage; it’s asking for stratified performance analysis across different populations, confidence intervals, and calibration metrics.

You can’t Google your way through these machine learning concepts. You need people who understand the math.

What’s Actually Happening in Organizations

Without ML Experts:

  • “We test our models” = We check if they run without crashing
  • “We document accuracy” = We write down whatever number the vendor gave us
  • “We ensure robustness” = We have error handling
  • “We address bias” = We checked, the numbers look similar enough

With ML Experts:

  • “We test our models” = Systematic evaluation against adversarial attacks, OOD detection, distribution shift
  • “We document accuracy” = Stratified metrics, confidence intervals, calibration analysis, performance degradation over time
  • “We ensure robustness” = Certified defenses, adversarial training, formal verification where applicable
  • “We address bias” = Disparate impact analysis, fairness metrics, causal assessment (see the sketch below)

The first approach will fail the first serious audit. The second actually meets the Act’s requirements.
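
To make that concrete, here’s a minimal sketch of what the “disparate impact analysis” item from the list above can look like in code. The column names, the synthetic data, and the 0.8 threshold (the conventional “four-fifths rule”) are illustrative assumptions, not numbers taken from the AI Act itself.

```python
# Minimal sketch of a disparate impact check (four-fifths rule).
# Assumes binary model decisions and a single protected attribute;
# names, synthetic data, and the 0.8 threshold are illustrative
# conventions, not requirements from the AI Act.
import numpy as np

def selection_rates(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Positive-decision rate for each value of the protected attribute."""
    return {g: y_pred[group == g].mean() for g in np.unique(group)}

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(y_pred, group)
    return min(rates.values()) / max(rates.values())

# Example with synthetic predictions for two groups.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)     # model decisions (0/1)
group = rng.choice(["A", "B"], size=1000)  # protected attribute
ratio = disparate_impact_ratio(y_pred, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths rule
    print("Potential adverse impact - flag for fairness/causal review")
```

A check like this is the starting point, not the finish line: it tells you where to look, and the causal assessment and metric trade-offs still require someone who understands the model.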

Article 15: A Reality Check

Article 15 requires high-risk AI systems to achieve “appropriate levels of accuracy, robustness, and cybersecurity.” Here’s what that actually means:

Accuracy isn’t just a number. The Act requires you to declare your accuracy metrics. But which metrics? On what data? With what confidence? An ML expert knows you need stratified metrics across populations, proper test set construction, calibration metrics, and temporal validation. Without this expertise, you’re reporting meaningless numbers.
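
As a rough illustration (not a prescribed method), here’s what reporting accuracy properly can look like: per-subgroup accuracy with bootstrap confidence intervals, plus a simple expected calibration error. The subgroup labels, bin count, and synthetic data are stand-ins for your own held-out test set.

```python
# Sketch: stratified accuracy with bootstrap confidence intervals plus
# a basic expected calibration error (ECE). Subgroup labels, bin count,
# and synthetic data are illustrative assumptions.
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    """Accuracy with a percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        accs.append((y_true[idx] == y_pred[idx]).mean())
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return (y_true == y_pred).mean(), lo, hi

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Weighted gap between mean predicted probability and observed
    positive rate, over equal-width probability bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for low, high in zip(bins[:-1], bins[1:]):
        mask = (y_prob > low) & (y_prob <= high)
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - y_prob[mask].mean())
    return ece

# Report accuracy per subgroup instead of one global number.
# y_true, y_prob, and subgroup would come from your real test set.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_prob = rng.random(500)
y_pred = (y_prob > 0.5).astype(int)
subgroup = rng.choice(["18-40", "40-65", "65+"], 500)

for g in np.unique(subgroup):
    m = subgroup == g
    acc, lo, hi = bootstrap_accuracy_ci(y_true[m], y_pred[m])
    print(f"{g}: accuracy={acc:.2f} (95% CI {lo:.2f} to {hi:.2f})")
print(f"ECE: {expected_calibration_error(y_true, y_prob):.3f}")
```

The point is the shape of the report: a number per population with an uncertainty estimate and a calibration figure, not a single headline accuracy.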

Robustness has a precise technical meaning. It’s not about whether your system handles errors gracefully. It’s about mathematical guarantees against adversarial perturbations, certified defense mechanisms, and formal verification methods. If your team doesn’t know what PGD attacks or AutoAttack benchmarks are, you’re not testing robustness… you’re guessing.
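
For readers who want to see the shape of a real robustness test, here’s a minimal PGD probe in PyTorch. It assumes a classifier `model` with inputs scaled to [0, 1] and an L-infinity budget chosen purely for illustration; a serious Article 15 evaluation would add stronger suites such as AutoAttack on top of this.

```python
# Sketch of a PGD (projected gradient descent) robustness probe.
# `model`, eps, alpha, and steps are illustrative assumptions; a real
# evaluation would also run stronger suites (e.g. AutoAttack).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return L-infinity-bounded adversarial examples for a batch (x in [0, 1])."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start
    x_adv = x_adv.clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()              # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)     # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cpu"):
    """Accuracy under the PGD attack above -- the number worth documenting."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

A number like this, reported alongside clean accuracy, is the kind of evidence robustness documentation actually needs; error handling around the inference API is not.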

Cybersecurity for AI is different. Traditional IT security won’t cut it. AI systems face unique attacks: model inversion, membership inference, backdoor attacks, data poisoning. These aren’t covered by your standard security playbook. They require ML-specific defenses that only ML experts understand.
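
To show how different this is, here’s a toy membership inference probe based on per-example loss. It assumes you can compute losses on known training examples and on held-out ones; real assessments use stronger attacks (shadow models, likelihood-ratio attacks), but even this crude version exposes overfit models that leak information about their training data.

```python
# Toy membership inference probe using a per-example loss threshold.
# Assumes access to losses on known members (training data) and
# non-members (held-out data); real audits use stronger attacks
# (shadow models, likelihood-ratio attacks). This only shows the idea.
import numpy as np

def membership_advantage(train_losses: np.ndarray, test_losses: np.ndarray) -> float:
    """Best threshold-attack accuracy minus 0.5 (0 means no measurable leakage)."""
    losses = np.concatenate([train_losses, test_losses])
    is_member = np.concatenate([np.ones_like(train_losses),
                                np.zeros_like(test_losses)])
    best = 0.5
    for t in np.quantile(losses, np.linspace(0.01, 0.99, 99)):
        guess = (losses <= t).astype(float)   # low loss -> guess "member"
        best = max(best, (guess == is_member).mean())
    return best - 0.5

# Synthetic illustration: an overfit model has much lower training loss.
rng = np.random.default_rng(2)
train_losses = rng.exponential(0.1, 1000)   # losses on training examples
test_losses = rng.exponential(0.5, 1000)    # losses on unseen examples
print(f"Membership advantage: {membership_advantage(train_losses, test_losses):.2f}")
```

No firewall rule or penetration test will surface a gap like this; it lives in the model’s behavior, which is exactly why ML expertise belongs on the security side of compliance.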

Without ML expertise, you literally cannot comply with Article 15. It’s not about doing it poorly; it’s about being unable to do it at all.

Who You Actually Need

Stop looking for “AI consultants” or people who took an online course. You need actual ML practitioners:

Minimum viable expertise:

  • Understands how neural networks actually work (not just conceptually)
  • Has implemented and evaluated ML models in practice
  • Knows what adversarial examples are and why they matter
  • Can explain different fairness metrics and their trade-offs
  • Has hands-on experience with model evaluation beyond accuracy

Where to find them:

  • ML engineers, researchers, or senior data scientists with deep learning experience
  • People who’ve worked on model robustness, fairness, or safety
  • Practitioners who can code and have deployed models in production
  • Expect to pay for them: their skills are in demand, and they command salaries to match

How many you need:

  • Minimum: 1 senior ML practitioner who owns technical compliance
  • Better: 2-3 ML engineers/scientists for implementation
  • Ideal (for large organizations): A full team, including a fairness specialist and an AI security expert

Importantly, these aren’t advisors. They’re permanent team members with veto power over technical compliance decisions.

Why Proper Compliance Creates Better AI

When you bring ML expertise into governance, something interesting happens: compliance becomes a reason for excellence.

Knowledge exchange across teams: ML experts don’t just check boxes; they teach through their documentation. Your legal team starts understanding what robustness actually means. Your product teams learn why certain metrics matter. Your security team discovers AI-specific vulnerabilities it never knew existed. The entire organization levels up.

Homogenized best practices: With ML experts setting technical standards, you get consistency across departments. Everyone uses the same fairness metrics (or at least the same process for choosing them), the same robustness benchmarks, the same evaluation protocols. No more ad-hoc testing or inconsistent standards.

Better AI, not just compliant AI: Teams that truly understand accuracy metrics build more reliable models. Teams that implement real robustness testing catch problems before production. Teams that properly measure fairness build systems people trust. The AI Act’s requirements, properly understood and implemented, are actually good engineering practices.

Competitive advantage: While others scramble to retrofit compliance, organizations with embedded ML expertise build it in from the start. They move faster, build better, and avoid the technical debt of fixing things later.

In a nutshell, proper compliance isn’t a burden, it’s a framework for building AI that actually works.

What to Do Now

Find skilled ML practitioners. Look for ML engineers, researchers, or senior data scientists who’ve actually built and deployed models. They don’t need to be AI safety specialists (whatever that means), but they need to understand how models can fail.

Give them real authority. ML practitioners need to be equal partners with legal in governance decisions. Technical requirements can’t be overruled by business preferences.

Start with assessment, not documentation. Have your ML practitioners evaluate your current AI systems against the Act’s actual requirements. You’ll likely find gaps you didn’t know existed.

Build technical compliance into development. Don’t treat this as a post-hoc exercise. Integrate robustness testing, fairness evaluation, and security assessment into your ML pipeline.
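
One way to wire this in, sketched under the assumption that you run a pytest-style CI pipeline: a compliance gate that fails the build when robustness, subgroup accuracy gaps, disparate impact, or calibration cross thresholds you’ve committed to. The helper `evaluate_candidate_model` and every threshold here are placeholders for your own evaluation harness and risk policy.

```python
# Sketch of a compliance gate in a CI pipeline (pytest style).
# `evaluate_candidate_model` and all thresholds are placeholders:
# they stand in for your own evaluation harness and risk policy.
import pytest

def evaluate_candidate_model():
    """Placeholder: run your evaluation suite and return the metrics
    you have committed to in your technical documentation."""
    return {
        "robust_accuracy": 0.72,        # e.g. accuracy under PGD/AutoAttack
        "worst_group_accuracy": 0.88,   # lowest stratified accuracy
        "overall_accuracy": 0.93,
        "disparate_impact": 0.85,       # min/max group selection rate
        "ece": 0.04,                    # expected calibration error
    }

@pytest.fixture(scope="module")
def metrics():
    return evaluate_candidate_model()

def test_robustness_floor(metrics):
    assert metrics["robust_accuracy"] >= 0.60

def test_subgroup_gap(metrics):
    assert metrics["overall_accuracy"] - metrics["worst_group_accuracy"] <= 0.10

def test_disparate_impact(metrics):
    assert metrics["disparate_impact"] >= 0.80

def test_calibration(metrics):
    assert metrics["ece"] <= 0.05
```

The particular numbers don’t matter; what matters is that the thresholds live in version control, are set by your ML practitioners, and block a release automatically instead of relying on someone remembering to check.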

Create knowledge-sharing structures. Set up regular sessions where ML practitioners explain technical concepts to legal, product, and business teams. Build a common language.

Be realistic about timelines. Building proper technical compliance takes months, not weeks. Better start early.

The Bottom Line

  • The AI Act is not GDPR 2.0. It’s a technical regulation that requires technical expertise.
  • You cannot comply without ML experts. This isn’t a best practice or a recommendation. It’s a sine qua non… without this, nothing else works.
  • Organizations without ML experts on their governance teams aren’t preparing for compliance. They’re preparing for failure. The question isn’t whether you need ML experts. It’s whether you’ll hire them before your first audit or after your first fine.
