
EU AI Act

The EU AI Act Footnotes (3)

Have you noticed how legal definitions create odd boundaries?

The EU AI Act defines a “general-purpose AI model” as “an AI model […] that is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks.”

But here’s something interesting: sometimes the exact same task can be performed by models that fall on opposite sides of this definition. Take image segmentation. You could use a GPAI model like a large multimodal system to segment images. But you could also just download a specialized model from Huggingface that does nothing but image segmentation. Same task. Different legal categories.
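To make the contrast concrete, here is a rough sketch of the specialized-model route, assuming the Hugging Face transformers library; the checkpoint name and image path are illustrative placeholders, not recommendations.

```python
# A minimal sketch of the "specialized model" route: a single-purpose
# image-segmentation model pulled from the Hugging Face Hub. Assumes the
# `transformers` library is installed; the checkpoint name and the image
# path are purely illustrative.
from transformers import pipeline

# A narrow model that does nothing but semantic segmentation. It performs
# one distinct task, so on its own it does not meet the GPAI definition.
segmenter = pipeline(
    "image-segmentation",
    model="nvidia/segformer-b0-finetuned-ade-512-512",
)

for region in segmenter("street_scene.jpg"):
    print(region["label"])

# The same masks could also be produced by prompting a large multimodal
# GPAI model. Same task, same output, different legal category.
```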

This distinction matters when you’re cataloging your AI systems and classifying their risks for AI Act compliance. Two systems doing identical work might have completely different compliance requirements. The map is not the territory, but in this case, the map determines your legal obligations.

PS: Random observations on AI regulation. Not legal advice.

The EU AI Act Footnotes (2)

Article 4 of the EU AI Act seems deceptively simple. All it asks for is “AI literacy.” But there’s something subtle and important happening here.

Most organizations will check this box by running basic AI training. And that’s fine: track attendance, measure results, keep improving. But the real goal isn’t just teaching people what AI is. It’s teaching them how to think about AI.

The best mental model I’ve found is what I call “cautious confidence.” Use AI aggressively when it solves real problems. But maintain a permanent background mindset of skepticism. The moment you fully trust an AI system is the moment you’ve made a mistake.

This isn’t just theory. At AIAAIC we’ve documented over 1,800 AI incidents, many of which started with someone trusting AI too much. That’s also why we built our taxonomy of harms to be deeply practical, so people can identify failures before they happen.

The bureaucrats probably didn’t intend this, but Article 4 might be one of the Act’s most important contributions. Not because it mandates training, but because it forces organizations to build this way of thinking into their core culture.

PS: Random observations on AI regulation. Not legal advice.

The EU AI Act Footnotes (1)

Take a look at this official pyramid from the EC website 👇. It’s a neat visualization, but it might reinforce a common misconception about the EU AI Act’s “risk classification.”

The EU AI Act risk classification pyramid conveys a wrong idea of mutual exclusivity.

The pyramid layout suggests mutually exclusive levels, like you’re either on floor 3 or floor 2. But that’s not quite how it works.

While prohibited systems are indeed a separate category, the other designations (high-risk, limited risk, general-purpose AI) can apply simultaneously to the same system.

This isn’t a major issue, but it’s worth keeping in mind when setting up compliance processes. Your system might need to meet requirements from multiple categories, and your role might include both provider and deployer obligations.

The Act’s categories work more like tags than floors in a building. Small distinction, practical implications.

The EU AI Act risk classification Venn diagram: how the risk "classification" should actually look.
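If you want to bake that tag-like behaviour into a compliance inventory, a simple sketch might look like this. Everything here (class names, tag strings, the obligation lists) is my own simplified illustration, not terminology taken from the Act.

```python
# Sketch of an AI system inventory where the Act's designations behave like
# tags that can overlap, not like mutually exclusive floors. All names and
# the obligation lists are simplified illustrations.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    designations: set = field(default_factory=set)  # e.g. {"high-risk", "gpai"}
    roles: set = field(default_factory=set)         # e.g. {"provider", "deployer"}

# Illustrative, heavily simplified mapping from designation to obligations.
OBLIGATIONS = {
    "high-risk": ["risk management system", "conformity assessment"],
    "gpai": ["technical documentation", "training-data summary"],
    "transparency": ["tell users they are interacting with AI"],
}

# One system can carry several designations at the same time.
screening_tool = AISystemRecord(
    name="CV screening assistant built on a general-purpose model",
    designations={"high-risk", "gpai", "transparency"},
    roles={"provider", "deployer"},
)

# Compliance work is the union of obligations across every applicable tag,
# not whatever a single pyramid level would require.
applicable = sorted(
    {o for tag in screening_tool.designations for o in OBLIGATIONS.get(tag, [])}
)
print(applicable)
```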

PS: Random observations on AI regulation. Not legal advice.

The EU AI Act: A Catalyst for Responsible Innovation

The multiple benefits of the EU AI Act

When I first heard about the EU AI Act, I had the same reaction many in tech did: here comes another regulatory burden. But as I’ve dug deeper, I’ve come to see it differently. This isn’t just regulation; it’s an opportunity to reshape how we approach AI development.

Let’s face it: we’re in the midst of an AI hype cycle. Companies are making grand promises about AI capabilities, setting expectations sky-high. But as anyone who’s worked in tech knows, reality often lags behind the hype. We’re already seeing the fallout: users disappointed by AI that doesn’t live up to the marketing, trust eroding as quickly as it was built.

The AI Act might be just what we need to reset this dynamic. By pushing for transparency and accountability, it gives us a chance to rebuild trust on a more solid foundation. Instead of chasing the next big headline, we can focus on creating AI that genuinely delivers value and earns users’ confidence.

Critics worry that the Act will stifle innovation, particularly for smaller companies. But look closer, and you’ll see that the most stringent requirements are focused on high-risk systems. For many AI applications, the regulatory burden will be light. And even for high-risk systems, the costs of compliance should be a fraction of overall development expenses.