
The EU AI Act Footnotes (2)

Article 4 of the EU AI Act seems deceptively simple. All it asks for is “AI literacy.” But there’s something subtle and important happening here.

Most organizations will check this box by running basic AI training. And that’s fine: track attendance, measure results, keep improving. But the real goal isn’t just teaching people what AI is. It’s teaching them how to think about AI.

The best mental model I’ve found is what I call “cautious confidence.” Use AI aggressively when it solves real problems. But maintain a permanent background mindset of skepticism. The moment you fully trust an AI system is the moment you’ve made a mistake.

This isn’t just theory. At AIAAIC we’ve documented over 1,800 AI incidents, many of which started with someone trusting AI too much. That’s also why we built our taxonomy of harms to be deeply practical, so people can spot failure modes before they happen.

The bureaucrats probably didn’t intend this, but Article 4 might be one of the Act’s most important contributions. Not because it mandates training, but because it forces organizations to build this way of thinking into their core culture.

PS: Random observations on AI regulation. Not legal advice.

The EU AI Act Footnotes (1)

Take a look at this official pyramid from the EC website 👇. It’s a neat visualization, but it might reinforce a common misconception about the EU AI Act’s “risk classification.”

The EU AI Act risk classification pyramid conveys a wrong idea of mutual exclusivity.

The pyramid layout suggests mutually exclusive levels, like you’re either on floor 3 or floor 2. But that’s not quite how it works.

While prohibited systems are indeed a separate category, the other designations (high-risk, limited risk, general-purpose AI) can apply simultaneously to the same system.

This isn’t a major issue, but it’s worth keeping in mind when setting up compliance processes. Your system might need to meet requirements from multiple categories, and your role might include both provider and deployer obligations.

The Act’s categories work more like tags than floors in a building. Small distinction, practical implications.
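To make the “tags, not floors” point concrete, here’s a minimal sketch (hypothetical names, nothing taken from the Act’s text) of how a compliance checklist might model designations as a set of flags rather than a single level:

```python
from enum import Flag, auto

# Hypothetical modelling of the Act's designations as tags (flags),
# not mutually exclusive floors of a pyramid.
class Designation(Flag):
    PROHIBITED = auto()
    HIGH_RISK = auto()
    LIMITED_RISK = auto()      # transparency obligations
    GENERAL_PURPOSE = auto()   # general-purpose AI obligations

# One system can carry several tags at once, e.g. a general-purpose
# model deployed in a high-risk context that also interacts with users.
system = Designation.HIGH_RISK | Designation.LIMITED_RISK | Designation.GENERAL_PURPOSE

if Designation.HIGH_RISK in system:
    print("High-risk requirements apply (risk management, logging, oversight, ...)")
if Designation.LIMITED_RISK in system:
    print("Transparency obligations apply")
if Designation.GENERAL_PURPOSE in system:
    print("General-purpose AI obligations apply")
```

The point is simply that a single system may end up on several compliance checklists at once.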

How the EU AI Act risk “classification” should actually look: a Venn diagram of overlapping categories.

PS: Random observations on AI regulation. Not legal advice.

The EU AI Act: A Catalyst for Responsible Innovation

The multiple benefits of the EU AI Act

When I first heard about the EU AI Act, I had the same reaction many in tech did: here comes another regulatory burden. But as I’ve dug deeper, I’ve come to see it differently. This isn’t just regulation; it’s an opportunity to reshape how we approach AI development.

Let’s face it: we’re in the midst of an AI hype cycle. Companies are making grand promises about AI capabilities, setting expectations sky-high. But as anyone who’s worked in tech knows, reality often lags behind the hype. We’re already seeing the fallout: users disappointed by AI that doesn’t live up to the marketing, trust eroding as quickly as it was built.

The AI Act might be just what we need to reset this dynamic. By pushing for transparency and accountability, it gives us a chance to rebuild trust on a more solid foundation. Instead of chasing the next big headline, we can focus on creating AI that genuinely delivers value and earns users’ confidence.

Critics worry that the Act will stifle innovation, particularly for smaller companies. But look closer, and you’ll see that the most stringent requirements are focused on high-risk systems. For many AI applications, the regulatory burden will be light. And even for high-risk systems, the costs of compliance should be a fraction of overall development expenses.

Taming Dragons

Image of dragons, generated using Draw Things.

If you’ve worked with Large Language Models (LLMs), you’ve probably experienced a peculiar kind of cognitive dissonance. On one hand, it feels like magic. The possibilities seem endless. You can generate human-like text, answer questions, even write code. It’s as if we’ve unlocked a new superpower.

But on the other hand, it’s not fully predictable, let alone reliable. It’s as if we’ve discovered a new winged creature that could give everyone the gift of flight, except that creature happens to be a dragon. A dragon that occasionally, and unpredictably, breathes fire, destroying things and breaking the trust you so badly want to place in it.

This makes working with LLMs a highly non-trivial engineering challenge. It’s not just about implementation; it’s about taming a powerful but volatile force. So how do we do it? Here are some thoughts:
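As one illustration of what that taming can look like in practice, here’s a minimal sketch of a common guardrail (the `call_llm` function is a hypothetical placeholder, not a real client API): ask for structured output, validate it, retry, and escalate to a human rather than trust a malformed answer.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client you use; returns the raw model text."""
    raise NotImplementedError

def extract_invoice_total(document: str, max_attempts: int = 3) -> float:
    """Ask for structured output, validate it, and retry instead of trusting it blindly."""
    prompt = (
        'Return ONLY a JSON object of the form {"total": <number>} '
        f"for this invoice:\n{document}"
    )
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            total = float(json.loads(raw)["total"])
        except (json.JSONDecodeError, KeyError, TypeError, ValueError):
            continue                 # malformed output: try again
        if total >= 0:               # domain check: a total can't be negative
            return total
    raise RuntimeError("No valid answer after retries; escalate to a human")
```

The specific schema doesn’t matter; what matters is that every answer from the dragon gets checked before anything downstream depends on it.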

The Great Reversal

Comparing small AI models on performance vs cost. Credits: Artificial Analysis.

Something interesting is happening in AI. After years of “bigger is better,” we’re seeing a shift towards smaller, more efficient models. Mistral just released NeMo, OpenAI unveiled GPT-4o mini, and Google’s in on it too with Gemini Flash. What’s going on?

It’s simple: we’ve hit diminishing returns with giant models.

Training massive AI models is expensive. Really expensive. We’re talking millions of dollars and enough energy to power a small town. For a while, this seemed worth it. Bigger models meant better performance, and tech giants were happy to foot the bill in the AI arms race.

But here’s the thing: throwing more parameters at the problem only gets you so far. It turns out that data quality matters way more than sheer model size. And high-quality data? That’s getting scarce.

The paradox of AI

I remember back in 1999 when I got to try speech recognition software for the first time. It required a training phase where I had to read paragraph after paragraph of text out loud so it could “pattern match” my voice. I was excited enough that I did the chore and waited for the moment of truth. I was about to experience magic; I was about to use “AI” for the first time.

Spoiler alert: it didn’t last more than 10 minutes. I had a couple of “wow” moments, but then I ditched it and never looked at it again.

Why? 

Because it was good as a novelty, but not good enough to become part of my life.