The Rules Have Arrived: Europe's AI Police Are Now on Patrol
Let's be honest—we've all been waiting for the other shoe to drop. For years, artificial intelligence has been the wild west of technology, sprinting ahead while regulators fumbled with maps written in disappearing ink. That era ended on February 2nd. Quietly, without fanfare, the European Union flipped the switch. The AI Act isn't just legislation anymore; it's a reality with teeth.
I remember talking to a Berlin-based startup founder last autumn. "We're redesigning everything," she told me over surprisingly bad coffee. "The compliance budget now equals our R&D budget." At the time, it sounded like corporate paranoia. Today, it sounds like survival instinct. The EU AI Act enforcement phase has begun, and the landscape for high-risk AI systems—from the algorithms that might deny your mortgage to those scanning your face on the street—has changed forever.
What Actually Happened on February 2nd?
This wasn't a starting pistol; it was the removal of training wheels. The partial application of the AI Act began last year, giving everyone a grace period to figure things out. That grace period is over. As of last month, the obligations for high-risk AI are fully and legally binding on any company operating within the EU's massive single market.
What falls into this category? Think of it as the technologies that touch the raw nerves of daily life:
- Biometric identification and categorization (that's your face, your gait, your voice)
- AI managing critical infrastructure (energy grids, water supplies, transportation networks)
- Employment and worker management algorithms (hiring, firing, promotion decisions)
- Systems used in education or vocational training
- AI for access to essential private and public services (including that all-important credit scoring)
Gryphon AI's Regulatory Report for March 2026, published just last week, put numbers to the machinery. Twenty-seven National Competent Authorities (NCAs) have been designated across member states. These are the new sheriffs in town. Germany's Bundesnetzagentur and France's CNIL are leading the pack, already staffing up with specialists who speak the dual languages of law and machine learning.
The stakes for non-compliance are, frankly, terrifying for boardrooms. We're talking fines of up to €35 million or 7% of a company's global annual turnover, whichever is higher. That top tier is reserved for the most egregious violations, the prohibited 'unacceptable risk' practices like social scoring or subliminal manipulation; most other breaches, including of the high-risk obligations, cap out lower, at €15 million or 3%. Either way, it's a calculation designed to make even the largest tech conglomerates pause and recalculate.
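For a sense of how that "whichever is higher" clause scales with company size, here is a minimal sketch. The `max_fine_eur` helper and the example turnover figure are hypothetical; only the €35 million and 7% parameters come from the tier cited above.

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Upper bound of the top-tier AI Act fine: the higher of a fixed
    cap or a share of global annual turnover (illustrative only)."""
    return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

# A company with EUR 2 billion in global turnover: 7% is EUR 140 million,
# so the percentage, not the EUR 35 million floor, sets the ceiling.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # -> 140,000,000
```

The fixed cap only bites for smaller firms; past roughly €500 million in turnover, the percentage takes over, which is exactly why the figure lands on boardroom agendas.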
The First Case: Algorithms on Trial
Theory became practice on March 18th. Italy's AGCM (Autorità Garante della Concorrenza e del Mercato) opened the first formal investigation under the AI Act. The target? A major Italian bank's AI-powered mortgage rejection algorithm.
The complainants didn't just claim the algorithm was unfair; they alleged it violated the Act's Article 13 transparency requirements. In plain English: the system was a black box. Applicants couldn't understand why they were rejected, nor could they challenge the logic or correct potentially flawed data. This is exactly the scenario the AI Act was built to prevent: life-altering decisions made by inscrutable code.
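To make the black-box complaint concrete, here is a minimal, purely hypothetical sketch of the kind of record a lender could attach to an automated rejection. The feature names, scores, and the `decide_with_explanation` helper are invented for illustration; this is neither the bank's system nor a format the Act prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    score: float
    # (feature name, contribution) pairs that pushed the score down
    top_negative_factors: list[tuple[str, float]] = field(default_factory=list)

def decide_with_explanation(contributions: dict[str, float],
                            score: float,
                            threshold: float = 0.5,
                            top_n: int = 3) -> Decision:
    """Attach the largest negative contributions to a rejection so an
    applicant can see which inputs drove it and contest or correct them."""
    if score >= threshold:
        return Decision(approved=True, score=score)
    negatives = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda item: item[1],
    )[:top_n]
    return Decision(approved=False, score=score, top_negative_factors=negatives)

decision = decide_with_explanation(
    {"debt_to_income_ratio": -0.31, "employment_length_years": -0.12,
     "credit_history_length": 0.05},
    score=0.41,
)
print(decision.top_negative_factors)
# [('debt_to_income_ratio', -0.31), ('employment_length_years', -0.12)]
```

The point is not the scoring logic but the artifact it leaves behind: something an applicant, or a regulator, can actually read and dispute.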
This case will be the precedent everyone watches. How the AGCM investigates, what evidence it demands from the bank, and how it ultimately rules will become the de facto textbook for AI Act compliance across the continent.