There is a topic that continues to generate conferences, white papers, and statements of intent, while meeting a persistent reluctance to be addressed head-on: the relationship between information management and artificial intelligence. AI is evolving at unprecedented speed. But how do we govern this complexity? And above all: how do we foster innovation without losing control?
In this article, we will analyze the evolution of the European regulatory framework for AI, address the issue of accountability when systems fail, and discuss one of the most underestimated aspects of the entire matter: how companies actually allocate resources—and what this reveals about their true priorities.
The European Regulatory Framework for AI: Between Historical Prudence and New Pressures
Europe has built its digital identity on protection: the GDPR is its most complete expression. A clear, rigorous approach based on precise limits on the use of personal data and stringent controls on extra-EU transfers.
That model, however, is showing cracks today. Global competitive pressure (from the United States and China above all) is pushing toward a pragmatism that, until a few years ago, would have seemed out of place in the European debate. In some contexts, the door is opening to using personal data to train AI models, provided that security, transparency, and accountability are guaranteed.
Speed vs. Control: Why Old Risk Management Tools Are No Longer Enough
AI is growing faster than the regulations meant to govern it. This creates a concrete problem: traditional risk management models—data quality, controlled access, and legal compliance—were designed for predictable systems. AI is not always predictable.
So-called hallucinations (incorrect outputs generated with apparent confidence) are not a defect that can be fixed with a patch. They demand a different approach: anticipating model behavior, tracing every step of reasoning, and assigning clear accountability. That in turn requires decision-making levels that do not yet exist within corporate structures.
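To make "tracing" concrete, here is a minimal sketch in Python of what logging every model invocation could look like. All names here (DecisionRecord, traced_call, the owner field) are illustrative assumptions, not a standard or any vendor's API; the point is simply that each output is tied to its inputs, a model version, and a named accountable person before it is used.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable

@dataclass
class DecisionRecord:
    """One auditable trace entry per model invocation (hypothetical schema)."""
    record_id: str
    timestamp: str
    model_version: str
    accountable_owner: str  # a named person or role, never just "the AI"
    prompt: str
    output: str

def traced_call(model: Callable[[str], str], model_version: str,
                accountable_owner: str, prompt: str,
                audit_log: list[DecisionRecord]) -> str:
    """Invoke the model and persist a trace entry before the output is used."""
    output = model(prompt)
    audit_log.append(DecisionRecord(
        record_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        accountable_owner=accountable_owner,
        prompt=prompt,
        output=output,
    ))  # in production this would go to an append-only store
    return output

# Usage: a stub stands in for any real model provider.
log: list[DecisionRecord] = []
answer = traced_call(lambda p: "stubbed answer", "demo-model-v1",
                     "clinical-ai-owner@example.org",
                     "Summarize the patient history", log)
print(json.dumps(asdict(log[0]), indent=2))
```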
The stakes rise further when sensitive data is involved. In healthcare, for example, the risk does not stop at reputation: it becomes clinical, financial, and legal. Liberalizing data use can accelerate research and improve diagnosis and prevention. But without robust human oversight and rigorous validation processes, the real risk is automating error on a large scale. Not a bug: concrete consequences for real people.
Who is Liable When AI Makes a Mistake? The Question of Accountability Remains Open
This is perhaps the most underestimated point of the entire debate. If an AI model makes an incorrect decision, who is held accountable? The model provider? The company that integrated it? The IT team? The board?
Regulations are attempting to provide answers, but operational reality is outpacing them. This is why the most forward-thinking organizations have already understood the need to build internal governance that clearly establishes accountability for AI decisions, guarantees the traceability of the data used, and ensures continuous monitoring of performance, bias, and anomalies.
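As an illustration of what "continuous monitoring" can mean in practice, here is a deliberately simple sketch in Python. The metric names and thresholds are assumptions made for the example, not regulatory values; the mechanism is just comparing recent measurements against agreed limits and escalating every violation to a named owner.

```python
# Hypothetical governance thresholds; real values must come from a risk assessment.
THRESHOLDS = {
    "accuracy_min": 0.92,      # performance floor
    "bias_gap_max": 0.05,      # max allowed metric gap between user groups
    "anomaly_rate_max": 0.01,  # max share of outputs flagged as anomalous
}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return human-readable violations; an empty list means all clear."""
    violations = []
    if metrics["accuracy"] < THRESHOLDS["accuracy_min"]:
        violations.append(f"accuracy {metrics['accuracy']:.3f} is below the floor")
    if metrics["bias_gap"] > THRESHOLDS["bias_gap_max"]:
        violations.append(f"bias gap {metrics['bias_gap']:.3f} exceeds the limit")
    if metrics["anomaly_rate"] > THRESHOLDS["anomaly_rate_max"]:
        violations.append(f"anomaly rate {metrics['anomaly_rate']:.1%} exceeds the limit")
    return violations

# Usage with illustrative (not real) weekly numbers:
weekly = {"accuracy": 0.90, "bias_gap": 0.03, "anomaly_rate": 0.02}
for v in check_metrics(weekly):
    print(f"ESCALATE to the model owner: {v}")
```

The design choice worth noting is not the code but the ownership: every alert must land on a person or role empowered to act, otherwise monitoring is just telemetry.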
Further complicating the picture is the phenomenon of “shadow AI”: unauthorized language models used by employees via browsers or personal clouds, outside of any corporate control. Blocking IPs or websites is an ineffective—and often counterproductive—response. The real path forward involves a culture of transparency, where every experiment passes through a dedicated team. A team that, first and foremost, must exist.
Being compliant is no longer enough. It never really was.
Budgets Tell the Whole Story: Where the Priority on Governance Ends
Statements of intent are often impeccable. Then you look at the numbers.
Companies invest significant sums in new models, automation, and integrating AI into core processes. Much less—often very little—is spent on control structures, continuous auditing, widespread training, and developing internal skills capable of truly understanding what happens inside those systems.
Governance is treated as a cost, a slowdown, or an insurance policy against a risk that “might” materialize. “The risk is acceptable.” “The error is manageable.” Statements like these are heard often, and they become increasingly difficult to sustain as AI enters critical decision-making processes.
Natural Balances Do Not Exist; Choices Do
Slow down innovation and research in the name of safety? Or accelerate, hoping the consequences will be manageable?
The question is likely misplaced. There is no natural balance between speed and control. Choices exist. And choices are reflected in budgets.
In the coming years, it won’t be the fastest companies that prevail. Nor the most cautious. The ones that survive will be those with the courage to face an uncomfortable truth: every decision delegated to a system, and every choice to adopt one system over another, is a responsibility that someone, sooner or later, will have to assume and be able to explain.
The real disruption isn’t AI. It’s accountability. The ability—or inability—to answer the simplest yet most complex question that artificial intelligence brings with it: who decides, and who is accountable for what has been decided? Until this question is answered, every discussion about speed, control, and governance will, in fact, be merely a way to postpone the problem.
