3 Things Leaders Are Missing About Europe’s AI Bubble

As Europe stakes its claim in AI, technical debt and shadow identities loom as the real test for lasting impact.

By Emily Singleton | edited by Jason Fell | Mar 17, 2026

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur Europe, an international franchise of Entrepreneur Media.

Across Europe, governments and enterprises are accelerating artificial intelligence (AI) adoption in response to global competition. The EU’s InvestAI initiative is designed to mobilize €200 billion in AI investment, including €20 billion earmarked for AI infrastructure and “gigafactories,” allowing for open and collaborative AI development and securing Europe’s place as an AI continent.

But as capital flows and companies rush to implement the latest technologies, many organisations are underestimating the hidden costs of moving fast. A recent report found AI-generated code is “highly functional but systematically lacking in architectural judgment,” leading to a new form of technical debt.

Overseas, we are already seeing AI agents that were built to ease workflows and reduce manual labour instead multiplying governance workloads eightfold. Systems once accessed exclusively by humans now host autonomous agents, creating an entirely new class of identities that must be secured. A 2025 report found non-human identities outnumber human users by roughly 82 to one.

This is where the AI conversation stops being about innovation and starts becoming about durability. When the current tech valuation boom, which is especially apparent in AI-related stocks and investments, cools off, the EU has a chance to position itself with “trust-based” technology. But European business leaders must think beyond short-term exuberance and build resilient strategies that focus on quality and long-term competitiveness rather than getting swept up in speculative frenzy.

Speed compounds technical and organisational debt

When AI systems are deployed faster than teams can document, review, and govern them, debt accumulates simultaneously in the code and in the organisation around it. McKinsey describes technical debt as the “tax” a company pays on any development to redress existing technology issues, and claims it accounts for about 40% of IT balance sheets.

“Technical debt is the result of shipping architecture before it is ready and layering systems instead of reworking models from their foundation, which is commonly seen when rushing to respond to hype pressure,” comments Asparuh Koev, CEO of AI logistics platform Transmetrics.

“Businesses deploying AI capabilities, whether plug-ins or bespoke models, need to ensure secure guardrails and responsible data practices are in place,” the logistics AI expert continues.

The trajectory for businesses is already measurable. Forrester predicted that over half of technology decision-makers will see their technical debt rise to a “moderate or high level of severity”, with that number projected to reach 75% in 2026. Accenture states that AI tools, including the generative variety, are now the highest contributors to tech debt.

However, there are always two sides to the coin: the same Accenture report also suggests that when used appropriately, generative AI can help manage tech debt remediation as well as minimize the creation of tech debt.

“Even before AI’s full productivity potential has been realized, many enterprise budgets are being recalibrated as if they already have. This assumption that AI will make delivery faster, cheaper, and smarter is now embedded into contracts and forecasts,” explains Dr. Ranjit Tinaikar, CEO of Ness Digital Engineering.

When used deliberately and strategically, generative AI tools can assist in explaining legacy systems, make recommendations, and use AI-driven methods to keep systems up to date, lowering ongoing debt systematically.

As the AI bubble swells, identity becomes the fault line

In the first half of 2025, Microsoft witnessed identity-based attacks rise by 32%. The escalation, in part, reflects adversaries’ increasing use of AI to craft highly convincing social engineering lures.

However, as phishing-resistant MFA hardens user access, attackers are shifting focus to leverage AI agents or “non-human” identities as entry points instead of personal logins. The Microsoft report notes that non-human identities often hold elevated privileges but lack sufficient security controls, resulting in a growing blind spot that attackers are exploiting.

Norman Menz, expert in threat exposure management and CEO of cybersecurity company Flare, states that, “Data that once lived in one place is now sprawled across organizational and AI vendor systems, dramatically expanding the number of credentials in play and increasing the risk of exposure.”

Narrowing in on the core issue, Menz adds, “On top of that, identity exposure is no longer limited to people. AI agents now act as independent identities, raising a new question of trust. So as leaders, we have to ask ourselves, is the agent still doing what it was designed to do, or has its data or instructions been corrupted?”

While almost all who have adopted AI agents have already reported revenue benefits at the use-case level, most don’t know where these identities reside, what they have access to, or how long they persist. CISOs are not taking this matter lightly. A recent report found 89% of CISOs plan to hire staff dedicated specifically to identity security in 2026. 

Reworking workflows from the top down drives successful AI 

AI experimentation is still largely driven by startups and technology firms with deep pockets. The hardest tests, however, are playing out inside institutions such as schools, healthcare systems, and public agencies that must balance lower budget ceilings with strict governance.

Nate MacLeitch, CEO of QuickBlox, whose company provides communication tools and virtual assistants for healthcare providers and other industries, says “the shift for leadership looking to scale AI will be in driving AI initiatives from the top and keeping final decisions with the experts.” Gartner indicates that only 15% of IT leaders are considering deploying fully autonomous AI, while McKinsey’s research reinforces that redesigning workflows is a key success factor.

“We have to be able to trust the systems and the system logic, which comes with clear frameworks and escalation paths that live inside the architecture, with named owners to govern specific workflows,” the communications AI expert adds.

CEO and co-founder of NexStrat AI, Arda Ecevit, reinforces this, saying, “European entrepreneurs need to build or reshape their core strategies around AI, while treating it as a core business capability and transformational lever. AI strategy has to be led from the top, by founders and CEOs, with a bold and clearly articulated vision that’s embedded across every team to shape how value is created and decisions are made.”

Charlie Sander of K-12 cybersecurity company ManagedMethods warns that, “In school districts, you’re dealing with thousands of users’ sensitive data, and legacy systems that were designed before AI existed. Leaders must know who and what has access, whether that access is justified, and whether it can be revoked instantly. If you don’t have that visibility, you don’t have control, leaving schools susceptible to risk.”

European public buyers are preparing accordingly, aligning AI procurement practices with the EU AI Act and prioritising robustness over novelty. This shift is also shaping market opportunity. The EU-funded StepUp StartUps initiative recently argued that AI could reshape public services across the EU, creating concrete opportunities for startups and SMEs.

Challenges remain. A survey by Cisco, which polled IT leaders across seven EU countries, spotlights a disconnect between ambition and readiness. While 45% expect AI workloads to grow in the next three years, only 23% report having sufficient GPU capacity. The funding announced earlier last year intends to close that gap and prepare the EU for its AI moment.

Europe’s AI bubble will burst if leaders strap on out-of-the-box features without rethinking how their unique systems are built and governed at scale. European leaders who want to avoid a painful correction must look past short-term momentum and focus on what makes AI sustainable at an institutional scale.

