Europe’s AI Moonshot Chance
- Austin Nellessen


If you cannot go through something, go around it.
This piece of wisdom aptly encapsulates the current state of the so-called global Artificial Intelligence (AI) race. While every major lab rhetorically commits to chasing superintelligence – an undefined state of ample, adaptable, and actionable intelligence – national strategies have crystallised around well-defined limitations and advantages.
The U.S. maintains a recognised lead in frontier model capability. Its advantages include a deep and vibrant financial market, significant pre-existing reserves of compute (i.e., computation, or what data centres produce), and a first-mover advantage. As such, the world continues to watch the title of ‘best AI model’ pass back and forth between OpenAI and Google. While considerable concerns remain about securing returns on overstretched AI investments and the ability of an ageing energy grid to handle the imminent onboarding of multiple hyperscale data centres, the U.S. is well placed to profit from an AI-powered productivity boom – and the government knows it. How exactly those profits will be distributed across society remains to be seen.
In China, the top models of Qwen, Kimi K2, and DeepSeek closely follow their American counterparts in terms of capability. In addition, many Chinese models make up for what they lack in power through their customisability and low cost, attracting even some high-profile Silicon Valley startups. Yet DeepSeek, in the report announcing their newest model V3.2, admitted that, despite continuous innovations in training and reasoning methods overcoming resource deficits, available compute is becoming a principal limiting factor:
‘However, a distinct divergence has emerged in the past months. While the open-source community (MiniMax, 2025; MoonShot, 2025; ZhiPu-AI, 2025) continues to make strides, the performance trajectory of closed-source proprietary models (Anthropic, 2025b; Deep Mind, 2025a; OpenAI, 2025) has accelerated at a significantly steeper rate. Consequently, rather than converging, the performance gap between closed-source and open-source models appears to be widening, with proprietary systems demonstrating increasingly superior capabilities in complex tasks.’
This analysis of the potential limits of Chinese open-source models was echoed at a recent event featuring tech leaders from the leading Chinese AI labs at Zhipu, Tencent, and Alibaba, where the head of Alibaba’s AI team, Justin Lin, stated: ‘A massive amount of OpenAI’s compute is dedicated to next-generation research, whereas we are stretched thin – just meeting delivery demands consumes most of our resources.’ This constrained situation is further exacerbated as the chip war heats up: China’s so-called ‘Manhattan Project’ aims to accelerate domestic semiconductor manufacturing at the same time as the government told tech companies to halt (and then gave conditional approval to continue) Nvidia H200 chip orders. Meanwhile, Nvidia is releasing its newest, much more capable chip way ahead of schedule.
Given the resource limitations, Beijing continues to emphasise the importance of deployment over development in its AI strategy. And nothing is more important for deployment in China than Embodied AI. From the central government encouraging local experimentation to develop and scale the country’s robotics ecosystem, to the CCP Central Committee recommending incorporating embodied AI as one of the new drivers of economic growth in China’s forthcoming 15th Five-Year Plan for Economic and Social Development, the country sees robotics as the key to cementing its manufacturing dominance and preventing deindustrialisation. According to the International Federation of Robotics, China accounted for 54% of global robot deployments in 2024, 10 times more than the U.S.
The Gulf, amid the backdrop of Saudi Arabia and the United Arab Emirates vying for regional supremacy, has pursued a strategy of attracting compute at all costs. From lobbying the Trump administration to lift Biden’s semiconductor export regime to bankrolling large-scale data centre construction for American AI Titans like OpenAI and xAI, both countries are seeking to weaponise their plentiful space, energy, and political leniency – the latter of which is in particularly short supply in Western democracies. The extent of their scramble for compute can be seen in the consideration of the somewhat unconventional idea of ‘digital embassies’, where foreign companies run their AI workloads in a data centre located in one country, but governed by the laws of another.
Thus, these partnerships stem solely from cold calculations of computational necessity, even at the cost of sacrificing a certain level of regulatory autonomy. If AI achieves its revolutionary potential and becomes endemic in modern, competitive business operations, the amount of data to be stored and the computing capacity needed for daily inference will only grow exponentially. And, according to Jevons Paradox, even as AI models proliferate and semiconductor production capacity grows, thereby decreasing the cost per inference, total demand for both will nonetheless rise – cheaper inference invites more of it.
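Jevons’ logic can be illustrated with deliberately made-up numbers (the figures below are hypothetical, chosen only to show the arithmetic, not drawn from any market data):

```python
# Illustrative (hypothetical) numbers for Jevons Paradox: efficiency gains
# cut the cost per inference, but demand grows faster, so total spend rises.
cost_per_inference = 0.010   # dollars per inference, hypothetical baseline
daily_inferences = 1_000_000

# Suppose hardware and software gains halve the unit cost...
new_cost = cost_per_inference / 2
# ...and the cheaper service attracts 3x the usage (elastic demand).
new_demand = daily_inferences * 3

print(cost_per_inference * daily_inferences)  # baseline daily spend: 10000.0
print(new_cost * new_demand)                  # new daily spend: 15000.0
```

Even though each inference costs half as much, total spending on compute rises by 50% – the efficiency gain increases, rather than reduces, aggregate resource consumption.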
As I have previously written in more detail, the inability to secure sufficient resources for domestic compute carries the potential to relegate entire countries to a rentier state of compute dependency, echoing how colonial-era underinvestment in industrial infrastructure produced enduring extractive economies. However, beyond this analysis, the deals have also secured valuable collaborations for national AI companies, such as Saudi Arabia’s Humain, leading to the release of the first Arabic AI model.

One trend pervades the analysis of almost every region outside the U.S., and to some extent even within it: data centre deficiency. This issue, however, is particularly emblematic of the EU’s current quagmire. Belgium’s cyber security chief recently affirmed that Europe had ‘lost the internet’, with the consequence that it was ‘currently impossible’ to store data fully in Europe. This outcome strongly affects the EU’s first AI plan centred around regulation, making its implementation nearly impossible.
The EU’s second AI plan, unofficially led by France under Macron, is similarly aspirational. Through mustering continental resources, the goal is to pursue a sort of technological ‘strategic autonomy.’ This includes the EuroStack initiative, a set of industrial policy recommendations to manufacture an independent digital infrastructure from semiconductors to data centres, and considerable support for domestic Large Language Model (LLM) development, from open-source challengers to Europe’s frontrunner Mistral AI. However, even if strategic autonomy did represent the right direction – which can be debated – the EU currently lacks the resources or ecosystem to achieve it. The two largest hurdles to such an achievement, perhaps, are an overly burdensome regulatory environment that inhibits innovation at speed and inadequate financial markets to fund cutting-edge technology startups.
The good news is that policymakers have rightly identified both challenges and are seeking solutions. In the 2025 State of the Union Address, President von der Leyen reiterated the need for a ‘Savings and Investments Union’ as well as announced a multi-billion-euro ‘Scaleup Europe Fund’, both to prevent a situation in which ‘limited availability of risk capital forces [high potential startups] to turn to foreign investors.’ The bad news is that these initiatives still likely will not be enough to achieve model parity with China, let alone the U.S. The material fact of limited compute to train frontier models has led Mistral AI to release an innovative, small model with a fraction of competitors’ parameters (more parameters have traditionally been correlated with higher model complexity), potentially positioning the company to compete in the consumer electronics space, such as smartphones. Despite these efforts to find innovative alternatives, the computational bottleneck is unlikely to ease soon due to limited political and fiscal space, evident in the decision of the French AI company Poolside to build a bespoke data centre in West Texas.
Yet, recent rumours of a new project by one of the ‘Godfathers of AI,’ Yann LeCun, along with a contentious thesis on what it will take to achieve superintelligence, may prove a once-in-a-lifetime opportunity for the EU not only to gain strategic wiggle-room, but also to potentially win the next internet. Europe’s moonshot chance lies within world models.
World Models
To discuss the importance of world models, one must first understand LLMs, or ‘word models.’ LLMs, the technology that undergirds products like ChatGPT, are in their most basic form systems of next-token prediction. A token is a small chunk of text, such as the word ‘unbelievable’ split into the tokens ‘un’ and ‘believable.’ By using tokens as the basic building blocks of understanding, LLMs are able to learn and build connections between words much more efficiently. As such, LLMs have improved by leaps and bounds and continue to show impressive results when completing text-based tasks, ranging from intaking and parsing written data to ‘understanding’ and responding to queries.
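The next-token mechanism can be sketched with a toy bigram counter (a deliberate oversimplification; real LLMs use learned subword tokenisers and billions of neural network parameters rather than raw counts):

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on trillions of tokens produced by learned
# subword tokenisers (e.g. 'unbelievable' -> 'un' + 'believable').
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token, if any."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints 'cat': it follows 'the' most often here
```

The model only ever learns which tokens tend to follow which; at no point does it acquire any notion of what a cat or a mat actually is – which is the crux of the critique discussed below.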
Yet, LLMs continue to exhibit one key flaw: they hallucinate. According to American tech giant IBM, AI hallucination occurs when a generative LLM ‘perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.’ Hallucination, or the return of false information, is a pivotal obstacle that still stands unresolved between today’s models and so-called superintelligence.
Here is where the leading theorists diverge. Some, such as OpenAI’s Sam Altman, profess that more compute will enable more in-depth training, which will, in turn, unlock higher levels of intelligence until hallucination is conquered and superintelligence is achieved. It is also important to note that Altman’s company, in no small manner, depends on public belief in AI’s revolutionary capability.
Others, notably represented by two of the field’s most well-known minds, Fei-Fei Li and Yann LeCun, believe LLMs themselves are a dead end – albeit one not without use. A simple way to conceptualise their argument is to compare LLMs to a person who has only ever known a library filled with books. That person can read every single book in the library and piece together abstract conceptions of things by connecting words and ideas sequentially, but, crucially, they will never truly understand the laws that dictate the universe, such as those of physics. They will know the words – hard, soft, orange, and light – but never have the intuition of what it feels like to hold a rock, or to intuitively understand the manner in which a leaf falls from a tree. This is because LLMs are, these theorists claim, ‘ungrounded.’ Fei-Fei Li explains it as such: ‘Today, leading AI technology such as large language models (LLMs) have begun to transform how we access and work with abstract knowledge. Yet they remain wordsmiths in the dark; eloquent but inexperienced, knowledgeable but ungrounded.’
What is the true path to building superintelligence, then? These theorists point to a new structure of building intelligence: world models. World models, rather than focusing on creating correlational text-based representations of the world (that an apple connects with red and sweet), emphasise understanding the causal dynamics that regulate the world.
Imagine how long it would take that individual confined to a library to construct an internal representation of every concept purely through reading. Now, contrast that with how quickly a baby learns that pushing a tower of blocks will make it fall, in which direction it will topple, or how heavy an object is likely to be. The key difference is the action of doing, learning by acting in the world and updating internal models based on the consequences of those actions. Fei-Fei Li calls it spatial intelligence, and recently argued it could only be achieved with ‘world models, a new type of generative models whose capabilities of understanding, reasoning, generation and interaction with the semantically, physically, geometrically and dynamically complex worlds – virtual or real – are far beyond the reach of today’s LLMs.’
Successfully developing world models would unlock a drastically more powerful form of intelligence. Due to its causal understanding of the world, a world model could simulate sequences of actions before acting itself, with two key benefits. First, simulation would be a vastly more efficient method of computation, thus greatly reducing the importance of an overwhelming data centre buildout. Second, it would enable the capability to perform long-term projects like scientific research, which require multiple steps of planning, simulation, and execution.
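The ‘simulate before acting’ loop can be sketched in a toy one-dimensional world (a deliberately simplified, hypothetical illustration; the `world_model` function and planner below are stand-ins, not any lab’s actual architecture):

```python
from itertools import product

def world_model(state, action):
    """Toy learned dynamics: predict the next state given an action.
    A real world model would be a neural network trained on video and
    sensor data; here, actions simply shift a position on a line."""
    return state + action

def plan(state, goal, horizon=4):
    """Exhaustively simulate every short action sequence inside the world
    model and return the first action of the best imagined rollout."""
    best_action, best_err = 0, float("inf")
    for seq in product((-1, 0, 1), repeat=horizon):
        s, err = state, 0
        for a in seq:               # imagine the future; do not act yet
            s = world_model(s, a)
            err += abs(goal - s)    # penalise distance from the goal
        if err < best_err:
            best_action, best_err = seq[0], err
    return best_action

# Act in the (toy) real world, re-simulating the future at every step.
state, goal = 0, 4
for _ in range(6):
    state = world_model(state, plan(state, goal))
print(state)  # prints 4: the agent reaches the goal and holds it
```

The key property is that all the trial and error happens inside the model’s imagination; only the single best first action is ever executed, which is what makes simulation so much cheaper than learning by brute-force training.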
Spatial intelligence, as the name implies, would also constitute a breakthrough for a perhaps even greater technology: robotics. LLMs’ ungrounded nature becomes a liability once they are put in charge of a physical body: they are unsure how much pressure to apply to a grip when holding a handle, and unable to make minute adjustments in the moment. In contrast, world models, given their simulation capabilities, would be able to plan their actions beforehand and make adjustments by constantly re-simulating the future – as humans do.
The principal obstacle to world model development is insufficient data – far more textual data is freely available on the internet compared to multimodal, real-world data like images and videos that world models require. In the same manner, it is much easier to create bespoke synthetic data for LLMs than it is for world models.
Yet, if a company were to succeed in proving world models’ potential, it would tap into a market worth up to $50 trillion, according to Nvidia’s CEO Jensen Huang. If the price of infinite labour were the fixed cost of a robot and the variable cost of electricity, this prospect would create immense possibilities. Yet, even more revolutionary than that, such a breakthrough would unlock a technology with a real shot at unravelling superintelligence.
Europe's Moonshot Chance
Luckily for the EU, Yann LeCun, one of the chief architects of LLM technology and previously Chief AI Scientist at Meta, recently announced the founding of a Paris-based startup pursuing world model development. His startup, Advanced Machine Intelligence (AMI) Labs, is making the bet that America’s Silicon Valley is too financially and intellectually tunnel-visioned on LLMs to pivot to an alternative framework. And, for the most part, his intuition seems accurate. Not only did his former employer, Meta, recently overhaul its executive leadership in a desperate attempt to mimic Google and OpenAI’s AI strategy, but AMI Labs also has essentially only one competitor: Fei-Fei Li’s World Labs, based out of Stanford.
Moreover, there is further reason to believe AMI currently holds the advantage. The Union, having been less prone to wholesale deindustrialisation than the U.S., is still home to some of the world’s best industrial robotics and manufacturing giants: Siemens, ABB, and Schneider Electric, to name a few, each likely with existing multimodal data to train the startup’s algorithms. Beyond the proximity of symbiotic companies, AMI Labs will also likely benefit from the EU’s ongoing Apply AI Strategy. The strategy, drafted to supercharge AI adoption across nearly every sector, aims to do so through the creation of Experience Centres for AI, such as AI Factories and AI Gigafactories, AI Testing and Experimentation Facilities, and AI regulatory sandboxes – places where vast troves of data will be created and collected.
However, several hurdles still stand in the way of the EU achieving technological parity. None looms larger than insufficient capital allocation, which drives innovative startups and researchers to the U.S. and threatens to hold the bloc back. Even more critically, overcoming this obstacle will likely require increased market centralisation to facilitate cross-border capital flows, something Member States and their electorates are increasingly shying away from.
Yet, if AMI succeeds in developing world model technology, it may provide an opportunity to leapfrog the U.S., reviving European innovation and achieving an industrial renaissance. So, while the AI development race continues, LeCun’s startup is certainly one worth keeping an eye on. With enough support, it may be able to develop a model capable of shaking the world.
The views expressed in this article belong to the author(s) alone and do not necessarily reflect those of European Guanxi.
ABOUT THE AUTHOR
Austin Nellessen is a graduate of Georgetown University, where he focused on the intersection of international affairs, politics, and history, as well as U.S.-China relations. The Director of ATLAS, he also publishes COMPUTE/COMPETE, a series detailing the geopolitical implications of AI globally. Alongside professional experience using his Chinese language proficiency to conduct research at the Hudson Institute on Chinese domestic and foreign policy, he is currently Chief of Staff and Program Manager for M+D Advisors, a consultancy organizing the world’s biggest conferences.
This article was edited by Isabell Raue and Alice Bavarelli.
Featured Image: French computer scientist Yann LeCun / Creative Commons Attribution 2.0 Generic license / Free for use / Wikimedia Commons


