The Counterculture#
The current meta of frontier labs is easy to describe. You poach a couple of key researchers, contact VCs, raise tens or hundreds of millions, burn through a mountain of compute, train a huge LLM, and position it as state of the art for its size—or competitive with frontier models. Then you differentiate by price, speed, specialization in some area… or you simply make it open source to capture attention faster. You raise another round, scale, and iterate again.
This dynamic applies almost identically to startups and labs working on video, audio, or image models.
But in parallel, there exists a group of labs and startups that chose a different path—more anarchic, and decidedly opposed to the dominant meta.
Decentralization#
The first “rebel” line emerged around a simple idea: if one day we create a super-powerful AI (or even an AGI), it shouldn’t be in the hands of a few. It should be developed and run in a decentralized way, distributed across the globe.
The first group to raise this flag was Nous Research, which began in 2023 as a community fine-tuning LLaMA models. Over time they became a startup and started building software for distributed pretraining, new optimizers, and asynchronous reinforcement learning; at one point they even ventured into crypto and blockchain. Throughout, they have continued to maintain their series of instruction-tuned models, Hermes.
Soon after, Prime Intellect joined in—half GPU marketplace, half lab. Like Nous, they developed software for decentralized training and inference, and by mid-2025 they created an open hub of environments for training LLMs with RL.
Between 2024 and 2025, both groups launched their first “decentralized” training runs. The result: the hardware was almost entirely located in the United States (with some in Europe) and consisted exclusively of H100s and A100s. In other words, the training was far less heterogeneous and distributed than the narrative of individuals contributing their consumer GPUs would suggest. And the resulting models, both the pretrained ones and the RL models with verified traces, were not practically useful. They are best regarded as proofs of concept.
That said, the person who has reflected most publicly on the choice between a future where AI is controlled by large corporations and one where the technology is open and amplifies human purpose rather than replacing it is Emad Mostaque. In 2025, he founded Intelligent Internet and published a book warning that AI will disrupt the current economy, and that we have about 1,000 days to rewrite the operating system of society toward a model of human symbiosis, avoiding both digital feudalism and a fragmented, cold-war-like scenario.
In practical terms, so far they’ve released a fine-tuned 4B model for agents, a dataset, and another fine-tune for medical applications. Interesting as a vision, but without tangible results yet.
New Horizons#
Another countercultural current openly rejects the idea that current models—LLMs, scaling, more data, more parameters, more RL, more of the same—are the path to AGI. Their proposal is to seek new ideas, even if they’re far from immediate economic payoff.
The most visible example is Sakana AI, Japan’s most prominent AI lab, founded by David Ha in 2023. Their mission is to do the opposite of what everyone else is doing, and that philosophy has led them to explore an extremely wide range of research directions—from agents to new architectures.
Notable releases from Sakana AI:
- Recurrent models inspired by the brain.
- Evolution-style combinations of models.
- LLM agents that write papers.
- Agents that rewrite their own code.
- Patches for LLMs to improve capabilities.
- Foundational vision models seeking life-like behavior in simulated environments.
- Agents that generate CUDA kernels.
They have no products. Likely no revenue, either. But they’re one of the public’s favorite labs, because every new paper is a surprise.
Dr. Fei-Fei Li, also unconvinced by the dominant paradigm (and with a natural bias toward computer vision), founded a startup in 2024 called World Labs, focused on spatial intelligence. So far, they’ve released a model that converts images into 3D environments—but little more.
Finally, there’s Yann LeCun, who is also leaving Meta to create his own company. He’s betting on AMI (Advanced Machine Intelligence): a system built on self-supervised learning, planning in abstract representation space, and JEPA-like models.
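To make that last idea slightly more concrete: the core of a JEPA-like model is that prediction (and therefore planning) happens over learned representations rather than raw pixels or tokens. Below is a minimal, hypothetical PyTorch sketch of that principle; the module names, sizes, and the frozen target encoder are illustrative assumptions, not LeCun’s actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyJEPA(nn.Module):
    """Toy joint-embedding predictive architecture: predict the target's
    embedding from the context's embedding, never the raw input itself."""
    def __init__(self, dim_in: int = 128, dim_latent: int = 32):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_latent))
        # Real JEPA variants use an EMA copy of the context encoder as the
        # target encoder; it is simply frozen here to keep the sketch short.
        self.target_encoder = nn.Sequential(
            nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_latent))
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        self.predictor = nn.Sequential(
            nn.Linear(dim_latent, 64), nn.ReLU(), nn.Linear(64, dim_latent))

    def forward(self, context: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        z_ctx = self.context_encoder(context)    # abstract representation of the visible part
        with torch.no_grad():
            z_tgt = self.target_encoder(target)  # abstract representation of the part to predict
        z_pred = self.predictor(z_ctx)
        # The prediction error lives in representation space, not pixel/token space.
        return F.mse_loss(z_pred, z_tgt)

model = TinyJEPA()
context, target = torch.randn(8, 128), torch.randn(8, 128)
loss = model(context, target)
loss.backward()  # gradients flow only to the context encoder and predictor
```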
And the list goes on, with a handful of other groups seeking alternative paths.
For now, none have achieved an impact comparable even to early versions of LLMs, but they represent something essential: intellectual diversity.
Notes:
- “Meta” here refers to the “metagame,” a term from video games used to describe the prevailing strategy in a competitive environment.
