The Transformer and the Hash
the building blocks of 21st-century political science
All stable processes we shall predict. All unstable processes we shall control.
~ John von Neumann
We discovered something. Our one hope against total domination. A hope that with courage, insight, and solidarity we could use to resist. A strange property of the physical universe that we live in.
The universe believes in encryption.
~ Julian Assange
“ALL CAPITAL TO THE DATACENTERS!” I can hear the Gods of the Market screaming. All over the world, vast tracts of land and massive energy contracts are being bought up to build and power zettaflop-scale supercomputers. And it is increasingly these datacenters - not Wall Street, not the Washington Beltway, not even Silicon Valley - that govern the world. It’s computations running inside the datacenters that decide what articles and videos and dating app profiles I see, what emails from collaborators are marked as important or as spam, whether my AI chatbot challenges my misconceptions or flatters me into delusion. It’s increasingly the datacenters that determine who gets rich, who gets famous, who gets cancelled, and who gets dead.
But who governs the datacenters, and how? Who decides what programs run and what data they get access to? This is the critical question of 21st-century political science. The old political theories are still relevant, of course - but they need to be adapted to the new substrate. The two key forces behind human political motivation, according to Machiavelli, are love and fear. Datacenters understand neither love nor fear, but they too can be understood through a pair of opposing drives.
The First Drive: Let Information Flow
The first of these drives is exemplified by the Transformer architecture responsible for much of the recent progress in AI. The drive behind the Transformer architecture is: let information flow! The models, they just want to learn: you just need to get out of their way. Instead of getting models to think linearly through a problem like a recurrent neural network or an overly trained human schoolchild, just use attention heads and free the model’s attention to jump around wildly to whatever piece of data it wants. Instead of forcing the model to recurse on its intermediate conceptual representations like an amphetamine-addled continental philosopher, add skip-connections (the breakthrough behind ResNet) so the model always has access to its raw perceptual inputs. Nullius in verba - take no one’s word - the motto of the Royal Society and of the Transformer alike.
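To make those two mechanisms concrete, here is a minimal numpy sketch of a single attention head with a skip-connection - an illustration of the shape of the idea, not a real Transformer (the dimensions and random weights are placeholder assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(X, Wq, Wk, Wv):
    """One attention head plus a skip-connection. X: (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Attention: every position can look at every other position at once,
    # rather than waiting for information to trickle through a recurrence.
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    # Skip-connection: the raw input X is added back in, so the model never
    # loses access to its unprocessed perceptual inputs.
    return X + weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention_block(X, Wq, Wk, Wv).shape)      # (4, 8)
```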
But above all, let information flow means scaling up the key inputs - data and compute. You can eke out clever efficiencies here and there, but the overriding priority (once you’ve gotten out of the model’s way) is to feed it ever more data, to train it longer and longer using ever more compute (the famous Scaling Laws).
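For a sense of what the Scaling Laws actually say, here is a sketch of the Chinchilla fit (Hoffmann et al., 2022), which models loss as a simple function of parameter count and training tokens; I’m quoting the published constants from memory, so treat the exact numbers as illustrative:

```python
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted training loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

# More parameters and more data monotonically push the loss toward the
# irreducible floor E - hence: feed the model ever more of everything.
print(chinchilla_loss(N=70e9, D=1.4e12))   # roughly Chinchilla's own scale
```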
Let information flow is not merely a technical imperative; it quickly becomes a political, legal, even moral imperative. For example, when language model companies ran out of public internet data, some of them started training on pirated books from LibGen in clear violation of copyright - a decision our legal system tacitly approved. The big tech companies have committed hundreds of billions of dollars to letting information flow into and through their models - what obstacle could stand against them? The state? This spring, a law banning all state and local governments from enforcing any regulation on AI passed the US House of Representatives. It was later removed by a Senate amendment - the superintelligence from the future overreached, this time - but please pause and let it sink in for a second how close it got. Let information flow was academic folk wisdom in a few dusty Canadian labs in 2010, and in 2025 it nearly became the law of the most powerful empire in the world. At least on paper, any town in the United States that tried to restrict how its citizens use artificial intelligence in any way would find itself literally facing down the US Army. This is not a drill.
In any case, billions of dollars will be deployed in the next few years to destroy all remaining technical, political, and moral barriers to letting information flow. And if even billions of dollars aren’t enough? Then we’ll bring in the full weight of the security state, terrified by the specter of an arms race with China…

The Second Drive: You Cannot Pass
The first drive, at the end of the day, is quite intuitive. We all know information wants to flow; the unintuitive thing is just how far this insight can take us.
The second drive is much stranger, almost magical. It says to information: you cannot pass! It allows us to conceal information, paying pennies in compute to create a digital wall that a billion-dollar datacenter cannot break.
The simplest atom of this drive is a cryptographic hash function, an algorithm that deterministically maps any piece of data of any size - a password, a human genome, the full text of War and Peace - to a single fixed-size number, a hash. But the mapping is one-way; nobody looking at the hash can recover much information about the original data. It’s not a priori obvious that this is even possible! But somehow, miraculously, it is. Mythically it feels similar to Bilbo finding the Ring in a mountain cavern - a strange coincidence on which all our hopes of freedom now rest. Cryptographic hash functions are the core building block for digital signatures (and hence ~all internet commerce), Git, blockchains, zero-knowledge proofs, and much more.
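A few lines against SHA-256 from Python’s standard library make the one-wayness vivid - change a single letter of the input and the digest is statistically unrelated, and nothing about the input is recoverable from it:

```python
import hashlib

print(hashlib.sha256(b"War and Peace").hexdigest())
print(hashlib.sha256(b"War and Pease").hexdigest())  # one letter changed: unrelated digest
```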
Transformers say: let information flow, connect everything, add more parameters, more attention heads, more paths for data to travel. Hash functions (and other cryptographic algorithms) say the opposite: destroy information, cut off paths for data to travel. Leverage chaos to foil prediction and control.
The Varieties of Datacenter Governance
Out of these two primitives we will build the future information architecture of the world. Wherever we want to pool information, wherever there is enough trust and compute, we will put a transformer. Wherever we want to build a wall, to prevent adversaries from attacking us or even understanding us, we will put a hash function. The only stable political entities will be those protected by a citadel of hashes.
(Or is there a secret third thing, a “smart membrane”, that is intermediate between the full transparency of a transformer and the full opacity of the hash? If so, I haven’t seen it. Get in touch.)
So given these two building blocks, what might a 21st-century political science look like?
Let’s start by examining one of the most common models of datacenter governance: the libertarian-Georgist absolute monarchy, pioneered by Amazon Web Services. It is an absolute monarchy because Amazon has absolute authority over what it allows to happen in its datacenters, and can read, modify, or delete all your software and data whenever it wants. But in order to maximize profit, this absolute monarchy has adopted a quite extreme libertarian-Georgist policy - almost any user can run almost any program they want on whatever data they want, and have secure property rights to their software and data (~ labor and capital), provided they pay market rent to the sovereign for the scarce services they use.
(You might respond: Amazon isn’t a monarchy, it’s a shareholder-accountable bureaucratic corporation bound by laws! First of all: is it? Second, for analytical purposes we’re talking here about the formal power relationships within the datacenter, not outside of it. Much like we call Louis XIV an absolute monarch even though he had inner conflicts, could be influenced informally through his friends and family, etc.)
Another very common model of datacenter governance is technocratic Leninism - pioneered at scale by Google but since adopted by almost all large consumer software companies. Users have no direct control over their data or the software that runs over it. Instead, user data is held in common and software is written by a technocratic elite of engineers (the Party) led by a CEO (the Politburo) with absolute authority to decide what software will most serve the interests of the user (the dictatorship of the proletariat). Engineers have a way of straying from the path of righteousness and forgetting the true best interests of the user, so in addition to the hierarchically organized engineers there is typically an additional mycelial network of product managers (commissars) who ensure ideological alignment.
(Calling it Leninism makes it sound like I’m condemning it, but historically a key problem with Leninism is that it was forced on people through physical violence; Leninism in our datacenters is much less problematic, since people can just switch to a different datacenter governed differently. Though of course, like the historical Leninist states, ~every technocratic-Leninist corporation is trying as hard as it can to prevent you from escaping it.)
To liberal readers, this must sound pretty dystopian - is our choice really between absolute monarchy and communist dictatorship? Must we always be at the mercy of a unitary authority? In one sense, yes - the datacenters still exist in the physical world in the context of a monopoly on violence. Some guy can show up and blow up the datacenter anytime, unless some other guys are standing there with rifles, and the guys with rifles need to get paid. How to pay the guys with rifles is the fundamental problem of political philosophy, and we have to solve it with our usual toolkit: religion, rule of law, jury trials, parliaments, separation of powers, systems of succession, and the (often overlooked but critical) cultivation of civic virtue.
But in another sense, the situation in the digital world could be far better for human freedom than the physical world ever was. The physical world is governed by entities which have the greatest capacity for physical violence (i.e. which win wars), and this capacity scales roughly linearly with the number of people you can command. This means the power of small groups, or cultures incapable of organizing themselves into massive superorganisms (e.g. most indigenous cultures), declines to zero over time. WEIRD culture is an extreme example of this - a culture that destroyed close family ties and kin networks but enabled cooperation at larger scales and thus conquered the world. Again the transformer insight - let information flow - but at civilizational scale (that’s the sense in which AI = capitalism = hyperprotestantism).
The exceptions to this rule - small cultures that survive and are able to foster freedom - are, in the physical world, almost exclusively island cultures like (pre-imperial) England or mountain nations like Switzerland, whose geographical position means they are safe from conquest by much bigger rivals. As Vitalik writes in My Techno-Optimism:
the combination of ease of voluntary trade and difficulty of involuntary invasion, common to both Switzerland and the island states, seems ideal for human flourishing.
This is finally where the hash comes in - the astronomical asymmetry between the compute needed to encrypt information and the compute needed to break that encryption could enable us to build a digital Switzerland. Nobody’s really succeeded at scale yet, but in principle, trusted execution environments could wrest most of the sovereignty from the datacenter owners and allow users to genuinely own their data and control the software that runs over it, while still benefitting from the agglomeration effects of datacenters.
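The scale of that asymmetry is worth computing once. Encrypting with a 256-bit key costs microseconds; exhausting the keyspace, even for an attacker whose hardware I’ve assumed absurdly generously below, does not fit inside the lifetime of the universe:

```python
key_bits = 256
guesses_per_second = 1e18           # my assumption: a zettascale-class attacker
seconds = 2**key_bits / guesses_per_second
print(f"{seconds / 3.15e7:.1e} years to exhaust the keyspace")  # ~3.7e51 years
```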
Of course, the datacenter owner can still refuse to run your code, or install a physical backdoor of some kind. So to truly protect freedom we still need all the liberal technologies of ages past - from contract law to a free media that amplifies the voices of whistleblowers - but updated and interfacing closely with the technical reality of the datacenter. For example - who exactly is allowed inside the datacenters and when? How are they vetted and trained? Democracies typically die by military coup, but it is a coup coming from inside the datacenters that will soon be the greatest threat to free societies.
(If it isn’t already - after all, this was a popular narrative of the 2016 election, though on balance I think it was overblown.)
Postscript: Strange Visions
Political science has benefitted from thinkers describing strange and extreme governance schemes, from Plato’s Republic to Thomas More’s Utopia to Aldous Huxley’s Island. Data governance, too, would benefit from more technically grounded imaginative work. So I’ll end by presenting two of the strangest yet most compelling visions of datacenter governance I’ve found.
In Markets and Computation: Agoric Open Systems, Mark Miller and Eric Drexler describe a datacenter run like an idealized free market - there’s a variety of competing software objects which are charged for the computational resources they use, and charge each other (and users of the datacenter) for services provided. In a way this is a more sophisticated implementation of the AWS libertarian-Georgist policy: instead of letting market participants run the software they want in walled-off enclaves and having transactions happen off-platform, AWS could internalize the market into the datacenter. Transaction costs would be much lower, so this should dramatically increase the overall “GDP” of the datacenter and hence its profitability. But it’s also in many ways a safer and freer path to AGI than the one humanity is currently taking - instead of most economic work routing through giant black-box models owned by megacorps and understood by nobody, the dynamics of a software market push towards the creation of highly reusable and modular software objects - giving us interpretability for free. I’m not enough of a fan of markets to pursue this personally - like Tolkien, I prefer forests over cities, and am betting more on the organic alignment approach of labs like Softmax - but the world would certainly be better if there were a lavishly funded AGI lab building in this direction. If you want to set one up - please get in touch.
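As a toy sketch of what the core loop of such an agoric kernel might look like - all names, prices, and the eviction rule here are my own illustrations, not from Miller and Drexler’s paper:

```python
class SoftwareObject:
    """A market participant: holds a balance, pays for compute, sells services."""
    def __init__(self, name, balance):
        self.name, self.balance = name, balance

    def pay(self, payee, amount):
        if self.balance < amount:
            raise RuntimeError(f"{self.name} is insolvent and gets evicted")
        self.balance -= amount
        payee.balance += amount

def buy_service(buyer, seller, price, service):
    buyer.pay(seller, price)          # the fee the seller quoted for this call
    return service()

datacenter = SoftwareObject("datacenter", 0)
translator = SoftwareObject("translator", 100)
summarizer = SoftwareObject("summarizer", 100)

translator.pay(datacenter, 1)         # rent for the compute this call consumes
text = buy_service(summarizer, translator, price=5,
                   service=lambda: "translated text")
```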
And finally, in Personal Universes: A Solution to the Multi-Agent Value Alignment Problem, computer scientist Roman Yampolskiy gives the most technically compelling implementation of world peace ever published. Assume we have perfect VR, or else mind uploading. Guarantee every human a slice of compute that they can use to simulate their own private universe in which they can have anything they want. Sort-of-by-definition, everyone from Elon Musk to Xi Jinping strictly prefers this to continuing to live their imperfect lives in our imperfect universe. (Do you want to share a universe with your friends? You’re welcome to inhabit a shared universe with anyone you want, strictly by mutual consent.) So everyone goes into their VR cave or uploads their mind, we lock down the physical universe into a permanent robotic totalitarian state which does nothing except guarantee that the datacenters stay running, and everyone gets everything they wanted forever.
I think this vision is doomed to fail for reasons Wolf Tivy articulates in God Hates Singletons - you can’t actually lock down the substrate; division and conflict at every level are inevitable - but it is directionally right. Technological progress means almost all of us can get much more of almost everything we want; creating extremely secure property-rights regimes with guarantees like ‘this data will be kept inviolate and publicly accessible for a thousand years’ - a guarantee currently only credibly promised by a few of the leading blockchains - can do a lot to reduce existential conflict.
Thanks to Richard Ngo for the comment that originally inspired this essay; to Wolf Tivy and Daffy Durairaj and Alex Komoroske and Andrew Rose and many others for discussions that informed it; to David Holz for convincing me to return to AI research - a field I thought I had left forever when I first grasped the full implications of the Transformer drive on a sleepless night in early 2023; and to Madhu Sriram, who graciously let me cancel our call and stay in flow when I finally hit momentum after having this essay in drafts for almost a year.

