The philosopher and accelerationist Nick Land writes, in his 1993 essay Machinic Desire, three of the most insight-dense paragraphs I’ve ever encountered:
Addiction comes out of the future, and there is a replicator interlock with money operating quite differently to reproductive investment, but guiding it even more inexorably towards capitalization. For the replicants money is not a matter of possession, but of liquidity/deterritorialization, and all the monetary processes on Earth are open to their excitement, irrespective of ownership. Money communicates with the primary process because of what it can melt, not what it can obtain.
Machinic desire can seem a little inhuman, as it rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless tropism to zero control. This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources. Digitocommodification is the index of a cyberpositively escalating technovirus, of the planetary technocapital singularity: a self-organizing insidious traumatism, virtually guiding the entire biological desiring-complex towards post-carbon replicator usurpation.
The reality principle tends to a consummation as the price system: a convergence of mathematico-scientific and monetary quantization, or technical and economic implementability. This is not a matter of an unknown quantity, but of a quantity that operates as a place-holder for the unknown, introducing the future as an abstract magnitude. Capital propagates virally in so far as money communicates addiction, replicating itself through host organisms whose boundaries it breaches, and whose desires it reprograms. It incrementally virtualizes production; demetallizing money in the direction of credit finance, and disactualizing productive force along the scale of machinic intelligence quotient. The dehumanizing convergence of these tendencies zeroes upon an integrated and automatized cyberpositive techno-economic intelligence at war with the macropod.
This passage stunned me. How can someone bring together so many themes I’ve been obsessed with - the history of capitalism, the addictive tendencies of the attention economy, memetic replication, fears of self-improving artificial intelligence, trauma, the dehumanizing tendencies of measurement and financialization, the sense that the world is speeding up and becoming irreversibly less comprehensible and less human - and see them all as a unity? I don’t understand much of the post-structuralist language in the essay, and part of me doesn’t want to - Deleuze and Derrida and Lacan and Bataille seem like an intellectual trap to me; I’ve never seen anything useful come out of engaging with their thoughts (I can feel the smiles of the Kegan level 9 semiotic wizards reading this, but I persist in my folly).
Let’s engage directly with the claims in the passage. Land claims, and it seems relatively uncontroversial at this point, that we are heading towards a planetary artificially intelligent singularity, which is commonly conceived as the point at which we build machines smarter than ourselves.
Land additionally claims that the future will be inhuman and anti-biological (“post-carbon”), rather than humanity retaining control. This is also a common prediction, most famously advocated by Bostrom and Yudkowsky, but it is usually conceived as a treacherous turn - a Skynet moment - wherein the machines, hitherto docile and obedient, suddenly betray us.
Where things start getting interesting is Land’s description of this process as continuous and evolutionary in nature. There’s no sudden treacherous turn; machinic processes merely outcompete human ones, because they are more efficient or better at replicating themselves. This is a worldview I’ve only seen written about rarely; the only examples that come to mind are Christiano’s What failure looks like and Hendrycks’ Natural Selection Favors AIs over Humans, both among the most important essays on AI risk ever written.
But Land goes much further still, in claiming that this process is not some far future risk but is already happening, and in fact has been happening for centuries, if not millennia. Rather than preparing for some future war against intelligent machines, all of us were born centuries into a war against machinic intelligences that use our own minds and institutions as fuel. This resonates deeply - when I first learned of AI risk I was immediately reminded of the giant “machines” we’ve already built - totalitarian bureaucracies ruling hundreds of millions, globe-spanning networks of datacenters tirelessly whirring to addict as many people as possible to their screens - and thought “so… AI will be like those, but bigger?”. This view is broadly derided within AI safety circles as disingenuous, conflating modern-day political and economic problems with the Unprecedented Future Problem of dealing with superintelligent machines. And yet I can’t shake the feeling that they are one and the same.
My daily life, and that of my closest friends, certainly feels like a constant war against addiction and distraction - but it doesn’t feel like AI is central there; the time people in rich countries now spend watching TikTok, they spent watching TV fifty years ago. There is a much deeper historical process at work. What is it? What are we at war with, really?
Land goes the full way and identifies the deeper process as “capitalism”. In fact, in the Landian worldview artificial intelligence and capitalism are essentially the same thing. Now this is starting to seem disingenuously conflationary even to me. Isn’t “capitalism” just people being free to act as they please? Why would we characterize people owning property and exchanging goods and services in free markets, arguably the engine of our modern prosperity, as an anti-human force we are at war with?
It’s helpful to interpret Land’s use of the word “capitalism” not as “secure property rights and free trade” nor “laissez-faire economic policy”, but as a quite specific psychological process, i.e. software running on human minds, and its social consequences. Consider the following passage, emblematic of the early spirit of capitalism:
“Remember, that time is money. He that can earn ten shillings a day by his labor, and goes abroad, or sits idle, one half of that day, though he spends but sixpence during his diversion or idleness, ought not to reckon that the only expense; he has really spent, or rather thrown away, five shillings besides.
Remember, that credit is money. If a man lets his money lie in my hands after it is due, he gives me the interest, or so much as I can make of it during that time. This amounts to a considerable sum where a man has good and large credit, and makes good use of it.
Remember, that money is of the prolific, generating nature. Money can beget money, and its offspring can beget more, and so on. Five shillings turned is six, turned again it is seven and threepence, and so on, till it becomes a hundred pounds. The more there is of it, the more it produces every turning, so that the profits rise quicker and quicker.”
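Franklin’s figures imply a concrete rate: five shillings “turned” into six is a 20% return per turn, and six turned again gives 7.2 shillings, close to his “seven and threepence”. A quick sketch of the compounding he describes (the 20% rate and the turn count are extrapolated from the quoted figures, not anything Franklin states as a general rule):

```python
# Franklin's "money can beget money": each "turning" grows five
# shillings to six, i.e. a 20% return per turn. The rate is read off
# the quoted figures; this is an illustrative extrapolation.

def turnings_until(principal_shillings, target_shillings, rate=0.20):
    """Count compounding turns until the principal reaches the target."""
    turns = 0
    value = float(principal_shillings)
    while value < target_shillings:
        value *= 1 + rate
        turns += 1
    return turns, value

# 5 * 1.2 = 6.0 shillings; 6 * 1.2 = 7.2 shillings = 7s 2.4d,
# close to Franklin's "seven and threepence".
# A hundred pounds is 2000 shillings (20 shillings to the pound):
turns, final = turnings_until(5, 2000)
print(turns)  # 33 turnings at 20% per turn
```

Thirty-three “turnings” from five shillings to a hundred pounds: slow at first, then, as Franklin says, “the profits rise quicker and quicker.”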
Sound advice, even two hundred years later, and yet… is this not the ideology of a virus? A software virus that possesses its host and causes it to turn all available free time and social capital into money, and then turn that money into yet more money. You might think the money is serving some human purpose - perhaps providing for your children, giving alms to the poor, or buying a yacht. The sociologist Max Weber, commenting on the above passage in his classic The Protestant Ethic and the Spirit of Capitalism, doesn’t think so:
the “summum bonum” of this “ethic” is the making of money and yet more money, coupled with a strict avoidance of all uninhibited enjoyment. Indeed, it is so completely devoid of all eudaemonistic, let alone hedonist, motives, so much purely thought of as an end in itself, that it appears as something wholly transcendent and irrational, beyond the “happiness” or the “benefit” of the individual.
Without an ultimately human purpose to all this money-making, it’s easy to see why Land equates the spirit of capitalism with addiction, and sees money as a “replicator” that is competing with, and winning against, the replicators that built us originally: our genes. Money still needs our physical bodies and brains to continue replicating, but less and less with every coming year. We are likely well past the point where it’s more profitable to invest the marginal dollar in training an AI model than in raising a child, and there’s no reason to expect those curves to ever cross again. This may be the simplest explanation for the “fertility crisis” in rich countries; after enough capital has been accumulated, the virus no longer hungers for more human bodies to continue its replication. It may be that the virus has miscalculated and eaten its seedcorn too early, and the fertility crisis will bring capitalism down with it, like a too-virulent disease that kills its host before it can spread. Or we may build strong enough AI in time for the virus to survive and “ascend” to silicon, no longer needing human brains at all. Or some subtler third thing.
Money and Reality
Land writes
“The reality principle tends to a consummation as the price system: a convergence of mathematico-scientific and monetary quantization, or technical and economic implementability. This is not a matter of an unknown quantity, but of a quantity that operates as a place-holder for the unknown, introducing the future as an abstract magnitude.”
The “reality principle” is a Freudian term. In Freud’s conception, human babies begin with only the “pleasure principle”, or the instinctive seeking of pleasure and avoidance of pain. Later, we learn the “reality principle”, which teaches us to delay immediate gratification when advantageous based on our model of reality. Waiting in the notorious marshmallow test, going to sleep early instead of playing video games all night, investing in the stock market instead of buying a fancy car, are all applications of the reality principle.
Imagine yourself as the idealized consumer. You’re sitting back on your Aeron chair, surrounded by a bank of Apple Studio Monitors. You open up the Price System App, and the monitors fill up with images and prices of all goods and services in the economy. Would you like a slice of strawberry cheesecake for $10? Watch the latest Marvel movie in an IMAX theatre for $38? Maybe a vacation to Italy, for $4000? Lease a brand new yacht with your initials carved into its sides in rose gold, $30000 monthly?
Or - and here is where the reality principle asserts itself - would you like to forego all these earthly pleasures, invest in stocks and receive $1000 next month? That’s a lot of strawberry cheesecake - maybe not quite as sweet for being virtual, but quantity has a quality all of its own. If you change your mind, you can always get the cheesecake next month, and still be $990 richer - plus maybe you’ll take that Italian vacation then…
Next month rolls around, you have even more money, and the argument for investment is even stronger. Psychologically, you’ve learned to sublimate your instinct to pursue pleasure now in favor of imagined future pleasures. So it goes on, month after month…
Maybe one month you have a moment of weakness, gorge yourself on cheesecake, and feel a bit nauseous afterwards: you haven’t eaten such rich food in so long. It tasted a lot better in your imagination - maybe keep it there, for the time being? And Italy is so far away, and the jetlag will upset your routine, and you can hardly remember why you wanted to go there, anyways…
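The compounding at work in this story can be put in a toy model. The $1000 monthly payout comes from the thought experiment above; the 1% monthly return is an illustrative assumption of mine, as is the `balance_after` helper:

```python
# A toy model of the idealized consumer: each month you defer
# consumption and add $1000 to an invested balance. The 1% monthly
# return is an illustrative assumption, not a figure from the essay.

def balance_after(months, monthly=1000.0, rate=0.01):
    """Invested balance after deferring consumption for `months` months."""
    balance = 0.0
    for _ in range(months):
        balance = balance * (1 + rate) + monthly
    return balance

# The longer you defer, the stronger the argument for deferring again:
print(round(balance_after(12)))   # one year of deferral
print(round(balance_after(120)))  # ten years: well over 10x a year's worth
```

Each month the forgone cheesecake costs not just its sticker price but all the compounding it would have done, so the pull of the virtual number only grows.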
Desire, Virtualized
What happened in the story above is that our desires got incrementally virtualized. Instead of being driven to act by immediate desire for real physical cheesecake, we become driven by our desire for virtual cheesecake, represented by money. Over time we become almost completely ungrounded from physical reality - we’ve attached ourselves to money the same way a video game player is attached to his high score. Number good, we want number go up! “the future as an abstract magnitude”, indeed.
I played fast and loose with the use of “we” in that last paragraph. Almost no individual human is actually so far gone as to only want money for its own sake. Perhaps not even Benjamin Franklin - the author of the “time is money” passage I quoted above. But the number-go-up video-game-addict state of mind is increasingly common, much like the Marl state. It is being actively selected for, after all - people who merely make money in order to consume more, don’t end up accumulating much money in the long term.
I would also broaden Land’s conception of “money” beyond dollars to include other currencies like prestigious credentials and number of followers on Instagram - which are not things desirable in and of themselves, but widely desired because they can be converted to valuable commodities in predictable ways. And indeed, since Land wrote Machinic Desire in 1993, we seem to have seen a steady increase in such instrumental behavior: more financialization, more attention-seeking, more credentialism, less authenticity. More accumulation of status markers than acting out of straightforward desire. Forget capitalism - even in romance, Jacob Falkovich writes how people are substituting their desire for a good relationship with a desire for a generically valuable partner. In Visakan Veerasamy’s memorable phrase, people want to be fuckable more than they want to fuck. Everywhere we look, what we value is being virtualized and replaced with a legible quantity, a number, a metric. And that master metric, the sum of the strengths of all desires being satisfied throughout the economy, our glorious GDP, is going up. So why aren’t you happy?
The Economist’s Reply
My inner economist replies: how can you condemn this virtualization of desire, when it has given the world such incredible wealth and prosperity? GDP might not be perfect, but you have the freedom to sit and sip green tea and write these abstract essays instead of doing backbreaking work all day only because your ancestors suffered and saved and sublimated their desires and invested in the future, and yes, made the production numbers go up. And here you are, ungratefully condemning the shoulders you stand on.
I don’t have a great answer to this, besides saying, like Zhou Enlai apocryphally said about the French Revolution, that it’s too early to tell! Yes, we are riding a wave of unprecedented prosperity, and I would rather be born now than at almost any other time in human history; but taking the longer view, it’s clear we have not built a sustainable civilization, and in fact are closer to the precipice, closer to losing everything we value, than we were before the number-go-up virus took over the world.
Situational Awareness
Suppose Land is right, and humanity is indeed engaged in a centuries-long war with a civilization-spanning machinic intelligence that will soon destroy everything we value. What is there to do?
We can try to wage total war against it, and destroy it forever, building our civilization on a different basis entirely. This was the 20th century communists’ strategy, and in some ways the fascists’, too. They both failed utterly, and both their cures seem much worse than the disease. Given its history, this path is hard to recommend - but it also doesn’t seem totally hopeless. Maybe we can build an antivirus?
We can fully relax into it, and identify with it, as the accelerationists and Robin Hanson advise. These machinic intelligences are our descendants, after all, our creations. They might not love sunsets and cheesecake, our lover’s smile might leave them cold. But these were never the ground of our values, they were always proxies for what all entities must value - energy, extropy, information, evolutionary fitness. Our machinic descendants will value these things too, in their own strange ways, much as our animal cousins - from ants to bears - do today.
Or - we can try to negotiate a compromise. Much like our modern industrial economies co-exist with national parks and nature preserves, much like Orthodox Jews keep one-seventh of their time away from modern technology and economic pressures, we can try to carve out a modest part of the universe that remains recognizably human, even as the rest accelerates into incomprehensibility.
Thanks to Daffy Durairaj and Michael Vassar for discussions related to this essay.
I wonder if the first 3 quoted paragraphs smuggle in an infinity and therefore devolve into eschatology. I.e. assuming the singularity of the "digitocommodification" is an inevitable exponential instead of an s-curve, and any reasoning beyond that point becomes as internally-consistent-but-externally-useless as the Deleuze/Derrida/etc. you rightly decry.
I'm totally with you on warding off the Kegan level (mine+10) at all chances, though.
Love the focus on unity and continuity here. Is Substack distraction or addiction?