A friend and I were recently lamenting the strange death of OKCupid. Seven years ago when I first tried online dating, the way it worked is that you wrote a long essay about yourself and what you were looking for. You answered hundreds of questions about your personality, your dreams, your desires for your partner, your hard nos. Then you saw who in your area was most compatible, with a “match score” between 0 and 100%. The match scores were eerily good. Pretty much every time I read the profile of someone with a 95% match score or higher, I fell a little bit in love. Every date I went on was fun; the chemistry wasn’t always there but I felt like we could at least be great friends.
I’m now quite skeptical of quantification of romance and the idea that similarity makes for good relationships. I was somewhat skeptical then, too. What I did not expect, what would have absolutely boggled young naive techno-optimist Ivan, was that 2016-era OKCupid was the best that online dating would ever get. That the tools that people use to find the most important relationship in their lives would get worse, and worse, and worse. OKCupid, like the other acquisitions of Match.com, is now just another Tinder clone - see face, swipe left, see face, swipe right. A digital nightclub. And I just don’t expect to meet my wife in a nightclub.
This isn’t just dating apps. Nearly all popular consumer software has been trending towards minimal user agency, infinitely scrolling feeds, and garbage content. Even that crown jewel of the Internet, Google Search itself, has decayed to the point of being unusable for complicated queries. Reddit and Craigslist remain incredibly useful and valuable precisely because their software remains frozen in time. Like old Victorian mansions in San Francisco they stand, shielded by a quirk of fate from the winds of capital, reminders of a more humane age.
How is it possible that software gets worse, not better, over time, despite billions of dollars of R&D and rapid progress in tooling and AI? What evil force, more powerful than Innovation and Progress, is at work here?
In my six years at Google, I got to observe this force up close, relentlessly killing features users loved and eroding the last vestiges of creativity and agency from our products. I know this force well, and I hate it, but I do not yet know how to fight it. I call this force the Tyranny of the Marginal User.
Simply put, companies building apps have strong incentives to gain more users, even users who derive very little value from the app. Sometimes this is because you can monetize low-value users by selling them ads. Often, it’s because your business relies on network effects and even low-value users can help you build a moat. So the north star metric for designers and engineers is typically something like Daily Active Users, or DAUs for short: the number of users who log into your app in a given 24-hour period.
What’s wrong with such a metric? A product that many users want to use is a good product, right? Sort of. Since most software products charge a flat per-user fee (often zero, because ads), and economic incentives operate on the margin, a company with a billion-user product doesn’t actually care about its billion existing users. It cares about the marginal user - the billion-plus-first user - and it focuses all its energy on making sure that marginal user doesn’t stop using the app. Yes, if you neglect the existing users’ experience for long enough they will leave, but in practice apps are sticky and by the time your loyal users leave everyone on the team will have long been promoted.
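To make the north star metric concrete: here is a minimal sketch of how a team might compute DAUs from raw events. The `(user_id, timestamp)` log format and the helper name are hypothetical, purely for illustration.

```python
from datetime import datetime, timedelta, timezone

def daily_active_users(events, day_start):
    """Count distinct users with at least one event in the 24 hours
    starting at day_start. `events` is a list of (user_id, timestamp)
    pairs -- a hypothetical log format."""
    day_end = day_start + timedelta(hours=24)
    return len({user for user, ts in events if day_start <= ts < day_end})

# Toy data: two users active in the window (one of them twice),
# one user active only the next day.
t0 = datetime(2023, 10, 1, tzinfo=timezone.utc)
events = [
    ("alice", t0 + timedelta(hours=1)),
    ("alice", t0 + timedelta(hours=5)),
    ("bob",   t0 + timedelta(hours=12)),
    ("carol", t0 + timedelta(hours=30)),  # outside the window -- not counted
]
print(daily_active_users(events, t0))  # → 2
```

Note that nothing in this number distinguishes a user who wrote three heartfelt messages from one who opened the app by reflex and closed it; both count once.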
So in practice, the design of popular apps caters almost entirely to the marginal user. But who is this marginal user, anyway? Why does he have such bad taste in apps?
A personality sketch of the marginal user
Here’s what I’ve been able to piece together about the marginal user. Let’s call him Marl. The first thing you need to know about Marl is that he has the attention span of a goldfish on acid. Once Marl opens your app, you have about 1.3 seconds to catch his attention with a shiny image or triggering headline, otherwise he’ll swipe back to TikTok and never open your app again.
Marl’s tolerance for user interface complexity is zero. As far as you can tell he only has one working thumb, and the only thing that thumb can do is flick upwards in a repetitive, zombielike scrolling motion.

As a product designer concerned about the wellbeing of your users, you might wonder - does Marl really want to be hate-reading Trump articles for 6 hours every night? Is Marl okay? You might think to add a setting where Marl can enter his preferences about the content he sees: less politics, more sports, simple stuff like that. But Marl will never click through any of your hamburger menus, never change any setting to a non-default.

You might think Marl just doesn’t know about the settings. You might think to make things more convenient for Marl, perhaps add a little “see less like this” button below a piece of content. Oh boy, are you ever wrong. This absolutely infuriates Marl. On the margin, the handful of pixels occupied by your well-intentioned little button replaced pixels that contained a triggering headline or a cute image of a puppy. Insufficiently stimulated, Marl throws a fit and swipes over to TikTok, never to return to your app. Your feature decreases DAUs in the A/B test. In the launch committee meeting, you mumble something about “user agency” as your VP looks at you with pity and scorn. Your button doesn’t get deployed. You don’t get your promotion. Your wife leaves you. Probably for Marl.
Of course, “Marl” isn’t always a person. Marl can also be a state of mind. We’ve all been Marl at one time or another - half-consciously scrolling in bed, in line at the airport with the announcements blaring, reflexively opening our phones to distract ourselves from a painful memory. We don’t usually think about Marl, or identify with him. But the structure of the digital economy means most of our digital lives are designed to take advantage of this state. A substantial fraction of the world’s most brilliant, competent, and empathetic people, armed with near-unlimited capital and increasingly god-like computers, spend their lives serving Marl.
By contrast, consumer software tools that enhance human agency, that serve us when we are most creative and intentional, are often built by hobbyists and used by a handful of nerds. If such a tool ever gets too successful one of the Marl-serving companies, flush with cash from advertising or growth-hungry venture capital, will acquire it and kill it. So it goes.
Thanks to Ernie French (fuseki.net) for many related conversations and comments on this essay.
I worked on this problem for more than a decade, as co-founder of what became the Center for Humane Technology, and in other roles.
One way to break it down is by different users, but as Ivan notes towards the end, we all have Marl in us. So, another way to get at the same thing is to exclude certain engagements from the metrics — the things we click on but would not reflectively endorse as meaningful. You get a lower engagement number that represents meaningful choice, rather than just "revealed preference" / engagement.
This is what I work to align LLMs with at the Institute for Meaning Alignment[1], and Ivan is helping! I also have a paper[2] on the difference between revealed preference and meaningful choice.
(It's also worth noting that this process of enshittification doesn't just happen in software. Markets and voting also have this revealed preference vs meaningful choice problem. So, making this distinction is a chance to upgrade all of our large-scale systems.)
[1] https://meaningalignment.org/
[2] https://github.com/jxe/vpm/blob/master/vpm.pdf
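The metric split described above — raw engagement versus engagements a user would reflectively endorse — can be sketched in a few lines. Everything here is hypothetical: the event names, and especially which actions land in which bucket, would be an empirical and editorial question, not something the comment specifies.

```python
# Hypothetical action names. Which actions count as "meaningful" is the
# hard part in practice; this split is illustrative only.
REFLECTIVELY_ENDORSED = {"write_message", "save_article", "finish_course"}

def engagement(events):
    """Raw engagement: every (user_id, action) event counts.
    This is the 'revealed preference' number."""
    return len(events)

def meaningful_engagement(events):
    """Count only actions the user would reflectively endorse as meaningful."""
    return sum(1 for _, action in events if action in REFLECTIVELY_ENDORSED)

events = [
    ("marl", "doomscroll_tick"),
    ("marl", "doomscroll_tick"),
    ("marl", "autoplay_next"),
    ("ivan", "write_message"),
    ("ivan", "save_article"),
]
print(engagement(events))             # → 5
print(meaningful_engagement(events))  # → 2
```

An A/B test optimizing the second number instead of the first could, in principle, favor the "see less like this" button over the puppy pixels — which is exactly the point of making the distinction.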
1. A hobbyist builder doesn't have to sell out.
2. A hobbyist builder who does sell out (at a good price) can use the capital to build even more hobbyist tools, instead of becoming a VC or retiring on an island somewhere.
A hobbyist builder who fails both 1 and 2 is just another kind of Marl.