The Surprising Power and Complexity of Aligned Recommendations
In which a naive algorithm turns out to be profound, dating apps are structurally incapable of love, and the author waxes rhapsodic about the lovely Brenda.
When last we met I wrote about epistemic infrastructure — small tools that deliver the right information at the right moment to the person who needs it. One of the Forethought design sketches that caught my eye was for “aligned recommender systems”: recommendation engines optimized not for engagement but for what users would actually endorse having seen, days or weeks later.
I’m embarrassed to say that it took me a couple of days to remember that I had actually built an aligned recommender system a while back. In my defense it’s been a long career.
About twelve years ago, I was working at a job posting site, which indeed will remain nameless. I evidently got tagged as “someone who is good with math” — a potent but dangerous label in any organization — and was put to work on their job recommendation page. The goal was straightforward: show people jobs they’d actually want. The result was a system that drove a good amount of traffic to the site, not by keeping people scrolling but by sending them toward jobs they’d apply for and, ideally, get.
What was the algorithm? It was just an embarrassingly straightforward dimensionality reduction over the space of job views, using MinHash signatures to approximate the set-theoretic Jaccard similarity coefficient, followed by a flip to using the minhashes themselves as inverted-index keys to achieve the necessary scalability, all rendered as a directed acyclic graph of map-reduce jobs in a combination of daily batch and real-time analyses. I know, ridiculous, right?
In English: we used collaborative filtering to find what jobs people like you liked. “People like you” was determined by little more than you being interested in the same jobs. When I say this out loud, it sounds incredibly naive. In practice, what we were doing was automating the sort of social partitioning that was the meat and potatoes of human job recommender systems. We effectively built an old boys’ network, except everyone was one of the old boys.
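For the curious, here's a toy sketch of the MinHash trick described above. This is not the production pipeline (which was a DAG of map-reduce jobs); the salted-hash construction, the 64-hash signature length, and all the function names are illustrative assumptions. The core ideas are real, though: matching signature positions estimate Jaccard similarity, and indexing users by their signature values avoids comparing all pairs.

```python
import hashlib
from collections import defaultdict

def minhash_signature(item_ids, num_hashes=64):
    """Compute a MinHash signature for a set of viewed job IDs.

    Each "hash function" is simulated by salting a stable hash; the
    minimum value per salt is one coordinate of the signature.
    """
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{salt}:{item}".encode()).hexdigest(), 16)
            for item in item_ids
        ))
    return tuple(sig)

def jaccard(a, b):
    """Exact Jaccard similarity between two sets."""
    return len(a & b) / len(a | b)

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature positions estimates Jaccard."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

def build_index(signatures):
    """Invert the signatures: each (position, hash value) pair becomes
    an index key, so candidate similar users can be found by key lookup
    instead of all-pairs comparison -- the 'flip' that makes it scale."""
    index = defaultdict(set)
    for user, sig in signatures.items():
        for pos, val in enumerate(sig):
            index[(pos, val)].add(user)
    return index
```

Two users who viewed overlapping sets of jobs will collide on many index keys; users with nothing in common will collide on almost none.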
And it worked. It almost worked too well. For a while, the (unmonetized) recommendation page was so successful that it was cannibalizing traffic from the (monetized) search page, thus costing my employer thousands of dollars per day! It worked not because the algorithm was clever (it wasn’t, particularly, although the math behind minhash similarity is extremely pretty), but because of something deeper about the structure of the problem.
Three Kinds of Recommendation
Not all recommender systems are created equal, and the differences have almost nothing to do with the technology. They have to do with who’s paying and what “success” means.
Product recommendations are the ones everyone thinks of, probably because they made Jeff Bezos a bezillionaire. Amazon, Spotify, Netflix, TikTok: they all recommend new products or content for you to consume. The platform makes money when you buy things or keep consuming. The recommender is aligned with engagement. More clicks, more watches, more purchases. The user... maybe wants this? In the moment, sure. On reflection, after four hours of doomscrolling or the arrival of yet another impulse-bought kitchen gadget, maybe not. This is where the pathologies the Forethought folks worry about live — systems optimized for “you clicked” rather than “you’re glad you clicked.”
Job recommendations are structurally different. The ideal job recommendation results in the user never needing another job recommendation again. Think about that for a moment. Your success metric is the user leaving and never coming back. A repeated monthly active user on a job site isn’t a sign of engagement. It’s a sign of failure. Every naive engagement metric — time on site, return visits, pages viewed — is not just unhelpful but actively inverted. If you’re optimizing for time on site, you’re optimizing for failing to find people jobs.
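To make the inversion concrete, here's a toy sketch contrasting the naive engagement metric with the metric a job site should actually optimize. The field names and the 90-day window are my illustrative assumptions, not anything from the original system:

```python
from dataclasses import dataclass

@dataclass
class UserHistory:
    sessions: int              # visits to the site
    minutes_on_site: float
    hired_within_90d: bool
    returned_after_hire: bool  # came back looking again

def engagement_score(u):
    # The naive metric: more time and more visits count as "success".
    return u.sessions * u.minutes_on_site

def placement_score(u):
    # The inverted metric: the user got hired and had no reason
    # to ever come back. Success is the user leaving.
    return 1.0 if u.hired_within_90d and not u.returned_after_hire else 0.0
```

Under `engagement_score`, the user who searches endlessly and never gets hired looks like your best customer; under `placement_score`, that same user is your clearest failure.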
The magic of the job site was that the payers — the employers posting listings — wanted the same thing the job seekers wanted. Employers want well-matched candidates who will stick around. Job seekers want jobs where they’ll thrive (plus, you know, making the rent next month would be nice). The site wanted to demonstrate that it produces good matches so employers keep posting. Everyone’s incentives pointed roughly at the same hilltop. You could apply almost any reasonable, scalable algorithm to this problem and get decent results, because the economic structure was clean.
Dating recommendations have the same alignment structure as jobs — the ideal recommendation results in the user deleting the app forever — but with catastrophically worse economics. Because the user is the payer.
Let me explain why this matters by waxing briefly rhapsodic about my wife, the lovely Brenda.
I have been with Brenda for thirty years now, all but the first six months of that happily married. Yes, we were engaged after three months and married after six, yes, really. She is, by any rational accounting, the best thing that has ever happened to me. Finding the right life partner is worth — and I speak with no hyperbole — multiple years’ salary. Some would say “priceless”, but that’s basically a word used to prevent thought, so I’ll assume better of my readers. The compound returns on a good partnership across decades of shared life, shared decisions, shared joy and shared hardship are almost impossible to overstate. The net present value of “the right person” dwarfs every other economic decision most people will ever make, including their house, their career, their friends, and their education. And that’s before risk-weighting for the unspeakably horrible downsides of finding “the wrong person”.
Brenda and I predate dating apps, a fact that fills me with incredible relief every time I think about it. We met the old-fashioned 1990s way, by being surreptitiously set up on a blind date by a scheming mutual gay friend. A dating app that truly found you your Brenda would be worth a staggering amount of money. You should be willing to pay tens of thousands of dollars for that match without blinking. And yet the moment the app succeeds, perfectly, it loses a customer. The app that finds you lasting love in three weeks earns three weeks of subscription revenue and then you’re gone. The app that almost finds you lasting love, that keeps surfacing people who are pretty good but not quite right, that keeps you swiping with hope — that app earns years of revenue by filling the customer with disappointment and self-loathing.
Every dating app CEO has stared at this structural contradiction at three in the morning. There is no clean resolution. You either align with the user and shrink your revenue, or you align with your revenue and subtly underserve your users. The entire industry has quietly, structurally, chosen door number two. Not through malice. Through economics.
What the Algorithm Can’t Fix
The Forethought design sketch for aligned recommender systems is thoughtful and technically sound. It fully comprehends the problem, even if it doesn’t necessarily reach a solution, which is better than most. They propose building systems that optimize for “long-term user endorsement” rather than short-term engagement. They discuss retrospective measurement, user interviews, and multi-source preference modeling. It’s good work, and the world would be better if these systems existed.
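The difference between the two objectives is easy to state in code. This is my own minimal sketch of the idea, not anything from the Forethought document; the field names and the seven-day survey window are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Impression:
    user_id: str
    item_id: str
    clicked: bool
    endorsed_after_7d: Optional[bool]  # asked a week later; None = not surveyed

def engagement_label(imp):
    # What most platforms optimize: did they click right now?
    return 1.0 if imp.clicked else 0.0

def alignment_label(imp):
    # What an aligned recommender would optimize: a week later, is the
    # user glad this was shown? Unsurveyed impressions are excluded
    # from training rather than assumed positive.
    if imp.endorsed_after_7d is None:
        return None
    return 1.0 if imp.endorsed_after_7d else 0.0

def training_set(impressions, label_fn):
    """Build (impression, label) pairs, dropping unlabeled examples."""
    return [(imp, y) for imp in impressions
            if (y := label_fn(imp)) is not None]
```

Note what the aligned version costs you: every clicked-but-regretted impression flips from a positive to a negative example, and you need the survey machinery to collect the delayed labels at all.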
But the sketch underspecifies the hardest part of the problem, and it’s not the technology.
My job site recommendation engine worked — with twelve-year-old collaborative filtering technology and with nothing resembling a modern LLM — because everyone in the economic chain wanted the same outcome. It wasn’t cheap (I’m not sure what my employer was paying to let me use a Hadoop cluster like a pinball machine), but it wasn’t ridiculously expensive either. The alignment was in the incentive structure, not the algorithm. If I’d built the same algorithm for a dating app with a subscription model, it would have produced worse outcomes, because the system it was embedded in would have been rewarded for keeping users searching.
This is the uncomfortable truth about aligned recommendation: it’s not primarily a technology problem. It’s an economics problem. And economics problems are harder than technology problems, because you can’t solve them by building a better model. You have to change who pays, or what they’re paying for, or how success is measured and by whom.
The Forethought sketch gestures at this with their discussion of “building new recommender platforms from scratch” versus “building add-ons that filter content from existing platforms.” But what they’re really describing is the challenge of grafting aligned incentives onto misaligned economic structures. The existing platforms aren’t misaligned by accident. They’re misaligned because their business model requires it. An add-on that makes Instagram’s recommendations better for users makes Instagram’s recommendations worse for Instagram’s advertisers. Instagram’s real-time analytics will notice this very quickly, furious Slack messages will be exchanged, cease-and-desists will be mailed, and changes will be made in the code and terms-of-service to prevent that nonsense from ever happening again.
The Strange Angels May Not Save Us Here
Here’s where I arrive at an uncomfortable place, especially for a blog that has spent months arguing that AI tools can solve problems that previously required institutional coordination.
The current generation of AI — these strange new angels of LLMs — can do remarkable things. They can write, reason, code, analyze, create. Working with Claude Opus 4.6, I get uncomfortable indications that LLMs can now out-think me. I have no doubt they can build better recommendation algorithms. They can model user preferences with unprecedented sophistication. They can probably build a recommendation engine that genuinely predicts what you’d endorse having read, watched, or clicked on a week later.
None of that matters if the entity deploying the system is economically incentivized to not use it as designed.
You can build the most beautifully aligned recommender system in history, train it on reflective endorsement rather than engagement, validate it with longitudinal studies showing that users are genuinely happier and better informed. And then the platform that deploys it will notice that “happier and better informed” users spend 30% less time on the app, see 30% fewer ads, and generate 30% less revenue. And then a product manager will be tasked to “balance” alignment with engagement, because the numbers must be met. And then you’re back where you started, with a slightly more sophisticated version of the same misalignment.
This is a problem that engineering alone cannot solve. It requires either a business model where the payer benefits from user alignment (like my job site), or regulation that changes the incentive structure (which brings us back to the previous post and the question of whether governance at that scale is achievable), or a cultural shift where users are willing to pay directly for aligned recommendations (which requires users to understand the problem, which is itself an epistemic infrastructure challenge).
I built an aligned recommender system twelve years ago. It worked beautifully. The algorithm was simple. The economics were aligned. And I’ve spent the years since watching the rest of the recommendation industry fail at this, not for lack of technology but for lack of aligned incentives.
The strange angels can make the technology better. What they can’t do — what nothing can do algorithmically — is make humans want the same things. In spite of what many might wish, we humans are fundamentally “unaligned”; recommendation algorithms just show that clearly. The payers want engagement. The users want fulfillment. The platforms want revenue. And somewhere in that triangle, the lovely Brenda remains unfound by millions of people whose dating apps are optimized to keep them searching.
That’s not a technology problem. Would that it were. Technology problems, I know how to solve.
This post was developed in conversation with Claude Opus 4.6, who noted that the minhash paragraph would be either completely opaque or deeply satisfying depending on the reader, with very little middle ground. We agreed this was a feature, not a bug. Previous post: Tools vs. Treaties.


I'm grateful to have met my partner the old-fashioned way: literally the "girl next door," as I moved in below her in a house divided into apartments. We met in the driveway.
-`ღ´-
But I would push back a bit on the "unaligned" point, since I think the fundamental incentives that drive all of us can be summarized via Maslow (i.e., the Hierarchy of Needs). I also love Dale Carnegie's book "How to Win Friends and Influence People," where he said this:
---
What do you want? Not many things, but the few that you do wish, you crave with an insistence that will not be denied. Almost every normal adult wants:
1. Health and the preservation of life.
2. Food.
3. Sleep.
4. Money and the things money will buy.
5. Life in the hereafter.
6. Sexual gratification.
7. The well-being of our children.
8. A feeling of importance.
Almost all these wants are gratified – all except one. But there is one longing – almost as deep, almost as imperious, as the desire for food or sleep which is seldom gratified. It is what Freud calls “the desire to be great.” It is what Dewey calls “the desire to be important.”
---
It's the last one that really, really gets me because it's so true.
Funny that this topic just so happens to perfectly align with the last comment I posted on this site.
Nevertheless, centralized platforms will ALWAYS be shit, because they relentlessly optimize for money and growth, and that's it.
When all a company cares about is making more money than it did this time last year, it will NEVER cultivate a place for human flourishing.
Individuality, proper reach and discovery, authenticity, truly diverse voices, social support, thoughtful replies, unbridled creativity, open-mindedness, highly engaged communities, meaningful discussions, togetherness, deep lasting connections, etc. will never be prioritized when all these companies care about is MONEY.
The PEOPLE are what make a thing great, yet MONEY is all they ever optimize for.