Tools vs. Treaties
In which a Ford engineer solves a universal problem by getting wet, and we consider what this implies for making the world saner.
On a rainy day in April 1986, a Ford interior trim designer named Jim Moylan borrowed a company car to drive to a meeting in another building. The gas tank was empty, so he pulled into a station. Wrong side. He moved the car, got soaked, sat through the meeting in wet clothes, and came back to his desk. Perhaps swearing was involved, but history doesn’t report.
Without even taking his coat off, Moylan sat down and wrote a product convenience suggestion memo. “The indicator or symbol I have in mind would be located near the fuel gauge,” he wrote, “and simply describe to the driver on which side of the vehicle the fuel fill door is located.”
That’s it. The whole proposal. A little arrow near the gas gauge. You’ve seen it a million times, and perhaps never thought that some named person invented it.
His bosses looked at the memo, said, "Huh, that's cheap to implement. Just a change to a stencil. Have the boys down in dashboard design run up some possibilities," and put it in the 1989 Ford Escort (three years is incredibly fast in automotive roadmap terms). Other manufacturers noticed. Within a decade, every car on the road had one. Now every EV has one too, pointing at the charging port. Jim Moylan died this past December at eighty. Most people never knew his name, but every driver in the world knows his arrow.
The Moylan Arrow is my favorite piece of "epistemic infrastructure" (the ambient tools we use for knowing about the world). It delivers exactly the right information, at exactly the right moment, to exactly the person who needs it. It requires no coordination between automakers. It requires no fifty-page international standard for gas-cap placement (there isn’t one — manufacturers put it on whichever side the engineering works out). It requires no user training. It costs almost nothing. It just works, millions of times a day, so quietly that people forget it wasn’t always there. When I was working on creating inline static analyses for IntelliJ IDEA, I consciously aimed for that level of quiet utility, helping developers make better decisions almost without them noticing.
I bring this up because two pieces of writing crossed my desk this week that crystallized something I’ve been mulling over for a long time, and I’m pondering the Moylan Arrow again.
Two Theories of Change
Both pieces come from Forethought, a research organization thinking hard about navigating the transition to advanced AI. They were published four days apart. Both are thoughtful. Both are clearly the product of smart people who care about getting this right. And they represent two fundamentally different theories of how you make the world better.
The first piece, by Will MacAskill, utilitarian moral philosopher and kinda-sorta-inspiration for the Effective Altruism kinda-sorta-movement, proposes that an international AI project should have a monopoly on the training of AI systems above a certain compute threshold that aim to automate AI R&D. It envisions governments agreeing to restrict the most dangerous capabilities while encouraging helpful ones. It discusses enforcement mechanisms, incentive structures, and the geopolitical dynamics of getting rival nations to cooperate on technology that each considers strategically vital.
It is, in other words, a proposal for an international treaty. A carefully reasoned, intellectually serious proposal for an international treaty, on a topic where treaties are historically and structurally extremely hard to achieve and harder to enforce. MacAskill himself notes it’s work-in-progress he doesn’t plan to pursue further. The single comment on the piece asks “how do you even define the thing you’re trying to restrict?” There is no answer, because there isn’t one yet.
The second piece, by Owen Cotton-Barratt, Lizka Vaintrob, and Oliver Sourbut, takes a different approach entirely. It presents a series of design sketches for AI tools, ten so far with more to follow, that could improve how people think and decide. Community notes for everything. Rhetoric-highlighting that flags persuasive-but-misleading sentences. Reliability-tracking for public claims. Reflection-scaffolding that acts as a Socratic coach. Guardian angels that flag when you’re about to send an email you’ll regret.
Each sketch includes feasibility notes, possible starting points, and the kind of practical detail ("build for yourself first, then expand") that signals "this is meant to be built, not just discussed." The proposals range from straightforward (some of these are basically browser extensions backed by a lightweight server) to ambitious, but every single one is individually adoptable, individually testable, and creates value for its users regardless of whether anyone else adopts it. Strangely, the authors don’t seem to realize just how buildable their ideas are with agentic coding tools, or we’d presumably already be seeing proof-of-concepts.
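To give a sense of just how small some of these are, here is a toy version of the reliability-tracking idea: record a source's checkable public claims, resolve them later, and compute a running track record. Everything here (the names, the scoring rule) is my own hypothetical illustration, not the sketch authors' design:

```typescript
// Toy reliability tracker: log public claims, resolve them, score the source.
// All names and the scoring rule are hypothetical illustration.

interface Claim {
  source: string;      // who made the claim
  statement: string;   // the checkable claim itself
  resolved?: boolean;  // true = came true, false = didn't; undefined = still open
}

class ReliabilityTracker {
  private claims: Claim[] = [];

  record(source: string, statement: string): Claim {
    const claim: Claim = { source, statement };
    this.claims.push(claim);
    return claim;
  }

  resolve(claim: Claim, cameTrue: boolean): void {
    claim.resolved = cameTrue;
  }

  // Fraction of resolved claims that came true, or null if none resolved yet.
  trackRecord(source: string): number | null {
    const done = this.claims.filter(
      (c) => c.source === source && c.resolved !== undefined
    );
    if (done.length === 0) return null;
    return done.filter((c) => c.resolved).length / done.length;
  }
}
```

The "database and some clever UI" the sketch calls for is mostly around this core; the core itself fits on an index card.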
One set of proposals requires convincing every major government on earth to agree on the regulation of their most strategically important technology, and then to follow through on that agreement precisely at the moment things become really interesting (interesting as in "Oh God, Oh God, we’re all going to die"). The other requires convincing a developer to spend a weekend building a prototype, possibly while their spouse is off at some quilting ensemble or beekeeping pajama-jammy-jam.
I know which one I’d bet on. But I want to be precise about why, because this isn’t really about these specific pieces. It’s about a pattern.
Governance You Don’t Notice
I’m not anti-governance. I want to be clear about that, because “technology guy doesn’t like regulation” is a tedious cliché that I don’t want to inhabit (anymore). With age, I’ve grown more tolerant of governance, but much less tolerant of people play-acting at governance without caring whether it works or not.
The kind that actually governs tends to be the kind that works so well you don’t notice.
Automatic circuit breakers halt trading when markets drop too fast. Nobody votes on whether to invoke them. Nobody convenes a committee. The rule is embedded in the system’s architecture, and it fires when it fires. Unemployment insurance funding increases automatically during recessions, without requiring Congress to notice that a recession is happening and then agree to do something about it. Type systems prevent entire categories of bugs not by establishing a review board but by making the wrong thing harder to express than the right thing.
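The circuit-breaker case makes the point concrete: the rule is just a code path. A minimal sketch (thresholds modeled on the US market-wide circuit-breaker levels of 7, 13, and 20 percent declines from the prior close; illustrative, not an exchange implementation):

```typescript
// Governance embedded in architecture: nobody votes, nobody convenes a
// committee. The halt fires automatically when the condition is met.
// Thresholds are modeled on the US equity market-wide circuit breakers;
// this is an illustrative sketch, not real exchange code.

type Halt = { level: 1 | 2 | 3; declinePct: number } | null;

function checkCircuitBreaker(priorClose: number, currentPrice: number): Halt {
  const declinePct = ((priorClose - currentPrice) / priorClose) * 100;
  if (declinePct >= 20) return { level: 3, declinePct };
  if (declinePct >= 13) return { level: 2, declinePct };
  if (declinePct >= 7) return { level: 1, declinePct };
  return null; // trading continues
}
```

The point isn't the arithmetic. It's that the rule lives in the code path every trade already flows through, so enforcement requires no ongoing cooperation from anyone.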
These are all governance. They’re just governance that works by changing the infrastructure rather than by changing people’s behavior through persuasion or mandate. The information arrives at the moment of decision. The feedback loop is immediate. The mechanism doesn’t require ongoing cooperation between parties with misaligned incentives.
The Moylan Arrow is governance. It governs the interaction between driver and gas pump. It just governs so quietly that it doesn’t look like governance at all.
The Forethought design sketches are full of Moylan Arrows. Community notes that surface context at the moment you’re reading a misleading claim. Rhetoric highlighting that flags manipulative framing as you encounter it. Provenance tracing that lets you see where a claim came from before you decide whether to believe it. Reflection scaffolding that asks “are you sure?” before you send the angry email. Each one delivers the right information at the right moment to the person who needs it.
The international AI governance proposal, by contrast, is very much not a Moylan Arrow. It’s a proposal to hold a meeting where various parties would discuss what kind of arrow might theoretically be appropriate, if they could agree on what counts as a fuel filler. This is not a criticism of MacAskill’s intelligence or intentions — the man is clearly thoughtful and working on problems that matter. But the genre of intervention has a track record, and the track record is not encouraging. Nuclear non-proliferation: partial success at best, took decades, and the most dangerous actors simply declined to participate. Climate accords: aspirational numbers, inconsistent compliance. International coordination on strategically important technology of any kind: spotty, slow, fragile.
Meanwhile, the Moylan Arrow shipped in three years from memo to dashboard, and conquered the world in a decade. Community Notes went from experiment to genuinely useful in about the same timeframe.
This may just be me as a technologist rather than an activist. I want to be transparent about that bias. Thirty-five years of building things has taught me that the things that move the needle in people’s lives are the things that are useful to someone on Tuesday morning, not the things that require the UN to agree on a definition by Thursday. Perhaps I’m wrong. But when I look at the historical record and try to spot which path is lined with the skulls of previous failures and which with Moylan Arrows, the pattern is hard to ignore.
Angels on inventors’ shoulders
Here’s what makes right now different from 1986.
Jim Moylan was one person at one company who happened to have a good idea and access to a dashboard. The number of people who could be Jim Moylan — who could notice a friction point in how humans interact with information and actually build a solution — was small. You needed to work at a car company, or a tech company, or a research lab. You needed institutional backing, manufacturing capability, distribution channels.
That constraint is dissolving. Fast.
Jordan Rubin built Future Tokens, a set of composable reasoning skills that target specific blind spots in arguments via AI-assisted reasoning — opposition gaps, hidden assumptions, missing dimensions, rhetorical fulcrums. It’s epistemic infrastructure, designed to make thinking better, whatever the substrate. He built it by himself, and anyone can install it today. Over in my corner, I’ve built Bad Dave’s Robot Army, a collection of specialized AI agents that do things eerily similar to what the Forethought “personalized learning systems” sketch describes — agents that map unfamiliar codebases, surface architectural patterns, create personalized learning journeys for new technologies, and help developers build expertise faster. Nobody coordinated us. Nobody funded us. We built these because they were useful, and useful things get adopted.
The Forethought design sketches describe tools that small teams could prototype in weeks. Rhetoric-highlighting? That’s a browser extension backed by an LLM call. Reliability-tracking? A database and some clever UI. Reflection-scaffolding? People are already using chatbots for this, imperfectly; the sketches describe how to do it well. These aren’t moonshots. They’re small-team engineering projects. If you’re reading this blog, you could absolutely take a swing at most of these ideas on your own using agentic code tools, decent ones starting at $20/month.
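The rhetoric-highlighting extension really is that thin. Here is a skeleton of the idea, with the LLM call stubbed out by a toy classifier (the regex, the interface names, and the "appeal to obviousness" label are all my hypothetical stand-ins; a real extension would replace the stub with a request to whatever model provider you use):

```typescript
// Skeleton of rhetoric-highlighting: split page text into sentences, ask a
// classifier which ones use manipulative framing, return spans to highlight.
// The classifier below is a toy stub standing in for an LLM call.

interface Highlight {
  sentence: string;
  reason: string;
}

// Returns a reason the sentence is suspect, or null if it looks clean.
type Classifier = (sentence: string) => string | null;

function splitSentences(text: string): string[] {
  return text.match(/[^.!?]+[.!?]+/g)?.map((s) => s.trim()) ?? [];
}

function highlightRhetoric(text: string, classify: Classifier): Highlight[] {
  const out: Highlight[] = [];
  for (const sentence of splitSentences(text)) {
    const reason = classify(sentence);
    if (reason !== null) out.push({ sentence, reason });
  }
  return out;
}

// Toy stand-in for the LLM; flags one crude rhetorical pattern.
const toyClassifier: Classifier = (s) =>
  /everyone knows|obviously/i.test(s) ? "appeal to obviousness" : null;
```

Wire the output into a content script that wraps each flagged sentence in a styled span, and you have the weekend prototype.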
And this is only going to accelerate. Every person who learns to build a browser extension is a potential epistemic toolsmith. Every vibe coder who picks up enough engineering fundamentals to make their weekend projects actually work at scale (shameless plug that I’ll be launching a series on that soon) is someone who could build the next Moylan Arrow. The number of people who can notice that drivers keep going to the wrong side of the gas pump and can do something about it is exploding.
This connects to an economic pattern I keep coming back to in this blog. Many of the tools in the Forethought design sketches describe capabilities that have always existed in expensive, bespoke, high-friction forms. Rhetoric-highlighting is part of what peer reviewers do, slowly and with great complaining, for academic papers. Deep-briefing is what well-funded executives get from their chiefs of staff. Reflection-scaffolding is what a good therapist or personal coach provides. Provenance-tracing is what investigative journalists do when they have the time and budget.
What’s changed is the economics. These capabilities are becoming cheap enough to embed in tools that anyone can use. The transition from “luxury tools available to the few” to “infrastructure available to everyone” is exactly how the most important technologies in history have worked. Indoor plumbing. Electricity. Literacy. Wi-fi. The pattern repeats. And when it completes, the world gets substantially saner, not because anyone passed a law requiring sanity, but because the tools for being sane got cheap enough for everyone.
What I’m Actually Saying
I’m a technologist, not an activist, and that shapes my thinking in ways I want to be honest about. But here’s what I’ll say:
If you’re a serious policy person, mazel tov. I wish you many fat and happy children. Keep working on international coordination. If you can make it work (“This time for sure!”, in the words of the sage), I’ll be the first to applaud. The world needs people trying, although they’re not always the people one might choose.
But if you’re a builder — if you write code, if you design interfaces, if you make things that people use — or even if you wish to be a builder, there’s a different opportunity in front of you, and it’s available right now. Look at the Forethought design sketches. Look at Community Notes, which is already live and already working. Ask yourself: where is someone getting soaked at a gas pump? Where is there a small piece of information that, delivered at the right moment, would make the right decision easier than the wrong one?
Then build the arrow.
The changes coming will not be governed. Sadly, there is unlikely to be time for that, just as with the internet before it. The changes will instead be engineered, by a growing army of people who notice that the gas cap could be on either side and think "someone should really fix that." People who look at the state of the world, think "this is confused and insane," and simply build something that alleviates some confusion and prevents some insanity.
Jim Moylan didn’t file a patent. He didn’t start a movement. He didn’t write a position paper. He invented a small thing, wrote a memo, forgot about it, and changed the world.
This post was developed in conversation with Claude Opus 4.6 (congrats on the upgrade!). The arguments were vetted using Future Tokens, a set of composable reasoning skills developed by Jordan Rubin. As is often the case with these conversations, the biggest contribution Claude made wasn’t the words — it was asking “okay, but what are you actually saying?” until I figured it out.