Rig for Turbulence
Your Captain has turned on the seatbelt sign for the remainder of this flight
I read cyberpunk obsessively in my misspent youth, and never understood how people thought it was dystopic.
I mean, sure, the megacorps were bad and the street was mean. But people had cyberdecks. They could jack into global networks and reshape reality with their minds, something that quite resonated with young programmer Dave. The future was strange and dangerous and bursting with possibility, and the protagonists navigated it by being smart and fast and adaptable and occasionally ruthless. They traveled between exotic locations which were absolutely not the far west suburbs of Chicago. They navigated between lucrative and occasionally dangerous contracts with exciting eccentric billionaires. They casually adopted powerful and dangerous skills and technologies. That all sounded fantastic.
I’ve been like this my whole life. Neophilia is the clinical term, if one is in a mood to medicalize. I like it when things get weird. When the ground shifts. When the old maps stop working and you have to navigate by feel and first principles. Bat territory. Some people find that terrifying. I find it — and I need you to know this about me before we go any further — deeply exciting.
I’m telling you this upfront because it’s a disclosure. My gut reactions at finding myself in deep kimchi are tilted toward “This is going to be great!” or, worse, “Just imagine the experience points!” When I look at the current AI landscape and feel exhilaration rather than dread, you should discount that somewhat based on just how much I love living in the future. I’ll try to help you calibrate how much.
That said, I’m also a software engineer with thirty-five years of experience who has been using these tools daily, in production, for serious work, since mid-2024. And the gap between what I see every day and what I read in the discourse is so vast that it has become my primary source of professional entertainment.
The Week That Ate the Market
Last Sunday, Citrini Research published a piece of speculative fiction posing as a macro memo from June 2028, in which AI has cratered the S&P by 38% and pushed unemployment past 10%. The piece went viral. Markets sold off. Bloomberg covered it. The White House called it science fiction. Citadel Securities issued a formal rebuttal. A co-author went on TV to call for an AI tax (and also admitted to being short many of the companies mentioned in the note, quelle surprise).
A Substack post moved markets. That part is genuinely interesting.
The content isn’t. Citrini’s scenario is the same one that has been predicted about every major technology for roughly three hundred years. Mechanization will impoverish the weavers. Electricity will obsolete the factory workers. Computers will do away with the secretaries. AI will destroy the knowledge workers. The pattern is always the same: mass technological unemployment, deflationary spiral, social collapse. The prediction is always specific. The timeline is always short. It is deeply believable and it has never, not once, been correct.¹
This isn’t because the technologies weren’t transformative. They were. Every one of them destroyed specific jobs, created new ones, and left the economy larger and weirder than before. The error isn’t in predicting disruption. The error is in predicting that disruption leads to less economic activity rather than different economic activity, and in assuming that the transition happens faster than institutions can adapt.
I could steelman the “this time it’s different” case — intelligence is more general than a power loom, the substitution effects are broader, the pace is faster. And I’d take those arguments seriously. But Citrini doesn’t make them. Citrini writes speculative fiction, packages it as analysis, and — this is the part that matters — manages a model portfolio. When a portfolio manager publishes a bearish macro scenario, you are not reading analysis. You are reading a sales pitch.
William Goldman, the screenwriter, said about Hollywood: “Nobody knows anything.” He wasn’t just being cynical. He was describing a genuinely unpredictable system in which the people closest to it have only marginally better predictive accuracy than outsiders, and everyone pretends otherwise because admitting uncertainty is professionally fatal.
That’s where we are with AI and the economy. Nobody knows anything. Except the people who are making confident predictions are typically either (a) selling something, (b) extrapolating from vibes, or (c) telling you that X is impossible, when X is something I’ve been doing every day since June, before my first Diet Coke of the morning.
What’s My Model?
Okay, so if nobody knows anything, why am I writing this?
Because “nobody knows anything” doesn’t mean “all models are equally wrong.” It means the granularity of useful prediction is much coarser than most people pretend, and that you should hold your models loosely. So here’s mine.
You’ve probably encountered some version of the scary recursive self-improvement model. It goes like this: AI gets smart enough to improve itself, which makes it smarter, which lets it improve itself faster, and the curve of its smartness eventually goes vertical. Intelligence explosion. Singularity. Hard Rapture. Nerdvana. We may become as ghosts. We may become as gods. Utopia or extinction, depending on who’s telling the story, but either way: a discontinuity so sharp that the world on the other side is unrecognizable.
My model shares some DNA with that one but differs in important ways. The recursive part is real. AI is already being used to develop better AI. Humans augmented by AI tools are more productive than humans alone, and they’re using that productivity to build better AI tools. The intelligence amplification loop exists. I know because I live and work in it.
Where I differ is on the “goes vertical” part, because I think there are terms in the equation that don’t go to zero. And that changes everything.
Total intelligence in the system is increasing rapidly. Not just AI capability, though that’s the headline. Human-AI collaboration, tool-augmented reasoning, automated research, institutional knowledge capture — the total cognitive throughput of civilization is going up, and the rate of increase is itself increasing. This isn’t speculation. I watch it happen every day in my own work. Things I couldn’t do two years ago are routine now. Things I couldn’t do six months ago are getting there.
Increasing intelligence, applied through market mechanisms, generates wealth. This is less a prediction and more a description of how the last several centuries have worked (and how the prior centuries mostly failed to). More cognitive capacity, applied to economic problems, produces more solutions, more products, more services, more surplus. The distribution of that wealth is a separate question, but the generation of wealth follows from increasing intelligence with high reliability.
Increasing intelligence and wealth make things weirder. Not calmer. Not more predictable. Weirder. What, you think increasing total intelligence and wealth will make things less weird? You must be thinking of a different species than the Homo sapiens I know and love. Every expansion of human capability in history has produced results that the previous generation would have found incomprehensible, and often abhorrent. The internet didn’t make the world simpler. Smartphones didn’t make the world more orderly. The printing press didn’t reduce the amount of nonsense in circulation. More capability means more possibilities means more strangeness. This is a feature, not a bug.
But some terms in the equation don’t go to zero. This is where I get off the singularity bus. The weirdness doesn’t go to infinity, because there are real constraints that bound how fast things can change, no matter how much intelligence is applied to the task. These constraints include the speed of light, the second law of thermodynamics, technological diffusion cycles, that annoying HR lady, surface tension in viscous fluids, NIMBYism, maximum processor clock speeds, procurement processes, engineering tolerances, and general human cussedness. The physical constraints are obvious. The human ones are underrated. Getting a technology from “it works in the lab” to “it’s deployed at scale in a regulated industry” takes months to years, minimum, because of institutional inertia, contractual lock-in, risk aversion, training requirements, and the fact that someone has to actually rewire the building and they’re booked through July. AI changes all of that a bit, but not enough to make it instantaneous.
The recursive loop is real, but it’s not frictionless. Every cycle of improvement has to pass through the physical world, through human organizations, through regulatory frameworks, through supply chains, through the plain fact that deploying things at scale is way harder than inventing them. These aren’t temporary obstacles that smarter AI routes around. They’re structural properties of reality. The loop accelerates, but it doesn’t go vertical. It’s a steep exponential with real drag terms tending toward an S-curve or very steep linearity, not a singularity.
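If you want the cartoon version of that in code, here is a toy sketch: the same recursive improvement loop run twice, once frictionless and once with a drag term that never goes to zero. Every number in it is invented; only the shapes of the curves matter.

```python
# Toy model, not a forecast. Two versions of the recursive improvement loop:
# one frictionless, one with a drag term that never goes to zero.
# r, K, and steps are invented numbers; the shapes are the point.

r = 0.5        # raw per-step self-improvement rate
K = 1000.0     # combined ceiling: physics, institutions, supply chains, that HR lady
steps = 40

explosive = 1.0  # the "goes vertical" story
bounded = 1.0    # the same loop, with drag

for t in range(steps):
    explosive += r * explosive                    # pure compounding: goes vertical
    bounded += r * bounded * (1 - bounded / K)    # logistic drag: steep, then an S-curve
    print(f"t={t:2d}  explosive={explosive:14.1f}  bounded={bounded:7.1f}")
```

Run it and the first column runs away to absurdity while the second climbs steeply and then flattens toward its ceiling. My whole disagreement with the singularity crowd is the second column.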
So: my model is that things get increasingly weird, indefinitely, but not infinitely weird at any given moment. The curve is steep and accelerating, but it’s bounded (or at worst its slope is bounded). Nobody can predict the specific shape of the weirdness more than about eighteen months out, because the system is too complex, too reflexive, and too dependent on choices that haven’t been made yet.
That’s the model. It’s not a prediction about unemployment rates or S&P levels or which industries survive. In the words of the sage, nothing here is investment advice. Those would be specific claims about a system too complex for specific claims. The model says: prepare for sustained, accelerating strangeness, bounded by physics and human nature, with the overall trajectory pointing toward more total intelligence, more total wealth, and much more total weirdness.
Do feel free to check me on this, because honestly I love this vision.
What I’m Not Going to Predict
I’m not going to make society-scale predictions, because I don’t believe anyone can. I’ve spent the last two years making industry-scale predictions — specifically, about how AI changes software development — and even at that resolution it’s been humbling. I’ve been right about a lot of the directions and wrong about several of the timelines, and I’m working in the single domain where I have the most expertise, the best data, and access to state-of-the-art tools.
Society is harder. The interaction effects between technology adoption, labor markets, consumer behavior, regulatory response, geopolitical competition, cultural adaptation, and plain dumb luck are beyond the modeling capacity of any human or any AI currently in existence. The people making confident society-scale predictions are doing one of three things: extrapolating a trend from a handful of data points, telling a story, or talking their own book. These are narratives dressed up as analysis.
What I do know is that the total amount of intelligence in the system is increasing fast, the total amount of weirdness is going to increase with it, and the terms that bound the weirdness don’t include “somebody on Twitter said it couldn’t happen.”
Rig for Turbulence
In the submarine service (at least fictionally), “rig for turbulence” means: secure everything, brace yourself, and trust your training. It doesn’t mean panic. It doesn’t mean abandon ship. It means things are about to get rough and you need to be ready.
Here’s how to be ready.
Invest in your skills, relentlessly. Both your AI skills (learning to work with these tools effectively, understanding their capabilities and limitations, developing judgment about when to trust them) and your general mental armamentarium (critical thinking, domain expertise, the ability to learn new things quickly, communication). You now have at your disposal the most powerful teaching technology since the book. Use it. The people who thrive in turbulence are the ones with the deepest and broadest skill sets, because skills transfer across contexts even when specific jobs don’t. Every hour you spend getting better at thinking clearly is an hour that pays dividends regardless of which specific version of weird we end up in.
Be humble about what you think you know. The most dangerous position right now is certainty. If you’re certain AI will destroy the economy, you’re going to make bad decisions. If you’re certain it won’t, same thing. If you’re certain your specific job is safe, or your specific industry is immune, or your specific skill set is permanent — you’re going to get surprised, and surprise is what kills you. The correct epistemic state is: things are changing fast, my map is incomplete, and I need to keep updating. Improvise. Adapt. Overcome. That may be uncomfortable. Get comfortable with it.
Do not tie your identity to any given vision of the future. This is the hard one. People who have built their identities around being AI optimists or AI pessimists, around their current job title, around a political ideology, around a specific theory of how the future unfolds — those people are going to have a very bad time. Not because they’re necessarily wrong (although that’s the way to bet), but because identity attachment makes you unwilling to update when the evidence changes. And the evidence is going to change. Rapidly. Repeatedly. In directions nobody expects. If your sense of self requires a specific future to obtain, you’ve built on sand. If you’re hoping that the chaos that is unfolding will just happen to be fixed by some plan you’ve had for years, you don’t really get what chaos means.
The cyberpunk protagonists I loved as a teen survived in their strange new world not because they predicted it correctly, but because they were adaptable. They were smart, they were skilled, they learned fast, and they didn’t waste time mourning the world that was.
Things are about to get weird. Weirder than most people expect, but not inhumanly so, bounded by physics and cussedness. Not a Mad Max dystopia. Not a Star Trek utopia. Just very, very strange.
Pot’s right. Let’s do this.
This post was constructed with the able assistance of Claude Opus 4.6. That sentence would once have been unimaginably strange, and now it’s background noise.
¹ Some will suggest at this point that while humans have not experienced widespread technological unemployment, horses did, with the introduction of the internal combustion engine. Here at the Griffith’s Secluded Mountain Lair and Agricultural Tax Write-off, we currently have somewhere between 5 and 15 horses (I’d honestly have to check with my wife for an exact number). While horses are delightful and picturesque as all hell, I will note that they are not the most adaptable of creatures, and that matters.

