Brain Dumps as a Literary Form
What happens when you share your thinking process instead of your conclusions
Recently, a colleague asked for my thoughts on a technology proposal he had been developing. It was an interesting one, with a boatload of potential, a lot of open questions, and a good few pitfalls. In the Before Times — which is to say, six months ago — I would have spent an evening crafting a careful Slack message or e-mail. Instead, I sent him a link to a Claude transcript where I’d spent an hour thinking through the problem out loud (literally, using voice-to-text), with Claude pushing back on my weaker arguments and drawing out implications I hadn’t fully articulated. The result was notably stronger than anything I could have produced on my own, and gave better insight into my thought processes.
A week later, someone asked about an architectural initiative I’ve been musing about for years. Previously, this would have been a tightly written Slack message or maybe a short slide deck. Instead, I sent three markdown documents — an overview, a project plan, and a set of elevator pitches — that Claude and I had produced after a long working conversation. The project plan included a long series of risks that Claude had surfaced as part of stress-testing the proposal. The initial ideas were mine. The stress-testing was collaborative. The artifacts were... whose, exactly?
Then there’s this blog, which regular readers know is produced through a process I acknowledge in every colophon: I have a conversation with Claude, it produces a draft, I add my voice, it offers editorial suggestions. The ideas are mine. The words are increasingly ours, a product of Dave and Claude together. You might have heard the term “centaur” used for teams like this, evoking human-machine convergence.
Here’s the thing: none of this felt like a revolution from the inside. It felt like obvious convenience. It’s just a link. It’s just a doc I prepped. It’s just how I communicate now.
This rhymes with history. Every genuine medium shift feels like obvious convenience to early adopters. Writing was just a memory aid. The printing press was just faster copying. Email was just faster mail. The people living through the transition don’t experience it as historic. They experience it as “well, obviously you’d do it this way.”
I think we’re in one of those transitions right now, and almost nobody has noticed.
The Share Button as Gutenberg’s Press
Somewhere at Anthropic, someone shipped a “share chat” feature. I’d bet good money it was specced as something like “enable users to share transcripts for debugging and support purposes” or maybe “increase organic product distribution.” A checkbox fifty items deep on a product roadmap. A sprint task worth a few story points that the junior developer assigned to it barely remembers doing.
What they actually built was the publish button for a new literary form.
This happens constantly in media history. Twitter’s @reply was a user convention that the platform eventually formalized. The novel emerged from people writing really long letters to each other. Science fiction was created by Mary Shelley as a way of putting off yet another boring threesome with Lord Byron. Nobody designs a new medium. The infrastructure gets built for boring institutional reasons, and then people discover what it’s actually for.¹
We’re in the “people discover what it’s actually for” phase.
Cognition, Not Conclusions
Here’s what makes the shared transcript genuinely different from every communication medium that preceded it: it transmits thinking, not thoughts.
When I send you a polished proposal, you get my conclusions. You have to trust that I’ve considered the counterarguments, weighed the alternatives, stress-tested the assumptions. The quality of the document is your only proxy for the quality of the reasoning behind it.
When I send you a transcript, you get the reasoning itself. You can watch objections get raised and addressed. You can see where the argument pivoted because a counterpoint landed. You can observe which assumptions got challenged and which survived. The medium is self-validating — the evidence that the ideas have been stress-tested is built into the artifact.
This inverts something I’ve always taken as gospel. Blaise Pascal famously wrote "I have made this letter longer than usual because I have not had time to make it shorter." (This quote has also been attributed to Cicero, Mark Twain, and probably Albert Einstein, Oscar Wilde and Chief Seattle for all I know, just like every other aphorism.) The craft of communication has always been about compression — distilling your thinking into its most potent form. Shared chatbot transcripts are the opposite of compression. They’re the uncompressed cognitive stream. It turns out that for complex technical and strategic thinking, the compression was losing information that mattered. All that back-and-forth, all those “but what about X?” moments — that wasn’t noise. That was signal. We were spending effort to remove it so as to flatter ourselves by making our results look effortless and inevitable.
Plato knew this, incidentally. There’s a reason we still read Socratic dialogues twenty-five centuries later, and it’s not because Plato couldn’t write essays. Watching someone think through a problem in dialogue can be more engaging and more informative than reading their conclusions. There’s narrative tension. Will the objection land? How will the argument adapt? You learn more about the shape of an idea by watching it get pressure-tested than by seeing it presented in its final form.
“SLOP!”
Now, some of you saw em-dashes in this post, mentally flinched, and are reading with your guard up. I get it.
Every new artistic form has its detractors, and their arguments often end up looking ridiculous in retrospect. In the 1800s, people worried that reading novels would lead to moral decline. In the 1980s, there were fears that role-playing games would cause suicide and Satanism. Comic books, video games, The Rite of Spring: all new artistic forms that were going to unleash the end times. The slop reaction deserves to be taken a bit more seriously than those, because it’s pointing at something real even when the specific detector — “I spotted an em-dash! He used analepsis and the rule of three! Burn the witch!” — is hilariously unreliable. There is a flood of genuine slop out there. SEO farms churning out AI-generated articles with zero human judgment behind them. LinkedIn posts that are transparently “write me something about leadership” pasted raw into ChatGPT. Amazon drowning in AI-generated books that no human has even read, let alone written.
The em-dash detector is a bad heuristic, but it’s responding to a real signal. People have been burned by content that looks polished but contains no thought, and they’ve developed pattern-matching to protect their time. That’s rational.
But here’s where the slop critics go wrong: the implicit model is that authenticity requires unassisted human output, and this argument falls apart with a moment’s thought. Is a spell-checked document less authentic? An edited one? A ghostwritten speech delivered with genuine conviction? Executives and politicians have always had speechwriters. Academics have always had editors and co-authors. Every writer has collaborators and influences. Drawing the line at “Sure, typewriters are fine, but not word processors!” is arbitrary. Demanding that writers perform visible labor as proof of sincerity is an aesthetic preference cosplaying as an ethical principle.
That said, there’s a core to the authenticity worry that I can’t dismiss. When you read something I wrote entirely on my own, you’re building a mental model of my mind — my priorities, blind spots, what I chose to emphasize, what I chose to omit. That model is useful. It tells you something about me. When you read centaur output, you’re modeling the Dave-plus-Claude system, and the social contract around communication implicitly promises a tighter coupling between “the person who sent this” and “the cognitive process that produced it.”
I think this is a transitional problem. Once the norm updates — once everyone knows that serious thinkers use AI interlocutors the way serious writers use editors — the implicit model adjusts and the tension dissolves. But we’re in the transition, and during transitions, reasonable people can disagree about norms.
This is part of why I note the collaboration between Claude and me at the end of every blog post. Part of it is just honesty. I don’t want anyone to think I’m selling them something I’m not. But part of it is doing what I can to drag the norm forward. Every time a credible practitioner treats AI collaboration as just obviously what you’d do, the culture moves a little. You don’t normalize new media through manifestos. You normalize them through producing compelling work.
The Prosthesis for Intersubjectivity
Bruce Sterling introduced the concept of the “exoself” — your external cognitive augmentation layer, your outboard memory and decision-support — anticipating a lot of what’s happening now. But Sterling imagined the exoself as personal. A prosthesis for thought. What he didn’t foresee, and honestly should have, is that once your exoself produces artifacts, those artifacts become communications, and your exoself starts mediating your social relationships.
The exoselves we are building aren’t just going to be secret memory palaces or hidden confessionals. They are going to be how you present yourself to others. They aren’t just personal prosthetics for cognition. They are going to evolve to be prosthetics for intersubjectivity — for the shared space of understanding between minds.
This expands in directions that are both incredible and terrifying.
The incredible direction: most human communication is lossy compression. I have a rich understanding of something, I squeeze it into words, you decompress it with different priors, and maybe forty percent of my meaning survives the trip. The centaur model is less lossy. A shared transcript transmits the reasoning topology — which paths were explored and abandoned, which objections forced pivots, where the key insights crystallized. The reader can crack open the AI's reasoning too. Just click on the thought bubble to see the moments where it considered pushing harder on an objection or debated how to frame something. Scale that up to teams where everyone’s thinking is visible this way, and you approach something like genuine mutual understanding, networks of intellect and intent that point toward the superhuman.
The terrifying direction: a prosthesis for intersubjectivity is also a prosthesis for manipulation. The same tool that helps me communicate my genuine thinking more clearly can help a bad actor communicate strategically optimized deception more persuasively. And because the output quality exceeds what most people produce unassisted, the normal “does this seem well-reasoned?” heuristics get defeated. The em-dash detectors are wrong about me, but the anxiety they express — “I can no longer trust surface quality as a signal of genuine thought” — points at a real problem that gets worse as the tools improve.
Cognitive Voyeurism and the Market That Doesn’t Exist Yet
Here’s where this gets fun. One of my favorite artifacts is “No Maps for These Territories”, a film of a long, rambling discussion with science fiction author William Gibson, riding around in the back of a limo. One can barely call it an interview, and calling the film a documentary is just laughable. I recently realized I would pay real money for a fifty-page chatbot transcript produced by William Gibson and Claude. Not an interview, not an essay, not a novel — a raw working session where Gibson is thinking through whatever he’s currently obsessed with, with an inhuman interlocutor that can keep pace and push into interesting territory.
I’d pay for this because I don’t want Gibson’s conclusions. I can get those from his novels, every word lovingly placed like pebbles in a zen garden. I want to watch Gibson’s mind move. I want to see what questions he asks, what analogies he reaches for, what surprises him, where he pushes back. A transcript is a cognitive portrait in a way that no finished work can be.
There’s a market for “watch interesting people think” that is currently served badly by podcasts, interviews, and documentaries. Podcasts are filled with hosts showing off to polish their own brands. Interviewers inevitably pander to the interests of the audience, not the thinker. Documentaries need a narrative arc, and usually end up without enough of one. A transcript with a capable AI interlocutor strips away that friction. You get the pure cognitive signature of the thinker.
This is not content. This is not journalism (although it does rhyme with gonzo). It’s something that doesn’t have a name yet — intellectual intimacy, maybe, or cognitive voyeurism. Whatever you call it, it’s a genuinely new product category, and it fell out of a share button that someone shipped to hit a quarterly OKR.
The Punchline
The most natural way to explain this new medium is to use it. Everything you’ve just read emerged from a conversation — my observations about my own changing communication habits, interrogated and expanded and challenged in real-time, then shaped into prose you can hold in your hands. The ideas are mine. The stress-testing was collaborative. The good jokes are mine. Putting the bad ones on the cutting-room floor was Claude’s work. The artifact is the centaur’s.
I’m fine with that. I think you should be too. And if you’re not — if something about this makes you uneasy — I’d suggest that the unease is worth examining, because it’s telling you something true about how fast the ground is shifting.
The revolution arrived disguised as a share button. It felt like obvious convenience. It always does.
This post was developed in conversation with Claude Opus 4.5 and reflects our collaborative exploration of ideas I’ve been noticing in my own communication patterns. Full transcript available here for the cognitively voyeuristic.
¹ And then use it for porn, of course. I estimate that chatbot-transcript erotica is about three weeks away from being a thing. I’m not quite sure I’m ready for it, but it’s definitely on the way. There’s probably already a subreddit for people who want to sex up ChatGPT. To paraphrase the sage, humans are “filthy f*cking MacGyvers. Give us two rubber bands and a popsicle stick and we’ll figure out a way to f*ck it.”

