Claude Code Sees Like A Software Architect
What LSP Support Means for Agentic Coding (And Everyone Building On Top of It)
Claude Code shipped native LSP support last week. If you’re not a programming languages and development tooling nerd, that sentence probably means nothing to you. Let me explain why it matters, and why it’s quietly devastating news for a whole category of AI coding startups.
LSP—Language Server Protocol—is the thing that makes your IDE actually understand your code. When you hover over a function and see its signature, that’s LSP. When you right-click “Go to Definition” and it jumps to exactly the right file and line, that’s LSP. When you rename a variable and it updates every reference across your entire codebase, that’s LSP.
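For the curious: under the hood this is all plain JSON-RPC messages. Here’s a sketch of what a go-to-definition round trip looks like on the wire, written as TypeScript literals. The message shapes follow the LSP spec; the file URIs and positions are invented for illustration.

    // Client -> server: "where is the symbol at line 41, column 17 defined?"
    const request = {
      jsonrpc: "2.0",
      id: 1,
      method: "textDocument/definition",
      params: {
        textDocument: { uri: "file:///src/billing/invoice.ts" }, // invented path
        position: { line: 41, character: 17 },                   // zero-based
      },
    };

    // Server -> client: an exact, semantically resolved location.
    const response = {
      jsonrpc: "2.0",
      id: 1,
      result: {
        uri: "file:///src/billing/totals.ts",
        range: {
          start: { line: 12, character: 0 },
          end: { line: 12, character: 22 },
        },
      },
    };

The answer comes back as a resolved location, not a pile of candidate matches. Keep that difference in mind; it’s the whole story here.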
For years, every IDE had to implement this stuff separately for every language. Then Microsoft, in a rare fit of “actually useful standardization,” created LSP so that language intelligence could be built once and used everywhere. Your TypeScript language server works in VS Code, in IntelliJ IDEA, in Cursor, in whatever weird editor you prefer.
Now it works in Claude Code too.
Why This Matters for Agentic Coding
Here’s the thing about AI coding tools before this: they were mostly doing text manipulation. Sophisticated text manipulation, sure, but under the covers it was still just slinging strings. When Claude needed to find where a function was defined, it was essentially doing fancy grep. When it needed to understand the structure of your code, it was parsing text patterns.
This works surprisingly well! LLMs are remarkably good at understanding code as text, probably because they’ve seen so much of it. But it’s fundamentally limited. You’re asking the AI to reconstruct, from raw text, the sort of layered and nuanced semantic understanding that your IDE already has.
With LSP support, Claude Code gets access to the same code intelligence your IDE uses. Go-to-definition. Find-all-references. Type information. Symbol hierarchies. The stuff that makes a senior developer actually productive when navigating an unfamiliar codebase.
The Ecosystem This Eats
This is where it gets interesting—the sort of interesting that can net out to “Oh god, oh god, we’re all going to die” (or at least our business models are).
There’s been a whole ecosystem of tools trying to give AI better code understanding. MCP servers that wrap LSP. Plugins that build code indexes. Startups building “semantic code search for AI.” Projects that generate knowledge graphs from codebases.
I know this ecosystem well, because six months ago I was building one of these tools myself.
Project Sagrada was my attempt to solve exactly this problem: how do you give AI agents the kind of semantic understanding of code that professional developers get from their IDEs? My approach was to automatically generate RDF knowledge graphs from software projects and then give agents the ability to query them using SPARQL. It had all the bells and whistles too: parsers for the fifteen most popular languages, understanding of six different build tools, knowledge of Git history, cross-temporal and cross-project views, metrics, code smell detection, effects analysis, so many shiny toys. The intent was to give Claude Code the sort of tools a senior architect would use to do complete analyses of foreign codebases. It played to my strengths—language processing, static analysis, software architecture, semantic web technologies—and I got some genuinely promising results.
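To give you the flavor (this is an illustrative query against a hypothetical code ontology, not Sagrada’s actual schema), an agent could ask “which functions call parseInvoice, and where are they defined?” as a graph query:

    // Hypothetical vocabulary for illustration; the real schema differed.
    const query = `
      PREFIX code: <http://example.org/code#>

      SELECT ?caller ?file WHERE {
        ?caller  code:calls      ?callee .
        ?callee  code:name       "parseInvoice" .
        ?caller  code:definedIn  ?file .
      }
    `;

    // A tool exposed to the agent would run this against the project's
    // graph store and hand the result bindings back as context.

The appeal was that questions like this compose: the same graph could answer “which of those callers changed in the last month?” by joining in the Git history.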
Then I watched the base model benchmarks start ticking up, week by week. Claude’s base model was getting better at understanding code. And better. And somehow still better. And then I watched Anthropic ship native LSP support in Claude Code, which gives agents roughly the same capabilities I was building toward, but integrated directly into the tool rather than bolted on from outside.
To put the best possible gloss on it, Project Sagrada’s strategic importance has been overtaken by events. Most likely, though, it’s just dead.
The Base Model Eats The Business Model
There’s a phrase I’ve heard for this phenomenon: “The base model eats the business model.” Every capability you build on top of an AI system is a candidate for incorporation into the next version of that AI system. The moat you’re digging fills in behind you.
This is happening across the AI coding ecosystem right now:
RAG pipeline companies are watching context windows expand from 8K to 128K to 200K tokens. The original value proposition of RAG was “the model can’t fit all your data, so we’ll retrieve the relevant bits.” That proposition gets shakier every time the context window doubles. RAG isn’t dead—it’s still a lot cheaper than stuffing everything in context, and retrieval can still beat “dump everything in”—but the simple use cases are disappearing.
Code review startups are watching Claude get better at understanding diffs. The early pitch was “the AI doesn’t understand your codebase, but we index it and give the AI context.” Now the AI just... understands codebases better. Turns out if you train on enough code, you develop intuitions about how code should look.
Documentation tool initiatives are watching models get better at reading and synthesizing software project docs. The value prop of “we’ll help the AI understand and build documentation” gets eaten by base models that have all the smarts necessary to read and write documentation on their own.
Semantic code search tools—like, say, my Project Sagrada—are watching LSP support land natively. The knowledge graph I was building? The stock agent just got the same capabilities through a different substrate.
The pattern: if your value proposition is “we help the AI understand X better,” you’re in a race against the AI understanding X better on its own, and there’s every reason to believe you’re going to lose.
Who This Is Happening To
Let me name some specific categories of tools that should be nervous:
Cline, Cursor, Windsurf and the other AI IDE players are in a complicated position. They’ve built some genuine value—better UX, thoughtful integrations, novel interaction patterns like Cascade and Supercomplete. But a lot of their differentiation has been “we give the AI better context about your code.” As the base models get smarter and as Claude Code adds native capabilities, the question becomes: how much of that differentiation survives?
The answer probably depends on whether their moat is “code understanding” (destined to be eaten) or “workflow integration” (more durable). Cursor’s GitHub sync, Windsurf’s Memories system, Cline’s MCP marketplace—these are integration plays, not intelligence plays. Integration moats are harder to eat because they’re about the annoying work of connecting to enterprise systems, handling auth, dealing with compliance. The base model doesn’t want that job yet.
LlamaIndex, LangChain, and the RAG orchestration layer are similarly positioned. They’re not just “give the AI context”—they’re “here’s a framework for building AI applications.” Frameworks have network effects and switching costs that pure intelligence improvements don’t eat. But if you’re using them primarily for RAG, and RAG is getting less necessary, you need to find other reasons to exist.
The whole MCP server ecosystem is interesting. MCP (Model Context Protocol) is essentially a standard for giving Claude access to external tools and data. There are now hundreds of MCP servers for everything from Slack to PostgreSQL to... LSP. The ones that are pure “give Claude access to X” will survive as long as Claude wants access to X. The ones that are “give Claude smarter access to X” are racing against Claude simply getting smarter on its own.
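To be concrete about what “pure access” looks like: a thin MCP server is a few dozen lines. Here’s a minimal sketch using the official TypeScript SDK; the tool itself (a build-info checker) is made up, but the SDK surface is the documented one.

    // A minimal "give Claude access to X" MCP server.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "build-info", version: "0.1.0" });

    // One tool; the behavior is a placeholder for whatever X you're wrapping.
    server.tool(
      "check_deployable",
      { service: z.string() },
      async ({ service }) => ({
        content: [{ type: "text", text: `${service}: no blocking migrations` }],
      })
    );

    // The client (Claude Code, an IDE) launches this process and speaks
    // JSON-RPC to it over stdio.
    await server.connect(new StdioServerTransport());

There’s nothing here for a base model to eat; the value is the access itself. The servers at risk are the ones with a layer of “smarts” sitting between the access and the model.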
The Brave New Future
I’m not sad about Sagrada. Well, maybe a little—it was a technically elegant solution to a real problem, and I enjoyed building it. But the goal was always to make AI coding better, and if Anthropic gets there through LSP while I was wandering toward knowledge graphs, the goal still got accomplished.
This is what it feels like to be in a field where the ground keeps moving. You build something clever, and then the platform absorbs it. You find an edge, and then the base model gets smarter and the edge disappears. You’re not competing with other startups; you’re competing with the rate of capability improvement in the foundation models.
The correct response isn’t despair. It’s adaptation. Find the things the base model can’t eat—the proprietary data, the messy integrations, the human judgment. Build on those instead of racing against inevitable capability improvements.
Or, you know, do what I did: build tools that make you more effective, release them for others who might benefit, and don’t get too attached to any particular solution lasting forever. Bad Dave’s Robot Army doesn’t need to be a business. It just needs to work today.
Claude Code has an architect’s eyes now. It can see the structure of your codebase the same way your IDE does; it understands “the skull beneath the skin”. That’s genuinely great news for everyone using Claude Code, and genuinely challenging news for everyone who was building that capability on the outside.
The future keeps arriving. I’mma try to keep up.
This post was constructed with the able assistance of Claude Opus 4.5. I mentioned that I’d tried to build something similar to its new LSP capabilities, and Claude expressed what I can only describe as polite professional sympathy.


Marvelous post, Dave. I wonder how much longer it will take Google and OpenAI to add LSP support to their own agentic CLI solutions (Codex and Gemini).
On the solutions side, there is a very important aspect we should all watch for: even as models get smarter and context windows get broader, they are still general-purpose transformers.
Many general-purpose products struggle to solve problems better than specialized solutions because of product cannibalization.
In other words, they have to sacrifice a big part or tenet of their core product to become specific enough, and that often backfires.
For people investing in tooling, the future is not as gloomy as it seems. At least for now.
How to enable it?