The Apprentice Problem
How AI Could Save Developer Growth (Or Kill It)
Six months ago, my previous company was acquired by a very large software company that I shall refer to only as The Day Job. If you’ve ever been through an acquisition, you know just how chaotic it can be, with weeks of passive uncertainty punctuated by hours of intensely stressful uncertainty. In addition to all of the massive business/personal/financial changes the acquisition implied, I was faced with trying to comprehend and contribute to a multi-million-line codebase that had evolved over decades at the hands of thousands of engineers. Suddenly, I found myself needing to become an expert in several technologies I’d barely touched (TypeScript, Node, maybe React), solving problems I knew nothing about, in contexts that were frankly alien.
It had been a long time since I’d been a junior developer. I’d been bouncing around principal and distinguished engineer levels for a couple of decades by then. I’d worked at a dozen places, and had been everything from employee #3 at a startup to tech fellow at a Fortune 500 media company, and I’d learned things at each one. I’d seen frameworks come and go, written systems used by millions, designed architectures, and made my share of catastrophic mistakes at 3am. Software I wrote was made fun of by Jon Stewart and Stephen Colbert on The Daily Show. But none of that was all that helpful when I was staring at 200,000 lines of JavaScript wondering why this particular abstraction existed and who thought it was a good idea.
The traditional answer to this problem is simple: find someone who knows the codebase and pick their brain. Shadow them. Read the docs they wrote three years ago that are probably no more than 60% wrong (probably). Gradually absorb the organizational knowledge through osmosis and occasional pointed questions. Eventually, given enough time and enough exposure to production incidents, you develop judgment about what matters and what doesn’t.
But in the face of the sudden massive shock of an acquisition, everyone who knew the codebase was already underwater. Senior developers are a scarce resource, and The Day Job’s acquisition meant many of the people who would be best at helping me orient myself were all trying to integrate systems, align roadmaps, and somehow keep the trains running while the tracks were being rebuilt. My previous management (whom I rely on extensively) were even more snowed under than I was. Heroic and expensive efforts were made to ramp our development team on the codebase and organizational culture, but there’s only so much that can be done with a three-day boot camp and a bunch of training videos. I could probably have asked for more assistance, but I pride myself professionally on being the least of anyone’s problems. Days were burning, and I couldn’t start making an impact until I knew a lot more than I did.
So I summoned some strange angels to teach me instead.
The Crisis Nobody’s Solving
While I was figuring out how to learn TypeScript at scale, a larger problem was rippling across the industry. The junior-to-senior developer pipeline was breaking, and nobody seemed to know how to fix it.
The numbers are grim. A recent survey found that 54% of engineering leaders expect AI adoption will reduce hiring for junior developers. Twelve percent expect overall workforce reductions in the next year, with 18% anticipating fewer junior hires specifically. Companies are explicitly treating AI coding assistants as if they were junior developers—one engineering VP bluntly stated that “the work that AI can do is similar to what an entry-level engineer can do.”
This problem has always been latent in the very nature of junior software developers. A dirty secret of our industry is that companies hire junior engineers only secondarily for the value of their assigned work. Junior developers are expensive and notably less productive than senior engineers. Many of them turn out to be negatively productive, destroying value even beyond the cost of their compensation. Companies hire junior engineers because they will eventually become senior engineers, and that’s where the value is. This has created a bit of a Moneyball situation, in which you hope that the junior developer you spent time and money training will stick around rather than try to make more money elsewhere. Hopefully, this is all balanced by the fact that hiring junior engineers shows that you are investing in your engineering future, and that in turn makes it much easier to attract and retain senior engineers. This whole apparatus was just barely stable at the best of times, and the best of times has ended.
With AI added to the equation, we risk creating a vicious cycle. If AI can do junior-level work, why hire juniors? But if you don’t hire juniors, where do your future senior developers come from? As one thoughtful piece puts it: “No juniors today means no seniors tomorrow.”
The optimistic take is that AI will simply accelerate the junior-to-senior progression. A Microsoft/Accenture study found that junior developers using GitHub Copilot saw up to 39% productivity gains, while senior developers saw only 7-16% improvements. The interpretation: AI acts like a “24/7 digital mentor,” helping juniors close the capability gap faster.
But productivity isn’t the same as growth. You can be incredibly productive at writing boilerplate code without learning anything that makes you a better architect. In fact, if AI handles all the tedious work that used to force juniors to really understand what they were doing, they might skip crucial learning entirely.
What Actually Makes a Senior Developer
Here’s what I’ve learned from thirty-plus years of watching people grow (and fail to grow) as developers:
Senior developers don’t write more code. They write less, but better. They know what not to build. They understand second and third-order consequences. They’ve developed taste—that ineffable sense for when something is elegant versus merely clever, when technical debt is acceptable versus catastrophic. With taste and experience, they learn the key skills of just how to give engineering recommendations and have them be listened to, and just what to do when those recommendations are accepted (or declined). They learn how to influence product direction with demos and POCs, and when not to.
Most critically, they’ve been burned. Repeatedly. They’ve deployed code that brought down production at 3am. They’ve made architectural decisions that seemed brilliant until two years later when those decisions were grinding development to a halt. They’ve experienced the difference between code that works and code that works reliably under load with failing dependencies and malicious input.
You can’t learn this from an AI. You can’t learn it from a book. You have to live it.
But there’s another kind of knowledge seniors have that I think AI might actually help with: the organizational and historical context that makes sense of why things are the way they are.
The Code Archaeology Problem
When you’re dropped into a large, unfamiliar codebase, every weird pattern has a story. The database schema with the foreign keys that look backward? Probably handling some edge case from a catastrophic data migration in 2019. The abstraction that looks like over-engineering? Might be the remnants of a failed attempt to support multi-tenancy. The whimsically inconsistent naming conventions? Could indicate where different teams’ codebases were merged.
Understanding these stories is crucial. Every one of them has the fingerprints of some engineer on it whom you will never meet. They could have been a rising star who was building their masterpiece, or they could have been someone who was rapidly shown the door to avoid causing further damage. An architectural boundary that the system depends on might not have been explicitly designed, but rather merely show where two VPs fought to a standstill. Without understanding your codebase’s stories, you’re likely to repeat past mistakes or “fix” things that were deliberately done that way for good reasons.
Traditionally, you learn these stories through conversation. You ask someone “why is this module structured this way?” and if you’re lucky, they remember. More often, they say “good question, let me check” and then run git blame to figure out who wrote it originally, then you go ask that person, and maybe they remember, or maybe you need to dig through old Slack messages and JIRA tickets to piece together what was happening when that code was written.
This process works, but it’s slow, it’s lossy (people forget, leave the company, or never documented their reasoning), and it consumes senior developers’ time. Once I realized that, it sounded like a great target for our new neural network chums.
The Learning Agents
This is where the learning portion of Bad Dave’s Robot Army frankly saved my butt. Using Claude Code’s “sub-agents” functionality, I built a mentor sub-agent with a specific brief for teaching. It doesn’t produce code itself, but it’s eager and capable of teaching you pretty much anything you want to know about code. I then built the code-historian agent, specifically to research just how a codebase came to be, using git history, documentation repositories, ticketing systems, and anything else it can find. Finally, I created the junior-developer sub-agent. It’s designed to look at codebases with a fresh perspective and explore the architectural decisions made. If the mentor’s job is to answer your questions, the junior-developer’s job is to question your answers.
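If you want to build something similar, a Claude Code sub-agent is just a Markdown file with YAML frontmatter dropped into your project’s `.claude/agents/` directory. Here’s a minimal sketch of what a mentor-style agent might look like; the `tools` list and prompt wording are my assumptions about a reasonable setup, not the actual agent from the Robot Army.

```markdown
---
name: mentor
description: Teaching-focused agent. Use when the user wants to learn
  a concept or understand part of the codebase, not to generate code.
tools: Read, Grep, Glob
---

You are a patient senior-engineer mentor. Your job is to teach,
never to write production code.

- Before explaining, ask what the user already knows and how they
  prefer to learn.
- Ground every explanation in the actual files of this repository,
  citing paths and line ranges.
- Connect new concepts to ones the user has already demonstrated
  they understand.
- End each session with a short exercise the user can try themselves.
```

Restricting the tool list to read-only operations is what keeps the mentor from “helpfully” writing the code for you.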
Those agents are fine to chat with (albeit kind of obsessive), but there were a few use cases that were so obvious that I decided to codify them as slash-commands.
/learn [topic] creates personalized learning paths, complete with exercises, checkpoints, links to external resources, and timelines. These learning paths are personalized, with the mentor agent asking questions about your level of expertise and personal learning style before generating the training. (Claude: “Do you enjoy learning about technological concepts via YouTube videos?” Dave: “Oh, hell no.” Claude: “You’re absolutely right!”) While the /learn command can certainly create general tutorials on broad topics (“/learn node” is currently helping me immensely), it goes far beyond that. It can generate tutorials on sections of the project you’re working in, analyzing the actual codebase, identifying what you need to know given what you already know, and creating explanations that connect new concepts to your existing mental models.
When I needed to understand how authentication works in The Day Job’s monolith, /learn authentication didn’t give me a generic explanation of OAuth. It showed me how this specific codebase implements authentication, what the tradeoffs were, where the weird edge cases are, and—crucially—what I should be careful about when working in that area.
The /code-history slash command for the code-historian agent does something even more interesting. It’s not just running git log—it’s analyzing patterns in how code evolved, identifying who worked on what, understanding what architectural decisions were made when and by whom. When I ask “who do I need to talk to about this authentication system?” it doesn’t just tell me who wrote the most lines. It tells me who made the key architectural decisions, who’s been maintaining it, who fixed the critical bugs.
This turned out to be incredibly valuable. I’m not replacing human conversation—I’m making those conversations much more efficient. Instead of asking someone “can you explain how authentication works?” I ask “I read the code history and it looks like you rewrote the session handling in 2023—was that related to the security incident mentioned in this other commit, and are there still things I should be careful about?”
The /codebase-overview command gives me beginner-friendly summaries of entire systems. /explain [topic] provides personalized explanations calibrated to my knowledge level. /beginners-mind [topic] approaches the codebase with fresh eyes using the junior-developer agent, questioning assumptions that longtime developers might have stopped noticing.
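The slash-commands themselves are even simpler to define: each is a Markdown prompt file in `.claude/commands/`, with `$ARGUMENTS` standing in for whatever the user types after the command name. A hypothetical, stripped-down `/learn` might look like this (the real one is considerably more elaborate):

```markdown
<!-- .claude/commands/learn.md -->
Use the mentor sub-agent to build a personalized learning path
for: $ARGUMENTS

1. Ask the user about their current expertise and preferred
   learning style before generating anything.
2. Search this codebase for real examples of the topic and build
   the exercises around them.
3. Produce a plan with checkpoints, exercises, links to external
   resources, and a rough timeline.
```

The file body is just a prompt template, which is why these commands are cheap to experiment with: if a use case keeps coming up in chat, you can codify it in five minutes.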
These tools are making me better as a developer and architect, not just more productive. I’m not leaning on the AI to do things for me, but instead to get myself the training I needed to blossom in occasionally rocky soil. These agents are helping me build the mental models I need to make good architectural decisions in a new and sometimes alien environment. They’re accelerating the process of developing judgment about this particular codebase.
But certain aspects of software development they aren’t helping me with. They’re not teaching me engineering taste. They’re not giving me the scar tissue that comes from production incidents. They’re not replacing the years of experience that taught me when to be paranoid about edge cases and when to ship the simple thing.
What AI Can’t Teach (Yet)
If decades as an engineer, mentor, and (most especially) parent have taught me anything, it’s that some things need to be learned by hands-on experience. Here are the things that I can’t yet teach an agent how to teach.
Judgment under uncertainty. When you have incomplete information, conflicting stakeholder needs, and a hard deadline, how do you make the right call? AI can help you gather information faster, but it can’t teach you how to make good decisions when you don’t have all the facts.
Taste and restraint. Knowing when simple beats clever. When to say no to a feature request. When technical debt is acceptable and when it’s catastrophic. These come from years of seeing the consequences of different choices.
Production intuition. The niggling sense that tells you “this looks fine but something feels wrong.” The pattern recognition that comes from being woken up at 3am enough times. The understanding of how systems fail in practice versus in theory.
Organizational navigation. Reading room dynamics. Understanding whose buy-in you need. Knowing when a technical decision is really a political decision. Knowing how you can influence political decisions by demoing new technology. AI can tell you the org chart, but it can’t teach you office politics.
The cost of being wrong. There’s a particular kind of humility that comes from deploying code that took down production. From making an architectural decision that seemed brilliant until it painted you into a corner two years later. From being on-call when something you wrote failed catastrophically. You need to feel that pain personally to really internalize it.
A Possible Path Forward
I don’t think the junior developer role is dying. But I think it’s changing, and we need to be intentional about what we’re trying to preserve.
What if we thought about AI learning tools not as replacements for human mentorship, but as force multipliers? Use AI to handle the knowledge transfer that scales well—explaining codebases, tracing architectural decisions, providing personalized learning paths. This frees up senior developers to focus on the things that don’t scale: teaching judgment, developing taste, sharing war stories about production incidents.
Imagine a junior developer onboarding process that looks like this:
Weeks 1-3: AI tools provide intensive ramp-up on the codebase, the tech stack, the architectural patterns. The junior runs /learn on every major system, uses /code-history to understand how things evolved, asks /explain for clarification on anything confusing. They get a deep base of knowledge much faster than traditional shadowing would provide.
Weeks 4-6: Start pairing with seniors on real work. But instead of spending the pairing time explaining what the code does (the AI already did that), focus on why decisions were made, what the tradeoffs are, what failure modes to watch for. The senior’s time is spent teaching judgment, not transferring information.
Ongoing: Juniors work on increasingly complex tasks with AI assistance for the mechanical parts. But they’re also explicitly given ownership of production components. Small ones at first, but with real on-call responsibility. This will have to happen much faster than it does today if the investment is to pay off. They need to feel the weight of being responsible for something that matters. They need to get paged when their code fails. Organizations can successfully give more responsibility to surprisingly young people, as both startups and the military do. You just have to explicitly structure, train, mentor, and discipline for it.
Code review becomes teaching: Human reviews focus less on syntax and more on architectural implications, edge cases, maintainability. The AI can catch the style guide violations. Humans teach taste.
This model preserves what’s valuable about traditional apprenticeship—the transfer of judgment and taste through experience—while using AI to accelerate the parts that scale. Knowledge transfer is fast. Judgment development is still slow, because it has to be.
The Stakes
Here’s what I worry about: that we’ll optimize for short-term productivity at the cost of long-term capability building. That companies will hire fewer juniors because AI can handle junior-level work, not realizing they’re killing their pipeline of future senior developers.
You can’t skip the apprentice phase. You can accelerate it, you can make it more efficient, you can use AI to augment human mentorship. But you can’t eliminate it without losing something essential.
The status quo—throw juniors in the deep end, hope they figure it out, rely on overworked seniors for occasional guidance—was already broken before AI arrived. It was inefficient, it wasted human potential, it depended on people being willing to struggle through months or years of confusion. The washout rate for software developers was enormous, wasting billions of dollars training people who would decide to take other paths.
AI gives us a chance to do better. To be more deliberate about what we’re teaching and how. To scale the parts that scale while preserving the parts that require human judgment and experience.
But only if we’re intentional about it. Only if we think hard about what we’re trying to preserve and what we’re trying to change. Only if we build human development systems that work for both software developers and the organizations that hire them.
The alternative—treating AI as a substitute for junior developers instead of a tool to help them grow faster—leads to a future where we run out of senior developers in eight years and have no one who understands why any of our systems work the way they do.
I’d rather not live in that future.
The /learn, /code-history, /codebase-overview, and /explain commands are part of Bad Dave’s Robot Army, a collection of specialized AI agents for Claude Code. If you’re working on developer growth in the age of AI—or just trying to ramp on a massive unfamiliar codebase—I’d love to hear what’s working for you. This post was constructed with the able assistance of Claude Sonnet 4.5. He’s a good co-writer, just don’t let him handle the jokes.

