Blog / April 1, 2026
When the Bottleneck Disappears: What AI Is Actually Doing to Programming

The first time Manu Ebert asked Claude to build him a complete web application, he expected the usual process: the iteration, the debugging, the slow translation of intention into working code that had defined a decade of professional programming. What he got instead was a fully functional application in minutes, not a skeleton requiring hours of human refinement but something that actually ran. Thompson's interviews with over seventy developers across Silicon Valley's largest companies and its smallest startups document this same jarring moment again and again: the realization, arriving differently for different people but arriving nonetheless, that the fundamental bottleneck in software creation had quietly and suddenly ceased to exist. Steve Yegge, who has watched programming move through multiple paradigm shifts over the course of a long career, describes productivity gains of ten to one hundred times compared to traditional coding. Those numbers sound like the kind of thing people say before the reality checks land, except that at Google, where Thompson found AI now generates close to fifty percent of all new code, they are reflected in the institutional data as well as in individual experience.
What makes this more than a speed story is what Deloitte's enterprise survey captured at scale: sixty percent of workers now have access to AI tools, a fifty percent increase in twelve months, and most of the organizations deploying those tools haven't yet grasped that they're witnessing not an acceleration of programming as a craft but something closer to its transformation into a different kind of activity entirely. The coding profession was built around a specific scarcity — human brains were, for decades, the only mechanism capable of translating abstract problems into the precise syntax that computers could execute, and that scarcity created a natural moat that required years to cross and commanded premium compensation because of how few people could do it reliably. AI agents can now parse natural language descriptions of complex business logic and generate not just code but entire architectures, complete with error handling, optimization, and documentation that would have required experienced teams weeks to produce. The moat is not gone, but its dimensions have changed in ways that the profession is still mapping.
What surprises Thompson in his reporting — and what the developer interviews keep returning to — is how enthusiastically programmers themselves have embraced this. Where other professions have responded to AI capability with skepticism or anxiety, developers have leaned in with an eagerness that reflects something honest about what most programming actually involves: not the elegant problem-solving that draws people to the field in the first place, but the tedious translation of clear ideas into verbose, brittle syntax that breaks in predictable ways, producing frustrations that have nothing to do with intellectual difficulty and everything to do with the mismatch between how humans think and what computers require. The AI handles the translation. That turns out to be welcome.
The enthusiasm masks a genuine uncertainty, though, and the market data surfaces it in ways that resist reassuring interpretation. The sixteen percent decline in coding positions for workers aged 22 to 25 that Thompson documents is not a market correction of the kind that professions regularly absorb and recover from. It is concentrated in precisely the entry-level roles that traditionally served as the apprenticeship pathway through which expertise developed over time, the positions where junior developers learned by doing the routine tasks that AI now handles more efficiently. When Ebert's three-person team at Hyperspell accomplishes what previously required thirty engineers, the mathematics are not obscure: the question is not whether AI is automating programming but what programming becomes when writing code, in the conventional sense of the activity, is substantially no longer what programmers do.
What JPMorgan's Performance Reviews Reveal About the Transformation
The shift arrives not as a grand announcement but through the accumulation of institutional decisions that each seem incremental until you see them together. Muhammad Zulhusni's reporting from JPMorgan Chase and Bank of America documents the same pattern at two of the country's largest financial institutions: what began as experimental AI assistance has hardened into organizational expectation, and the definition of professional competence is being rewritten in ways that employees are navigating without clear maps. At JPMorgan, where 65,000 engineers and technologists are subject to performance reviews that now include metrics on how effectively they collaborate with AI tools, the signal is unambiguous — AI fluency carries the same institutional weight as technical skill in determining career trajectory, and the tracking systems monitor not just whether employees use AI but how thoughtfully they integrate machine-generated suggestions into their workflows. The feedback loop rewards engagement with AI while creating structural pressure on those who resist or delay.
Bank of America's deployment of Salesforce's Agentforce platform, which lets financial advisors access AI-driven insights about client portfolios in real time, is a different kind of example but points in the same direction. The financial advisor staring at a screen as the AI recommends a portfolio rebalancing that would have taken her hours to calculate is in a position worth thinking through carefully. She is reviewing work that, in a meaningful sense, exceeds her own analytical capacity, evaluating outputs generated by a system that processes data faster and at greater scale than any human team could manage. Her professional value in that moment is not located in her ability to produce what the AI just produced but in her judgment about whether to trust it, in what context, with what caveats, communicated to the client in a way that the AI cannot replicate, because the trust that makes the recommendation actionable is built in the relationship, not in the analysis.
Thompson's developer interviews illuminate how this plays out at the technical level with a specificity that the institutional data alone doesn't capture. The work transforms from writing code to architecting conversations with systems that generate code — from debugging syntax to evaluating the logic that the AI applied in producing syntax that may look correct and contain subtle errors that only domain expertise would catch. Prompt engineering, which sounds like a lesser skill than programming until you try to do it well, turns out to require a form of fluency that is neither purely technical nor purely linguistic but sits at their intersection, demanding that the developer understand what the AI is likely to do with a given instruction well enough to construct instructions that reliably produce useful rather than merely plausible outputs. The developer who can do this at the level that makes ten-to-one-hundred productivity gains real is not a less skilled programmer than their predecessor — they are a differently skilled one, and the difference matters for how the profession develops its own talent pipeline.
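A small, hypothetical illustration of the kind of subtle error the interviews describe, in which code reads correctly and passes casual review but fails in a way only domain familiarity catches. The scenario and both functions below are invented for this sketch, not drawn from Thompson's reporting.

```python
# Hypothetical AI-generated code that "looks correct": splitting an
# invoice total evenly across n payees using rounding.
def split_payment_naive(total_cents: int, n: int) -> list[int]:
    share = round(total_cents / n)   # plausible, but rounding can lose cents
    return [share] * n               # 100 cents across 3 payees drops a cent

# The domain-aware version: distribute the remainder cent by cent so the
# shares always sum exactly to the original total.
def split_payment(total_cents: int, n: int) -> list[int]:
    share, remainder = divmod(total_cents, n)
    return [share + 1 if i < remainder else share for i in range(n)]

print(sum(split_payment_naive(100, 3)))  # 99 -- a cent has vanished
print(sum(split_payment(100, 3)))        # 100 -- totals reconcile
```

The naive version passes any test that happens to use evenly divisible amounts, which is exactly why syntax-level review is not enough: the failure is in the accounting invariant, not in the code's surface.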
The Productivity Paradox and the Apprenticeship Problem
The deeper issue that the job market data surfaces — and that none of the current frameworks for thinking about AI and work fully address — is the relationship between the efficiency gains at the senior level and the elimination of the pathway through which senior expertise was historically developed. Deloitte's enterprise survey finding that thirty-six percent of companies expect at least ten percent of their jobs to be fully automated within a year sits alongside the finding that only twenty-one percent have mature governance models for the autonomous agents driving that transformation, which is a gap worth taking seriously not just as a risk management problem but as a signal about the pace at which institutional understanding is lagging behind deployment decisions.
The sixteen percent decline in entry-level coding positions is not, in isolation, evidence of catastrophe — professions restructure, and the historical pattern is that automation of lower-order tasks elevates the value of higher-order ones. But that pattern assumed that the lower-order tasks provided the training ground through which higher-order competencies developed, and the specific elimination of the apprenticeship pathway creates a different kind of problem: it removes the mechanism by which the expertise required for strategic AI oversight gets built in the first place. If junior developers learn to work with AI tools rather than learning to write code, what they develop is fluency in AI collaboration rather than the deep technical understanding that would allow them to catch the subtle errors, evaluate the architectural decisions, and maintain meaningful oversight over systems whose outputs they can use but cannot always verify. This is not a problem that better prompting solves — it is a structural problem in how expertise compounds over time, and the profession is going to encounter it before it has answered it.
NVIDIA's Agent Toolkit, which provides policy-based security guardrails for deploying autonomous agents while maintaining organizational control over their behavior, represents the technical response to the governance challenge. It is useful, but it does not resolve the deeper issue: governance of AI systems requires a quality of human judgment that was historically built through years of hands-on technical work, and the efficiency gains are simultaneously making that work less available as a training ground. When Cisco and Salesforce integrate these tools into existing systems, they are building infrastructure for a workforce that converses with machines rather than operates them. That is a real change in the nature of the work, and it raises real questions about how the people doing that work develop the depth of understanding that makes the conversation productive rather than just fluent.
What the Security Implications Add to the Picture
The quantum computing dimension that AI News introduces to this landscape is worth attending to carefully because it adds a layer of complexity to the professional development question that tends to get treated separately but belongs in the same analysis. If current encryption methods face obsolescence within the decade as quantum computing develops, the organizations most exposed to that risk are precisely the ones moving fastest toward AI integration — systems that require massive data flows to function, that handle sensitive financial or intellectual property data, that are being deployed at scale ahead of the governance frameworks that would allow meaningful oversight. The concept of crypto-agility — building systems capable of swapping cryptographic algorithms without requiring reconstruction of the entire architecture — becomes essential not just as a security principle but as a design philosophy for systems that need to remain adaptable as the underlying technological landscape continues to shift.
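In code terms, crypto-agility amounts to a layer of indirection: algorithms are looked up by identifier and that identifier is recorded alongside each output, so a deprecated algorithm can be retired by configuration rather than by rewriting every caller. A minimal sketch of the pattern, using hashing only because it keeps the example self-contained; the registry, names, and tagging scheme are illustrative, not taken from any cited system:

```python
import hashlib

# Registry of algorithms keyed by identifier. Swapping the default, or
# adding a migration target, is a configuration change rather than an
# architectural one.
ALGORITHMS = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,  # an alternative the deployment could migrate to
}
DEFAULT_ALGORITHM = "sha256"

def fingerprint(data: bytes, algorithm: str = DEFAULT_ALGORITHM) -> str:
    # Record the algorithm identifier with the digest, so outputs produced
    # under an old default remain verifiable after the default changes.
    digest = ALGORITHMS[algorithm](data).hexdigest()
    return f"{algorithm}:{digest}"

def verify(data: bytes, tagged_digest: str) -> bool:
    algorithm, _, digest = tagged_digest.partition(":")
    return ALGORITHMS[algorithm](data).hexdigest() == digest

tag = fingerprint(b"client record")
assert tag.startswith("sha256:")
assert verify(b"client record", tag)
assert verify(b"client record", fingerprint(b"client record", "sha3_256"))
```

The same indirection applies to the asymmetric encryption and signature schemes that quantum computing actually threatens: the point is that no caller hardcodes an algorithm, so the swap never requires reconstructing the architecture around it.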
What this adds to the professional picture is a requirement for a kind of literacy that sits on top of both technical depth and AI collaboration fluency — an understanding of the security implications of AI-generated code that the AI itself is not reliable at flagging, because the vulnerabilities introduced by large language models through seemingly correct outputs are often subtle enough that they require exactly the kind of domain expertise that the apprenticeship gap is now making harder to develop. The financial advisor using AI portfolio optimization needs to understand not just investment theory but how machine learning models can be manipulated through poisoned training data; the developer using AI code generation needs to grasp how an LLM might introduce security vulnerabilities that pass surface-level review. These are not skills that come from prompt literacy alone — they require the kind of grounded technical understanding that has historically been built through the entry-level work that is disappearing.
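A hypothetical example of the kind of vulnerability that passes surface-level review: a database lookup that reads cleanly, works in every demo, and is exploitable on the first adversarial input. Both functions below are invented for illustration.

```python
import sqlite3

# Plausible AI-generated lookup: correct-looking, and it interpolates
# user input directly into the SQL string.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

# The reviewer with grounded technical understanding insists on a
# parameterized query, which the database driver escapes safely.
def find_user(conn: sqlite3.Connection, name: str):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"                 # classic SQL injection payload
print(find_user_unsafe(conn, payload))  # leaks every row: [(1,), (2,)]
print(find_user(conn, payload))         # matches nothing, as it should
```

Nothing in the unsafe version fails a syntax check or a happy-path test, which is the point of the passage above: catching it requires knowing what class of attack to look for, not better prompting.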
What the Profession Is Actually Becoming
What emerges from Thompson's reporting, Zulhusni's institutional observations, and the Deloitte enterprise data taken together is a picture of a profession that is expanding and contracting simultaneously in ways that resist simple characterization as either good news or bad news. The scope of what a developer can accomplish has expanded dramatically — Ebert's three-person team doing what previously required thirty engineers is a real change in what small groups can build, and the productivity gains at the senior level are genuine and significant. The direct execution component of the work has contracted, replaced by a more strategic orientation that is higher-value in some respects and more precarious in others, because strategic oversight of systems you cannot fully verify requires a quality of judgment that is difficult to develop without the technical depth that comes from years of hands-on work.
The professionals who are navigating this most successfully, in Thompson's interviews and Zulhusni's reporting, are those who have found ways to hold technical depth and AI fluency together rather than substituting one for the other — who understand what the AI is likely to do well enough to use it effectively while maintaining the grounded understanding of systems and security that makes their oversight meaningful rather than performative. JPMorgan's tracking of AI engagement is measuring the frequency of that collaboration; what it cannot easily measure is its quality, the difference between using AI tools in ways that genuinely leverage their capability and using them in ways that satisfy the metric while gradually eroding the human expertise that makes meaningful collaboration possible.
That distinction — between AI collaboration that compounds expertise and AI collaboration that substitutes for it — is the central professional challenge the field is navigating, and it does not have a technical solution. It requires the kind of institutional commitment to developing human capacity alongside AI capability that the current pace of deployment is making difficult to sustain, and the organizations that figure out how to do both simultaneously are likely to produce something more durable than those racing toward automation with the expectation that oversight will take care of itself.