Key takeaways

- Teams were organized around execution for decades; foundation and review got whatever time survived after the middle ate the calendar.
- As AI absorbs artifact production in the middle, humans concentrate at the ends: richer positioning, systems, and architecture upstream; deeper quality and fit review downstream.
- Removing the human middle removes the implicit translation that kept disciplines aligned; cross-disciplinary coherence has to live in explicit standards, durable artifacts, and final judgment.
- The inversion is economically forced, not optional, but it strains specialists whose credibility was tied to execution craft until judgment becomes legible again.
- Agents-in-a-loop (asynchronous, server-side agent work) is what absorbing the middle looks like at the system level; synchronous-only AI keeps humans in the middle as the trigger for every execution step.

---
title: "The Human Inversion: The Inversion"
excerpt: 'Software teams used to spend most of their time in execution—the middle between foundation and review. As that middle gets absorbed by capable AI, human leverage shifts to the ends: sharper standards upstream and denser judgment downstream. That is structurally positive—but emotionally hard for people whose craft lived in the middle.'
published: true
publishedDate: '2026-04-23'
category: "AI"
tags: ["ai", "software development", "teams", "organization", "the human inversion"]
heroImage: "the-human-inversion-part-1-hero.png"
heroImageSquare: "the-human-inversion-part-1-hero-square.png"
ogImage: "the-human-inversion-part-1-hero-og.png"
heroImageStyle: "keep-proportions"
read_time: 9
series: "The Human Inversion"
seriesSlug: "the-human-inversion"
seriesPart: 1
seriesTotal: 5
---
For most of the history of software development, humans spent most of their time in the middle.
Break the process into three parts:
- Foundation: the market research, positioning, and persona definition that precedes any feature; the design systems and constraints that govern how interfaces get built; the architectural standards and coding conventions that shape what engineering can even contemplate.
- Execution: turning those foundations into actual artifacts: specs becoming designs, designs becoming code, code becoming shipped software.
- Review: the polishing, the testing, the painstaking work of making sure what got produced actually meets the standards and actually works for the humans who have to use it — and, critically, the learning that updates your priors in the foundation for next time.
Teams were organized around the middle. Product people wrote specifications. Designers implemented the specs as designs. Engineers implemented the designs as code. Even lean methodologies, which explicitly tried to collapse the waterfall into something more collaborative and cross-disciplinary, still couldn't pull most of the humans out of that middle space.
The handoffs were load-bearing. The coordination was load-bearing. The sequential dependencies were load-bearing. A designer could do some exploratory work before the product spec was finalized, and an engineer could do some research before the designs were locked, but the actual artifacts — the ones that got shipped — required the previous discipline's output to be substantially complete before the next discipline could do good work.
This had consequences at both ends:
Foundation got whatever was left over. Market research got done in the margins, between the real work of writing specs. Design systems got built reactively, pulled together after the fact from patterns that emerged across shipped screens. Coding standards lived in wikis nobody read and architectural decisions accumulated as tribal knowledge because nobody had time to write them down. The foundation was where leverage lived, but execution ate the calendar, so foundation got whatever attention survived.
Review got worse treatment. QA was a phase at the end of a sprint if you were lucky, or a ticket in someone's backlog if you weren't. Design review was a Slack thread. Code review was a ritual optimized for throughput, not for catching the things that actually mattered. The details that separate software that works from software that delights got handled in the last 10% of the timeline, which is exactly when everyone was most tired and most under pressure to ship.
This wasn't a choice teams made. It was an equilibrium imposed by economics. When execution was expensive, you had to staff execution, which meant foundation and review got whatever time and attention remained. Waterfall and lean weren't suboptimal at the ends because teams chose to neglect them. They were liquidity-constrained.
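The constraint can be made concrete with a toy budget model. All numbers here are illustrative, not measurements from the post: execution gets staffed first because it gates shipping, and the ends split whatever survives.

```python
def allocate(total_hours: float, execution_cost: float) -> dict:
    """Split a fixed hour budget: execution is staffed first (it gates
    shipping), and foundation/review share whatever remains."""
    execution = min(execution_cost, total_hours)
    remainder = total_hours - execution
    return {
        "execution": execution,
        "foundation": remainder / 2,
        "review": remainder / 2,
    }

# Expensive execution: the middle eats the calendar.
before = allocate(total_hours=100, execution_cost=80)

# AI collapses the execution cost: hours flow to the ends.
after = allocate(total_hours=100, execution_cost=15)
```

With execution at 80 of 100 hours, foundation and review get 10 each; drop execution to 15 and each end gets 42.5. The equilibrium shifts without anyone choosing differently, which is the sense in which the old allocation was liquidity-constrained.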
Then AI changed the economics.
The middle — the actual production of artifacts — is where AI and strong harnesses are genuinely good now. You don't need a product person to write the spec, a designer to turn the spec into a design, and an engineer to turn the design into code, with each handoff consuming days of sync meetings and context translation. You need a human who knows what should exist and a model that can produce it.
The execution cost collapsed. Not all the way to zero, and not uniformly across every domain, but enough that the gravitational center of where human work happens has moved.
The infrastructure has shifted alongside the economics. The transitory model is human-in-the-loop: a person at the center of every execution step, triggering each action, reviewing each output, maintaining the tempo. What's emerging is agents-in-a-loop: servers running agent work continuously and asynchronously, with humans setting direction and reviewing results rather than driving every step. Human-in-the-loop keeps people in the middle. Agents-in-a-loop is what absorbing the middle looks like at the system level.
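The two shapes can be sketched in a few lines of Python. This is a toy model, not a real harness: the agent call is stubbed out, and every name here (`Task`, `Result`, `run_agent`) is hypothetical. The point is structural: human-in-the-loop puts a human gate inside the loop, while agents-in-a-loop runs the work in parallel and lands human judgment once, on the batch.

```python
import asyncio
from dataclasses import dataclass

# Hypothetical types; a real system would carry much richer context.
@dataclass
class Task:
    description: str

@dataclass
class Result:
    task: Task
    output: str

async def run_agent(task: Task) -> Result:
    # Stand-in for a long-running model/agent call.
    await asyncio.sleep(0)
    return Result(task, f"artifact for: {task.description}")

# Human-in-the-loop: a person triggers and reviews every step,
# so throughput is bounded by human attention at each iteration.
async def human_in_the_loop(tasks, approve):
    results = []
    for task in tasks:
        result = await run_agent(task)  # one step at a time
        if approve(result):             # human gate inside the loop
            results.append(result)
    return results

# Agents-in-a-loop: agents run in parallel; humans set direction
# up front and review the accumulated results at the end.
async def agents_in_a_loop(tasks, review_batch):
    results = await asyncio.gather(*(run_agent(t) for t in tasks))
    return review_batch(results)        # judgment lands on the batch

def demo():
    tasks = [Task("spec"), Task("design"), Task("implementation")]
    approved = asyncio.run(
        agents_in_a_loop(tasks, lambda rs: [r for r in rs if r.output])
    )
    return [r.task.description for r in approved]
```

The design difference is where the review function sits: inside the loop, it paces every step to human tempo; outside it, the agents' tempo is decoupled from the human's, and the human's attention concentrates at the end.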
## Humans now move to the ends
On the foundation side:
- Product can spend time on what product work was always supposed to be: deep market research, serious positioning, validated personas, careful thinking about which guarantees and benefits matter and why. Not on writing specs that get translated into designs that get translated into code.
- Design can spend time on design systems, constraints, and the cross-cutting decisions that determine whether a hundred future screens will cohere. Not on pushing pixels in Figma to hand to an engineer.
- Engineering can spend time on architectural standards, on the principles that determine which classes of bugs are possible and which are structurally excluded, on the platform decisions that will compound over years. Not on converting tickets into pull requests.
On the review side, each discipline can spend far more time on the questions that always mattered but rarely got answered carefully:
- Does this actually solve the user's problem, or is it solving a problem the model inferred incorrectly?
- Does this actually fit the design system, or is it a one-off that'll create inconsistency?
- Does this code actually respect the architectural constraints, or is it a shortcut that will compound?
Review load isn't uniform across surfaces. On most code, thorough review improves quality. On surfaces where failure is catastrophic — bridge code in crypto infrastructure, clinical decision-support, aerospace controls, payment rails — the review end becomes the rate-limiting third of the pipeline, carrying most of the quality load. The architecture still applies; the weight shifts.
This is the inversion: humans at the ends, AI in the middle. And the ends are now where leverage differentiates, because execution itself is becoming table stakes, even as its quality still determines how hard the review end has to work. We just couldn't afford to staff the ends properly before, because execution ate everyone.
Stated this way, the inversion sounds like pure upside. More time for foundation, more time for review, less handoff friction, less coordination tax. And it is upside — substantial upside, the largest productivity shift software teams have seen in a generation.
## What the transition feels like from the inside
The architecture is net-positive. The experience of moving through it isn't uniformly so, and any honest account of the inversion has to say what's actually happening to the specialists at the center of it.
The people who were defined by execution craft — the ones whose reputation was built on writing good code, producing clean designs, drafting tight specs — are watching the thing they were hired for get absorbed. The skills haven't become worthless. Judgment, taste, domain knowledge, system intuition, the instinct for which simplicity is cheap and which is expensive — none of that is automated yet, and most of it won't be soon.
But the credentialing skill, the thing that got them into the room, is shrinking. What remains is harder for the market to price in the short term, because the market hasn't developed the vocabulary for valuing it independently of the execution it used to be attached to.
This shows up in specific ways. Mid-career engineers on AI-heavy teams increasingly ask whether they should move to product management — not because PM work is their passion, but because PM is a role defined by judgment and communication, and judgment and communication are the skills that didn't just get devalued. The move is often correct in direction. The shape it takes in conversation — "my job is irrelevant anymore, so can I turn into a product manager?" — is the shape of a rational response to an illegible transition, not pure enthusiasm for a new role.
The converse confusion is just as common: gravitating toward the skill AI just made accessible rather than the one it made scarce.
The inversion doesn't invalidate the specialists who feel this. If anything, it depends on them — foundation and review are the leverage points of the new architecture, and both run on exactly the judgment and taste that execution-era specialists spent fifteen years accumulating.
What's required is a shift in how that judgment is made legible, both to the specialists themselves and to the organizations trying to evaluate them. The market doesn't yet have good vocabulary for valuing judgment independently of the execution it used to be bundled with. Building that vocabulary — through rubrics, through new evaluation criteria, through honest conversation about what the role actually is now — is part of how the transition stops feeling like loss and starts feeling like leverage.
Execution gets automated first. Judgment doesn't. Most of this series is about what it takes to staff and organize the judgment properly once execution isn't the gate.
## The problem the old org didn't have
When execution sat in the middle and humans handed artifacts between disciplines, coordination happened through the artifacts themselves. The spec forced the PM and designer to align because the designer had to read it. The design review forced the designer and engineer to align because the engineer had to build from it. The middle was where cross-disciplinary context got transmitted, because translation happened in real time while humans were actually producing the thing together. It was slow and frustrating and full of handoff waste, but it did one thing well: it kept the disciplines in contact with each other's reality.
Take the middle out, and that contact goes with it.
Specialists working at the ends don't automatically understand each other's work:
- The PM's positioning doc is illegible to the engineer.
- The designer's constraint document is illegible to the PM.
- The architectural standards are illegible to the designer.
Each discipline produces foundational artifacts that, in the old model, never had to be cross-readable because a human in the middle was doing the translation in real time. Now there's no human in the middle. There's AI in the middle, which is great at producing the artifact but doesn't inherently resolve the cross-disciplinary tension that used to get worked out through meetings and handoffs.
So the question the rest of this series has to answer is: how does async parallel specialist work actually function without collapsing into incoherence? What layer replaces the coordination that the middle used to carry? Who authors the shared context, and how does it stay trustworthy over time?
## The claim
Here's the answer I'll develop across the next three posts, stated bluntly so it can be tested:
Teams that don't restructure around the inversion will look dramatically worse within eighteen months, and most teams won't restructure — not because the technical shift is hard, but because the cultural shift is harder than anyone currently appreciates.
The cultural shift requires things most organizations find genuinely uncomfortable:
- Executives authoring business rubrics they've historically kept fuzzy on purpose.
- Specialists finding their professional identity in taste and judgment rather than in execution craft.
- Abandoning the meetings that currently function as the visible evidence of teams coordinating, and trusting async mechanisms that nobody has years of experience with.
- Hiring on a different trigger than the one many investors and operating manuals currently endorse.
None of this is impossible. Some of it is already happening. But the teams that figure it out first will compound advantages that the teams who delay can't easily close, because the compounding happens at the foundation layer and the review layer — the layers that were always supposed to matter most and that most teams were never able to properly staff.
The teams that navigate it well will also be the ones that acknowledge how uncomfortable the transition is for the specialists at the center of it, rather than pretending the discomfort isn't part of the system. A shift that's good for the organization doesn't automatically feel good to the person in the middle of it, and treating the gap as a failure of attitude rather than a structural feature is how an otherwise-correct architecture loses the people it most needs.
The next post is about the hiring trigger. When do you actually add a human to a team whose execution is handled by AI? The answer is not what current startup wisdom suggests, and getting it wrong is the most expensive mistake available in this transition.