The Shift Upstream: Why Scrum Must Evolve
For two decades, Agile and Scrum focused entirely on the Implementation Layer. We built rituals (Standups, Sprint Planning, Poker) to manage the scarcity of developer hours, then wrapped those rituals in frameworks (SAFe, Scrum@Scale) so that big teams could do big things. While we would argue that these models have neglected a real and growing bottleneck for a decade (product vision, operations, backlog), their primary focus treated coding as the constraint because it was slow, expensive, and error-prone.
2025 has been the start of an entirely new era for software product development. Bottom line: in the age of AI Agents, code generation and user interfaces are a commodity, but vision, architecture, and user experience are NOT (although we expect that conclusion will be different one year from now).
Manufacturing principles argue that flow is always better than batch: we should continuously target a smaller and smaller batch size for any process until, eventually, we get flow. Like adding sides to a polygon until it becomes a circle, one of my favorite basic geometry lessons. In product development (or the current term "software engineering"), batches are sprints, and they are executed as part of Scrum practice.
If you give a modern AI agent a perfect specification in January 2026, it can deliver code, tests, and documentation in minutes. The bottleneck has shifted. The new scarcity is not “velocity” (how fast we write code); it is fidelity (how clearly we think).
Unfortunately, we don't yet have the tools for instant fidelity the way we do for code, tests, and documentation; therefore my argument for this ecosystem of software product development TODAY is: we don't need to kill Scrum. We need to move it Upstream.
The New Topology
Giving every existing engineer access to AI coding assistants while keeping them in current Scrum practice will have minimal impact on overall product delivery. Theoretically every engineer can "write more code," but operating in time-bound sprints, within the existing separation of responsibilities among teams, will hold us back. Conversely, we could make each developer, designer, or QA engineer a full-stack AI cowboy delivering completed and tested features, but if those features are still "scrum-bound," we will not realize the potential of AI-enabled product building.
Anyone with a critical eye will recognize that "Agile" might just be "small waterfall," if not in theory then in practice. It has long been a problem that design tasks are not discovered and delivered at the cadence and level of definition that traditional product managers and engineers require. So when we consider a new model, we must also avoid "micro waterfalls," which would exacerbate the design-development timing and flow problem.
The Producer: Responsible for harmonizing the two cadences
The new technology and the new model require a new role. The Producer is the single role accountable for product intent, priority, and coherence in an AI-driven development model, replacing the traditional Product Owner. As implementation speed accelerates under AI, the primary constraint shifts to clarity and judgment: what to build, what it means, and whether it is worth building at all. The Producer owns that constraint end-to-end. They set priorities, define outcomes, and translate vision into high-fidelity intent that is safe to execute. Rather than managing backlogs or story acceptance, the Producer ensures that Context Packets represent coherent, deliberate product decisions and that rapid AI-assisted delivery produces a unified product rather than a collection of disconnected features.
To be successful, a Producer must operate with real decision authority and strong technical and experiential fluency. Their effectiveness comes from knowing when to slow thinking down so execution can accelerate, enforcing clarity without over-specification, and making trade-offs explicit early. Great Producers design tight feedback loops between discovery, delivery, and real-world usage, continuously refining intent as learning emerges. They do not act as intermediaries or process managers; they are owners of product truth. In a world where building is fast and cheap, the Producer’s value lies in disciplined judgment, ensuring that what ships is intentional, coherent, and worth the cost of maintaining.
Right-Sizing Definition Output for Limitless Execution
We must stop forcing AI agents into human-speed sprints and humans into AI-speed chaos. The solution is to decouple the organization into two distinct operating modes that connect via a rigid API: the Context Packet.
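To make the "rigid API" idea concrete, here is one hypothetical sketch of a Context Packet as a typed record. The article does not define a schema; every field name below is an illustrative assumption, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class ConstraintTest:
    """A machine-checkable test the AI's output must pass (e.g., an MSW state or Postman collection)."""
    name: str
    kind: str          # e.g. "msw_state" or "postman_collection" (labels assumed)
    definition: dict   # the serialized test payload

@dataclass
class ContextPacket:
    """The rigid 'API' between the upstream Context Team and the downstream Implementation Team."""
    capability: str                # what product capability this packet defines
    schema: dict                   # data contracts (request/response shapes)
    business_rules: list[str]      # plain-language rules the implementation must honor
    constraint_tests: list[ConstraintTest] = field(default_factory=list)
    references: list[str] = field(default_factory=list)  # existing files/specs in scope

# A packet is a deliberate product decision, not a ticket:
packet = ContextPacket(
    capability="user-auth",
    schema={"user": {"id": "uuid", "email": "string"}},
    business_rules=["user ids are immutable"],
)
```

The value of a rigid record like this is that both modes of the organization can validate it mechanically before any generation begins.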
1. Upstream: The Context Team (Scrum “moves” here)
- The Cadence: 1-week Sprints.
- The Team: Producer, Architect, Context Engineer.
- The Goal: They deliver “Context Packets” defining new product capability and the constraints for how it will be implemented to best align with the product architecture and operating model.
- Definition of Done: A packet is “Done” only when it includes the Schema, the Business Rules, and the Constraint Tests (e.g., specific MSW states or Postman collections) required to validate the AI’s output.
- Why Scrum? Humans need time to debate, think, prioritize, and collaborate. A 1-week sprint is proposed because a smaller team moves faster, and we DO of course have AI agents that make this process fast.
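The Definition of Done above is mechanically checkable before a packet leaves the Context Team. A minimal sketch, assuming packets are serialized as dictionaries with the three named sections (section names are illustrative):

```python
# The three sections the Definition of Done requires in every packet.
REQUIRED_SECTIONS = ("schema", "business_rules", "constraint_tests")

def packet_is_done(packet: dict) -> bool:
    """A Context Packet is 'Done' only when every required section is present and non-empty."""
    return all(packet.get(section) for section in REQUIRED_SECTIONS)

draft = {
    "schema": {"user": {"id": "uuid"}},
    "business_rules": ["user ids are immutable"],
}
assert not packet_is_done(draft)   # no constraint tests yet: not Done
draft["constraint_tests"] = [{"kind": "postman_collection", "name": "user-crud"}]
assert packet_is_done(draft)       # all three sections present: Done
```

A gate like this could run in CI on the packet repository, so "Done" is enforced, not asserted.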
2. Downstream: The Implementation Team (Kanban)
- The Cadence: Continuous Flow (No Sprints).
- The Team: A product-dependent mixture of engineers who use code, design, test, and delivery agents to turn Context Packets into functioning, ready-to-use software. The team will also require a number of agent engineers (proportional to implementation team size) who are not executing on the Context Packets but rather managing the toolchain of agents, infrastructure, observability, and metrics. They are critical, in our current era of rapid change, for any team to be successful.
- The Workflow: The team operates from a Queue, not Scrum.
- The Secondary Output: In addition to product delivery, a single team using the same tools will power our next important evolution: prompt efficiency, multi-agent tasking through context RAG, testing feedback loops, etc.
The Combined Topology
In this new AI enabled development organizational structure, the ratio of “product” team members to “engineering team” members inverts from what we know today. In this new model, multiple Context Teams feed into one Implementation Team because THAT is where the architectural and UI design patterns are specified and where the intimate knowledge of the product stack is required.
The New Testing Model: “Test-Driven Prompting”
The testing pyramid is now flipped. We stop writing manual unit tests and start defining Validation Logic in the design phase.
- Exhaustive Permutations: The Context Team’s prompt instructs the Agent to generate comprehensive test suites (backend tests) and/or mock service workers (frontend tests) alongside the code.
- Self-Validating Agents: The agent builds the code, the CI/CD pipeline, and executes the test suite or prepares the test environment simultaneously.
- The Human Role: We stop writing test code. We audit Test Coverage.
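The human audit step can be as simple as a coverage gate over the agent-generated test suite. This is a sketch under assumptions: the report shape and the 90% threshold are illustrative, not prescribed by the article.

```python
def audit_coverage(report: dict[str, float], minimum: float = 0.9) -> list[str]:
    """Return the modules whose agent-generated test coverage falls below the audit threshold.

    The human no longer writes the tests; they decide whether coverage is acceptable
    and send failing modules back through the loop.
    """
    return [module for module, covered in report.items() if covered < minimum]

# Hypothetical coverage numbers from an agent-generated suite:
report = {"billing": 0.97, "auth": 0.84, "search": 0.92}
assert audit_coverage(report) == ["auth"]   # only auth fails the 90% audit
```

In practice this would consume the output of a real coverage tool; the point is that the human role shifts from authoring tests to auditing their reach.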
The Front-end: Live Agentic Iteration
One significant workflow shift happens in UI development. With our new tools, we no longer share JPEGs. We share running local servers with Mock Service Workers that simulate the exact data states the UI needs, and live backends connected to local frontends for real-time iteration. Below is an example (numerous variants can and should be evaluated in practice) of how this new model of design works within this new product organizational structure.
- Sprint 1: The Context Team generates Context Packets for the desired backend product function. The Implementation Team generates, tests, and deploys the backend (e.g., microservices) based on the Context Packet. Because of the exhaustive testing and extensive Context Packet, we trust this infrastructure is stable enough to also back mock service workers with implementation parity in the first sprint.
- Sprint 2: Part of the Context Packet for the front-end includes hi-fi static HTML pages generated using AI design tools. Further iteration to integrate those pages with the mock service workers or live backends, and to refine the functional prototypes, can be done within the Context Team and delivered as part of the Context Packet for front-end implementation. Yes, cancel your Figma subscription. After design iteration, the team moves immediately to functional frontend testing against the deployed backend; again, this enables live, real-time iteration without deployments. Importantly, the mock service workers implemented in Sprint 1 can be altered during preparation of the Sprint 2 Context Packet, and the dynamic front end with modified mock service workers becomes part of the Context Packet, which streamlines implementation.
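The "exact data states" idea is the core of this workflow. Here is a toy analogue in Python (for illustration only; in practice these would be MSW request handlers in the frontend toolchain): each named data state maps an endpoint to the precise payload the UI should render, and the Context Team can flip states during refinement without redeploying anything. All state and endpoint names are invented.

```python
# Named data states, each mapping an endpoint to the exact payload the UI needs.
DATA_STATES = {
    "empty_inbox":  {"/api/messages": {"messages": []}},
    "full_inbox":   {"/api/messages": {"messages": [{"id": 1, "subject": "Hello"}]}},
    "server_error": {"/api/messages": {"error": "internal", "status": 500}},
}

def mock_response(state: str, endpoint: str) -> dict:
    """Resolve a request against the currently selected data state."""
    return DATA_STATES[state][endpoint]

# Designers iterate against precise states (empty, populated, failing) in real time:
assert mock_response("empty_inbox", "/api/messages") == {"messages": []}
assert mock_response("server_error", "/api/messages")["status"] == 500
```

Because the states were derived from the Sprint 1 Context Packet, they carry implementation parity: the UI seen against mocks is the UI seen against the live backend.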
Managing Change: The “Context Patch”
In traditional development, a missing field in an API results in a developer “hacking” the backend code to unblock the UI. In the Agentic Era, this is forbidden. If you manually edit the code, you create Drift—a divergence between the Spec (Truth) and the Code (Artifact).
We treat changes as State Reconciliation, not “bug fixes.”
When a deviation is found during the review of an AI-generated build, we apply a strict If/Then logic:
Is it an Agent Hallucination?
- Scenario: The Spec was clear, but the Agent ignored a constraint.
- Action: The Implementation Engineer refines the prompt or provides a “correction shot” to the Agent. The Context Packet remains untouched.
Is it a Spec Gap?
- Scenario: The Producer realizes they missed a requirement (e.g., “We forgot the last_login field”).
- Action: The Implementation Engineer submits a Context Patch (a delta request) to the Context Team.
- The Fix: The Context Team updates the Context Packet.
- The Reconciliation: The Implementation Team regenerates the impacted service and its test suite.
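The strict If/Then logic above can be sketched as a small routing function. The field names and action strings are illustrative assumptions; the branch structure is exactly the article's two cases.

```python
def route_deviation(spec_was_clear: bool) -> dict:
    """Route a deviation found in an AI-generated build: hallucination vs. spec gap."""
    if spec_was_clear:
        # Agent Hallucination: the Spec was right, the Agent ignored a constraint.
        # Fix the prompt, never the packet.
        return {"kind": "hallucination",
                "action": "refine prompt / provide correction shot",
                "packet_changes": False}
    # Spec Gap: the Truth itself is incomplete, so patch the packet
    # and regenerate the impacted service and its test suite.
    return {"kind": "spec_gap",
            "action": "submit Context Patch; regenerate service and tests",
            "packet_changes": True}

assert route_deviation(True)["packet_changes"] is False   # packet remains untouched
assert route_deviation(False)["kind"] == "spec_gap"       # loop goes back through the packet
```

The invariant the function encodes is the one that matters: code is only ever regenerated from the packet, never hand-patched around it.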
Why this matters: We do not patch code; we patch the Truth. By forcing the loop back through the Context Packet, we ensure that our documentation, our automated test suites (MSW/Postman), and our architecture remain the Single Source of Truth, forever synchronized with the deployed binaries.
Does the Context Patch need to wait for the next sprint? The answer is no different from the struggle today with "bug boards" that are prioritized by product teams, pulled into engineering sprints, tested, then deployed. That cycle often takes at least two sprints and is a significant source of today's friction. Unlike the current model, in this new AI-enabled org the Implementation Team runs on continuous flow; the Context Team can choose how to prioritize its "bugs" and produce Context Packets. Once they are completed, they flow continuously into the Implementation Team and into the product. There is no waiting for development sprints.
The Practical Reality: The AI-Assisted "Human Router"
Until enterprise-grade "Router Agents" that can manage a swarm of agents themselves become mainstream, the Context Engineer must act as the bridge.
We cannot simply feed a 50,000-line codebase into a chat window for every minor change or agent task. The Context Engineer must practice “Blast Radius Engineering”:
- Context packets are focused and include explicit references to existing codebase files and specifications.
- The agents available to engineers are integrated with the required resources, leverage MCP servers, and follow clear tactics for using agents to evaluate the work of other agents.
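Blast Radius Engineering can be approximated as a scoping rule: the agent's context is the set of files the packet explicitly references, plus their direct dependents. The dependency map below is a stand-in for real import analysis, and all file paths are invented for illustration.

```python
def blast_radius(referenced: set[str], dependents: dict[str, set[str]]) -> set[str]:
    """Files the agent may see and touch: explicit packet references
    plus anything that directly depends on them."""
    radius = set(referenced)
    for ref in referenced:
        radius |= dependents.get(ref, set())
    return radius

# Hypothetical dependency map (in practice, derived from import analysis):
deps = {"billing/invoice.py": {"billing/tax.py", "api/billing_routes.py"}}
scope = blast_radius({"billing/invoice.py"}, deps)
assert scope == {"billing/invoice.py", "billing/tax.py", "api/billing_routes.py"}
```

This keeps the agent's context to a handful of files instead of the 50,000-line codebase, which is the whole point of the practice.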
Output: The Definition of Done for the Implementation Team isn't just "it runs and passes the tests." It is the generation of a Diff Report: a summary of exactly how the implementation diverges from the Context Packet, so that Context Teams are informed of the code implemented, not only the code intended.
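A minimal sketch of such a Diff Report, comparing what the packet declared against what was actually shipped. The endpoint names and the three divergence buckets are illustrative assumptions:

```python
def diff_report(intended: dict[str, str], implemented: dict[str, str]) -> dict:
    """Summarize divergence between the Context Packet (intent) and the shipped code (artifact)."""
    return {
        "missing":   sorted(k for k in intended if k not in implemented),
        "unplanned": sorted(k for k in implemented if k not in intended),
        "changed":   sorted(k for k in intended
                            if k in implemented and intended[k] != implemented[k]),
    }

intended = {"GET /users": "list users", "POST /users": "create user"}
implemented = {"GET /users": "list users (paginated)", "DELETE /users/{id}": "delete user"}

report = diff_report(intended, implemented)
assert report == {"missing": ["POST /users"],
                  "unplanned": ["DELETE /users/{id}"],
                  "changed": ["GET /users"]}
```

An empty report is the strongest possible "Done": the artifact matches the Truth exactly; anything non-empty routes back through the Context Patch process.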
Conclusion: Sprinting on Clarity
Does Agile still work? Yes. But the “Agility” is now in how fast we decide what to build, not how fast we produce and deploy code and tests.
By moving Scrum upstream, we respect the human need for discovery. By moving Implementation to flow, we respect the AI’s ability to execute instantly. The future isn’t about “No-Code.” It’s about “No-Waiting.”
Finally, a note, because I know you're thinking it: does this mean we need 10% of the number of developers? Per software product... yes. In the world... NO. That is another article (maybe, if I get inspired and nobody else does it), but the other big change in software development is that the build-vs-buy debate just changed. I predict a massive shift away from buy and toward build. Small and large businesses alike will be able to build their own bespoke software to enable their