Most Definitions of Done only check if features work. They should also check if your architecture can survive the next six months.
Your Definition of Done probably sucks, and not because it's too strict. Pierre Pureur and Kurt Bittner argue it sucks because it only checks if features work, not whether your architecture can survive contact with production for the next six months. A typical DoD asks: does the code pass tests? Is it documented? Cool. But does it scale? Is it maintainable? How much technical debt did you just bury in there? Those questions get hand-waved until the MVP implodes under load.
The insight here is to treat your Minimum Viable Architecture (MVA) the way you treat your Minimum Viable Product (MVP). Each release is a new MVP that should increase customer satisfaction, and each MVA should increase system quality while supporting that release. But here's the problem: agile teams barely have time to ship working features, let alone evaluate architectural sustainability. The only realistic solution is automation. Build architectural fitness functions into your CD pipeline: code scanners catch security holes, build tools enforce standard configurations, performance tests run overnight. Manual reviews can still happen, but they can't block releases or you'll never ship.
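To make "fitness function" concrete, here's a minimal sketch of one such check written as a pytest test. The package names (`myapp.domain`, `myapp.api`) and the `src/` layout are hypothetical; the idea is simply that the pipeline fails whenever domain code starts importing from the delivery layer.

```python
# Fitness function sketch: domain code must never import from the delivery
# layer. Package names and paths below are hypothetical examples.
import ast
import pathlib

FORBIDDEN_PREFIX = "myapp.api"                   # delivery layer (assumed name)
DOMAIN_ROOT = pathlib.Path("src/myapp/domain")   # domain layer (assumed path)

def imported_modules(source: str) -> set[str]:
    """Collect every module name imported by a Python source file."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

def test_domain_does_not_depend_on_api():
    """Fail the build if any domain module reaches into the API layer."""
    violations = []
    for path in DOMAIN_ROOT.rglob("*.py"):
        for module in imported_modules(path.read_text()):
            if module.startswith(FORBIDDEN_PREFIX):
                violations.append(f"{path}: imports {module}")
    assert not violations, "Layering violations:\n" + "\n".join(violations)
```

Because it's just a test, it runs on every commit and blocks the release only when the rule is actually broken, which is exactly the job a fitness function should do.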
The real value comes from expanding your DoD to ask uncomfortable questions: is this sustainable or are we just kicking the can down the road? Did we test scalability and resilience? How much technical debt did we incur to ship this? What's the cost to repay it? These aren't philosophical questions. They're empirical tests you can automate. When teams skip this work, they end up in one of five scenarios, and only one of them (complete failure) lets you say you're truly done. The other four all boil down to: the architecture isn't sustainable yet.
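One rough way to turn those questions into an automated gate: a small script that reads metrics earlier pipeline stages are assumed to have written to `metrics.json` and fails the build when agreed limits are breached. The metric names, thresholds, and file path here are all hypothetical; the point is that "did we test scalability?" and "how much debt did we incur?" become numbers the pipeline checks.

```python
# DoD gate sketch: enforce the uncomfortable questions as thresholds.
# Metric names, limits, and metrics.json are hypothetical assumptions.
import json
import sys

THRESHOLDS = {
    "p95_latency_ms": 300,        # performance budget for this release
    "max_concurrent_users": 500,  # scalability head-room demonstrated in tests
    "open_debt_items": 20,        # technical debt we have agreed to carry
}

def check(metrics: dict) -> list[str]:
    """Compare measured values against the architectural DoD thresholds."""
    failures = []
    if metrics["p95_latency_ms"] > THRESHOLDS["p95_latency_ms"]:
        failures.append("p95 latency exceeds the performance budget")
    if metrics["max_concurrent_users"] < THRESHOLDS["max_concurrent_users"]:
        failures.append("scalability target not demonstrated")
    if metrics["open_debt_items"] > THRESHOLDS["open_debt_items"]:
        failures.append("technical debt backlog above the agreed ceiling")
    return failures

if __name__ == "__main__":
    with open("metrics.json") as fh:
        failures = check(json.load(fh))
    for failure in failures:
        print(f"DoD not met: {failure}")
    sys.exit(1 if failures else 0)
```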
The framework is straightforward but requires discipline. Extend your DoD with concrete architectural criteria: maintainability, security, scalability, performance, resilience, and technical debt tracking. Automate everything you can in the CD pipeline. Run deeper checks overnight. Use manual reviews to develop new ideas, not as release blockers. The payoff is that every release gives you empirical evidence about whether your architecture can handle what's coming next, instead of discovering six months later that it can't.
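For the overnight tier, here's a sketch of a crude soak test that exercises a single endpoint for longer than the blocking pipeline can afford and writes its evidence to a report. The URL, duration, and output file are assumptions, and a real team would likely use a dedicated load-testing tool; this only illustrates the shape of a scheduled deeper check.

```python
# Overnight check sketch: a long, gentle soak against one endpoint, with the
# results recorded for the next MVA review. URL, duration, and output path
# are hypothetical assumptions.
import json
import statistics
import time
import urllib.request

TARGET_URL = "http://staging.example.internal/health"  # assumed endpoint
DURATION_SECONDS = 60 * 30                              # half-hour soak
latencies_ms = []

deadline = time.monotonic() + DURATION_SECONDS
while time.monotonic() < deadline:
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
        response.read()
    latencies_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(1)  # steady, low request rate; this is a soak, not a stress test

report = {
    "samples": len(latencies_ms),
    "p95_latency_ms": statistics.quantiles(latencies_ms, n=20)[-1],
}
with open("overnight-report.json", "w") as fh:
    json.dump(report, fh, indent=2)
```

Because it runs on a schedule rather than in the release path, it can afford to be slow; what matters is that its output feeds back into the gate the pipeline enforces.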
Check out the full stdlib collection for more frameworks, templates, and guides to accelerate your technical leadership journey.