Engineering leaders still wrestle with choosing effective team performance metrics, DORA metrics haven't become the standard, and generative AI tools are pushing prompting and upskilling to the top of the agenda.
For the third year running, LeadDev surveyed engineering leaders about how they measure team performance and found the problem getting harder, not easier. Most leaders still struggle to pick the right metrics, and the DORA metrics have not become the default framework many hoped they would be.
The data shows that without a clear, agreed-upon metric set, teams spend time chasing numbers that don't reflect real impact. Leaders end up juggling metrics they alternately love and hate, trying to balance velocity, stability, and quality while still answering executive questions about delivery speed.
At the same time, the rise of generative AI coding tools has created a new wave of anxiety. Leaders report that upskilling in prompting and managing AI agents is now at the top of their wish list, forcing them to rethink training programs and career paths for their engineers.
The session breaks down why measuring performance isn't getting any easier, walks through the metrics that polarize teams, and highlights the growing importance of AI-focused upskilling for engineering managers who need to keep their teams productive and future-ready.