
Research is leadership, and code can help (but only in the right places)

Code hasn't limited productivity since 2000. Getting signal from customers has. The easier coding becomes, the more effort you invest in being wrong before talking to a user.

Code hasn't been the real limit on productivity any time this century, yet all of our work processes are structured as though it is. What actually limits productivity? Getting signal from customers. You don't need high fidelity to learn you're barking up the wrong tree. The easier coding becomes - and the more you produce before showing it to a user - the more effort you invest in being wrong. While HBR found that AI intensifies work instead of reducing it, execs forecast only 1.4% productivity growth over the next three years. Work is a social system, and productivity improvements localized to a part of the workflow that was already completely unblocked won't show up in the bottom line.

Companies have been obliterating the coherence of that system with endless waves of layoffs, which they blame on AI. This, not lines of code per second, is the number one blocker for productivity growth. An atomized team is the antithesis of UX. Without engaging with work as a system, you'll never achieve impact or ownership. No one cares about your story point velocity; managers want to see impact and ownership. If you don't understand the structural incentives, the social context of decision-making, and the individual perspectives, you'll be continually confused, watching your organization make obviously bad choices while ignoring your recommendations.

The build-to-learn ideology has already done irreparable harm to people's ability to understand this system by convincing them to pretend the system doesn't exist. But you can't just build your way into product-market fit. You have to do all the uncomfortable, squishy work around the software. Like research. Unfortunately, research means talking to people at a human pace. Tooling has helped us deliver more quickly, but it has done nothing to help us learn what to deliver. This perverse incentive has led people to foolishly use AI to counterfeit research data just so they can get back to shipping deliverables, work that provides zero actual value.

Why doesn't the ability to reach high fidelity faster accelerate learning? Because the blockiness of research artifacts is actually a beneficial property. Good research isn't looking for a yes or no - it's creating a dialogue with the participant, and low fidelity leaves the possibility space open as wide as possible. Good research will not only give you answers, but also let you develop a sense of what data is convincing enough, and which assumptions actually warrant testing in production. We've known for ten years that testing by launching a viable product is foolish.

Source: productpicnic.beehiiv.com
#product-management #user-research #productivity #agile #ai #team-dynamics #product-development #customer-development #velocity #leadership

Problems this helps solve:

Team performance, process inefficiencies, decision-making, innovation
