Since the introduction of DORA metrics, engineering organizations have come a long way in understanding the value of data and professionalizing their work. However, engineering leaders still need to make the best decisions for their business and teams according to their context. Yes, there are industry benchmarks, but they don't cover your organization's unique context. And this makes all the difference, especially when you are scaling.
Today we will explore why context is critical when making decisions in growing organizations, even when you have metrics at your disposal. We will also look through the lens of three examples to help you on your journey.
What is Context?
Context is the frame around your organization: the set of unique circumstances that influence the business. For engineering leaders, context is a combination of knowledge about your team structure, your product, and your business goals.
Why does context matter?
If we think in terms of DORA metrics, a team must deploy multiple times a day to be considered elite. But what if you have a mobile team? They can't deploy several times a day because of app store limitations. Does that make them low performers?
If engineering leaders don't consider this context, they might push their team to become a "DORA high performer" when the circumstances simply don't allow it. That has nothing to do with the team's capabilities: they won't get there, and they'll grow frustrated with the demands. Or worse, you might ignore the data and metrics that actually matter in their context.
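To make this concrete, here is a minimal Python sketch (the team names, counts, and targets are all made up) of what a context-adjusted deployment frequency check could look like, instead of holding every team to the same "multiple deploys a day" bar:

```python
# Hypothetical deployment counts over the last 30 days, per team.
deploys_last_30_days = {"web": 95, "backend": 120, "mobile": 3}

# Assumed, context-adjusted expectations: mobile releases go through
# app store review, so a healthy cadence looks nothing like a web service's.
expected_deploys = {"web": 60, "backend": 60, "mobile": 2}

for team, actual in deploys_last_30_days.items():
    target = expected_deploys[team]
    verdict = "meets its context-adjusted target" if actual >= target else "below target"
    print(f"{team:>8}: {actual} deploys vs. target {target} -> {verdict}")
```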
Let's look at more consequences of not taking context into account when analyzing engineering metrics, through three practical examples.
Context I: The (normal) pain of growing
👉 Indicator: PR Cycle Time has increased in the last month.
💭 Without context: As an engineering leader, if your PR Cycle Time increases, you will agonize over it because it looks like your team is slowing down.
💡 With context: Since the organization is growing, you are onboarding more engineers, which means they're still adapting to your workflows and the product itself. They need time to ramp up and get comfortable with the codebase and stack.
🚀 Actions: Review the data in a month. Your PR Cycle Time should go back to normal if the onboarding of new devs went well.
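If you want to test that hypothesis instead of just waiting, a quick segmentation helps. Here is a minimal Python sketch, using hypothetical PR records and start dates, that splits PR Cycle Time between engineers who are still ramping up and everyone else:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical PR records: (author, opened_at, merged_at). In practice these
# would come from your Git provider's API or your engineering metrics tool.
prs = [
    ("alice", datetime(2024, 5, 2, 9), datetime(2024, 5, 2, 15)),
    ("bob",   datetime(2024, 5, 3, 10), datetime(2024, 5, 6, 11)),
    ("carol", datetime(2024, 5, 4, 14), datetime(2024, 5, 5, 9)),
]

# Hypothetical start dates, used to separate ramping engineers from tenured ones.
start_dates = {
    "alice": datetime(2023, 1, 9),
    "bob":   datetime(2024, 4, 22),  # joined last month
    "carol": datetime(2022, 6, 1),
}

RAMP_UP_WINDOW = timedelta(days=90)  # assumed onboarding period

ramping, tenured = [], []
for author, opened_at, merged_at in prs:
    cycle_time_hours = (merged_at - opened_at).total_seconds() / 3600
    bucket = ramping if opened_at - start_dates[author] < RAMP_UP_WINDOW else tenured
    bucket.append(cycle_time_hours)

print(f"Median PR Cycle Time, ramping engineers: {median(ramping):.1f}h")
print(f"Median PR Cycle Time, tenured engineers: {median(tenured):.1f}h")
# If the increase comes mostly from the ramping bucket, it's onboarding,
# not a team-wide slowdown.
```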
Context II: The grass is not (always) greener on the other side
👉 Indicator: Your Review Time is 2hrs higher than some “industry benchmarks.”
💭 Without context: You might think your organization is underperforming.
💡 With context: Your team is mostly made up of junior engineers, so your senior engineers are investing a lot of their time in code reviews to train the junior devs properly.
🚀 Actions: Understand the Review Time that works best for your team. There are plenty of reasons a longer code review process could benefit your context.
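One way to do that is to compare Review Time against your own history rather than an external number. A minimal Python sketch, with made-up weekly figures and an assumed benchmark, could look like this:

```python
from statistics import median

# Hypothetical weekly median Review Times (hours) for your own team.
weekly_review_hours = [5.5, 6.0, 5.8, 6.2, 5.9, 6.1, 6.4, 6.0]

own_baseline = median(weekly_review_hours[:-1])  # trailing weeks as the baseline
latest_week = weekly_review_hours[-1]
industry_benchmark = 4.0                         # assumed external figure

print(f"Own baseline:       {own_baseline:.1f}h")
print(f"Latest week:        {latest_week:.1f}h")
print(f"External benchmark: {industry_benchmark:.1f}h")

# A latest week close to your own baseline suggests a stable, deliberate review
# process (e.g. seniors mentoring juniors), even if it sits above the benchmark.
if abs(latest_week - own_baseline) <= 0.5:
    print("Review Time is stable relative to your own history.")
elif latest_week > own_baseline:
    print("Review Time is drifting up vs. your own history; worth a closer look.")
else:
    print("Review Time is trending down vs. your own history.")
```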
Context III: Fast, but not furious
👉 Indicator: PR Cycle Time decreased drastically.
💭 Without context: This looks great! It means your team is shipping faster.
💡 With context: What are you shipping? Does it impact the business or the end-user? Digging deeper into the metrics, you realize that PR Cycle Time decreased because your team is spending most of their time fixing low-priority bugs instead of building new features.
🚀 Actions: Make sure your team is aware of your priorities. In this case, the priority is to ship new features, so a drop in PR Cycle Time driven by low-priority bug fixes signals a lack of alignment and focus.
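To spot this pattern in the data, break the metric down by work type. Here is a minimal Python sketch, using hypothetical PRs tagged as bug fixes or features (for example via issue labels), that shows both the output mix and the per-type cycle times:

```python
from collections import defaultdict
from statistics import median

# Hypothetical merged PRs: (work_type, cycle_time_hours).
merged_prs = [
    ("bugfix", 3.0), ("bugfix", 2.5), ("bugfix", 4.0), ("bugfix", 3.5),
    ("bugfix", 2.0), ("feature", 30.0), ("feature", 26.0),
]

by_type = defaultdict(list)
for work_type, hours in merged_prs:
    by_type[work_type].append(hours)

total = len(merged_prs)
for work_type, cycle_times in by_type.items():
    share = len(cycle_times) / total
    print(
        f"{work_type:>7}: {len(cycle_times)} PRs ({share:.0%} of output), "
        f"median cycle time {median(cycle_times):.1f}h"
    )

# A headline drop in PR Cycle Time can simply mean the mix shifted toward
# small bug fixes; the output share and per-type medians make that visible.
```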
Plenty of consequences can stem from adopting metrics without context: for your business, your end users, and, most importantly, your team.
Not taking your context into account can:
- Hurt your culture and lead to poor developer experience and employee turnover.
- Make scaling difficult: As a leader, you won't be able to provide proper guidance or empower your teams.
- Lead to wasted time: If you focus on the wrong metrics, your teams could lose time on low-impact activities or find ways to game the system.