Magnetis is a high-growth FinTech company that delivers modern investment and insurance products to its consumer clients.
Gustavo, Technical Lead at Magnetis, came to Athenian after learning about DEE from an industry peer. Magnetis already had a culture of continuous improvement, but after unsuccessful attempts with other metrics tools, it remained an entirely anecdotal one.
Magnetis’s organization is divided into two main areas, Acquisition and Engagement. The teams work in an agile manner, with the majority contributing to a monolithic repository. Each level of engineering leadership meets regularly to discuss challenges and share stories of successes and failures.
Gustavo’s team at Magnetis was responsible for a new, high-impact product, and decided to be the first to adopt Athenian and the DEE methodology.
Athenian worked with the team to conduct an insights review, including one-on-one conversations, which helped identify the areas where improvements would have the most impact.
The team used DEE as a guide to prioritize, plan, and execute improvement projects. It allowed them to understand what was and wasn't working, and to identify and rank opportunities for improvement over time.
Gustavo’s team chose Pull Request Cycle Time as their North Star metric.
Sometimes we would end our week with all planned work done; sometimes we wouldn't finish half of it.
During this period we didn't have any insights into engineering metrics, we only had one superficial metric: Throughput. To increase our visibility into our own process, we changed tools—from Clubhouse to Jira, from SourceLevel to Athenian.
Once we had Athenian in place, one of the first things we did was ask ourselves why our metrics were the way they were. The key thing for us was to do this without making any kind of judgment on whether they were good or bad.
Once we understood our numbers, we started to think of improvements we wanted to make in our process, and how those improvements should affect our metrics. After implementing the improvements, we would wait 2-4 weeks, then look at the metrics and see if they had changed in the way we expected. If they did, great! If not, why not?
I attribute most of our success to cycling through: reflecting on our metrics → reflecting on our process → suggesting improvements → reflecting on how it should impact our metrics → reflecting on our metrics.
The first two improvements we noticed were decreases in our PR Cycle Time, specifically in our Review Time and our Merge Time.
Our Review Time was disproportionate to our WIP Time. Once we reflected on why, we noticed there was a lot of back-and-forth during our review process over things that should have been planned in advance. Having identified that, we experimented with adding a planning step before development, and that alone made our Review Time decrease significantly.
The other improvement was in our Merge Time. We used to wait until code changes were approved before testing on staging. This increased Merge Time, as we would sometimes discover new issues with the code, or there would be a queue for the staging environment. To solve this, we started experimenting with testing on staging during development, which also reduced our Merge Time.
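To make the stage breakdown concrete, here is a minimal Python sketch computing the WIP, Review, and Merge segments of a pull request's Cycle Time from made-up timestamps. The event boundaries used here (first commit, review requested, approval, merge) are illustrative assumptions, not Athenian's exact definitions.

```python
from datetime import datetime, timedelta

# Hypothetical event timestamps for a single pull request.
pr = {
    "first_commit":   datetime(2021, 5, 3, 9, 0),
    "review_request": datetime(2021, 5, 4, 10, 0),
    "approved":       datetime(2021, 5, 6, 15, 0),
    "merged":         datetime(2021, 5, 7, 11, 0),
}

wip_time = pr["review_request"] - pr["first_commit"]  # coding until review is requested
review_time = pr["approved"] - pr["review_request"]   # review back-and-forth
merge_time = pr["merged"] - pr["approved"]            # approval until merge (e.g. staging queue)
cycle_time = pr["merged"] - pr["first_commit"]        # the whole journey

# The three stages partition the total Cycle Time.
assert cycle_time == wip_time + review_time + merge_time
```

Seen this way, the two improvements above each shrink one segment: the planning step cuts review back-and-forth, and testing on staging during development cuts the post-approval wait.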
At first we evaluated Athenian against CodeClimate, but Athenian quickly became our main option because of how invested they were in our success.
They helped set up the environment, walked us through the tooling, suggested ceremonies to try with our team, and organized process-debugging sessions as we tried to understand the bottlenecks and improvements in our process after seeing the metrics.
The transition from one team adopting metrics to the whole organization has been going smoothly. When our team started displaying astonishing performance, we started to "export" our process to other teams.
At the time, Athenian's team shared with us a meeting template that helped us get the monthly metrics review meeting going. At first we ran a watered-down version, so that the engineering leaders could get a feel for how the meetings would go and build mutual trust, since raising questions about a team's metrics can be seen as questioning their performance. It isn't: it's about understanding processes and making them better.
The most noticeable change triggered by this monthly metrics review is that we now share process improvements, so one team's improvement becomes the whole organization's improvement.
My team, for example, focused a lot on decreasing PR Cycle Time, and because of that we created a robust planning process where all tasks are deemed "small", or, in my words, "can be done in one sitting". Another team had lots of bugs and incidents, and created a robust process for incident prioritization. Now my team is going through a period with a high volume of incidents, and we're using that shared organizational knowledge to prioritize them.
Another interesting by-product of reviewing metrics is that we started to notice organizational problems and act on them as an organization, instead of each team discovering and dealing with them on its own.