🎶 Do you believe in life after pivot? 🎶 Well, I do! And if you prefer your product updates in the form of pop ballads, you can try convincing me to sing about the changes we made to our holistic developer success platform (Just kidding 😂 I only sing in the shower).

Here's a small clue to our big pivot:

  • How fast can you deliver that product feature? ❌
  • How do you feel about delivering that product feature? ✅

Ok, that's enough mystery for one update. If you're keen on learning more about where we're going with how are you, stick around, and I'll tell you all about it.

But first, feel free to read up on the problem we're trying to solve in the area of performance and wellbeing management in software development.

Why We're Building 'how are you'

You probably saw it coming from a mile away: remote engineering has become the new norm. This puts tech leads in a tough spot. Before the pandemic, they could learn a lot just by chatting with their engineers at the water cooler. But with distributed teams now working asynchronously, they no longer have that privilege. That's why tech leads are constantly looking for new ways to track progress, establish baselines, find bottlenecks, and set company goals.

The Problem With Dev Management Tools

There's no shortage of "engineering intelligence platforms" on the market that could do the trick. These solutions help tech leads dive into developer metrics to unearth inefficiencies "in real time". Their main selling point seems to be data, and lots of it!

Here's the problem: we believe the solution is not to have uber amounts of data. The solution is to have the right kind of data, data that is actionable, easy to understand, and delivered to tech leads in context.

We also believe that data should not only give tech leads a good overview of work output but also steer them toward meaningful and empathetic conversations with their software engineers.

To close this gap in the market, we decided to build a product that humanizes and contextualizes data about performance and productivity, and that lends itself naturally to forward-focused conversations.


But What Data Exactly?

We assumed a priori that Cycle Time, a standard measure of software development performance (Time to Open, Time to First Review, Time to Improve, and Time to Deploy), was a critical metric, often regarded as the most important one in the CI/CD pipeline. But after several internal discussions, we became skeptical. A lot happens in development that Cycle Time simply misses. Does it actually say anything important about the value of the work being delivered?
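To make the metric concrete, here's a minimal sketch of how such a phase breakdown can be computed. The event names and timestamps are hypothetical stand-ins for illustration, not any particular Git provider's API; real tools would pull these events from PR metadata:

```python
from datetime import datetime

# Hypothetical PR event timestamps (ISO 8601). The event names are our
# own illustration, not fields from any specific Git provider's API.
pr_events = {
    "first_commit": "2022-03-01T09:00:00",
    "opened":       "2022-03-02T11:30:00",
    "first_review": "2022-03-03T10:00:00",
    "approved":     "2022-03-04T16:45:00",
    "deployed":     "2022-03-07T08:15:00",
}

ts = {name: datetime.fromisoformat(stamp) for name, stamp in pr_events.items()}

# Each phase is simply the elapsed time between two consecutive PR events.
phases = {
    "time_to_open":         ts["opened"] - ts["first_commit"],
    "time_to_first_review": ts["first_review"] - ts["opened"],
    "time_to_improve":      ts["approved"] - ts["first_review"],
    "time_to_deploy":       ts["deployed"] - ts["approved"],
}

# The phases telescope, so total Cycle Time is just last event minus first.
cycle_time = ts["deployed"] - ts["first_commit"]

for name, delta in phases.items():
    print(f"{name}: {delta}")
print(f"total cycle time: {cycle_time}")
```

Notice how neatly the phases sum into the total. That arithmetic tidiness is a big part of Cycle Time's appeal, and also exactly the problem: it says nothing about what actually happened between those timestamps.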

Also, if the primary goal of using metrics is to lower deployment time and improve code quality, how will tech leads really learn the strengths and weaknesses of their dev teams? How will they know whether blockers stem from skills gaps, burnout, or simply inefficient use of time?

We came to question Cycle Time so much that we even changed our name from cyclebeat to how are you.

Secondary Research - The Proof Is in the Concept

The first thing we did was a very basic form of competitor research in the space of engineering intelligence and Git analytics platforms.

We wanted to educate ourselves about 1) our competitors, 2) their offerings, and 3) their customers. We looked at solutions like Pluralsight, Velocity by Code Climate, and LinearB to learn how they created value for their customers. We also explored the language surrounding their products to better define our search terms for further research on Google. If you're familiar with the above-mentioned solutions, you'll know that they are data-heavy, so data-heavy that we joked you'd need a dedicated data analyst just to make sense of what they provide.

A regular event on our work calendar was the daily standup. During weekly 'learnings' sessions, we discussed our findings and the direction we'd take with further research. We put all our ideas and notes into a Miro board and essentially used it as our strategic sandbox. Miro helped us organize our thoughts so we could stay in sync with each other and with what we had learned (and prune whatever was no longer relevant). Here's a screenshot of our Miro board.

How to Make Sense of All This Preliminary Data?

With so many hypotheses and assumptions, our next step was to create a methodology that would make sense of it all. We wanted to make decisions based on objective principles, not ego and gut feeling. We found Value Proposition Design very helpful, as it's made precisely for building products from scratch!

The tactical, almost scientific approach to researching the Value Proposition and Customer Segments particularly spoke to us. Identifying the Jobs, Pains, and Gains early on helped us settle into a rhythm of sharing ideas with our research participants and then going back to Miro to further define our product scope.

Creating Our Customer Segment

We accumulated insights about our customers by reading articles, listening to podcasts, and watching videos on the topic of engineering management. We also consulted Built In, an online community for startups and tech companies that had interviewed dozens of engineering managers about their roles and responsibilities. This helped us understand their daily challenges and pain points.

Our assumptions about engineering managers were largely disproved later on during research calls, but we did manage to get some things right initially. Here's another screenshot of our Miro board.

Primary Research - Conducting User Research Calls

We believed our main customers would be VPs and Directors of Engineering because they were the ones who'd purchase the solution once we got it up and running.

We looked for leaders of that standing mostly on LinkedIn. By filtering on details like position, status, and years of experience, we came up with hundreds of potential research participants.

Through cold emailing, we got in touch with 337 potential research participants; another 243 were contacted via LinkedIn.

We started off the calls by asking our participants to explain in their own words what their job entailed. We then moved on to asking questions about what metrics they used and how they viewed performance reviews.

We took notes on what they said, put them all into our Miro board, and then analyzed the results, searching for patterns and creating a separate profile for each participant.


Key Learnings

By the end of our campaign, we had completed 53 research calls, and overall they were immensely helpful. The participants were highly qualified experts with years of experience (some ex-Google, some ex-Meta) and a great deal of knowledge and insight to share.

Because of that closeness, it's as if they became co-creators, and we developed our ideas together. In fact, most of them weren't motivated by the free trial we would offer. They simply wanted to contribute to building a cool product because they weren't all that happy with the solutions they were already using.

Our calls gained so much momentum that our very own codequest engineers started showing interest in sharing insights, acting as a sort of internal advisory board. They were very busy working on client projects, but we were grateful they found some time to speak to us. Because how are you is meant to be a developer success platform, it was important for us to know how developers themselves viewed success, performance, and productivity.

For example, not all of them were convinced that being "productive" or doing "good work" could be reduced to numbers. Brainstorming sessions, research, documentation work, meetings, and the like are often left out of metrics because they can't easily be quantified, unlike PRs, code commits, and code reviews, which can.

Another issue that came up was the tension between productivity metrics and code complexity. Some developers may appear unproductive when in fact they're working on very complicated problems. It would be unfair for a manager to reprimand a developer over "poor metrics" that really reflect the size of the problem being tackled.

We also tried to learn more by posting two incognito threads on Reddit and sending a survey to a shortlist of tech leads. Neither got enough traction to be worth pursuing further.

Researchers, Not Salespeople

The first interviews were not nearly detailed enough. We also biased participants by sharing our solution too early: although we asked what they were struggling with, we also showed them screenshots of our prototype to see whether they'd find this or that feature useful. This primed the rest of the discussion, and we sometimes finished calls without the exact information we needed.

As a result, we ended up having to do more interviews to clarify product decisions that came up later. Lesson learned: in Discovery, you need to stand your ground as a Researcher, not a Salesperson.

Out of all the calls we made, we found five early evangelists who had identified the same problem we were trying to solve. Most of them were actively looking for a solution and had even gone as far as building their own, rudimentary but highly specific to their needs. Right from the get-go, they were willing to buy how are you once we had it ready.

People, Not Processes

The idea behind these calls was to validate the general direction of the product, but it became clear very early on that those we spoke with didn't seem to care nearly as much about metrics as they did about their engineers.

The epiphany came when one of the participants said that "metrics can be gamed" 🤯 That, indeed, could be a pivot-maker. He also emphasized that he cared more about his developers' wellbeing and goals than about how much code they committed. After all, developers could be unhappy, unmotivated, and uninspired while still completing all their tasks.

This signaled to us that there was an unaddressed problem in the market: a communication gap between leaders and their engineers. Further research made it clear that this gap was part of a concerning trend often left out of bottom-line decision-making, one that leads to low productivity, burnout, and high turnover. All very scary things that could be squashed at the root with good, meaningful management.

So, there you have it. That's our first update.

In the next update, we'll discuss how the focus of how are you shifted to 1:1 meetings in software development.