Wednesday 15 April 2020

How to measure how much a #remote team is "gelling"

"No, you see, you have to monitor what people are doing. If you don't do that, people will just do a minimum of work," Alessja said.

I used to run this same team, and everything had felt faster. People talked to each other to figure out how to pass work along, and things just happened. True self-management. And now performance had fallen apart. The team had devolved into everyone working on their own tasks again, while I was trying to coordinate a number of teams simultaneously.

"I don't feel comfortable with that", I countered. By increasing control and trying to force people to go faster, you'd likely get the opposite effect. I had in mind my previous experience at the Lego event. Moreover, there are other ways to keep discipline in a new product environment.

"I tend to agree", the senior executive on the call diplomatically concurred.

Was I just being naive? I'd already managed this team in the past. At this point, I was running a program that included it, just shipping something new. I couldn't figure out what was wrong.


Throughout the day, people would reach out to me whenever they needed something. It felt increasingly like a hub-and-spoke system, not the well-optimized flat hierarchy I was hoping for. With me as the hub of everything happening. And the bottleneck-in-chief.

Instead, everyone was being held individually accountable for their contribution. By team leads. I had agreed to this in order to lower my own overwhelm when dealing with so many people. But now I was having second thoughts.

What happened?

Later that day, I started thinking about metrics again. The team felt like it was going a lot slower than it used to. Strictly speaking, velocity was low. Much of the current work was just bug fixing. We hadn't initially had enough infrastructure to ship new features with integration tests; even though that infrastructure now existed, there was a big gap between where manual testing happened and where automation stood.

On a lark, I checked the cycle time. And it turned out we were up to a median of approximately 9 days per item, across the whole program. Including the team who had previously achieved 36 hours of median cycle time. Something was off--systemically.
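Median cycle time is simple to compute from a board export: take the elapsed time from when each item was started to when it was finished, then take the median. A minimal sketch, with entirely made-up timestamps for illustration:

```python
from datetime import datetime
from statistics import median

# Hypothetical work items exported from a board: (started, finished).
items = [
    ("2020-04-01 09:00", "2020-04-02 21:00"),   # 1.5 days
    ("2020-04-01 10:00", "2020-04-10 10:00"),   # 9 days
    ("2020-04-03 08:00", "2020-04-15 08:00"),   # 12 days
]

FMT = "%Y-%m-%d %H:%M"

def cycle_time_days(start: str, end: str) -> float:
    """Elapsed days between two timestamps."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.total_seconds() / 86400

median_days = median(cycle_time_days(s, e) for s, e in items)
print(f"Median cycle time: {median_days:.1f} days")  # prints 9.0 for this sample
```

The median (rather than the mean) keeps one pathological outlier item from dominating the number.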

What was different?

I realized that the factor I had overlooked was the team dynamic. We had previously built that dynamic while introducing habits to reduce cycle time, in particular to minimize the handoff times between functions. Like making sure that code is reviewed quickly. Or not taking too many issues into the sprint and parking the majority in "awaiting" states.
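Handoff time is visible if you break an item's history into active states and "awaiting" states. A sketch under assumed state names and invented timestamps, showing how waiting between functions can dwarf the actual work:

```python
from datetime import datetime

# Hypothetical status-change log for one work item: (state, entered_at).
history = [
    ("in_progress",     "2020-04-01 09:00"),
    ("awaiting_review", "2020-04-01 17:00"),
    ("in_review",       "2020-04-06 10:00"),
    ("awaiting_qa",     "2020-04-06 12:00"),
    ("in_qa",           "2020-04-09 09:00"),
    ("done",            "2020-04-10 09:00"),
]

FMT = "%Y-%m-%d %H:%M"
active = waiting = 0.0

# Each state lasts from its own timestamp until the next state's timestamp.
for (state, start), (_, end) in zip(history, history[1:]):
    hours = (datetime.strptime(end, FMT)
             - datetime.strptime(start, FMT)).total_seconds() / 3600
    if state.startswith("awaiting"):
        waiting += hours
    else:
        active += hours

print(f"Active: {active:.0f}h, waiting in handoffs: {waiting:.0f}h")
```

For this made-up item the total is nine days, of which only 34 hours are active work; the rest sits in handoff queues, which is exactly what habits like fast code review attack.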

In effect, cycle time is correlated enough with how well a team interacts that it can serve as a measure of the team dynamic: the quality of the interactions. Because there is just one goal for everyone, regardless of specialty and seniority, it becomes easier to work as a team. And to hand off work among people effectively.

While in large organizations cycle time highlights how work flows between silos, it seems to work at the team level too.

Key takeaways

  • In addition to measuring the raw time to take a task from start to finish, cycle time helps measure how well a team is functioning, if there are no major organizational constraints.
  • Because it is accurate in near real time, cycle time can be used to experiment with and improve delivery team dynamics.
