Wednesday 18 December 2019

The Kindergarten, the Construction site, and the Assembly Line

Last night, I went to a local meetup where we played with Legos. It was an event organised by Krzysztof Niewinski: a simulation workshop of large-scale product development using alternative organizational structures. But there were lots of colored bricks involved. And the specs were pictures of the end products that needed to be built.

Without getting into too much detail, we covered three alternatives with the same group of twenty-something people: component teams, cross-functional teams of specialists, and finally "T-shaped" interdisciplinary teams where everyone could do everything. In short, we were experimenting with output using alternative ways of working. Each round took roughly 10 minutes.

Here's what happened

In the first round, we had specialized component teams, each dedicated to working with only two different lego colors, plus a supply team, an integration team, a quality team, and 8 different product managers who wandered from table to table. Sound familiar? Kind of like a massive construction site with lots of project managers. Or a large company developing and installing software. Most of the building teams sat around doing very little in practice. There were lots of bottlenecks and confusion around getting supplies and exact requirements. I had a chance to engage in chitchat with my table mates. And there was a stressed-out senior executive who walked around and yelled at anyone who wasn't doing anything.

In the second round, we still had individual performers who were specialists, but they worked together, which resulted in a lean assembly line. The time required to first output went down almost 50%. But there was less top-down control. And more legos on the table, relative to the previous round.

And finally--the last round--everyone pitched in and contributed however they could. There were still some constraints, in that people working outside of their expertise could only use their left hand. Despite that, it only took a minute to get the first outcome, so almost 9 times faster. But there were lots of extraneous legos on the table. It was lots of fun, and it was a very tactile learning experience for everyone who pitched in. Just like kindergarten.

What does this mean

This boils down to control, profitability, and speed. This is just as true for startups as it is for large companies. Most of the conflicts among co-founding teams boil down to differences in how founders value control and money, according to Harvard professor and researcher Noam Wasserman in The Founder's Dilemmas. In big companies, any larger product development program will implicitly or explicitly make a call on these three, based on how the work is organized. It depends on what you optimize for, as Krzysztof the facilitator pointed out.

The construction site was optimized for control, especially of costs. There were enough people to do the work, and enough legos could be procured if you were willing to wait. But the level of resource scarcity locked up the system, relatively speaking. And it took a long time to finish anything.

The assembly line required a slightly larger up-front investment, but the speed at which things happened increased dramatically, even though the constraints on each individual were exactly the same. As an expert in yellow and green bricks, I was still only allowed to touch those, even though the configuration was completely different.

The kindergarten required even less top-down control and more resources, as well as trust that the teams would get on with it. There was a higher use of resources (lego blocks laying on the table). At any given moment, you wouldn't know exactly what was going on, because everyone was contributing and collaborating. The teams were releasing stuff like crazy. So at that point, does it really matter that you need a bit more money up front? If they are releasing stuff so quickly, presumably this translates into revenue, which keeps the kindergarten afloat and then some.

Choosing the metaphor that works best for your company

The way you organize the work matters. And it feeds into culture. As Larman put it, "Culture follows structure". In a software context, this means you want to allow for chaos and experimentation, not just squeeze features out of development teams.

As a company scales from a successful startup to a larger company, the trick is to keep enough of that "kindergarten juice" in the culture and in how the work is organized, in order to allow your company to continue innovating. If the emphasis on control changes as a product matures, you can introduce more of that as needed. But do so consciously, and watch your output and outcomes like a hawk.

By micromanaging the process, even as an assembly line in a feature factory, you're still missing out on pretty big upside (assuming you care about having lots of new products released).

That said, even a kindergarten needs boundaries, so that the teams don't cut corners on quality, for example. That's kind of the point. There are a handful of non-negotiables around safety, health, and security in a kindergarten, and everything else is optimized for discovery.

So for a bunch of interested strangers on a random school night, who dug into a few alternative structures and held everything else constant, it was clear that there could be very large differences at play. 14x faster, not 14% faster. These would be results any agile or digital transformation program would love to achieve. That said, it wasn't clear if these differences came from structure only, or the culture around it. And if culture is involved, that could be what's preventing the massive change in the first place.

Key Takeaways

  • The way you organize work matters, and it feeds into the culture, particularly in a larger company.
  • By organizing work, you will be making choices about tradeoffs among variables that matter.
  • Control, in particular, seems to be inversely related with learning and speed.

Wednesday 27 November 2019

Why over-focussing on velocity causes the opposite effect

Following up on the slightly longer analysis of overfocussing on output and velocity, I think there are a few things that are overlooked with a pure velocity based model. Most of them have been known for decades in the software industry. They are squishy.

  1. It's essentially a Taylorist factory where most of the interest is in efficiency, not in outcomes. By Taylorist, I mean Frederick Winslow Taylor. In fact, Kanban originally came from manufacturing. Cost accounting imposes a Taylorist model on something more nuanced than what you see in a factory. (Please comment and say why if you disagree.) By using velocity as a yardstick, you pervert velocity's purpose and dilute its usefulness.
  2. As per Peopleware by Tom DeMarco and Timothy Lister (1987), most new technology development problems are actually people problems, either on the product development team, or with respect to the customers.
  3. Outputs are assumed to be linear. This is patently not true for knowledge work. Even in 1975, at the time of The Mythical Man-Month, it was already acknowledged that tactically adding people to a late project is a major blunder in the context of creative work.
  4. More recently, I've been fascinated by psychological safety in the team, as articulated by Amy Edmondson, as an underlying factor influencing actual performance.

At its core, companies care about being able to release quickly. Velocity and story points are just one way to get at what's happening and why it's taking so long. But it's essentially an internal process. At some level, it's just bureaucracy created to manage product creation--on its own, usually not valuable to customers. So if the teams provide value and can show they are doing so, then velocity doesn't matter.

Wednesday 16 October 2019

How to choose a useful measure of incremental progress for your team

Recently I had an interesting call with a senior QA leader. He reached out to me because he wanted to get a better sense of how his people were doing, both as a functional unit and individually. Primarily, I suspect he wanted to be proactive, and have some kind of numerical early warning system in place, which he could cross-reference with common sense and the qualitative input he got elsewhere.

As we spoke, he initially kept using the term "velocity"; however, it became clear that he meant velocity in a much looser sense than the typical iterative scrum/agile one. It doesn't really work for what he wanted to achieve.

Here's what I mean:

Core metrics to baseline progress iteratively

What is velocity anyway?

Velocity itself is first and foremost a team output metric, not an individual one. It is a measure of story points completed over a unit of elapsed time.

It gives visibility on whether the product development team is functioning effectively--as a system for generating new features. In this context, new features are what the customer is expected to value most, so we track only that. It is not an efficiency measure, and shouldn't be confused with one. Traditionally this approach came from a software development environment, but it can be applied anywhere there is significant complexity and thought required. Knowledge work.

These story points are the primary "raw material" to generate estimates relative to a goal or target date. Once you have a sense of:

  • who you're building it for and why
  • what you want to build, i.e. the actual stories defined
  • and you have estimated the stories using story points

then the dance around the iron triangle begins.

When the product or project work starts, you keep track of how many story points are completed over time. You use this to improve future planning. Usually this works in "sprints", which are predetermined lengths of time, as a way to plan and track progress. For example, in the popular flavor of agile called scrum, these will typically last 1-4 weeks.

Realized velocity

Let's use 2 weeks as an example. The newly formed team has started working on a new product or project. The backlog of items is defined and estimated for the absolute "must have" features.

At this point, if you're being completely transparent, you don't know how fast the team will actually go. You can also negotiate what exactly is "must have" to help reduce the time required (less work, done faster). And ideally you'll also all agree on a quality standard that everyone is ok with--which will also have schedule implications (higher bar takes more time per feature on average). So your initial realized velocity/sprint is 0, and you have a guess as to what the expected velocity will be.

You agree (with the team) which stories will be accomplished in the first sprint. And after 2 weeks, you sit down with the team, and compare what actually happened with what you'd hoped would happen. At this early stage, there are likely to be a lot of learning outcomes in general, as it's a new effort. But among other things, you can add up the story points completed by the team. This is your first realized velocity.

Expected velocity

After 3 sprints, you should start to see some kind of trend emerge in terms of an average velocity. Sometimes it's worth giving the team the benefit of the doubt, as they might pick up the pace once they get their collective heads around what needs to be done.

Usually this number will be significantly different from your expected velocity for the dates you'd like to hit. Calculate the total story points needed for the "must have" initial release, and divide it by the realized velocity so far. To simplify the thought process, assume the velocity will stay fixed.

This gives you a sense of how many sprints of work will be needed to hit that final date. Usually, there will be a gap between what's happening vs. what's expected. It's best to know this as early as possible. In fact, this transparency is one of agile's strengths. It's difficult to sugarcoat reality, if you see what is being delivered. Moreover, you also see how many initially estimated story points of cognitive effort were realized.
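As a sketch of this arithmetic (all the numbers here are hypothetical):

```python
import math

# Hypothetical inputs: realized velocity from the first three sprints,
# and the "must have" story points still open on the backlog.
completed_per_sprint = [18, 22, 20]
remaining_points = 180

# Average realized velocity so far, assumed to stay fixed going forward.
avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Round up: a partial sprint still costs you a whole sprint on the calendar.
sprints_needed = math.ceil(remaining_points / avg_velocity)

print(avg_velocity)    # 20.0
print(sprints_needed)  # 9
```

With a 2-week sprint, that's roughly 18 weeks of work left, which you can compare against the calendar date you're aiming for.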

Warning: This type of analysis can cause some healthy consternation and discussion. This is intended. Using this performance data, you can re-prioritize, change resourcing levels, change scope, or whatever else you think might help the team at that stage.

Expected velocity is the ideal pace you'd like to keep, in order to hit your business goals. Often, in more traditional environments, this will be expressed in terms of a target release date. But it can also be in other forms, depending on what's actually important to the business as a whole.

The core difference between realized and expected velocities is their time orientation. The former measures the velocity trend in the recent past. The latter is more of a business requirement, translated into a number. Expected velocity is a practical way to "have a relationship with your target date". This is a metric which translates longer term expectations into an early warning system checked regularly. When compared to your realized velocity, you'll know whether or not your teams are going too slow to hit your dates.

Cycle time

Cycle time comes from a lean background. It's a measure of how long it takes to build one unit of output. In practical terms, it's a measurement of the elapsed time from the start to the end of your production process.

= time(end of process) - time(start of process)

It includes both the actual time spent working by the team, but also all of the wait time in between steps of the process.

Unlike story points, the unit of measurement is time. This is probably cycle time's greatest strength. Time can be subjected to arithmetic and statistics like mean and standard deviation, and even compared across various aggregations (e.g. among QA team members). It's also less subjective, as there is no estimation required up front. It's just measured continuously. It gives you a sense of what's been happening. And how healthy your process is.
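A minimal sketch of the calculation itself, using hypothetical timestamps for one work item:

```python
from datetime import datetime

# Cycle time is simply end minus start. Note that this includes all the
# wait time between process steps, not just hands-on work.
started = datetime(2019, 10, 1, 9, 0)
finished = datetime(2019, 10, 3, 17, 0)

cycle_time = finished - started
print(cycle_time)                         # 2 days, 8:00:00
print(cycle_time.total_seconds() / 3600)  # 56.0 (elapsed hours)
```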

Now for the downsides. Cycle time implicitly assumes:

  • that the units of output are pretty standard, uniform, and therefore of similar size
  • when aggregated, that there is no difference between types of work--even though, for example, building new features and fixing bugs in already built features don't take the same amount of time.
  • that there is no goal. It only measures efficiency, not effectiveness

Cycle time works well, as a metric, in software for two scenarios:

  • When stories aren't estimated but just all broken down to be a maximum expected length of 2 days per story for example.
  • When working on maintenance development, where general process monitoring is needed so that extremes can be investigated but where time pressures tend to be issue & person specific and not team-wide

Takt Time

Takt time operates within a similar framework to that of cycle time. However, instead of measuring what has been happening, it's used to quantify expectations so that they can be continuously monitored.

In a nutshell, takt time measures the slowest expected rate at which you need to complete production processes in order to meet customer demand. It's calculated as

=net production time / total output needed by customer

There are a few numerical examples over here, if you want to take a peek.

Anyhoo, there are a number of really helpful attributes of takt time. It expresses expectations numerically, in terms of how much time should be spent on each item in order to hit a target output. For example, if takt time is 10 minutes, every 10 minutes you should be pushing out another unit. If you are faster, great! If not, you need to troubleshoot and improve your production process, resources, or context.
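That 10-minute example falls straight out of the formula. A sketch, with made-up numbers:

```python
# Takt time: net production time divided by the output the customer needs.
net_production_minutes = 7 * 60   # one hypothetical 7-hour working day
units_needed = 42                 # customer demand for that day

takt_minutes = net_production_minutes / units_needed
print(takt_minutes)  # 10.0 -> a unit should ship every 10 minutes
```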

The "total output needed by customer" can be measured in just units, e.g. number of stories. This way you don't need estimation and estimation won't introduce subjective bias.

Like expected velocity, it gives the team a number to help establish an operational relationship with a longer term goal or target (that has business meaning). In the moment.

Isn't this all a bit abstract and self-referential?

Yes. It is.

The primary measure of progress in an agile framework is "working software". Or to be more general, demonstrably completed work. It's demoed for everyone to see and comment, and should be done in a generic way so that anyone can participate (i.e. not only people with PhDs in Computer Science). Anyone should be able to see the new features working.

That said, not everything is software. And not all software has a user interface. So it's a bit harder to apply this, particularly in the early days of a new product.

In that case, you can use these metrics to monitor effectiveness and efficiency. You can hold both yourself and the team accountable. You have a numerical framework to deliberate with stakeholders, one that can be checked at any given moment, where you don't need to "check with the team" every time someone wants an update. And like the senior QA manager above, you can use this as a proactive early warning system. If one of a number of efforts is going off the rails, and you oversee a number of them, you'd naturally want some way of knowing that something is off.

So that's the menu. Which one to choose?

It depends where you are in your efforts, how much time you want to spend on estimation itself, and how much you need to make comparisons.

Where you are in your efforts:

Early on in a project, you have a lot of unknowns. They tend to be interdependent. For example, in order to give a date estimate, you need to agree on what you're building, and how you're building it. That might depend on the market segmentation or main business goals you want to achieve, which also might need to be negotiated. And if you tweak any one of these, all the rest are also affected.

At this point, if you add technical estimation with story points for granular tasks into the mix, you expose even more uncertainty to the whole thing. You might be better off delaying story point estimation, and just using cycle time until you have a clearer picture. This way, you maximize the team's time on delivering actual work, rather than on estimation under conditions of high uncertainty, and both business and technical complexity.

Once you get to a stable team and vision and roughly stable scope, it might be worth doing some estimation and prioritization of the bigger epics. Follow this with the breakdown (into stories) and estimation of the highest priority epic or two. If your initial scope is very large, you'll spend a lot of time estimating something you don't really understand very well yet (yet another reason to be deliberate and precise with your initial release).

How much time you want to spend on estimation & monitoring:

This is a more general question about the ratio of time spent doing vs. monitoring the work. Estimation is a tool to help you monitor and measure the work. Ideally, it's good to do some estimation, so that you can slot in work tactically. In particular, it's most useful when considering the business value generated and comparing it to the amount of work required to complete it.

But estimating out a year's worth of work, especially if there are no releases to customers during that entire period--that's a notch short of madness. Ideally your releases should be tight, and you should be getting feedback both from individual customers and from the market as a whole.

How much you need to make comparisons:

Like in the example opening this blog post, if you want to measure and compare individual or team efficiency, then cycle time is easily comparable. This is because the "denominator" is the same in all cases: elapsed time.

  • You can compare cycle time across various team members, ideally if they are doing similar work, for example QA.
  • Also you'd be able to compute averages to compare between teams, i.e. QA across different teams.
  • Standard deviation in cycle time can also be useful to figure out what is truly exceptional, so that you diagnose and troubleshoot (if bad) or repeat (if good)
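These comparisons can be sketched with the standard library; the cycle times below (in hours) are entirely hypothetical:

```python
import statistics

# Hypothetical cycle times for two QA engineers doing similar work.
alice = [4, 5, 6, 5]
bob = [3, 9, 2, 14]

# Similar averages can hide very different spreads: a high standard
# deviation flags items worth diagnosing (if slow) or repeating (if fast).
for name, times in [("alice", alice), ("bob", bob)]:
    print(name, statistics.mean(times), round(statistics.stdev(times), 1))
```

Here both engineers average a similar number of hours per item, but the second set of cycle times varies far more, which is exactly the kind of signal to investigate.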

Next steps

That should hopefully give you enough to get started. The next step is choosing which is most relevant for you, and figuring out how to gather the raw data from internal company systems. Ideally, this is done automatically & behind the scenes using software, so that your teams don't need to enter data manually, esp. time spent.

Key Takeaways

  • Velocity is a team based output metric that tracks story points completed over time.
  • Estimation can improve accountability and prioritization, but it costs time and is subject to bias.
  • Keep customer facing releases small, as this will improve your estimate accuracy and reduce variability.

Wednesday 9 October 2019

Why estimating cognitive effort simplifies knowledge work

"There were only an estimated two to five thousand humans alive in Africa sixty thousand years ago. We were literally a species on the brink of extinction! And some scientists believe (from studies of carbon-dated cave art, archaeological sites, and human skeletons) that the group that crossed the Red Sea to begin the great migration was a mere one hundred fifty. Only the most innovative survived, carrying with them problem-solving traits that would eventually give rise to our incredible imagination and creativity." --Mark Sisson

Imagination fed into our uniquely human ability to cooperate flexibly in large numbers. So fast forward to today. Our most valuable and exciting work, particularly in the context of innovation, still relies on our ability to imagine what needs to be done, start, and continuously course correct.

In this case, we're using imagination to structure and agree how work needs to happen, and to map that to a subjective estimate of effort.

First, we imagine what needs to be built, why it needs to be built, and how it needs to work. Then, we subdivide the big overall vision into lots of little pieces, and divvy it up among a group of people who go execute on the vision. Before they do that, though, these people imagine, analyze, and discuss doing the work involved on each specific piece of the overall vision. They all need to agree how much effort it will take to complete that task. If there are differences of opinion, they should be ironed out up front.

If done successfully, this generates full buy-in and alignment from everyone involved. Even if the end product isn't a physical thing, this approach works. The benefits of trusting people and harnessing all their energy and imagination far outweigh the inherent risks. It's already done by tens of thousands of teams around the world in various digital industries, including software.

Relative Cognitive Effort is what we're imagining.

The key number that is used for tracking this is a measure of how much "cognitive effort" was completed over a predetermined unit of time. Agile and scrum use the concept of a story instead of tasks, in order to help describe complex needs in a narrative form if needed. Usually this includes elements such as: user problem, what's missing, acceptance criteria for the solution required. Therefore, the unit of measure for the cognitive effort expected to complete a story is called a story point.

Imagining size

Each story is sized in terms of story points. Story points themselves are quite abstract. They relate to the relative complexity of each item. If this task is more complex than that one, then it should have more story points. Story points primarily refer to how difficult the team expects a specific task to be.

For example, it's more precise to track the number of story points completed in the last 2 weeks than just the raw number of stories completed, as stories can be of different sizes.
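A tiny illustration of why, with hypothetical sprint data:

```python
# Two hypothetical sprints. Counting stories suggests the first sprint
# delivered more; summing points suggests the opposite.
sprint_a = [1, 1, 2]   # three small stories
sprint_b = [8]         # one large story

print(len(sprint_a), sum(sprint_a))  # 3 4
print(len(sprint_b), sum(sprint_b))  # 1 8
```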

Now it's time for a few disclaimers...

1. Story points are not measures of developer time required.

Cognitive complexity isn't necessarily the same thing as how time consuming it will be to achieve. For example, a complex story may require a lot of thought to get right, but once you figure out how to do it, it can be a few minor changes in the codebase. Or it could be a relatively simple change that needs to be done many times over, which in and of itself increases potential complexity and risk of side effects.


The main purpose of story points is to help communicate--up front--how much effort a given task will require. To have meaning for the team, it should be generated by the team who will actually be doing the work. These estimates can then be used by non-technical decisionmakers to prioritize, order, and plan work accordingly. They can then take into account the amount of effort and trade it off with expected business value for each particular story.

2. Story points related to time are a lagging indicator for team performance.

The key, though, is that story points shouldn't be derived as 1 story point = half day, so this item will be 3 story points because I expect it will take 1.5 days. This type of analysis can only be done after the fact, and related to entire timeboxes like a 2 week period. Instead, the team should be comparing the story they are estimating to other stories already estimated on the backlog:

  • Do you think it will be bigger than story X123? Or smaller?
  • What about X124?

The team needs to get together regularly and estimate the relative size of each story, compared to every other story.

This generates a lot of discussion. It takes time. And therefore estimation itself has a very real cost. Some technical people view it as a distraction from "doing the work". Rightly so.

3. Story Points assume you fix all bugs & address problems as you discover them.

Only new functionality has a story point value associated with it. This means that you are incentivized to create new functionality. While discovering and fixing problems takes up time, it doesn't contribute to the final feature set upon release. Or the value a user will get from the product.

Anything that is a bug or a problem with existing code needs to be logged and addressed as soon as possible, ideally before any new functionality is started, to be certain that anything "done" (where the story points have been credited) is actually done. If you don't do this, then you will have a lot of story points completed, but you won't be able to release the product because of the amount of bugs you know about. What's worse, bugfixing can drag on and on for months, if you delay this until the end. It's highly unpredictable how long it will take a team to fix all bugs, as each bug can take a few minutes or a few weeks. If you fix bugs immediately, you have a much higher chance of fixing them quickly, as the work is still fresh in the team's collective memory.

Fixing bugs as soon as they're discovered is a pretty high bar in terms of team discipline. And a lot will depend on the organizational context where the work happens. Is it really ok to invest 40% more time to deliver stories with all unit testing embedded, and deliver fewer features that we're more confident in? Or is the release date more important?

4. One team's trash is another team's treasure.

Finally, it's worth noting that story points themselves will always be team-specific. In other words, a "3" in one team won't necessarily be equal to a "3" in another team. Each team has its own relative strengths, levels of experience with different technologies, and levels of understanding of how to approach a particular technical problem.

Moreover, there are lots of factors which can affect both estimates and comparability. It wouldn't make sense to compare the story point estimates of a team working on an established legacy code base with a team who is building an initial prototype for a totally new product. Those have very different technical ramifications and "cognitive loads".

Conversely, you can compare story points over time within one team, as it was the same team who provided the estimates. So you can reason about how long it took to deliver a 3 story point story now vs. six months ago--by the same team only.

Wait, can't Story Point estimation be gamed?

As a system, story points gamify completing the work. Keen observers sarcastically claim they will just do a task to help the team "score a few points".

But then again, that's the idea behind the approach of measuring story points. To draw everyone's attention to what matters the most: fully specifying, developing, and testing new features as an interdependent delivery team.

Moreover, all of this discussion focuses on capacity and allocation. The key measure of progress (in an agile context) is working software. Or new product features in a non-software context. Not story points completed. If you start to make goals using story points, for example for velocity, you introduce trade-offs usually around quality:

  • Should we make it faster or should we make it better?
  • Do we have enough time for Refactoring?
  • Why not accumulate some Technical Debt to increase our Velocity?

Story points completed are only a proxy for completed features. They come in handy in scenarios where you don't have a clear user interface to see features in action. For example, on an infrastructure project with a lot of back-end services, you might not be able to demo much until you have the core in place.

Example: Adding technical scope to an already tight schedule

On a client project, I had a really good architect propose and start a major restructuring of the code base. It was kicked off by his frustration with trying to introduce a small change. A fellow developer tried to add something that should have taken an hour or two, but it took a few days. The architect decided the structure of the project was at fault.

Yet this refactoring started to stretch into a few weeks. The majority of the team were blocked from doing anything important. He was working on an important part of the final deliverable. While the work he was doing was necessary, it would have been good to figure out any elapsed time impact on the overall deliverable, so that it could be coordinated with everyone interested.

As the sprint ended, I proposed we define the work explicitly on the backlog, and estimate it as a team. This way, the architectural work would be a bit more "on the radar". There were around nine tasks left. The team said the remaining work was comparable across all of them, and collectively decided it was about a 5 story point size per item. So we had added roughly 45 story points of effort.

Knowing that the team was averaging around 20 story points per elapsed week, it became clear we had suddenly added 2 weeks worth of work--without explicitly acknowledging what this might do to the final delivery date. While the architect was quite productive, and claimed he could do it faster, there was still an opportunity cost. He wasn't working on something else that was important.
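The back-of-the-envelope math from this story, as a sketch:

```python
# Numbers from the anecdote above: nine remaining tasks, estimated by
# the team at roughly 5 story points each, against an average of
# 20 story points delivered per elapsed week.
added_points = 9 * 5
weekly_velocity = 20

weeks_added = added_points / weekly_velocity
print(added_points)  # 45
print(weeks_added)   # 2.25 -> roughly two extra weeks of work
```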

In this case, story points helped come up with a realistic impact to schedule that senior stakeholders and sponsors needed to know about. The impact on the initial launch date was material. So the estimation with story points helped provide an "elapsed time" estimate of an otherwise invisible change.

While not perfect, Story Points are primarily a tool for capacity planning, not whip cracking.

So to step back, you can see that story points are a useful abstraction which gets at the core of what everyone cares about: new product features. While subjective, for the same task--as long as it's well defined--most of the members of a team usually come up with pretty close estimates. It's kind of surprising at first, but eventually you get used to it. And you look forward to differences, because that means there may be something that needs to be discussed or agreed first. That is the primary purpose of story points. As a side effect, it can help get to grips with a much larger backlog. And plan roughly how many teams need to be involved.

However, this approach only works within relatively strict parameters and disclaimers if you want the numbers to mean anything. It is at a level of resolution that proxies progress, but makes external micromanagement difficult. If you want the team to self-manage and self-organize, this is a feature not a bug of the whole approach. Ultimately the core goal is correctly functioning new features. Best not to lose sight of that.

Wednesday 2 October 2019

How to simplify a complicated process, so that even a 2.5 year old can understand it

A few years ago, we had a significant challenge with our 2 year old daughter. Morning and evening routines were an uphill battle every day. Getting out the door to her childminder quickly enough to make my first meeting in the morning was often a drawn out battle of wills.

While it was clear she wanted to collaborate and please us as parents, she didn't understand what we expected of her. Moreover, her brain development still seemed to be behind; the neocortex doesn't really kick into overdrive growth until later. She was also awash in hormones, which is completely normal for this age, and which caused the temper tantrums typical of a two year old. They're called the "terrible twos" for a reason. We were also frustrated as parents, and we didn't know how to help her. Fundamentally, this was an issue of her feeling overwhelmed, and unable to sort out what's important from what isn't.

In a professional context, visualization works really well to help stop overwhelm. Whether it's mapping out a business process, planning a large scale software system, or figuring out a business model, it always helps to have everyone involved "brain dump" onto post-its, and then organize them. This usually unleashes a lot of latent creativity. Plus it helps front-load difficult discussions. You find out really quickly what the major challenges are with a new initiative.

Example of eventstorm output

How it all started

One Saturday afternoon, as I was watching her learn to draw at the coffee table, I had the idea to map out her morning and evening routines as a process. This would be analogous to a lightweight lean value stream map or a business process Eventstorm. First and foremost, I wanted to do it with her, not to her. As she was already drawing and playing around, I felt a little more comfortable drawing my chicken-scratch cartoons. Drawing was never a personal strength of mine.

So I pitched it to her as a fun project we could do together. I pulled out some bigger post-its and a sharpie, and sat down at the coffee table with her.

First, I suggested that we brainstorm all of the things which she does in the morning. As she was coming up with specific actions, like eating breakfast, I would sketch out in cartoon format some kind of symbol of that particular activity.

As she clearly wasn't able to read yet, images reduced the cognitive load for her. And she was excited to see me draw things she understood on the fly. It isn't that common a sight, to be honest. For each suggestion, we drew the activity on a wide green post-it and put it on the coffee table.

Once we had a handful of these, I suggested a few others which she might have missed. I also suggested a few which were incorrect, just to make sure she was paying attention.

After this, we moved to a "converging" phase. I suggested that she take the post-its and put them in order on the wall. We had to do it together in practice, but the key was that I gave her the final say on the actual order. I would hold the two relevant post-its and ask questions like: do you "eat breakfast" before you "descend the stairs"? Doing this multiple times, we came up with a chronologically ordered list of post-its that reflected her morning routine.

Morning and evening routine prototype
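As an aside, what we were doing in that converging phase is essentially an insertion sort, with a toddler as the comparison function. A playful sketch of the same idea, where the activity names and "true" ordering are illustrative:

```python
# Ordering post-its by repeated pairwise questions is an insertion
# sort where the "comparator" is a human answering the question
# "does A happen before B?".

def order_routine(post_its, happens_before):
    """Insertion sort using a pairwise 'happens before' question."""
    ordered = []
    for item in post_its:
        pos = len(ordered)
        # Walk left while the new item happens before its neighbour.
        while pos > 0 and happens_before(item, ordered[pos - 1]):
            pos -= 1
        ordered.insert(pos, item)
    return ordered

MORNING = ["eat breakfast", "wake up", "get dressed", "descend the stairs"]
TRUE_ORDER = {"wake up": 0, "get dressed": 1,
              "descend the stairs": 2, "eat breakfast": 3}

# Here a dict stands in for the toddler's answers.
result = order_routine(MORNING, lambda a, b: TRUE_ORDER[a] < TRUE_ORDER[b])
print(result)
```

Each `happens_before` call corresponds to one "do you X before Y?" question at the coffee table.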

At that moment, she seemed to step back and view the whole process. And she was absolutely beaming, proud of both of us for doing it together. But also happy that she finally understood what her parents were on about every morning. I think this was all because she felt less overwhelmed.

So she felt confident that she would now be able to achieve what was expected of her. Because she understood what was expected of her for the first time in her life!

Wrap up and implementation

We then did the same thing with her evening routine on dark blue post-its. And ordered it the other way, finishing with her in bed and falling asleep.

When thinking about it, I realized that some of the activities were performed on the ground floor of our house, and some on the first floor, where her bedroom and the bathroom were. So I unwrapped a brown paper roll, ripped off two pieces about a meter long, and sat down with my daughter. We put all of the ground floor post-its on one piece, and all of the first floor post-its on the other.

upstairs process mapped out, with modifications/corrections from my daughter

Finally, we hung up the ground floor post its in our dining room, and the first floor post its in her bedroom. So in the end, she had a detailed map of her daily routines, organized chronologically and physically near the place where she would actually do them.

What happened in practice

My wife and I were shocked at how effective this was. The daily tantrums nearly disappeared overnight. If there was push-back from her, it lasted 15 seconds, not the 15-45 minutes it frequently did in the past.

The fastest way to help her calm down, when she looked like she was about to blow up, was to walk her over to the post-its and ask her where we were at that moment in the process. She would point to the relevant one. The emotions would calm down, as answering required some cognitive effort from her. And we could continue on with the rest of the routine that morning or evening.

About a year later, as I was putting her to bed, she said

"Daddy, that picture there is wrong" pointing at the one where she brushes her teeth.

"Oh really, what do you mean?" I asked.

"By the time I am brushing my teeth, I'm already wearing my PJs, not a dress".

From a dress to pajamas

She was absolutely right. The next weekend, I drew out a version of the same post-it with her avatar dressed in pajamas.

Her brain development had caught up enough to understand what this map meant. She had full ownership of the process, because she'd been involved from the beginning. And most importantly, she could call out specific ideas for improvement.

Lessons learned

This experience made it clear to me how powerful the principle of visualization actually is.

  • It can help make sense of initially overwhelming complexity, by putting everything "out there" on the wall rather than in your own head.
  • It helps participants feel empowered and in control of what is happening, thus improving motivation once decisions are made.
  • It helps everyone involved to view a situation more objectively, both the big picture and the smaller details (what should I draw on your plate when you eat breakfast?)
  • In the case of my daughter, the increased clarity and reduced overwhelm also helped with emotional regulation. While (hopefully) not as necessary in a professional environment, it's good to know that this is a welcome side effect.
  • It doesn't even require the ability to read or write.

Visualizing waste and complexity is a very powerful way to help get a grip on it. Clearly, the visual component speaks to us at a primordial level. Cavemen drew images. Medieval religious communication was all based on paintings and images.

In software terms, this would be like working at the kernel of the operating system. So you really get through to the root causes of problems and address them, rather than just yelling louder and pressuring people--regardless of age--who don't act according to your wishes.

Wednesday 25 September 2019

Why building for the entire market bloats timeframes, and what to do instead

In High Output Management, Andy Grove, Intel's longtime CEO, suggests that Intel operated in an environment where they needed to manufacture units to a market forecast. From the beginning in the 1960s, they didn't have the luxury of selling up front and building exactly what was sold. Nowadays, this is pretty much the de facto environment for product development. Even in the case of software, where there is no manufacturing or reproduction cost, timing the scope to match demand is a core component of a software company's success.

In order to plan the release of a product, you need a clear scope of what product development needs to happen first. This includes breaking down tasks, estimating them, and then mapping the specific features to expected customer value. This way you come up with a set of features that need to be created, in order to release the product (or release it again).

How this typically plays out in practice

In practice once this is agreed, new ideas come up. New stakeholders propose specific changes or additions to that original scope. You might even want to try reaching more prospects in a related segment.

1. Add a bit more scope: The natural impulse--in this environment--is for product development teams to simply add scope to the list of stories or tasks which were already agreed.

Vicious feedback loop for scope

2. Push the release back a bit: As soon as they add another feature, this effectively means more work needs to be done. So the effective date when everything will be done is pushed back. Usually this means the estimated completion date needs to be adjusted, even if this isn't explicitly acknowledged by the team.

3. Delay market feedback a bit: If the release date is moved back, the team delays getting market feedback. Depending on the size of the feature, it might be a few days in a software context. Or a few weeks.

4. Reduce probability of sales: If the feedback is delayed, you reduce the team's confidence that the product will sell. The less feedback you have, the lower the probability of hitting your sales forecast when you ultimately do release. So your probability-weighted revenue goes down the later you release. In a sales context, "money follows speed" is common knowledge. Being able to close a deal quickly is really important, because if you don't, it's likely the customer will change their mind. And finally, if you are less certain about your sales forecast, this typically influences your overall confidence level in the product. And one of the most natural things to do in this case is to...add a bit more scope.

Deja vu. The cycle repeats.

Paradoxically, the more scope you add, the lower the chances of market success of the release. Because you're innovating in a vacuum/ivory tower/echo chamber/<insert favorite metaphor here>. Even though you think you are improving your product's chances. Having too many features is overwhelming for prospects and difficult to communicate for you. The paradox of choice kicks in.

Also, if you delay the release date, you are likely to struggle to make sales until that date. There are "forward contracts" or "letters of intent", however customers will only go for something like this if they are highly motivated to get your solution. It also adds a layer of complexity and obviously a delay, thus making it harder to sell the product.

Implementation notes for startups

In the startup case, you need to get enough scope to attract early adopters in Moore's technology adoption curve. This will typically be one or a handful of related features which address a core need of theirs and which they can't address anywhere else.

Source: Harvard Innovation Lab

The idea is that you need enough scope to go after a narrow group of people, just to get started and out the door. Trying to build enough scope for the entire market will mean you'll never actually move ahead with your business. Because you need to launch it first. The sooner you get it out there, the better for you.

Once you have nailed that first segment, you expand to adjacent segments. You modify the product to appeal to the adjacent segments. And then go sell to them. Do this incrementally, so that you have the benefit of revenues from the first customers. And confidence from initial market success.

Implementation notes for established companies

The above is true for larger companies too; however, they also have internal operational challenges to overcome. In particular, each of the 4 parts of the circle is often represented by a different department head. Each has his or her own agenda to push.

And while each incremental change might seem to make sense in the short term, the overall effect is the delay of a release. And no one is individually responsible for it.

source: Randy Olson | the operational usefulness of pie charts

Moreover, in a large company environment, go-to-market decisions are often made based on overall market size. While this is useful to think through e.g. positioning of a new product, thinking in terms of top-down market pie charts doesn't translate into a plan to enter the market. Different slices of the pie will want different features, with some overlap across the market.

It's ok to plan to enter the entire market eventually, but it's smarter to prove the approach works on a small corner of the market--before you expand outwards.

Acknowledge it's uncomfortable, and just do it anyway

I get it. It feels unnatural to be selling something that isn't perfect. It's easy to succumb to a fear of rejection, and just put off the prospect of releasing the product for as long as you can. I've done it in the past on my own dime. I hear the mechanism at play when mentoring founders. I see the dynamics play out in larger companies every day. Every product team with an ounce of humanity is susceptible to this.

Focus on a small subset of customers with a similar set of needs, and only build the scope that they need. Keep it laser focussed. And get it out there, even if it's not perfect. Especially in a business context, if your product addresses the core need they have, they will be happy.

Ultimately, fear of rejection is just a bias that prevents you from learning what you need to know to make your new product a success. If necessary, speak to customers about their problems only--and don't show them your solution right away. Most people are happy to gripe to a friendly ear, especially if it's about something they care about. Don't make promises you don't expect to keep. But start speaking with flesh and blood customers right from the beginning.

Wednesday 18 September 2019

How to resource projects and products--optimizing for elapsed time, motivated teams, and budget

In my last post, I explored the implication of a shift in importance and value of resources. Given increasingly shorter time frames for product life cycles, I think time is an increasingly undervalued resource. Zooming in to a sub-micro level, I think we're also looking at a paradigm shift with resource allocation within high technology companies too.

Regardless of technology background, all stakeholders usually negotiate around schedule. Time is the least common denominator, from an accountability perspective.

In a traditional project approach, the team would figure out and agree the scope up front. These requirements would be fixed, once they are translated into cost and time estimates. Dates would also be agreed up front. In this case, there is a lot of analysis and scrambling up front, to try to learn and decide everything before knowing it all. In practice, this front-loaded exploration takes time. Regardless of whether the product delivery team is actually working on the product, this elapsed time on the "fuzzy" front end is added to the final delivery date. It takes a lot of time to define and estimate all of the work needed to deliver the scope. And in practice, this backlog will only help us figure out when the project or product is "done", which in and of itself, has no meaning to clients or salespeople. It is easy to overlook this full-time cost of trying to fix and define all work up front, particularly since the people doing this work can usually get away with not "counting" this time as part of delivery.

standard approach: agile in a waterfall wrapper

And since scope is fixed, and something actually needs to act as a pressure release valve, typically one of the bottom three triangles on the left suffers: time, quality, or cost. Then, spending months tracking project progress with limited client interaction (because it's not "done" yet) is yet another waste of elapsed time.

There is a way to significantly reduce this waste, by bringing in the client early and maximizing learning in a highly disciplined structure. In an Agile approach, the exact opposite approach is taken. We don't try to fix scope up front; we fix the rules of engagement up front to allow both business and technical team members to prioritize scope as they go.

Instead, we strictly define business and technical criteria for a project up front, without fully agreeing what the scope is. So, we agree that we will spend up to $185k, quality is ensured with automated testing, and we have 3 months to deliver something sellable. We may only deliver 1 feature, but if it's a valuable feature then clients will pay for it. If all of these are unambiguous, then the product team itself can prioritize scope operationally based on what it learns from clients. For all types of products, ultimately the clients and the market will decide whether to buy whatever is being built.

start work sooner, ship more, and incorporate client feedback sooner
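One way to picture "fix the constraints, let scope float" is that the team repeatedly picks the most valuable feature that still fits in the remaining budget. A toy sketch, where the feature names, values, and costs are invented for illustration (only the $185k budget comes from the example above):

```python
# Fixed budget, variable scope: greedily pick the feature with the
# best value-to-cost ratio that still fits, stop when budget is spent.
# Feature names and numbers are illustrative assumptions.

BUDGET = 185_000  # the fixed constraint agreed up front

features = [  # (name, estimated value to clients, estimated cost)
    ("export to CSV", 50_000, 40_000),
    ("SSO login",     80_000, 90_000),
    ("audit log",     30_000, 60_000),
    ("mobile layout", 60_000, 70_000),
]

chosen, spent = [], 0
# Sort by value per dollar of cost, highest first.
for name, value, cost in sorted(features, key=lambda f: f[1] / f[2],
                                reverse=True):
    if spent + cost <= BUDGET:
        chosen.append(name)
        spent += cost

print(chosen, spent)
```

The greedy ratio rule is just one possible prioritization policy; the point is that the budget is the hard constraint, and scope is whatever fits.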

What's fundamentally different here? Scope is defined by a series of operational or tactical decisions made by the product team, not strategic ones defined externally to them. Senior business stakeholders shouldn't need to follow and know the technical details of what's in a product and what part of the project is "done". That gets down into too much detail and communicates a lack of trust in the judgement of a highly paid team of technical experts they meticulously recruited and trained. It also undermines the team's sense of outcome ownership, because everything about their work is defined exogenously and just dropped on them.

What is the total cost of having a waterfall wrapper around agile teams?

Clearly efforts need to be coordinated across an organisation. Trying to use detailed waterfall-style up front planning will cost you elapsed time and may cost you the market opportunity you've identified. It's better to have shared access to backlogs and agile's drive to deliver potentially shippable software on a short cadence. Because you know you can use anything that is done by another team. And you can estimate or prioritize based on an open discussion among teams.

Wednesday 11 September 2019

How to optimize for your tech company's scarcest resource

Back in the 13th century, Princess Kinga of Hungary received an engagement ring from Duke Boleslaw of Poland. When discussing her dowry with her father, instead of the usual "gold and money" which was relatively common in those situations, Kinga asked her father to gift her a salt mine. Salt was a scarce resource and the only way to conserve food at the time, so it inherently had a lot of value.

When gifted the largest mine in the Kingdom of Hungary (Maramureș), she was worried that the whole mine couldn't be moved to Poland, and that she'd therefore still be under the thumb of her parents. So after significant thought and even prayer, she dropped the engagement ring down the mine's shaft.

Kinga and her ring | Source: Wikipedia

Later, when Princess Kinga was in transit to Cracow for her first meeting, she took a handful of Hungarian miners with her. She ordered her party to stop around the Wieliczka area, and asked them to start digging. They hit a rock and said they couldn't go any further. Kinga asked that the rock be crushed open. It turned out to be essentially rock salt, and her engagement ring was found inside.

While the truth behind this origin story has been lost to the mists of time, a mineral repository was discovered. What happened as a result? Huge economic growth in the region. The cities in the country went from wood and wooden forts to brick and mortar and castles, largely financed by salt mine profits.

Over the longer term, the price of salt fell, and with it, the economic importance of the salt mine to the country's treasury. Other resources became important to economic growth. A paradigm shift occurred, and the medieval salt warlords drifted into irrelevance and obscurity.

How is that relevant today?

The closest analogue today is probably the petrochemical industry that fuels the Middle East's economic engine. As long as the price of oil stays high, that whole region generates a disproportionate amount of wealth. These profits are reinvested in other parts of the local and global economy. And the prospect of peak oil would seem to give the sheikhs high hopes for even higher prices of oil and oil-based products over the longer term.

Source: thenational.ae

Yet already there are longer term trends at play which are likely to disrupt this. Looking at companies like Tesla, Elon Musk is betting that solar energy will replace oil as a source of energy, and that the bottleneck will move from generation to storage and possibly transport. At that point, the price of lithium and other minerals needed to build batteries is likely to enter a longer term bull market.

So, just because oil is valuable now, doesn't mean it will always be valuable. If Elon's strategic hypothesis proves to be right, the oil magnate sheikhs will be out of luck too. Nobody really knows how this will play out. But I'd be willing to bet the oil, gas, and automotive industries will look different in 50 years than they did 50 years ago.

It's easy to overlook one particular resource

Strategic resources or "factors of production" to consider according to classical economics:

  • Land
  • Labor
  • Capital
  • Enterprise/Entrepreneurship
  • Technology
  • Natural resources

Usually the current strategic approach is to think about finding the optimal balance in the resources above for your particular industry. The goal is to maximize wealth in the long run.

Resource levelling | Source: Wikipedia

In the case of software, tech decision-makers usually spend their time focused on labor--sometimes called human capital--as many skills tend to be hard to find where it actually matters. Once you recruit and train the right people, it's a question of making sure they have everything they need to do their jobs. Especially being clear on what needs to be built or modified, but also the other factors in frameworks like Gallup's 12 Questions.

In the executive offices or founder's garages, the discussion extends to capital and entrepreneurship. Knowing where and how to raise budget (capital) and how to allocate it to the biggest opportunities (entrepreneurship) is an ongoing effort.

Over time, a successful company will tend to accumulate technology assets which are unique and which have value--assets which make it uniquely capable of executing a business model that beats comparable companies in the same industry. All of these play out over time.

Time is considered an obvious given. Really?

So in addition to the actual resources you need, you need to figure out the optimal timing of when to inject resources to improve your company's business outcomes. In other words, most companies focus on finding the optimal balance of resources right now--relative to one another. Which is completely reasonable given the current paradigm.

What I'm exploring at the moment: what if time itself is a resource and an input? In other words, instead of only thinking about how output changes right now if you add or remove people, what about allowing yourself to vary time?

  • How would you change your approach if you had a year's runway to release something?
  • How would you change allocations if you had one week before a client wanted to use your product?
  • How much are you actually affected by time consuming efforts like training up a new recruit?
  • How much time does the team spend on "hardening" and bugfixing before a release in a big batch?
  • When will your clients or users actually need to have the product, and how does that look in relation to competitors' actions?

These somewhat philosophical questions, if thought through, could have quite significant practical implications on most projects. Because time is money, too. If you have a detailed plan to build a product and prospects who need what you're building, you are effectively paying an economic cost of delay, as per Don Reinertsen's work, for each day the product isn't ready to ship. And if you want to allocate resources rationally, you should be taking into account this cost.
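To make the economic cost of delay concrete, here is a deliberately minimal sketch. The weekly revenue figure and the delay are invented assumptions, and Reinertsen's actual treatment is far richer (urgency profiles, peak-demand windows, and so on):

```python
# Back-of-the-envelope cost of delay: if a shipped product would earn
# a given amount per week, each week of delay forgoes roughly that much.
# The numbers below are illustrative, not from the post.

def cost_of_delay(weekly_revenue_when_shipped: float,
                  weeks_of_delay: float) -> float:
    """Revenue forgone by shipping later, assuming roughly flat demand."""
    return weekly_revenue_when_shipped * weeks_of_delay

# A product expected to earn $10k/week, delayed by 6 weeks:
print(cost_of_delay(10_000, 6))
```

Even this crude version gives you a dollar figure to weigh against the cost of adding people or cutting scope.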

Tech product life cycles are shortening. Company lifetimes are shortening. Industry lifetimes are shortening. At which point does time itself become too valuable of a resource to ignore, when in the throes of pursuing an opportunity?

Friday 9 August 2019

How to befriend time when you're in a hurry

Miyamoto Musashi was a legendary renaissance samurai, poet, and painter. He invented the technique of fencing with two swords. According to legend, he won over 60 sword duels. He was so talented that he killed an adult samurai with a wooden sword at the age of 13. He single-handedly fought off an ambush by a samurai school. He personally fought in 6 wars.

Musashi Miyamoto on velocity

These Rambo-esque legends weren't his greatest contribution, though. On his deathbed, he penned a treatise on strategy called "A Book of Five Rings". Having defeated so many opponents in duels (i.e. being on the right side of 60 duels), he became a master of the psychology of battle strategy. The book contains his insights into how he thought about this process.

“Whatever the way, the master of strategy does not appear fast….Of course, slowness is bad. Really skillful people never get out of time, and are always deliberate, and never appear busy.” Some people can cover 120 miles a day without breaking a sweat. Others will look tired within a minute of starting to run. 

What's really at stake here?

How you feel about urgency and speed is a reflection of your habits in a personal context. If you have bad time management habits personally, then of course you will feel discomfort about time going by. And the opposite is also true. If you have good habits and you live in accordance with your priorities, you look forward to time passing. It works in your favor.

Photographer: Form | Source: Unsplash | Aiming for the above with my stretching routine

I can definitely see this when I do my daily stretching routine. Even though I might not be happy with the point where I started, if I do my stretches every day, my flexibility increases. And I see progress over time. So I look forward to time passing, because I have increasingly better results.

This observation is fractal. In a professional context, it's about the quality of your company's systems. If you have well thought through and optimized systems, you look forward to achieving your goals. Time feels like it's on your side. The competition isn't as important as the stopwatch, like in a road race. If you have poor systems, you're constantly harried, monitoring, and firefighting. And there's no time to do anything longer term.

Photographer: Robert Anasch | Source: Unsplash | Time is on your side if your company has well thought out systems.

If I can extend the metaphor a bit to building shareholder value, particularly in software companies or in knowledge work: "Wealth is built with time as an asset, not as a liability". 

Friday 2 August 2019

How to determine if systemic factors slow down your teams' velocity

Last week, I pulled out the critical thing that Steve Jobs did upon returning to Apple Computer with a sagging stock price. He went after a major systemic factor that was holding back release dates: too many priorities. If you are really going after top performance, you need to look at all factors, including the global ones.

Local or global maximum?

Mountain tops above clouds
Photographer: samsommer | Source: Unsplash

When standing on the top of a hill, every way you look is down. That's the definition of a maximum. The thing is, though, you might not be standing on the highest possible hill or mountain in the area. And it's difficult to know that. You typically won't see Mount Everest unless you are in the Himalayas.

maximizing velocity: every way you look is down

The same is true when you are maximizing velocity, and looking for the top of the parabola. There is a lot you can try to improve a team's velocity. But it's worth checking whether the team is being held back by systemic factors that you can't see from where you are, or whether it really is just a team-specific challenge.

It's quite likely you don't even see the potential global maximum, because of pre-existing company culture and procedures, particularly in a larger, stable company. It's worth trying to improve velocity locally, at the level of one team. But keep in mind that you might be climbing the wrong hill. The higher one will usually start with improving the context in which the team operates. A low velocity may simply be a reflection of a difficult culture and context.

local vs global maximums: where is your product development really?

Three Examples of Systemic Factors

Before we deep dive into improving team-specific output within each team, here are three examples of systemic factors I've seen at client sites that slow down team velocity:

1. Resource Thrashing

2. Too many chiefs

3. Communication overhead

Let's start at the top, shall we?

1. Resource Thrashing

If your true priorities aren't clear, this impacts you on many levels. Before getting to the somewhat obvious operational impact, consider the revenue impact first. Yes, revenue!

source: Booz, Allen, Hamilton, Harvard Business Review

According to "Stop Chasing Too Many Priorities" by Paul Leinwand and Cesare Mainardi in the Harvard Business Review, the number of operational priorities a company chases correlates with its ability to outperform its peers in terms of revenue. Having too many "#1 priorities" will reduce revenue potential. It will be harder to communicate the vision to both customers and employees, as Jobs noted above. Also harder to execute.

For example, one common trap is just to list everything you are doing as a priority. Yes, every cost and effort needs to have a goal and be justified. But this approach muddles what exactly needs to change. Because everything we are working on is a priority. It is easier to sell to existing employees and shareholders, but it doesn't really lay the groundwork for anything to change.

Steve Jobs' focus on the 4 quadrants solved what I call the "denominator problem" from an operational standpoint:

people per product = people/nr of products or projects

The larger the number of "buckets" you need to fill, the higher the denominator. The higher the denominator, the fewer resources you have to succeed with each product, new or existing.

This is the true cost of lack of focus. Because if you are under-resourcing your product teams, you are effectively setting them up for failure and yourself up for disappointment as a decision-maker.
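The denominator problem is trivial to compute, which is exactly why it's worth computing before committing to another "#1 priority". The headcount and product counts below are illustrative:

```python
# The "denominator problem": the same headcount spread over more
# products leaves fewer people per product. Numbers are illustrative.

def people_per_product(people: int, products: int) -> float:
    return people / products

HEADCOUNT = 40
for products in (2, 5, 10):
    print(f"{HEADCOUNT} people over {products:>2} products -> "
          f"{people_per_product(HEADCOUNT, products):.1f} per product")
```

Going from 2 products to 10 with the same headcount cuts each product's team to a fifth of its size, before accounting for any coordination overhead.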

Signs of this would include:

  • Heavy reliance on traditional project management (waterfall): because there are lots of resource conflicts, you need a caste of professionals to manage them, each with their own specialty. Which adds more cost. Note that they don't directly contribute to the work that needs to be done.
  • Partial employee allocations: for example, allocating 5 people at 20% each will mean that all 5 spend most of their time in status meetings and are unable to actually deliver anything. But you have the slot fully allocated, right?
  • Shifting team structures: if needs are constantly changing due to thrashing, there is constant complexity around what needs to be done, who needs to do it, and by when. So you're spending a lot of time figuring out how to achieve the new goal with the limited resources you have, and whom you can nab from elsewhere.

2. Too many chiefs

For anyone who remembers high school physics, there is one important distinction between velocity and speed--as my friend Andy Wilkin recently pointed out. Velocity has an implied direction. One direction.

Velocity measures speed in a specific direction

If each manager pulls in their own direction, they collectively turn velocity into speed. A low speed. And effectively no clear direction.

There is value in having specialized managers who look after shared company concerns--ones which cut across multiple products, such as DevOps or a shared database infrastructure. But then you also have functional managers, like QA. And project or delivery managers who are on the hook for getting the team to ship. And geographic managers who keep close tabs on staff in remote offices, possibly because they can charge out the time. You end up with a lot of managers who mean well and want to stay informed, but who don't really contribute to the work that needs to be done.

This is a systemic problem of unclear goals, covered up by having layers of people who are responsible for slices of what needs to be delivered. Here are a few metrics you can track:

  • cost of going in the wrong direction: often a side effect of having too many stakeholders, you can end up building things which sound good to internal decision-makers or committees, but which customers couldn't care less about. In other words, the "voice of the senior manager" booms louder than the voice of the customer. total cost = number of man-months * monthly burn rate
  • cost of oversight crowding out budget for people actually doing the work: a side effect of having so many managers is that you don't have enough money left over to hire competent people to do the work. Managers are expensive and, ultimately, they aren't required to accomplish the goal itself--only to structure the work and monitor progress. Some of that is necessary, but ideally it's done by the people doing the work, with a high level of trust and a minimum of oversight.
  • ratio of stakeholders to team members at meetings: This originated as a snarky observation on my part. Doesn't make it less true. For operational meetings, such as a development team standup, if most of the people attending say they are "just listening today", that means they aren't adding value and they are taking up others' time. To be followed up by more meetings with other stakeholders and managers to discuss what is happening at the standup. In practice, there is a high financial cost to all of this, and this ratio is a good proxy for that cost, even if employees don't know who makes how much.
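The first metric above is simple to estimate. A sketch, using purely hypothetical figures (team size, duration, and per-person burn rate are assumptions, not data from the article):

```python
# Hypothetical figures: estimating the cost of building in the wrong
# direction, using total cost = number of man-months * monthly burn rate.
def wasted_cost(team_size: int, months: int, burn_per_person_month: int) -> int:
    """Total burn for an effort that customers end up not caring about."""
    man_months = team_size * months
    return man_months * burn_per_person_month

# e.g. 8 people spending 6 months on a committee-pleasing feature,
# at an assumed fully loaded cost of $12,000 per person per month:
print(wasted_cost(8, 6, 12_000))  # 576000
```

Even a modest team going the wrong way for half a year burns over half a million dollars — which is why the "voice of the customer" matters so much.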

3. Communication overhead

One of the older pearls of wisdom floating around the software industry is: "Nine mothers won't be able to deliver one baby in one month."

Your ability to add people to a project will typically be constrained by the structure of what you're building, but also by sheer communication issues. Before anyone can contribute to a project, they need enough context to be able to do so. That is harder than it seems. For one, they need to know how their bit contributes to the overall vision, so they have to understand enough of the big picture to locate where their addition should go. For another, they need to know who knows what--whom to ask specific questions.

Much of this accumulates from natural interaction patterns in groups of people. Each person needs to relate to every other, even the junior people to the senior people. So every important message needs to go out to everyone, and also needs to be understood by everyone.

Common patterns to make this "work":

  • hub and spoke pattern, where the manager is the source of all decisions and communication. Typically, the manager's time becomes a bottleneck, most of the team sits around waiting for instructions, and overall progress is slow.
  • peer-to-peer pattern, or the self-organizing one. This is much more effective and immediate, particularly if there are differences of experience, skills, or knowledge among team members. But it is often difficult to scale up. Each additional node added to a network of peers adds increasingly higher communication requirements. A network of this type has n(n-1)/2 connections, so 25 peers in a flat network have 25*24/2 = 300 distinct connections to maintain.
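The growth of pairwise communication paths in a flat group of n peers is easy to tabulate--each pair needs its own channel, so the count is n(n-1)/2:

```python
# Pairwise communication paths in a flat team of n peers:
# every pair needs its own channel, so paths = n * (n - 1) / 2.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 5, 10, 25, 50):
    print(f"{n:>2} peers -> {communication_paths(n):>4} paths")
```

The growth is quadratic: doubling the team roughly quadruples the number of channels, which is why flat peer-to-peer communication stops scaling well beyond a small team.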

In practice, this communication load is combined with either inexperienced or offshored staffers, or partial allocations of busy people who can't be spared 100%. For the first, it's clear why they struggle to contribute (at least without direct access to managers and mentors who are too overstretched to help). For the second, if you have a senior BA allocated at 20% to a project, most of that time will go to status and planning meetings just so that she has context. So there is almost no time left to do any work.

So in short, for discovery-intensive work like new product development, you are setting yourself up for difficulty if you put too many people on the product.

Wait, so how do you get a globally optimal outcome if you can't add either staffers or managers?

Beyond a certain point, it doesn't make sense to add more. You will see it in your output metrics. And ideally that output goes in front of customers or prospects frequently (with commercial intent of course).

Small, highly experienced teams who know what they are doing. And with direct access to customers, to ensure they are building something customers care about. These people are probably on your team already. Get rid of everyone else, and give these team members the space to do their thing.

If you want to increase learning and speed, add more independent teams with the same characteristics. But don't make one team too large in the hopes that the software factory will output more web screens and database tables per month. Because either it won't or it will, and in both cases, you can end up disappointed.

Note that this is the exact opposite of what large, efficient, and established companies are used to doing. Being deliberate, effective, and thoughtful will lead you down a much better path.