Friday 7 October 2022

The Kindergarten, the Construction site, and the Assembly Line

Last night, I went to a local meetup where we played with Legos. It was an event organised by Krzysztof Niewinski: a simulation workshop of large-scale product development using alternative organizational structures. But there were lots of colored bricks involved. And the specs were pictures of the end products that needed to be built.

Without getting into too much detail, we covered 3 alternatives with the same group of twenty-something people: component teams, cross-functional teams of specialists, and finally "T-shaped" interdisciplinary teams where everyone could do everything. In short, we were experimenting with output under alternative ways of working. Each round took roughly 10 minutes.

Here's what happened

In the first round, we had specialized component teams each dedicated to working with only two different lego colors, a supply team, an integration team, a quality team, and 8 different product managers who wandered from table to table. Sound familiar? Kind of like a massive construction site with lots of project managers. Or a large company developing and installing software. Most of the building teams sat around doing very little in practice. There were lots of bottlenecks and confusion around getting supplies and exact requirements. I had a chance to engage in chitchat with my table mates. There was also a stressed-out senior executive who walked around and yelled at anyone who wasn't doing anything.

In the second round, we continued to have individual performers who were specialists, but they worked together, which resulted in a lean assembly line. The time required to first output went down almost 50%. But there was less top-down control. And more legos on the table, relative to the previous round.

And finally--the last round--everyone pitched in and contributed however they could. There were still some constraints, in that people working outside of their expertise could only use their left hand. Despite that, it only took a minute to get the first outcome, so almost 9 times faster. But there were lots of extraneous legos on the table. It was lots of fun, and it was a very tactile learning experience for everyone who pitched in. Just like kindergarten.

What does this mean

This boils down to control, profitability, and speed. This is just as true for startups as it is for large companies. Most of the conflicts among co-founding teams boil down to differences in how founders value control and money, according to Harvard professor and researcher Noam Wasserman in Founders' Dilemmas. In big companies, any larger product development program will implicitly or explicitly make a call on these three, based on how the work is organized. It depends on what you optimize for, as Krzysztof the facilitator pointed out.

The construction site was optimized for control, especially of costs. There were enough people to do the work, and enough legos could be procured if you were willing to wait. But the level of resource scarcity locked up the system, relatively speaking. And it took a long time to finish anything.

The assembly line required a slightly larger up front investment but the speed at which things happened increased dramatically. Even though the constraints on each individual were exactly the same. As an expert in yellow and green bricks, I was still only allowed to touch these, even though the configuration was completely different.

The kindergarten required even less top-down control and more resources, as well as trust that the teams will get on with it. There was a higher use of resources (lego blocks lying on the table). At any given moment, you won't know exactly what is going on, because everyone is contributing and collaborating. The teams were releasing stuff like crazy. So at that point, does it really matter that you need a bit more money up front? If they are releasing stuff so quickly, presumably this translates into revenue, which keeps the kindergarten afloat and then some.

Choosing the metaphor that works best for your company

The way you organize the work matters. And it feeds into culture. Larman said "Culture follows structure". In a software context, it means you want to allow for chaos and experimentation, rather than just squeezing features out of development teams.

As a company scales from a successful startup to a larger company, the trick is to keep enough of that "kindergarten juice" in the culture and in how the work is organized, in order to allow your company to continue innovating. If the emphasis on control needs to change as a product matures, you can introduce more of it as needed. But do so consciously, and watch your output and outcomes like a hawk.

By micromanaging the process, even as an assembly line in a feature factory, you're still missing out on pretty big upside (assuming you care about having lots of new products released).

That said, even a kindergarten needs boundaries, so that the teams don't cut corners on quality, for example. That's kind of the point. There are a handful of non-negotiables around safety, health, and security in a kindergarten, and everything else is optimized for discovery.

So for a bunch of interested strangers on a random school night, who dug into a few alternative structures and held everything else constant, it was clear that there could be very large differences at play. 14x faster, not 14% faster. These would be results any agile or digital transformation program would love to achieve. That said, it wasn't clear if these differences came from structure only, or the culture around it. And if culture is involved, that could be what's preventing the massive change in the first place.

Key Takeaways

  • The way you organize work matters, and it feeds into the culture, particularly in a larger company.
  • By organizing work, you will be making choices about tradeoffs among variables that matter.
  • Control, in particular, seems to be inversely related with learning and speed.

The 2020 Guide to Planning New Products using Story Points

Due to his glasses, Michael always estimates Jane to be 14.3% further away from him than she actually is--at any given moment. How would you describe his estimate?

Precise, but not accurate.

Wait, what's the difference between the two again?

Precise means the numbers you are seeing are repeatable. Often down to multiple digits after the decimal point. You can be pretty confident that when you do the experiment again, you'll get the same result.

But that's not necessarily true with an accurate result. An accurate result is one that is close to the actual value. For Michael's estimates to be accurate but not precise, they would need to be close to the actual value, but not the same number each time. In estimating knowledge work, accuracy beats precision.

accuracy and precision 2x2

The 2x2 above applies to estimation of work in a knowledge work context. Ideally, everyone involved wants estimates to have both high accuracy and high precision. But at the beginning of a new initiative, it will most likely be low accuracy and low precision, because at that point you know the least you possibly can. From the example above, Michael's estimate lies in the top left box. However, what we actually prefer to that is the bottom right one...if we can't have the top right. High accuracy makes it much easier to plan out the upcoming work.
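To make the distinction concrete, here is a minimal sketch in Python (all numbers invented) comparing a precise-but-inaccurate estimator, like Michael's, with an accurate-but-imprecise one.

    import statistics

    actual_distance = 10.0  # metres; the true value being estimated

    # Precise but not accurate: tightly clustered, but biased 14.3% too high
    michael = [actual_distance * 1.143 + e for e in (-0.01, 0.0, 0.01)]

    # Accurate but not precise: scattered, but centred on the true value
    scattered = [8.5, 11.6, 9.9]

    for name, estimates in [("precise, not accurate", michael),
                            ("accurate, not precise", scattered)]:
        bias = statistics.mean(estimates) - actual_distance  # accuracy: average error
        spread = statistics.pstdev(estimates)                # precision: repeatability
        print(f"{name}: bias={bias:+.2f}, spread={spread:.2f}")

For planning purposes, the scattered set is the more useful one: the individual numbers wobble, but they don't systematically mislead you.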

One way of skewing towards accuracy and away from precision is by making it difficult to be precise. Instead of trying to estimate absolute sizes of stories, i.e. 3 days, we can estimate only relative size, i.e. 2 story points.

Relative sizing gives us enough to negotiate business priorities given the size of each story, without tempting fate in terms of blaming: "You said it would only take 3 days, and blah blah blah". This isn't healthy, productive, or fun. So why even go there?

These relative sizes still allow for some reasoning; however, direct numerical inferences are deliberately imprecise. For example, you can add up how many story points there will be for a larger epic. You will need to allow for a lot of variance on the individual stories. But on the basis of the "law of large numbers" from statistics, this variance will tend to average itself out. Based on this you can get a feel for how one epic compares to another for example.
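As a rough illustration of that averaging effect, here is a small Python sketch (epic names and sizes invented): each story's "actual" size deviates randomly from its estimate, yet the epic-level totals still compare sensibly.

    import random

    random.seed(42)  # for repeatable output in this illustration

    # Hypothetical epics, each with its stories' point estimates
    epics = {
        "checkout": [3, 5, 2, 8, 5],
        "reporting": [13, 8, 5, 3],
    }

    for name, estimates in epics.items():
        estimated_total = sum(estimates)
        # Pretend each story's true size is off by up to +/-50% from its estimate
        actual_total = sum(e * random.uniform(0.5, 1.5) for e in estimates)
        print(f"{name}: estimated={estimated_total}, simulated actual={actual_total:.1f}")

Individual stories can be badly off, but the totals stay close enough to rank one epic against another.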

1. Estimate story points directly

For this, the first place to start is the Fibonacci sequence, as a way to express relative size. The sequence itself is very simple. Each number is the sum of the preceding two values, so each new number is roughly 1.6 times the previous one, but never by a neat, exact factor (Hah! low precision!).

The Fibonacci spiral formed by drawing a curved line from opposite corners of squares

There are visualizations of the Fibonacci sequence like the one above, where you can see that there is a clear difference between each step. In practice, this type of sizing will help you get beyond a "to do list", where everything seems to be the same level of effort. It's not. And that's the point.
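If you'd like to generate the sizing scale yourself rather than take it from a tool, a minimal Python sketch looks like this (the cap of 20 is an assumption, mirroring the "break it down above 20" advice below):

    def story_point_scale(cap=20):
        """Build the Fibonacci-style values commonly used for story points."""
        scale, a, b = [], 1, 2
        while a <= cap:
            scale.append(a)
            a, b = b, a + b  # each value is the sum of the two before it
        return scale

    print(story_point_scale())  # [1, 2, 3, 5, 8, 13]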

Here is a recommended breakdown in my favorite free tool for estimation: planitpoker.com:

Above 20 story points, you probably need to break it down into smaller stories...

It's typically good to top out at a maximum size to make sure that stories stay small. Over a certain size it just feels too big, and the estimation discussion stops being useful. At that point, a big story should probably be broken into smaller subtasks, each of which should be estimated using the above sizing numbers.

2. Translate T-shirt sizes into story points

For teams struggling with letting go of absolute estimates, the T-Shirt approach gives you the best of both worlds. Essentially you have the team estimate using relative T-Shirt sizes. Then you translate the team's input into story points. This reinforces the relative nature of story points, while still giving you something more concrete to work with for velocity purposes.

Use T-Shirt sizes and translate them to story points later

Using the planitpoker.com interface, you can set it here:

Flip to the T-Shirt sizing option to go ahead with this approach

3. Ron Jeffries' option of 2 days max/task

This option came out of a big debate years ago on the scrumdevelopment yahoo groups mailing list. I admittedly have never tried this myself, but it's worth considering as an alternative approach to estimation. Or a thought experiment at minimum. Basically, at the time Ron Jeffries (@RonJeffries) was arguing against estimation in general. He posited that:

  • we should stop estimating
  • limit stories to 2 days of work
  • if you expect a story to take longer than 2 days, then break it down into multiple sub-stories

This way, you significantly reduce the need for an explicit estimation discussion. And you can manage workload and scope based on the number of issues you want to complete. If stories are assumed to be of equivalent size at a worst case scenario, i.e. 2 days/story, then you can derive your delivery date from the number of stories you have. Assume all remaining stories take 2 days, i.e. the worst case scenario, and add the number of work days to today's date.
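Here is a minimal Python sketch of that arithmetic (the story count and start date are made up), assuming a worst case of 2 working days per remaining story:

    from datetime import date, timedelta

    def projected_delivery(remaining_stories, start=None, days_per_story=2):
        """Walk forward over working days, assuming the worst case per story."""
        current = start or date.today()
        work_days_left = remaining_stories * days_per_story
        while work_days_left > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Monday to Friday count as work days
                work_days_left -= 1
        return current

    print(projected_delivery(remaining_stories=18, start=date(2022, 10, 7)))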

In practice, stories are often larger if they have "business value" and therefore can't be easily broken down. Thus it's hard to pigeonhole them into such a framework. By business value, I mean anything that would be of value to a customer for non-technical reasons. But you can put such a limit on sub-tasks of stories. And therefore just count the number of sub-tasks.

This approach kind of turns software development into a release checklist of stuff that must be finished. All of the tasks are small. It also means that there are a lot of stories and tasks flying around. This requires extra management & coordination work to maintain, either by the team or a delivery manager.

And also I suspect it's hard for senior stakeholders to see the forest for the trees, if you just have a long list of tasks. What will be done by when? Even though you can change the order of what's done, it's hard for anyone who isn't intimately involved with the details to understand what's happening at a glance.

Finally, I do think in this case you lose out on the design discussions which happen during estimation. You only break down stories or tasks if they are "too big". Therefore, you won't ask questions like "what is the best way to approach this?" before starting the work. That could impact the quality of what's delivered, and also lead team members to spend time on work that is thrown away or low priority. In my opinion, this is human nature. We need to be deliberate about priorities; if we're not, they won't happen.

So here are a couple of questions Ron himself poses, when using this approach to figure out when we'll be done:
  • What if all stories were approximately the same size? Then what could we do with story estimates that we couldn’t do with story counts?
  • What if all stories were one acceptance test? What could we do with story estimates that we couldn’t do with story counts (or, now, acceptance test counts)?

4. #NoEstimates

Finally, as an honorable mention, there is the whole #NoEstimates argument, which has been popular in software development circles. Basically, the approach claims that:

  • There is a lot of scope uncertainty for most of a software project, often persisting well past the first half of the overall schedule (i.e. >50% scope variation)
  • Making estimates takes time and therefore is expensive, particularly if there are tight deadlines and high costs of delay
  • Estimates don't contribute explicit value to final customers (i.e. they are about internal company operations and for planning purposes only)
  • Holding the team accountable to time estimates means that they are incentivized to sacrifice other aspects of the work, such as lowering quality or accruing technical debt, which isn't visible but has material consequences

There are a number of other nuances. Basically, for any knowledge work where there are a lot of dependencies (like in software), you might be better off not messing around with estimating at all. Just get on with the work.

Personally, I think estimation sessions are useful mostly for the purpose of having a priori "how would we do this?" discussions. #NoEstimates throws the baby out with the bathwater.

Also, by their very nature, most tasks on a large project will take different amounts of time (assuming we don't take the approach from #3 above). And technical people are the best placed to figure that out and share it with everyone. They have a unique perspective, often shared among themselves, which the rest of the company lacks.

The team doesn't work in a vacuum. So timing becomes important, if not critical, to getting the full value of the efforts being made.

Doesn't all this estimation just mean there is less time to do the actual work?

This concern boils down to one of efficiency. Usually, the asker assumes that the time to complete something is fixed and that the team's primary goal is to squeeze out as much output as they can in that time frame.

In practice, though, effectiveness trumps efficiency. There is no point in doing something quickly, if it doesn't need to be done at all. To use the old Stephen Covey analogy, is the ladder leaning against the right wall?

Photographer: Marc Schiele | Source: Unsplash | Is the ladder leaning against the right wall?

And for better or worse, product teams of 2 or more people have exactly this problem, in addition to coordinating work among everyone. New product development is fraught with uncertainty, including technical uncertainty in many cases. So what the team investigates, validates, and builds, and in what order, is critical. While efficiency is a concern, any time spent on making new product development more efficient is likely to be thrown away if the goal changes. So efficiency won't matter in that case anyway.

Case study: Distributed estimation

A pretty common trope nowadays is that larger companies have distributed teams, often across many locations and time zones. In the early days of agile, it was quite difficult to estimate under these conditions. A lot of the nuance and genuine discussion required was simply lost in the ether. And until recently, the tooling for distributed work didn't exist. Well, that's changed.

At most client sites, there will already be some kind of story tracking system in place such as Jira or Trello. It is possible to add a "story points" field to the template for a story. Typically, this involves speaking with the owner or local administrator of this system, but is relatively straightforward.

We don't actually need any more functionality in the story tracking system. It's nice to be able to display the story points where relevant afterwards. Jira, for example, has quite useful reporting based on story point estimates right out of the box. Trello has addons. Other agile management systems presumably have the same.

In a typical planning session, the product owner or business analyst (BA) spins up a session on planitpoker.com. One of the nice features of this system is that it doesn't require you to set up an account or even really authenticate. Once a 'room' is set up, the BA shares out the link to everyone estimating. Everyone can sign onto the board without setting up an account.

At that point, typically I am sharing my screen with either the description of the story or any relevant collateral such as a spreadsheet, a miro board, or showing and explaining why the current version is missing a feature. I enter the story tracking id into planitpoker.com after we've discussed what it is. And then everyone votes on the number of story points to estimate the complexity of the task. Once everyone has voted, we look at the distribution of votes. If it's wide, we discuss and vote again. We repeat until the votes are pretty much the same across the team. If, for example, we decide that a particular story is a 5 story point size, we just type that value into Jira or Trello.

You don't need any fancy or even paid plugins to do this. It's enough to be able to share your screen and have a discussion with everyone involved. In terms of estimation, the heavy lifting is done using planitpoker.com. Once a round of planning poker finishes, it can be discarded. And then you type in the estimates in the story tracking system, where you need them later.

When I stumbled onto this approach, I think the key learning was that you don't actually need everything to be done in your main story tracking system. All you really need to do is have a good debate, and then store the outcome of the estimation. And the conversations you have while estimating are the most important part of the process anyway.

Case study: use T-Shirts for first cut of backlog estimation, then translate to story points

In this scenario, we had a large team, a massive scope, and a lot of uncertainty around exactly how the product would work. I felt we just needed to get started on the work in order to reduce the technical uncertainty behind it. So forcing the whole team to spend hours or days estimating something they didn't understand would have been a waste of everyone's time. If they started the work, we could have more meaningful estimation discussions later. For context, this was an infrastructure project.

The architect involved had put together a big graph in draw.io for how the entire system would work. It had rapidly become a symbol in the company of something highly complicated that nobody, except for him, actually understood. What didn't help was that developers in the company didn't have much experience with cloud technology. And it was relatively new stuff, even for him.

Steps taken:

  1. Put together a visualization or map of the scope which needs to be built, which was particularly necessary here given that we needed to build infrastructure.
  2. We listed out the ~47 different pieces of work, most of them mapping to components of the big draw.io microservice cloud diagram, into a spreadsheet.
  3. As the architect felt uncomfortable assigning story points given the level of uncertainty, I asked him to use T-Shirt sizing for all 47 items.
  4. If you need to do this yourself, start with what you think is average and assign them all an M. Compare everything else to those items, i.e. this is a little smaller so it's an S, whereas this is really minor so it's an XXS.
  5. Once all of the above are sized, map the T-Shirt sizes to Fibonacci sequence numbers. For example, XXS is a 1. XS is a 2. And so on (see the sketch after this list).
  6. Then use those numbers as story points and continue planning from there.
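Here is a minimal sketch of steps 5 and 6 in Python (the mapping table and the item names are illustrative assumptions, not the actual backlog):

    # Hypothetical mapping from T-Shirt sizes to Fibonacci story points
    TSHIRT_TO_POINTS = {"XXS": 1, "XS": 2, "S": 3, "M": 5, "L": 8, "XL": 13, "XXL": 20}

    # A few illustrative items, sized with T-Shirts by the architect
    backlog = [("API gateway", "M"), ("message queue", "L"), ("logging", "XS")]

    total = 0
    for item, size in backlog:
        points = TSHIRT_TO_POINTS[size]
        total += points
        print(f"{item}: {size} -> {points} story points")

    print(f"Total: {total} story points")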

Using this slightly circuitous route, we got to a story point estimate which I felt was accurate. It wasn't precise, but that was acceptable at this stage. And the development team got started quickly on the work.

Case Study: Estimating roadmap items (to appease overachieving stakeholders)

Sometimes there is pressure to plan out a big roadmap up front for a new product. The justification that I often hear is that we want to know when "we're done". Personally, I don't feel knowing that is actually useful, especially from a business perspective. What I'd prefer to focus on is getting to first revenue and profitability. Estimating something a few years out in detail--when you barely understand it--is just a waste of time. And likely to be inaccurate.

All that said, it's useful to know your options for the future, and to also have some idea how much effort adding any given option will take. At a high level, that is probably good enough.

To keep everyone happy, the simplest approach is to use TShirt sizing of Epics. Some things are relatively small and quick. Some are huge. Some are in-between. The development team are the best judges of that, since they will need to do the work.

Clearly the T-Shirts for epics (groups of stories) will not be equivalent to T-Shirts used for stories. If you are pressed to translate the epic T-Shirts into story points, I'd suggest providing a range estimate for each one. For example, an M size epic is 6-10 weeks. Range estimates can still be used for planning, but aren't explicit commitments to deliver on a specific day. And they're good enough to tell you up front whether you're likely to be late.
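A rough sketch of that translation in Python (the size-to-weeks table and epic names are invented for illustration):

    # Hypothetical elapsed-time ranges per epic T-Shirt size, in weeks
    EPIC_WEEKS = {"S": (2, 4), "M": (6, 10), "L": (12, 20)}

    roadmap = [("self-serve onboarding", "M"), ("billing integration", "L"), ("audit log", "S")]

    low = sum(EPIC_WEEKS[size][0] for _, size in roadmap)
    high = sum(EPIC_WEEKS[size][1] for _, size in roadmap)
    print(f"Roadmap estimate: {low} to {high} weeks")  # a range, not a delivery date commitment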

Some estimation is better than nothing at all. It's useful to just do a quick pass without going into all of the minute detail when brainstorming a roadmap. And T-Shirt epics give you the flexibility to do just that.

Case study: Nitty-gritty technical estimation in person

Geek out time, analog style. In one office! This experience, as well as similar ones, convinced me that estimating in person works best...if you can swing it. Fly people in if you have to. :)

In the early days, when I was a developer, my team and I inherited a codebase in C++ of 17 different components. Most of them were written using Borland C++, a compiler that was once cutting edge but had since largely fallen out of mainstream use. In addition to the compiler, the code used a lot of abstractions specific to the libraries shipped with the compiler. We decided that we needed to get into a more up-to-date environment, to take advantage of the significant performance gains in newer compilers and also so that it would be easier to work with (and probably recruit more help for).

With an architect and another developer, we booked a small meeting room for half a day, to plan out the work. First we talked about what needed to be done. As we discussed each story, we wrote them down on index cards. We stuck them on the table, because we wanted to see all of them at once.

Then, to estimate the work, we also went analog. Essentially, we took the business cards of a former colleague who'd left the company, and wrote the Fibonacci sequence on the back of a handful of cards. Then each of us took a set of the cards. We started discussing each index card, one by one. We each used one card to indicate how many story points worth of complexity lay behind each task, laying it face down on the table. Then we flipped over the cards. If there was a difference in the numbers, we got into technical discussions to convince each other that the story was simpler or harder than we expected.

After a few hours of this estimation boiler room, we had a lot of questions about one particular component which we didn't know that well. So we waltzed back to our desks to spend the afternoon looking at the existing code, and to inform our estimates even further. Finally, the next day we reconvened and finished off the complete estimation session for the entire piece of work.

Once we had the aggregate story point estimate, we provided internal company stakeholders a relatively narrow elapsed time range estimate based on how long our story points typically took. This gave them enough information to be able to plan around our efforts, especially the pessimistic estimate. The range estimate was also intentionally not precise, even though we did feel it was accurate at the time.
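The conversion we did was roughly along these lines, sketched here in Python (the point total and velocity figures are invented):

    import math

    def elapsed_range(total_points, optimistic_velocity, pessimistic_velocity, sprint_weeks=2):
        """Turn an aggregate story point estimate into an elapsed-time range, in weeks."""
        best = math.ceil(total_points / optimistic_velocity) * sprint_weeks
        worst = math.ceil(total_points / pessimistic_velocity) * sprint_weeks
        return best, worst

    best, worst = elapsed_range(total_points=120, optimistic_velocity=30, pessimistic_velocity=20)
    print(f"Roughly {best} to {worst} weeks")  # stakeholders plan around the pessimistic end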

In practice, the estimate was correct, although the route to get there was kind of roundabout. Most of the tasks we did involved making a breaking change to the code in the new compiler, then cleaning up the errors and warnings which resulted. The lists of problems were quite long, but often making one fix suddenly fixed 30 errors in one go. So in short, we couldn't have really known this up front anyway.

But the estimate was close enough to be useful for planning purposes.

Key takeaways

  1. You don't need to do the estimation in your main workflow system; you just need to keep track of the estimates there.
  2. TShirt sizes can be mapped to story points if needed. This helps overcome resistance or speed up the process if people are uncomfortable.
  3. You can estimate stories or epics at different levels of resolution. If you want more accurate estimates, you need to invest more time.
  4. At the brainstorming stage, you can get something useful with a light touch estimate involving one or two senior technical people.
  5. At an operational stage, more precise estimates help adjust the plan, but they also help everyone on the team understand the work and implicitly get their buy-in on the approach to be taken.

Invitation

If you'd like to get early access to my upcoming book on improving velocity to get to market faster with your new product, sign up here.

Why over-focussing on velocity causes the opposite effect

Following up on the slightly longer analysis of overfocussing on output and velocity, I think there are a few things that are overlooked with a pure velocity-based model. Most of them have been known for decades in the software industry. They are squishy.

  1. It's essentially a Taylorist factory where most of the interest is in efficiency, and not in outcomes. By Taylorist, I mean Frederick Winslow Taylor. In fact, Kanban originally came from manufacturing. Cost accounting is the beginning of the imposition of a Taylorist model onto something more nuanced than what you see in a factory (please comment and say why if you disagree). By using velocity as a yardstick, you pervert velocity's purpose and dilute its usefulness.
  2. As per Peopleware by Tom DeMarco in 1987, most new technology development problems are actually people problems, either on the product development team, or with respect to the customers.
  3. Outputs are assumed to be linear. This is patently not true for knowledge work. Even in 1975, at the time of The Mythical Man-Month, it was already acknowledged that adding people to speed things up is a major blunder in the context of creative work.
  4. More recently, I've been fascinated by psychological safety in the team, as articulated by Amy Edmondson, as an underlying factor influencing actual performance.

At their core, companies care about being able to release quickly. Velocity and story points are just one way to get at what's happening and why it's taking so long. But it's essentially an internal process. At some level, it's just bureaucracy created to manage product creation...on its own usually not valuable to customers. So if the teams provide value and can show they are doing so, then velocity doesn't matter in and of itself.

Managing Risk the Agile Way: Like a Hedge Fund

Manage risk. Despite having backlogs, Agile doesn't manage risks--explicitly. In 2007, Paul Kedrosky noticed a rather peculiar ratio: the ratio of software developers to non-developers at a major quant fund versus a major software company.

It's not too much of a stretch to say that hedge funds are a new type of software company. After all, they have more developers per capita than the latter, and they generate more cash flow per capita, if they are any good. Hedge funds also provide a fantastic template for how software companies and projects could be run. Risk management separates the men from the boys. Well-run hedge funds make financial decisions quickly, consistent with the spirit of the agile manifesto. They cannot afford to let any bureaucracy get in the way of “high gamma” execution.

In fact, a hedge fund is just a company too, with the main difference being that they have a disproportionately large pool of capital to invest in the markets. As a thought experiment, I explore a hedge fund approach to investing in new software projects, specifically to compare a waterfall approach to an agile approach. All software investment projects have embedded real options. Many analytical tools exist for investing with options, and many of them are surprisingly relevant in a new product development context.

Every greenfield software development project, from the moment it's just an idea discussed by the team, is a bet on what clients or prospects want. Even if the project is being custom made-to-order for one client, it's possible the client's actual needs will be verbalized differently once they see a prototype. In addition to technology unknowns, development projects also face a number of other unknowns, especially in the context of marketing. Who exactly will need it? What problem will it solve for them? How many potential users exist? How do we effectively find and convince the potential users to buy this software once it exists? That's why it's reasonably easy to understand new product development as a bet the product team makes.


Don't make detailed assumptions about the distant future.

Like tax returns, project plans in software development are largely intricate works of fiction, i.e. based on a true story. Instead, you can make supremely detailed tactical short-term plans, where:

  1. your context doesn't have time to change as much
  2. you can integrate your product development as quickly as possible with paying customers
  3. you can prove that a market exists for the new product, and that the concept is even viable

You are trying to discover an unmet need in the market which your prospects will be crazy about. Then, you actually have a chance that your bet will turn into a DeMarco Project B. Then you will be thinking like a hedge fund, really understanding and calculating the value of your immediate real options.

In this case, you are investing in a real call option. You have a small initial outflow at the beginning of the project, to generate a minimum viable product. Your losses are limited to that outlay. Your net present value (NPV) will be negative. Based on the standard criterion for accepting an investment project, you should reject such a project if the NPV is less than 0. Nevertheless, within a few iterations, you may generate a product that starts generating revenue. This revenue stream may exhibit exponential growth. In financial option terminology, the real call option has high gamma.
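To make that concrete, here is a toy NPV calculation in Python (all cash flows, the discount rate, and the option value are invented): the project alone looks like a reject, but adding a rough value for the embedded call option flips the decision.

    def npv(cash_flows, rate):
        """Net present value of yearly cash flows, with year 0 first."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical MVP: an upfront outlay, then modest early revenue
    mvp_cash_flows = [-100_000, 20_000, 30_000, 40_000]
    base_npv = npv(mvp_cash_flows, rate=0.15)

    # Rough, invented value of the embedded call option to scale up if the MVP lands
    option_value = 45_000

    print(f"NPV alone: {base_npv:,.0f}")                             # negative: reject by the naive criterion
    print(f"NPV plus option value: {base_npv + option_value:,.0f}")  # positive: worth the bet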

What does this mean for you operationally?

If you are using an Agile approach, you already keep track of your call options in a product backlog. A product backlog is an ordered list of potential features, or user stories, for a product. In the context of a new product, the product backlog's most important function is to help you, as the product/business owner, prioritize this list, primarily based on the expected payoffs of adding a feature. Once you finish enough of a product that you can sell it, you start getting a lot of feedback about everything but the development process, so it would be ideal to have the flexibility to adapt to market conditions.

This list, while it may look rather dull, is potentially revenue-generating magic in the future. In fact, you can view it as a list of real call options, like the exchange traded derivatives variety. A backlog is effectively a portfolio of real options. An option portfolio is more dynamic. Its value depends on your business context. A number of interesting implications come out of this new model.

In fact, this is where Agile and Scrum metaphors around the backlog as a “feature idea inventory” break down for me. Typically, the product backlog is described as an inventory of potential features, where they are hoarded and stored. In my mind, a warehousing metaphor is somewhat lifeless and static. You don’t take into account the potential features’ value, when doing NPV financial analysis.

After getting an understanding of the market where a company operates, VCs just calculate a "fudge factor" as a proxy for how much they believe the company will actually generate revenue. They bake it into the discount rate (the denominator of the NPV calculation), overriding the value of the company's real options on the product backlog. The value of these options will effectively decide whether the project will be a DeMarco Project A or Project B, yet they aren't taken into account explicitly at all.

In a high-tech environment like IT startups, return on investment (ROI) is more likely to be driven by adding new revenue streams than by controlling costs and budgeting. From an income statement point of view, the top line (revenues) is much more important than the bottom line (net profits), because we work in such a young and rapidly expanding industry. You can use NPV to trace cash flows down to combinations and series of tasks, and to estimate when you might be able to start selling in the future. Alternatively, you can also explicitly value your backlog items as real options. This way, you keep track of one list of fully developed features you need, so that you can prioritize much more dynamically, as you start selling and getting market feedback.

As a result of taking into account these dynamic scenarios, you have a much more accurate project valuation, at any point in time after the first calculation period used for the estimation! Moreover, NPV on its own systematically underestimates the value of most software and internet R&D projects, because it ignores embedded call options in the product backlog, which typically have high values in a volatile industry. If you calculate their value, and add it to the NPV cash flows you estimated, you can then use the NPV criterion of being net positive with more confidence. You will be taking into account the true value of what you get when you invest in that project.

This is the key business difference between agile and waterfall. Waterfall handcuffs you into making estimates of a long term NPV, without explicitly allowing for optionality. Thinking about a product backlog as a portfolio of options is a much finer-grained approach to risk management, particularly when combined with software demos at the end of iterations. Then, you can legitimately claim you run your software project portfolio like a hedge fund manager.

Statistician George E.P. Box quipped, "All models are wrong, but some are useful". I would add that some models are more useful than others.

How to befriend time when you're in a hurry

Miyamoto Musashi was a legendary renaissance samurai, poet, and painter. He invented the technique of fencing with two swords. According to legend, he won over 60 sword fight duels. He was so talented that he killed an adult samurai with a wooden sword at the age of 13. He single-handedly fought off an ambush staged by a samurai school. He also fought in 6 wars.

Musashi Miyamoto on velocity

These Rambo-esque legends weren't his greatest contribution, though. On his deathbed, he penned a treatise on strategy called "A Book of Five Rings". Having defeated so many opponents in duels (i.e. being on the right side of 60 duels), he became a master of the psychology of battle strategy. The book contains his insights into how he thought about this process.

“Whatever the way, the master of strategy does not appear fast….Of course, slowness is bad. Really skillful people never get out of time, and are always deliberate, and never appear busy.” Some people can cover 120 miles a day without breaking a sweat. Others will look tired within a minute of starting to run. 

What's really at stake here?

How you feel about urgency and speed is a reflection of your habits in a personal context. If you have bad time management habits personally, then of course you will feel discomfort about time going by. And the opposite is also true. If you have good habits and you live in accordance with your priorities, you look forward to time passing. It works in your favor.

Photographer: Form | Source: Unsplash | Aiming for the above with my stretching routine

I can definitely see this when I do my daily stretching routine. Even though I might not be happy with the point where I started, if I do my stretches every day, my flexibility increases. And I see progress over time. So I look forward to time passing, because I have increasingly better results.

This observation is fractal. In a professional context, it's about the quality of your company's systems. If you have well thought through and optimized systems, you look forward to achieving your goals. Time feels like it's on your side. The competition isn't as important as the stopwatch, like in a road race. If you have poor systems, you're constantly harried, monitoring, and firefighting. And there's no time to do anything longer term.

Photographer: Robert Anasch | Source: Unsplash | Time is on your side if your company has well thought out systems.

If I can extend the metaphor a bit to building shareholder value, particularly in software companies or in knowledge work: "Wealth is built with time as an asset, not as a liability".