A Principle for When Not to Estimate

Based on a prior post, I was ‘asked’ to extract its idea into a generalized principle. (I invite you to read that post in full so you understand the context.)

Of course, you have to be open to questioning the value of estimating in this circumstance: what are you trying to accomplish with estimates, and how do they contribute (or not) to your work? And I would be remiss not to mention three things:

  1. My push on the #noestimates tag is not one of #neverestimate; on the contrary, I am interested in circumstances where estimation may need to improve OR where it may need to be ditched to increase value production. I would never advocate never estimating under any circumstances, yet at times estimation may be wasteful and should be eliminated. Depending on how well the risk is understood, one may absolutely need to estimate: the greater the potential (in terms of probability and impact) for loss of life or loss of money, the greater the need to estimate. I do, though, have a keen interest in when estimation may not be needed, beyond just dropping story points.
  2. If you decide to apply this principle, also determine the circumstances under which it applies and how you would detect that it doesn’t apply in your context. Ask yourself: how does this apply? What will tell me whether I applied it incorrectly? And lastly, what paths could I take to rectify the situation?
  3. Lastly, this whole debate is very binary. (I’ve only ever seen actual dialog on the topic at Agile Dialogs.) #nobinary is something I advocate on a variety of subjects, and you can learn more about how to drop binary thinking and contribute to interventions at Agile Brambles (an evolving site). If your mindset is one where you can only see your own point of view, then you may find thinking through the paths on that site fruitful.

OK, onto the principle….

When the estimate is not going to be used, regardless of how quickly you can produce it, don’t create one.

Looking back at my context briefly, the old way was to estimate all work that came in, and then, regardless of the answer, we did the work. This delayed starting the work and actually digging in to understand it by doing it. It doesn’t matter whether you are using a super sophisticated model or a simple guess; in my case the estimating delayed work by at least a week, and in some cases more. From any value-production standpoint (like weighted shortest job first), the estimation part of the process was hurting the organization. I’m certain many of us can think of times when an estimate wasn’t used and was never really intended to be used.

Be sure, though, that the estimate truly will not be used; this is important. If it will be used, then it has some degree of importance, and the level of accuracy and precision required should be quickly assessed and the estimate done at that level. This brings us to a corollary:

Only estimate at the level of accuracy and precision required (or that has no incremental cost above what is required).

Estimation often takes time and effort. When estimation is needed to understand risk, only do it at the level required; or, if you can do it more accurately and precisely with no additional effort, go ahead and do it at the higher level. This also applies to thinking about creating sophisticated models or simulations: bring these in when the situation warrants it. A well-prioritized backlog of work, projects, etc. will reveal whether there is a future in using one.

You may notice that both of these boil down to –

Simplicity — the art of maximizing the amount of work not done — is essential.

Hmmmm… I’ve seen that before.

One last principle:

Question whether this principle (or its corollary) continues to apply.

You may have determined that you needed an estimate, or perhaps that you didn’t; on a periodic basis, be sure that determination still holds true.

I’ll close with this: ‘anecdotal’ circumstances are evidence. If you find that you don’t need to estimate under some circumstance, then that is a piece of evidence showing a different reality. It’s very similar to the observations made by Galileo and Copernicus that indicated a geocentric view was incorrect and a heliocentric view was more accurate. Or perhaps an even better metaphor is when Einstein was able to explain new observations through relativity theory. It didn’t negate how classical mechanics worked under most circumstances, but there were circumstances (context) when classical mechanics needed to be replaced. If we never examined observations (anecdotes), then everything would remain status quo.

Agile Dialogs Recap

This will be a short recap of the Agile Dialogs unconference held yesterday.  We discussed ways of predicting value production with and without estimates.  Over the next few days I’ll blog more of what we uncovered, but this will be a simple post on how the unconference was approached.

We had a good mix of passionate people, though I’d say no one was fully at either end of the spectrum. The big takeaway was that both sides are right in many ways and wrong in many ways. Not using estimates of time, money, and/or story points can be done, but it is highly context dependent. As with any approach, it may or may not work in your context; it depends, or YMMV. The best you can do is try it as an experiment and see whether it works for you.

What we did at Agile Dialogs was –

  • register with one side or the other along a continuum (how strongly we felt on the issue),
  • post the types of things we estimate,
  • tell our stories of successes and failures on both sides, with and without estimates,
  • explore our objectives for either using or not using estimates and the techniques we use on each side,
  • explore the assumptions made when using estimates,
  • explore the assumptions made when not using estimates,
  • explore what each side could learn from the other,
  • post and vote on what could be the next thorny topic we tackle,
  • and retrospect on how the Agile Dialogs unconference could be better.

Here are a few teasers of some of the discoveries… I’ll go more in depth on what was discussed in future posts, as well as post some proceedings on the Agile Dialogs site.

  • When management or customers are asking for estimates, it is most important to understand their need for them; then more valuable alternatives for fulfilling that need may be explored. Estimates may still prove best for fulfilling that need, so don’t force-fit an alternative technique.
  • Estimation has become a scapegoat for other dysfunctions within the work system. Removing estimation won’t fix these dysfunctions, but it may help uncover them. Whether you end up with or without estimation, if these more fundamental dysfunctions can be fixed, the work climate will improve.
  • Estimation always exists, but when pursuing a #noestimates approach, the nature of the estimation actually changes from cost, time, and/or complexity to value (which is not based on those in most environments).
  • Focusing on understanding time and money estimates tends to introduce longer feedback loops for actual learning. If it is possible (and that is an IF), removing them can eliminate waste in the work system on the way to that learning.
  • Measurement is important in both approaches; when doing estimates we sometimes get lulled into a false sense of security that good measurement exists, when often it doesn’t.
  • Humans suck at estimation except on conceptually obvious items (obvious equating to the obvious domain in the Cynefin framework); mathematical models (particularly when the underlying assumptions of those models are validated by the team doing the work) can really help produce accurate results in the complicated domain. The complex domain can also be assisted greatly by these mathematical models, but the loop through them is about validating a hypothesis. (A minimal sketch of one such model appears after this list.)
  • Another way to test a hypothesis is to set a time or cost box and, based on whether the solution at the end of the box is on track, decide whether to spend more, accept it as-is, or abandon it; think Lean Startup approach.
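
To make the “mathematical models” bullet above a bit more concrete, here is a minimal sketch (not something presented at the unconference, and using made-up throughput numbers) of one such model: a Monte Carlo simulation that forecasts how many weeks a batch of work might take by resampling a team’s historical weekly throughput.

```python
import random

# Made-up historical throughput: items completed per week.
weekly_throughput = [3, 5, 4, 2, 6, 4, 3, 5]

def forecast_weeks(backlog_size, history, trials=10_000):
    """Monte Carlo forecast of how many weeks it takes to finish
    `backlog_size` items, resampling weekly throughput from history."""
    results = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < backlog_size:
            done += random.choice(history)
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the 50th and 85th percentile outcomes.
    return results[int(0.50 * trials)], results[int(0.85 * trials)]

p50, p85 = forecast_weeks(40, weekly_throughput)
print(f"50% of trials finished within {p50} weeks; 85% within {p85} weeks")
```

As noted in the bullet, the value of such a model depends on the team validating its underlying assumptions (for example, that past throughput resembles future throughput).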

I have set up The #AgileDialogs Daily, which curates information from both sides of this thorny topic; other thorny topics will get added as discussion on them emerges.

What’s This Agile Dialogs Thing Anyway?

If you haven’t caught it, I’m running an unconference called Agile Dialogs; you can find out more about it at http://agiledialogs.org.

So why would I want to take on thorny topics, ones that seem to bring out flamewars? Because the lack of listening to each other as we argue from our own sidelines seems an inane way of advancing our craft. If we want organizations to advance their thinking, we in the community need to advance ours and listen to those with differing opinions. It doesn’t mean we need to agree, but we do need to listen, truly listen, to what the other side is saying. When we decide to challenge the other side, we need to do it in a manner that isn’t trying to cajole them into accepting that we are right, but that has them think through why they are taking the position they have chosen. They may reaffirm it, but in the process we will have had them rethink their underlying assumptions.

Dialog is about understanding and elevating assumptions so we can find answers to our questions and perhaps a new, better way forward. I am a believer in good estimates when they make sense, and in not even bothering with them when they don’t. But perhaps, when I thought they weren’t useful, there was a better way to have made them useful. I certainly welcome learning that in a manner that doesn’t start out with “hey bud, you are wrong.” That closes down dialog, because it is about winning an argument. Save the arguments for a debate; let’s find out what makes each side tick and see what we can learn.

I hope you will join me!

Using Dollars as a Constraint on a Project

I’ve been planning to write this for a while, and it seemed all the more important to post after seeing an update from a Kickstarter campaign I am backing.

So I backed a board game; I was particularly interested because it is intended to be small enough that I can take it with me almost anywhere I go. What was amazing to me was how they calculated what they needed for funding.

Before I dive into that: I’ve backed quite a few board games on Kickstarter (along with music albums, music gear, and camping gear…). Most Kickstarter projects go in with varying degrees of estimation; one nice thing Kickstarter does is that if you don’t reach your funding goal, you don’t owe anyone anything and the backers keep their money. If you get funded, your estimates hopefully allow you to produce the game and keep at least a small measure of profit. Most projects offer stretch goals; when they are reached, component upgrades and such kick in – and these usually change your estimate.

The gentleman who developed Carrier Commander decided on a price point at which he wanted to be able to sell the game ($3, as it is a nanogame; I love small games to take with me when I travel). From there he worked everything backward into size and weight, calculating what would be possible should he hit his stretch goals.

On the campaign page, he reveals the cost breakdown, including the “Uh-Oh” zone, which is the profit area…

To read up on how he calculated his way into the $3 price point without estimating, see this update:

https://www.kickstarter.com/projects/1078944858/star-patrol-carrier-commander-3-sci-fi-strategy-na/posts/1348731?ref=dash

Should all Kickstarters work this way? Probably not… The larger the game, the more cumbersome the calculations would become, particularly as stretch goals needed to be calculated, so using estimation and factoring in a reserve to cover the uncertainty would probably suffice. In his instance, the upgrades were cardboard only, which made it much easier.
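
I don’t know the exact figures he used (the linked update walks through them), but as a purely hypothetical sketch, working backward from a target price point looks something like this: start from the price, subtract platform fees, shipping, and the profit buffer, and whatever remains is the ceiling on per-unit production cost that the component choices (and stretch-goal upgrades) have to fit under.

```python
# Hypothetical numbers only; the real breakdown is in the linked Kickstarter update.
target_price = 3.00          # price point the designer wants to sell the game at
platform_fee_rate = 0.10     # assumed Kickstarter + payment processing fees
shipping_per_unit = 0.75     # assumed postage for a very small game
profit_buffer = 0.25         # the "Uh-Oh" zone: profit / cushion for surprises

fees = target_price * platform_fee_rate
max_unit_production_cost = target_price - fees - shipping_per_unit - profit_buffer

print(f"Fees per unit:                ${fees:.2f}")
print(f"Max production cost per unit: ${max_unit_production_cost:.2f}")
# Component choices (card count, cardboard, size, weight) must stay under this
# ceiling, including any stretch-goal upgrades.
```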

So how would this relate to software development? As I wrote in my post “When I Have Skipped Estimates”, one could use team size as a constraint and then measure throughput. Once the constraining bottleneck is understood and all worthwhile options for increasing throughput there have been exhausted, you could increase capacity. This works really well for software maintenance.

One could also use something akin to what this gentleman did for his Kickstarter game: establish a fair market value for what you are building (i.e., how much someone is willing to pay to have it by a particular point in time). Once you have this, you have both a time and a budget constraint, and you can see how much that would pay for in terms of people and the other infrastructure you may need; i.e., what capacity can it purchase? Let’s say we got enough money that it would pay for 7 people for 6 months (plus servers, desktops, software licenses, etc.). We can then execute and develop based on that.
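
As a minimal sketch of that arithmetic (the budget, salary, and overhead figures below are invented, not from any real project), turning a fixed budget and timebox into a team capacity might look like this:

```python
# Hypothetical figures: turn a budget + timebox constraint into team capacity.
budget = 700_000.00                 # what the sponsor is willing to pay
timebox_months = 6
cost_per_person_month = 15_000.00   # assumed fully loaded (salary, benefits, overhead)
infrastructure = 40_000.00          # servers, desktops, licenses, etc. (assumed)

people_budget = budget - infrastructure
team_size = int(people_budget // (cost_per_person_month * timebox_months))

print(f"This budget buys roughly {team_size} people for {timebox_months} months")
# With these assumptions, ~$700k works out to about 7 people for 6 months.
```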

One may ask at this point: how do you know whether you will build what is needed? You actually don’t. What you do know is that this is what the person or people who set the constraint said they are willing to pay. Like a venture capitalist, they have it in their mind: I am willing to risk this amount of money to see if I can get what I want. Yep, no guarantee. But then, an estimate doesn’t produce one either.

Should you do this in all cases? Absolutely not. In fact, I would say estimation is needed more often than not when deciding to fund a project (or program), and for those cases, we as an industry need to get better at estimation. However, there are cases where estimation doesn’t necessarily help us. The more novel the project (and thus its approach), the greater the uncertainty, and at some point it may be best to establish a cost (and perhaps schedule) constraint and see what you get at the end of it. Got something valuable? Perhaps continue forward (and perhaps now introduce estimation). What you got isn’t valuable? Then you can use the knowledge you gained to decide whether or not to continue (and perhaps to add in estimation or not). Either way, you are using the knowledge you have to make a decision.

At least those are the cases I have for when I would go a #noestimates route… What are yours?

I’m interested in exploring each side; if this interests you, I hope you will consider joining me at the first Agile Dialogs unconference I am putting together.

When I’ve Skipped the Estimates…

While the debate carries on about whether one must have estimates or not, I thought I’d provide a viewpoint from when I found them no longer needed.

However, before we go there, let’s start off with a bit of a story about a time when estimates were not useful but were required, so I took the *EASIEST* path out.

Let’s go back to 2008; I had just been hired as a software development Branch Chief at USDA and was asked to prepare the budget for the next fiscal year. Of course, the first thing I did was poll around about what upcoming work there was. No one knew, except that there would be the same amount of maintenance as last year. That part was easy: apply an inflation factor to what we had this year, add a management reserve, and we’re done.

Now onto the harder problem: the unknown new projects looming. So I investigated how these normally got funded; any estimate done is simply reported up the chain (as requested development monies), but the funds are actually provided by the programs that need the work done for them. The estimates are used as a projection for the branch and nothing more. Any work actually done goes through its own process of requesting funds, and then real money is provided.

So I asked: how many projects did we do the year prior, and how much did they cost? And the year before that? And the year before that? The answers were 4 projects, 4 projects, and 6 projects. (I won’t go into the money numbers, but I’ll note this branch did not develop super huge applications, rather small to medium-sized applications with some complexity – a GIS app, an analytical app, several tracking-type apps, a loan package development application; that may give you the picture.) I didn’t need to know the number of apps for the reporting, but I used that number to calculate the average cost per app we developed, projected into 2009 dollars; adding a standard deviation gave me some more certainty, and then a 15% management reserve. Once I had those numbers, the process was literally a half hour of running through the math a couple of times to ensure I was on target.
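
Here is a rough sketch of that half hour of math with invented figures (the real project costs aren’t something I’d publish, and exactly how the per-project figure rolled up into the budget request is my reconstruction): average the historical per-project costs, project into next-year dollars, add a standard deviation, add the 15% management reserve, and scale by the expected project count.

```python
import statistics

# Invented per-project costs for the prior three years (4 + 4 + 6 = 14 projects).
past_project_costs = [180_000, 220_000, 150_000, 260_000, 200_000, 190_000,
                      240_000, 170_000, 210_000, 230_000, 160_000, 250_000,
                      205_000, 215_000]
inflation_factor = 1.03      # assumed projection into next fiscal year's dollars
management_reserve = 0.15    # the 15% reserve described above
expected_projects = 5        # roughly the recent yearly average (4, 4, 6)

avg_cost = statistics.mean(past_project_costs) * inflation_factor
std_dev = statistics.stdev(past_project_costs) * inflation_factor

per_project_figure = (avg_cost + std_dev) * (1 + management_reserve)
development_budget = per_project_figure * expected_projects

print(f"Per-project figure:             ${per_project_figure:,.0f}")
print(f"New development budget request: ${development_budget:,.0f}")
```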

My project managers could not believe I was going to use that number; they had always gone around to each potential customer and asked them to conjecture on applications or upgrades they wanted. Most of those never got funded, and something else came up and got funded instead, so why spend time estimating what never happened?

This was a very low precision estimate, but it got me to a reasonable and justifiable target number. (If the system had allowed for ranges, I would have provided those, but alas it didn’t.)

I’m guessing you are wondering how ‘correct’ I was with that… We did 5 projects, and the cost was fairly close to the average. The next year we did the same thing, but it was off – actuals were much higher as the Recovery Act kicked into high gear – but as I pointed out before, it didn’t matter.

OK, that was budgets built using the least painful method of estimation possible. (Sometime in the future, ping me on how I executed the real work within the branch… The spoiler hint is that I limited the WIP of projects going on at any one time so that I could keep my team close to a constant size; the increase in work meant a contractor headcount increase of about 2 people.)

So now onto some maintenance estimation I did away with…

When I took over running the maintenance team at the Office of Pesticide Programs, every Software Change Request (SCR) that came in went into a queue, where it was examined in a meeting and the contractor was told to go estimate it. When the contractor came back with their estimate, usually a week later, the work was approved. They estimated in time; they could then quote the money once they figured out who was going to do the work and applied that person’s labor rate. This single meeting was at least an hour long every week and consisted of telling the contractor to go estimate the work and of reporting out on estimates made. This never went anywhere; no one did anything with these estimates. We never said no to the SCRs for the legacy systems we maintained, mostly because no one worked with the business well enough to know whether the work should happen or not. On top of that, there were 20-some legacy apps with at least that many stakeholders to try to satisfy. Perhaps at some point this estimation process had been used to say no, but with the mostly low-complexity work coming in, there was no drive to say no.

We set budgets based on annual contractor headcount. Perhaps at some point this estimation exercise was used for this, but it wasn’t any longer.

So I did a couple of things. I killed the meeting. I put the onus on the government application maintenance staff to work with the business to prioritize the work from their viewpoint. I set up a rule set that took these priorities, along with a quick technical assessment (which set severity) and the date the request came in, and established a prioritization across all apps. I got the stakeholders to agree to this scheme so I didn’t have to fight over each app. We still never said no; we just constantly reprioritized the work not yet started.
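
I can’t reproduce the actual rule set here, but as a purely hypothetical sketch of that kind of scheme: each SCR carries the business priority from its application’s stakeholders, a technical severity, and its arrival date, and a simple ordering rule ranks the whole cross-application queue.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical fields and ordering; the real rule set differed, this just shows the shape.
@dataclass
class ChangeRequest:
    app: str
    title: str
    business_priority: int   # 1 = highest, set by the app's business stakeholders
    severity: int            # 1 = most severe, from a quick technical assessment
    received: date           # date the SCR came in

def rank_key(scr: ChangeRequest):
    # Lower tuples sort first: priority, then severity, then oldest request.
    return (scr.business_priority, scr.severity, scr.received)

queue = [
    ChangeRequest("GIS app", "Fix projection error", 2, 1, date(2011, 3, 4)),
    ChangeRequest("Loan package app", "New report field", 1, 3, date(2011, 2, 20)),
    ChangeRequest("Tracking app", "Broken login", 1, 1, date(2011, 3, 1)),
]

for scr in sorted(queue, key=rank_key):
    print(scr.app, "-", scr.title)
```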

And I eliminated the estimates. I decided on contractor staffing based on how much work we could get through; I concentrated on further process improvements before I thought of increasing headcount. (You can read about the Kanban system that was set up on GovLoop if you so desire.)

Coming full circle, the point where I once again found an estimate helpful in this environment was when a potential regulatory change was going to require a rather large piece of work on our legacy PowerBuilder app. I was asked how long it would take; upper management was interested in ensuring we had enough lead time to get it done. Not having it done would have had a financial impact on the Agency.

Since I had a Kanban system implemented in Trac, I filtered that legacy app’s past enhancements down to ones similar to this work and calculated the average and two standard deviations. I gave them that range, stating that we had 95% confidence we’d fit within the high number. They deeply appreciated the accuracy and precision in this case. This is a form of estimation, of course, but the real point is that day to day we never estimated; there was zero value in it. We did capture actual data using our system, though, which made predictability possible, just as I mentioned above.
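
As a minimal sketch of that calculation (with made-up cycle times; the real data lived in Trac), the range is just the mean of similar past enhancements at the low end and the mean plus two standard deviations at the high end, which is what I treated as the high-confidence bound.

```python
import statistics

# Made-up cycle times (in days) for past enhancements similar to the requested work.
similar_enhancement_days = [12, 18, 9, 22, 15, 11, 19, 14, 16, 20]

mean = statistics.mean(similar_enhancement_days)
two_sd = 2 * statistics.stdev(similar_enhancement_days)

low, high = mean, mean + two_sd
print(f"Expect roughly {low:.0f} to {high:.0f} days; "
      f"we are highly confident of finishing within the high end.")
```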

Hopefully this will help others understand at least one context where estimates weren’t needed, and another where low-fidelity estimates were good enough to establish a reasonable number. I consider myself a no-estimates guy only because I look at the assumptions behind why I need to estimate, and if I don’t need to and can derive a more suitable answer in some other manner, I’ll probably use that. It’s all a matter of context.

Agile Dialogs – Why We Need It

Recently I have noticed conversations in the Agile community getting increasingly hostile. Whether it be about scaling, self-organization, estimation, or a variety of other topics, there seems to be some reason one side or the other has to be ‘right’. I’ve personally been in the crossfire, and not once was there any inquiry into why I held my opinion, only some circumspect attribution of my opinion being off the mark.

Perhaps it was… Perhaps not… Who is the judge?

So something a colleague (@Ryan Ripley) and I have decided to try is putting together an unconference to bring people together to discuss these thorny conversations. And by discussion, I mean dialog, not debate. In other words, the point is not to prove someone wrong or right, but rather to understand their position and whether it is valid for your context. Using a philosophy espoused by Peter Senge, we need to expose and elevate our assumptions so that we can find what works and what doesn’t between the positions. We call this Agile Dialogs and have set up a website (rudimentary at the moment). Our first dialog will be about how to predict value with or without estimates. If you have an opinion for, against, or somewhere in the middle, we hope you will join us. You can find out more info at the Agile Dialogs website; please consider taking the short survey at the end and, of course, joining us on November 13th at the Navy League Building in Arlington, VA.