Using Economics to Encourage Testing Incrementally (or As You Go Along)

At TriAgile, I had an interesting conversation with a Product Owner. She described a problem where the testers could not keep up, and their working habits were actually holding the team back. Let me describe her situation…

In her content development team, they had a couple of testers who manually tested hyperlinks and other HTML/JavaScript/CSS elements towards the end of the iteration. While she would love to move to automated testing, there were some hurdles to getting software approved for use, plus there was a whole behavioral mindset she needed to overcome. The testers on her team felt that building and running tests incrementally, as a developer completed work on acceptance criteria, was wasteful. They preferred to do it after a story was completed by the content developer, which always left them crunched at the end. No matter how hard this Product Owner tried to convince them to test as they go, they resisted. Her Scrum Master was also not providing any influence one way or the other.

As we discussed this at TriAgile, I finally settled on economics to help her understand the situation.

Suppose a content developer produced a defect that prevented a CSS library from working by using a faulty assumption (let’s say it was as simple as a misspelled directory in the URL). And let’s suppose this faulty assumption caused the error to be reproduced 10 times. And further, let’s say each time the person did this, it took them 10 minutes to implement each instance. Lastly, it took the tester 5 minutes to test EACH instance.  So let’s do some math. (All of the times are hypothetical; they could be longer or shorter.)

So first up, testing at the end: 10 x 10 minutes implementation + 10 x 5 minutes testing = 150 minutes. But wait, we now have to fix those errors. So presuming that great information got passed back to the developer and it only takes them 5 min to correct each instance, we need to add: 10 x 5 minutes fix time + 10 x 5 minutes retesting = 100 minutes.  So our total time to get to done is 150 + 100 = 250 minutes to implement, test, correct, and retest the work. Our Product Owner had actually said that this kind of error replication had happened multiple times.

OK, what would have happened if the testing had been done incrementally? Our implementation time is the same, but after the first instance is implemented it goes off to get tested. If an error is found, it goes back to the content developer, and having seen the error they were making, they can now avoid reproducing it. So the time would be something like this: 1 x 10 minutes implementation + 1 x 5 minutes testing = 15 minutes, then 1 reworked item x 5 minutes + 1 item retested x 5 minutes = 10 minutes, and finally 9 remaining items x 10 minutes implementation + 9 x 5 minutes testing = 135 minutes. Total time now is 160 minutes.

If the cost was $2/minute (assuming a $120/hour rate employee), you easily wasted

(250 minutes – 160 minutes) x $2/minute, or $180.
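
Here's a minimal sketch of that arithmetic in code, using the hypothetical times above; plug in your own team's numbers to see what the pattern costs you:

```python
# A minimal sketch of the test-at-the-end vs. test-as-you-go arithmetic.
# All numbers are the hypothetical ones from the example above.

IMPLEMENT_MIN = 10   # minutes to implement each instance
TEST_MIN = 5         # minutes to test each instance
FIX_MIN = 5          # minutes to fix a defective instance
RETEST_MIN = 5       # minutes to retest a fixed instance
INSTANCES = 10       # times the faulty assumption was reproduced
COST_PER_MIN = 2     # dollars per minute (~$120/hour)

# Test at the end: every instance carries the defect and must be fixed.
test_at_end = (INSTANCES * (IMPLEMENT_MIN + TEST_MIN)
               + INSTANCES * (FIX_MIN + RETEST_MIN))

# Test incrementally: only the first instance is defective; the feedback
# keeps the remaining nine from repeating the mistake.
incremental = ((IMPLEMENT_MIN + TEST_MIN)                       # first instance
               + (FIX_MIN + RETEST_MIN)                         # fix and retest it
               + (INSTANCES - 1) * (IMPLEMENT_MIN + TEST_MIN))  # the rest, done right

print(f"Test at end: {test_at_end} min (${test_at_end * COST_PER_MIN})")  # 250 min ($500)
print(f"Incremental: {incremental} min (${incremental * COST_PER_MIN})")  # 160 min ($320)
print(f"Waste avoided: ${(test_at_end - incremental) * COST_PER_MIN}")    # $180
```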

Now multiply this by however many teams are not testing as they go and how many times they have this happen.

Of course there could be items caught that are not recurring, but the fact of the matter is, every recurrence of an error that has to be backed out introduces a lot of waste into the system (defect waste for you Lean/Six Sigma types). Testing as you go and stopping ‘the line’ to prevent future defects from occurring saves money in the long run, since labor time is what we are paying for.

In addition to the direct savings calculated above, there is ALSO the queue time for each item that is waiting to be tested before it can be OK-ed to produce value. In the test-at-the-end approach, this queue can build up considerably, delaying production readiness. And suppose that out of the 10 occurrences above, only 8 could be completed because we’re near the end of the iteration? Then we’re probably not going to get all of them tested, and any fixes done, in time. If we had been testing along the way, then if something didn’t get tested, we could talk with the Product Owner about releasing what was completed and successfully tested. Something of value is completed as opposed to deploying nothing. There is a real opportunity cost for this delay.

So there is something to be learned by each role here. For the tester, testing completed work incrementally, even manually, keeps you from becoming a bottleneck to producing value. For the developer, giving developed items to the tester incrementally and getting feedback after each item allows you to correct course along the way, possibly avoiding future errors. And for the business, having this occur incrementally reduces both the real costs and the potential opportunity costs of the work.

Agile Dialogs Recap

This will be a short recap of the Agile Dialogs unconference held yesterday.  We discussed ways of predicting value production with and without estimates.  Over the next few days I’ll blog more of what we uncovered, but this will be a simple post on how the unconference was approached.

We had a good mix of people who were passionate, though I’d say no one was fully at either end of the spectrum. The big takeaway was that both sides are right in many ways and wrong in many ways. The idea of not using estimates of time, money, and/or story points can work, but it is highly context dependent. As with any approach, it may or may not work in your context; it depends, or YMMV. The best you can do is try it as an experiment and see whether it works for you.

What we did at Agile Dialogs was –

  • register with one side or another along a continuum (how strongly we felt on the issue),
  • post the types of things we estimate,
  • tell our stories of both our successes and failures on both sides – with and without estimates,
  • explore our objectives for either using or not using estimates and the techniques we use for each side,
  • explore the assumptions used when using estimates,
  • explore the assumptions used when not using estimates,
  • explore what each side could learn from the other,
  • post and vote on what could possibly be the next thorny topic we tackle,
  • and retrospect on how the Agile Dialogs unconference could be better.

Here are a few teasers of some of the discoveries… I’ll go more in depth on what was discussed in future posts as well as post some proceedings on the Agile Dialogs site.

  • When management or customers are asking for estimates, it is more important to first understand their need for them; then more valuable alternatives to fulfill that need may be explored. Estimates may prove best for fulfilling that need though, so don’t force fit an alternative technique.
  • Estimation has become a scapegoat for other dysfunctions within the work system. Removing estimation won’t fix these dysfunctions, but it may help uncover them. Whether, at the end of the day, you remain with or without estimation, if these more fundamental dysfunctions can be fixed, then the work climate will improve.
  • Estimation always exists, but when pursuing a noestimates approach, the nature of the estimation actually changes from cost, time, and/or complexity to value (which is not based on those in most environments).
  • Focusing on understanding time and money estimates tends to introduce longer feedback loops for actual learning. If it is possible to remove them (and that is an IF), doing so can eliminate waste in the work system on the way to that learning.
  • Measurement is important in both approaches; when doing estimates we sometimes get lulled into a false sense of security that good measurement exists, when often it doesn’t.
  • Humans suck at estimation except on conceptually obvious items (obvious equating to the obvious domain in the Cynefin framework); mathematical models (particularly when the underlying assumptions of those models are validated by the team doing the work) can really help produce accurate results in the complicated domain. The complex domain can also be assisted greatly by these mathematical models, but there the feedback loop runs through validating a hypothesis.
  • Another way to test a hypothesis is to set a time or cost box, see if the solution at the end of the box is on track, and then decide whether to spend more, accept it as-is, or abandon it; think Lean Start-up approach.

I have set up The #AgileDialogs Daily, which curates information from both sides of this thorny topic; other thorny topics will get added as discussion on them emerges.

Using Dollars as a Constraint on a Project

I’ve been planning to write this for a while, and it seemed all the more important to post after seeing an update from a Kickstarter campaign I am backing.

So I backed a board game; I was particularly interested in it because it is intended to be small, so I can take it with me almost anywhere I go. What was amazing to me was how they calculated what they needed for funding.

Before I dive into that: I’ve backed quite a few board games on Kickstarter (along with music albums, music gear, and camping gear…). Most Kickstarter projects go in with varying degrees of estimation; one nice thing Kickstarter does is that if you don’t reach your funding goal, you aren’t obligated to make anything and the backers keep their money. If you get funded, your estimates hopefully allow you to produce the game and still have at least a small measure of profit. Most projects offer stretch goals; when these are reached, component upgrades and such kick in – and these usually change your estimates.

The gentleman who developed Carrier Commander decided on the price point at which he wanted to be able to sell the game ($3, as it is a nanogame; I love small games to take with me when I travel). From there he worked everything backward into size and weight, calculating what would be possible should he hit his stretch goals.

On the campaign page, he reveals the cost breakdown, including the “Uh-Oh” zone, which is the profit area…

To read up on how he calculated his way into the $3 price point without estimating, see this update:

https://www.kickstarter.com/projects/1078944858/star-patrol-carrier-commander-3-sci-fi-strategy-na/posts/1348731?ref=dash

Should all Kickstarters work this way?  Probably not…  The larger the game, the more cumbersome the calculations would become, particularly as stretch goals needed to be calculated, so using estimation and factoring in a reserve to cover the uncertainty would probably suffice. In his instance, the upgrades were in cardboard only, so this made it much easier.

So how would this relate to software development? As I wrote in my post “When I Have Skipped Estimates”, one could use team size as a constraint and then measure throughput as you go. Once the constraining bottleneck is understood and all worthwhile options for increasing throughput there have been exhausted, you could increase capacity. This really works well for software maintenance.

One could also use something akin to what this gentleman did for his Kickstarter game: establish a fair market value for what you are building (i.e. how much someone is willing to pay to have something by a particular point in time). Once you have this, you have both a time and a budget constraint, and you can see how much that would pay for in terms of people and other infrastructural resources one may need; i.e. what capacity can it purchase? Let’s say we got enough money that it would pay for 7 people for 6 months (plus servers, desktops, software licenses, etc.). We can then execute and develop based on that.
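
As a rough sketch of that conversion (every dollar figure below – the funding, the fully loaded cost per person per month, and the infrastructure line – is hypothetical, chosen only so the arithmetic lands on 7 people for 6 months):

```python
# A rough sketch of turning a budget and time constraint into team capacity.
# All figures are hypothetical and for illustration only.

funding = 600_000                # what the business is willing to risk, in dollars
months = 6                       # the time constraint
cost_per_person_month = 13_000   # fully loaded (salary, benefits, overhead)
fixed_infrastructure = 54_000    # servers, desktops, software licenses, etc.

available_for_people = funding - fixed_infrastructure
team_size = available_for_people // (cost_per_person_month * months)

print(f"The constraint buys roughly {team_size} people for {months} months")  # 7 people for 6 months
```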

One may ask at this point: how do you know if you will make what is needed? You actually don’t. What you do know is that this is what the person or people who set the constraint said they are willing to pay. Like a venture capitalist, they have in their mind, “I am willing to risk this amount of money to see if I can get what I want.” Yep, no guarantee. But then, an estimate doesn’t produce one either.

Should you do this in all cases? Absolutely not. In fact, I would say estimation is needed more often than not when deciding to fund a project (or program), and for those cases, we as an industry need to improve at estimation. However, there are cases where estimation doesn’t necessarily help us. The more novel the project (and thus its approach), the greater the uncertainty, and at some point it may be best to establish a cost (and perhaps schedule) constraint and see what you get at the end of it. Got something valuable? Perhaps continue forward (and perhaps now introduce estimation). What you got isn’t valuable? Then you can use the knowledge you gained to decide whether or not to continue (and perhaps add in estimation or not). Either way, you are using the knowledge you have to make a decision.

At least those are the cases I have for when I would go a #noestimates route… What are yours?

I’m interested in exploring each side; if this interests you, I hope you will consider joining me at the first Agile Dialogs unconference I am putting together.

The Economics of Agile Communications for Requirements

This post originally appeared on my BoosianSpace blog on 28 October 2011. Some minor updates were made.

I’ve been reading Democratizing Innovation by Eric von Hippel.  One of the items he talks about is the cost of information transfer from innovation user to innovation creator.  In his context he’s demonstrating why uniqueness causes innovations to be grown internally by organizations as opposed to being bought off the shelf.

This got me thinking about a challenge we see in Agile adoption: explaining why we want lighter-weight documentation and more face-to-face collaboration. I got a small inspiration on the economics of the two opposite ends of the spectrum.

Let’s start with the sequential phased-gate (aka ‘Waterfall’ or as I prefer to call it a ‘Canal’) approach.  Here’s what typically happens:

A set of business or systems analysts creates a document. This gets approved by the business user(s) and often management. It then gets distributed to the development team. They theoretically read the whole thing through once and understand it perfectly (a single set of communication paths of one document to N people reading it). So here’s what the formula would look like for the communication of that information throughout the entire team:

Xfer$_W = Labor$_avg × [ (N_creators × CreationHrs_avg + N_approvers × ApprovalReadingHrs_avg) × Cycles_approval + (N_team – N_creators) × ComprehensionReadingHrs_avg ]

In words: the transfer cost is equal to the creation labor hours (number of creators x average creation time for the documents, as this is what communicates the requirements to the analysts creating them) plus the approval labor hours (number of approvers x average time to read the resulting documents, as this is the communication to the business representative(s)), multiplied by the number of approval cycles, plus the comprehension hours (number of remaining team members that need to read the approved document x average time to read), all finally multiplied by the average labor cost per hour.

Let’s see this in action as an example with a team of 6 and 1 business user that has to approve the requirements on a small application development effort:

Xfer$_W = $100 avg hourly rate × [ (1 analyst × 120 hours creation time + 1 approver × 4 hours reading to approve) × 1 cycle + 5 remaining team members × 40 hours to read and fully understand the requirements ] = $100 × [ (120 + 4) × 1 + 200 ] = $100 × 324, or $32,400

Two primary assumptions here are that an approver won’t be as interested in reading it in detail, as they supposedly know the requirements and thus will not pay as much attention to reading the document he or she is signing off on… AND, more importantly, that the team can read the document ONCE and it contains EVERYTHING they need to know. It is perfect; nothing is missing. These numbers aren’t exactly realistic, of course; most projects would take longer and would involve more signatories and more cycles to get sign-off. I’ll be discussing the costs of change using this model in a bit.
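
Here’s a small sketch of this formula in code (a hypothetical helper using the example’s numbers), which will also come in handy shortly when we look at the cost of change:

```python
# A sketch of the 'canal' (waterfall) transfer-cost formula above.
# Parameter names mirror the formula; the values are the example's.

def xfer_cost_waterfall(labor_rate, n_creators, creation_hrs,
                        n_approvers, approval_reading_hrs,
                        approval_cycles, n_team, comprehension_hrs):
    # (creation + approval reading) repeated for each approval cycle,
    # plus the rest of the team reading the approved document once
    creation_and_approval = (n_creators * creation_hrs
                             + n_approvers * approval_reading_hrs)
    comprehension = (n_team - n_creators) * comprehension_hrs
    return labor_rate * (creation_and_approval * approval_cycles + comprehension)

# Team of 6 plus 1 approving business user, one approval cycle.
print(xfer_cost_waterfall(labor_rate=100, n_creators=1, creation_hrs=120,
                          n_approvers=1, approval_reading_hrs=4,
                          approval_cycles=1, n_team=6, comprehension_hrs=40))  # 32400
```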

Now let’s look at the same communications using an Agile approach…

In the Agile approach, the entire team is going to be involved in the creation, which will now include the business owner/manager. There is no need for a sign-off as he or she is directly involved. There is also no need to have the development side of the team expend time reading the documentation, since they are also directly involved in creating it. To reflect the time, the effort to create and communicate the knowledge (and artifacts) is equal to the number of paths of communication in the team multiplied by the amount of effort (the average creation time) each person has to put in, divided by the number of people assisting in the communications (i.e. the number on the team). Also, since the business owner is involved throughout the development process, there is only one cycle (for the life of this project cycle). Thus, our equation becomes the following:

Xfer$_A = Labor$_avg × [ (CommPaths / N_team) × CreationHrs_avg ]

Where CommPaths = (N_team – 1) + (N_team – 2) + … + (N_team – (N_team – 1)) = N_team × (N_team – 1) / 2

The assumption here is that the average creation time per person is the same as the creation time in a canal environment; i.e. the scope is the same. Since this is done throughout development by all members of the team, we know this will not be one solid time block and will involve more people. The effort to distribute the information, however, is represented by the number of paths involved divided by the people trying to move the information along those paths. This is why the communication paths variable is the numerator and the team size the denominator.

For our example of a team of 7 (since the business owner is now a part of the team),

CommPaths = (7 – 1) + (7 – 2) + … + (7 – 6) = 21

Xfer$_A = $100 × [ (21/7) × 120 ] = $100 × [ 3 × 120 ] = $100 × 360 = $36,000
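
As a sketch, mirroring the formula above (again hypothetical helpers, not anything prescriptive):

```python
# The Agile side of the same comparison.
# CommPaths is the handshake count for the whole team: n * (n - 1) / 2.

def comm_paths(n_team):
    return n_team * (n_team - 1) // 2

def xfer_cost_agile(labor_rate, n_team, creation_hrs):
    # paths over people: how the creation effort spreads across the team
    return labor_rate * (comm_paths(n_team) / n_team) * creation_hrs

print(comm_paths(7))                 # 21
print(xfer_cost_agile(100, 7, 120))  # 36000.0
```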

You’re probably wondering where the savings is… This looks like a wash. It isn’t. What comes into play is the cost of change as it occurs over the project. To truly understand the costs, though, we need to discuss what happens over the life of the project. In the ‘canal’ project, if we make a change, we have to go through the same expensive communications path as the initial development:

Xfer$_W = Labor$_avg × [ (N_creators × CorrectionHrs_avg + N_approvers × ApprovalReadingHrs_avg) × Cycles_approval + (N_team – N_creators) × ComprehensionReadingHrs_avg ]

Let’s use our example and say we had a change that requires roughly a quarter of people’s time to produce version 1.1 of the requirements specification:

Xfer$_W = $100 × [ (1 × 30 hours + 1 × 1 hour sign-off) × 1 + 5 × 10 hours comprehension ] = $100 × [ (30 + 1) × 1 + 50 ] = $100 × 81 = $8,100

So now the total cost is $32,400 + $8,100, or $40,500; each time we go through a change, the cost will go up by some amount.
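
As a quick check, plugging the change figures into the same arithmetic (the hours are the example’s hypothetical values):

```python
# The cost of the version 1.1 change under the 'canal' model.
labor_rate = 100
change_cost = labor_rate * ((1 * 30 + 1 * 1) * 1   # correction + re-approval, one cycle
                            + (6 - 1) * 10)        # remaining team rereads the changed spec
print(change_cost)           # 8100
print(32400 + change_cost)   # 40500 running total
```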

Going back to the Agile side: because we are performing the requirements communication throughout development, and we defer to discussing only the requirements needed for the next piece of work, changes – and more importantly the associated communications – are already baked in. We haven’t defined it all upfront and then distributed it for use once. Thus, the additional costs for the next distribution are near zero.

We expect requirements to change. We defer unknown things to as late as we responsibly can (possibly the last iteration, if the work can be done in one Sprint) so that the risk of needing to change them is minimized. Thus our costs are not going up with changes; they are remaining basically flat. In the sequential phased-gate scenario, one significant change could ‘wipe out’ the supposed savings you saw in the simple calculation, which optimistically presumed that everything worked perfectly the first time.


Note: I am not an accounting type by nature; this just seemed like a logical fit, and I am trying to find some empirical evidence that supports or contradicts it. If you know of some, it would be appreciated! Just post below along with the sources you are using.

BTW, I have also toyed with the idea that requirements (stories) that need to change along the development cycle have the cost of the original one, but multiplied by the probability that they are still in the backlog and not yet done. If you added up the percentages as buckets of 10% along the project and divided by 10, you would get the likelihood that this occurs as 50% (on average); then the cost would be akin to the following example:

CommPaths = (7 – 1) + (7 – 2) + … + (7 – 6) = 21

Xfer$_A = $100 × 50% of [ (21/7) × 40 ] = $100 × 50% of [ 3 × 40 ] = $100 × 50% of 120 = $100 × 60 = $6,000, so the total cost of changes accumulates at a slower rate.

During project execution, you could actually use a real rolling percentage derived from stories closed over total stories (the remainder being the fraction still open) rather than the static 50%.
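
Here’s a small sketch of that probability-weighted change cost; the 50% figure is the static assumption from the example, and the story counts for the rolling measure are purely illustrative:

```python
# Probability-weighted change cost on the Agile side.

def comm_paths(n_team):
    # handshake count: n * (n - 1) / 2
    return n_team * (n_team - 1) // 2

def change_xfer_cost(labor_rate, n_team, change_hrs, p_still_open):
    # the change only costs anything if the story is still in the backlog
    return labor_rate * p_still_open * (comm_paths(n_team) / n_team) * change_hrs

print(change_xfer_cost(100, 7, 40, 0.5))           # 6000.0 (static 50% assumption)

# Rolling measure during execution: fraction of stories not yet done.
stories_closed, stories_total = 22, 40
p_still_open = 1 - stories_closed / stories_total
print(change_xfer_cost(100, 7, 40, p_still_open))  # 5400.0
```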