Agile Dialogs Recap

This is a short recap of the Agile Dialogs unconference held yesterday.  We discussed ways of predicting value production with and without estimates.  Over the next few days I’ll blog more about what we uncovered, but this post is simply about how the unconference was approached.

We had a good mix of people who were passionate, though I’d say no one was fully at either end of the spectrum. The big takeaway was that both sides are right in many ways and wrong in many ways.  Not using estimates of time, money, and/or story points can be done, but it is highly context dependent. As with any approach, it may or may not work in your context; it depends, or YMMV.  The best you can do is try it as an experiment and see whether it works for you.

What we did at Agile Dialogs was –

  • register with one side or the other along a continuum (showing how strongly we felt on the issue),
  • post the types of things we estimate,
  • tell our stories of successes and failures on both sides – with and without estimates,
  • explore our objectives for either using or not using estimates and the techniques we use on each side,
  • explore the assumptions made when using estimates,
  • explore the assumptions made when not using estimates,
  • explore what each side could learn from the other,
  • post and vote on what could possibly be the next thorny topic we tackle,
  • and retrospect on how the Agile Dialogs unconference could be better.

Here are a few teasers of some of the discoveries… I’ll go more in depth on what was discussed in future posts, as well as post some proceedings on the Agile Dialogs site.

  • When management or customers ask for estimates, the most important thing is to understand their need for them; then more valuable alternatives for fulfilling that need may be explored. Estimates may still prove the best way to fulfill that need, though, so don’t force-fit an alternative technique.
  • Estimation has become a scapegoat for other dysfunctions within the work system. Removing estimation won’t fix these dysfunctions, but it may help uncover them.  Whether you end up with or without estimation at the end of the day, fixing these more fundamental dysfunctions will improve the work climate.
  • Estimation always exists, but when pursuing a #noestimates approach, the nature of the estimation changes from cost, time, and/or complexity to value (which is not based on those in most environments).
  • Focusing on understanding time and money estimates tends to introduce longer feedback loops for actual learning. If it is possible (and that is an IF), then removing them can eliminate waste in the work system on the way to that learning.
  • Measurement is important in both approaches; when doing estimates we sometimes get lulled into a false sense of security that good measurement exists, when often it doesn’t.
  • Humans suck at estimation except on conceptually obvious items (obvious equating to the obvious domain in the Cynefin framework); mathematical models (particularly when the underlying assumptions of those models are validated by the team doing the work) can really help produce accurate results in the complicated domain.  The complex domain can also be assisted greatly by these mathematical models, but there the loop runs through validating a hypothesis (see the sketch after this list).
  • Another way to test a hypothesis is to set a time or cost box, see whether the solution at the end of the box is on track, and decide whether to spend more, accept it as-is, or abandon it; think of the Lean Startup approach.
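
One concrete example of such a mathematical model (not something specifically presented at the unconference, just a hedged illustration) is a Monte Carlo forecast built from a team’s own throughput history. A minimal Python sketch, with invented throughput and backlog numbers, might look like this:

```python
# Hypothetical sketch: Monte Carlo forecast of how many weeks it might take
# to finish a batch of work, using the team's own historical weekly throughput.
# The throughput samples and backlog size are invented for illustration.
import random

weekly_throughput = [3, 5, 2, 6, 4, 4, 3, 5]   # items finished per week (history)
backlog = 40                                    # items remaining
runs = 10_000

def weeks_to_finish():
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= random.choice(weekly_throughput)  # sample a past week
        weeks += 1
    return weeks

results = sorted(weeks_to_finish() for _ in range(runs))
p50, p85 = results[runs // 2], results[int(runs * 0.85)]
print(f"50% of simulations finished within {p50} weeks; 85% within {p85} weeks")
```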

I have set up The #AgileDialogs Daily, which curates information from both sides of this thorny topic; other thorny topics will get added as discussion on them emerges.

What’s This Agile Dialogs Thing Anyway?

If you haven’t caught it, I’m running an unconference called Agile Dialogs; you can find out more about it at http://agiledialogs.org.

So why would I want to take on thorny topics, ones that seem to bring out flamewars? Because arguing from our own sidelines without truly listening to the other side seems an inane way of advancing our craft.  If we want organizations to advance their thinking, we in the community need to advance ours and listen to those with differing opinions. It doesn’t mean we need to agree, but we do need to listen, truly listen, to what the other side is saying.  When we decide to challenge the other side, we need to do it in a manner that isn’t trying to cajole them into accepting that we are right, but that has them think through why they are taking the position they have chosen. They may reaffirm it, but in the process they will have rethought their underlying assumptions.

Dialog is about understanding and elevating assumptions so we can find answers to our questions and perhaps a new, better way forward.  I know I am a believer in good estimates when they make sense, and in not even bothering with them when they don’t. But perhaps when I thought they weren’t useful, there was a better way to have made them useful.  I certainly welcome learning that in a manner that doesn’t start out with “hey bud, you are wrong.” That closes down dialog, as it is about winning an argument. Save the arguments for a debate; let’s find out what makes each side tick and see what we can learn.

I hope you will join me!

Demonstrating the INVEST Criteria


I’ve been doing some rather “loftier” types of posts; let’s return to something a bit more fundamental to (software) product development: user stories, and in particular the INVEST acronym developed by Bill Wake (see INVEST in Good Stories, and SMART Tasks). I was helping a coworker come up with some good examples of stories to showcase the INVEST criteria and felt this might make a useful post.

Let’s start with the two formats User Stories may be expressed in; we’ll stick with the latter:

Who-What-Why

Or more commonly as

As a (role or persona)

I want to (perform some business function)

So that I can (get some business value/rationale)

Usually user stories break down because they fail to articulate one or more of the INVEST criteria. Let’s look at each criterion separately, along with some examples.

I = Independent

We want stories to be independent; an independent story should be a small vertical slice through most, if not all, of the software stack (UI, business logic, data persistence, etc.). Let’s start with a counterexample to help demonstrate this.

As a decision-maker,

I want the data selection table menu to show the latest option results

So that I can determine which one to analyze.

Sounds OK, right? Not really; the menu is a UI item. Where is this data going to come from? Presumably a database, file, or API. It may get processed in a middle tier to do some filtering or sorting. The UI layer where the menu resides is only one layer; this story would be dependent on stories in other layers to be implementable. Usually any story that goes into the ‘how’ becomes less independent. Let’s rewrite it to –

As a decision-maker,

I want to view the latest option results

So that I can determine which one to analyze.

Besides appearing simpler, this doesn’t specify the menu, leaving the development team to do all the tasks needed to implement the results. Tasks could include querying the table, applying a filter algorithm for outliers, sorting from highest to lowest, and displaying the results as a menu. It also doesn’t lock the team into the how – if the results could also come from an API or web service, they can present those as options to the product owner for selection; same with the menu, perhaps a table would be better.

N = Negotiable

Negotiable means the product owner and development team can make trade-offs on the priority of the story and/or its acceptance criteria. Again, let’s start with a counterexample.

As a survey reviewer

I want to compare multiple respondent data sets

So that I can see if a correlation may exist.

What data sets? What data within the data sets? How is the product owner supposed to negotiate on this? Let’s add some detail –

As a survey reviewer

I want to compare age bracket data to geographic region

So that I can see if particular geographic regions contain high concentrations of a particular age group.

This is more negotiable; why? Suppose there was a second story –

As a survey reviewer

I want to compare income bracket data to geographic region

So that I can see if particular geographic regions contain high concentrations of a particular income bracket.

Now the product owner can negotiate on which one is more important. They could also dig into acceptance criteria and talk about the ages or incomes that make up those brackets or what level of granularity they need for the regions. Often non-negotiable stories, ones that seem like they MUST be done and can’t be ranked against others that also MUST be done, are an indicator that a story is too big; it encompasses too much.

V = Valuable

Another counterexample will illustrate a story that doesn’t articulate value…

As a decision-maker,

I want to view the latest results

So that I can see them in order.

Why do I want to see them in order? (It’s presumed the desired order would be in the acceptance criteria.) Better to specify the why; this usually indicates not only why the function is needed, but also why the particular acceptance criteria were chosen. Here is our refined story again –

As a decision-maker,

I want to view the latest results

So that I can determine which one to analyze.

Now we know why we need to do it.

E = Estimable

We don’t care so much about the estimate, which is one reason we use relative estimation based on complexity over trying to nail down an estimate in effort/length of time (hours for either). We care that some amount of certainty in the complexity can be articulated; this gives us a gauge that it is understood well enough to start. The higher the estimate, the less certainty, meaning it is more complex. At some point, this may require splitting into 2 or more stories to reduce complexity.

As an investor,

I want the latest analysis

So that I can decide what to do.

What do we mean by latest analysis? How do we estimate that? And that value statement doesn’t help; what decision are we trying to make (the business function) and why do I want to make it (the why)? Here’s a story that may be estimable (provided acceptance criteria can be drawn from it) –

As an investor,

I want the latest ROI graph with my minimum threshold shown

So that I can decide whether to continue making this investment.

OK, we want a graph, which we know must draw on data; if the raw data needs to go through calculations, we will need to do that. This threshold – is it entered or stored somewhere? Looks like we’ll need tests to ensure the calculations are done properly. If we need to ensure web accessibility for people with sight disabilities, we may need a textual equivalent. Regardless, even with this uncertainty, being able to see most of the tasks and think about their complexity gives me the ability to estimate. Many have found that the estimate becomes pointless once the team actually has confidence they can complete the story along with other stories in an iteration; remember, this is mostly to describe common understanding. It may take months or even years to get to that point, though.

S = Sized properly

Hand-in-hand with estimable is sizing. If the story is large and really complex, then we need to think about splitting it into smaller independent stories. A good example of a story that is probably too large is the first story that dealt with a survey reviewer. The stories that follow it, describing the data sets to compare, are smaller and clearer and probably could be successfully implemented within an iteration. Who knows if the first one could? Also, if I couldn’t, I get no partial credit for getting some of it done. If I get any small story done, then I can take credit for it.

And lastly, T = Testable

Testable stories are determined by their acceptance criteria. Let’s go to our first good story and fill in some acceptance criteria to see this clearly.

As a decision-maker,

I want to view the latest option results

So that I can determine which one to analyze.

When we turn the card over, we find the…

Acceptance Criteria:

  • Display options as menu choices
  • Display options in descending order from highest to lowest
  • Display results below my threshold in red and bold
  • Don’t display negative results
  • Option results are calculated by applying the uncertainty index to the simulation result
  • Return the results within 0.3 seconds

These are easily testable, manually or in an automated fashion. (NOTE: there is a more sophisticated method called Given-When-Then, from Specification by Example by Gojko Adzic, that allows these tests to be more easily automated in tools such as Cucumber; a rough sketch follows.)
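
To make the Given-When-Then idea a bit more concrete, here is a minimal sketch written as plain Python tests (pytest style) rather than actual Gherkin/Cucumber feature files; the `latest_option_results` helper and the data are hypothetical, invented only to show how two of the criteria above could become automated checks:

```python
# Hypothetical sketch: two of the acceptance criteria above expressed as
# Given-When-Then style tests (plain pytest, not actual Gherkin/Cucumber).

def latest_option_results(raw_results):
    """Hypothetical helper: drop negative results and sort descending."""
    return sorted((r for r in raw_results if r >= 0), reverse=True)

def test_results_are_displayed_in_descending_order():
    # Given a set of raw option results
    raw = [0.2, 0.9, 0.5]
    # When the decision-maker views the latest option results
    shown = latest_option_results(raw)
    # Then they are ordered from highest to lowest
    assert shown == [0.9, 0.5, 0.2]

def test_negative_results_are_not_displayed():
    # Given raw results that include a negative value
    raw = [0.4, -0.1, 0.7]
    # When the decision-maker views the latest option results
    shown = latest_option_results(raw)
    # Then the negative result is excluded
    assert -0.1 not in shown
```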

Using Dollars as a Constraint on a Project

I’ve been planning to write this for a while, and it seemed more important to post after seeing an update from a Kickstarter campaign I am backing.

So I backed a board game; I was particularly interested because it is intended to be small, so I can take it with me almost anywhere I go.  What was amazing to me was how they calculated what they needed for funding.

Before I dive into that: I’ve backed quite a few board games on Kickstarter (along with music albums, music gear, and camping gear…). Most Kickstarter projects go in with varying degrees of estimation; one nice thing Kickstarter does is that if you don’t reach your funding goal, you aren’t obligated to make anything and the backers keep their money.  If you get funded, your estimates hopefully allow you to produce the game and have at least a small measure of profit. Most projects offer stretch goals; when these are reached, component upgrades and such kick in – and these usually mean a change in your estimate.

The gentleman who developed Carrier Commander decided on a price point at which he wanted to be able to sell the game ($3, as it is a nanogame; I love small games to take with me when I travel). From there he worked everything backward, calculating the size and weight that would be possible should he hit his stretch goals.

On the campaign page, he reveals the cost breakdown, including the “Uh-Oh” zone, which is the profit area…

To read up on how he calculated his way into the $3 price point without estimating, see this update:

https://www.kickstarter.com/projects/1078944858/star-patrol-carrier-commander-3-sci-fi-strategy-na/posts/1348731?ref=dash
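
To give a feel for the kind of arithmetic involved (the real figures are in his update above; every number below is invented for illustration), a rough sketch of working backward from a fixed price point might look like this:

```python
# Hypothetical sketch: working backward from a fixed price point.
# All numbers are invented for illustration; the real breakdown is in the
# campaign update linked above.

PRICE_POINT = 3.00          # the constraint: what a backer pays per copy

# Per-copy costs at a given print run (assumed values)
printing = 1.10             # cardboard components
packaging = 0.25
fulfillment = 0.90          # postage and handling
platform_fees = PRICE_POINT * 0.10   # assumed platform + payment processing

costs = printing + packaging + fulfillment + platform_fees
uh_oh_zone = PRICE_POINT - costs     # what's left: profit and risk buffer

print(f"Per-copy costs:   ${costs:.2f}")
print(f"'Uh-Oh' zone:     ${uh_oh_zone:.2f} per copy")

# A stretch goal that upgrades components eats into the same fixed price,
# so each upgrade has to fit inside the remaining 'Uh-Oh' zone.
upgrade_cost = 0.20
print(f"After an upgrade: ${uh_oh_zone - upgrade_cost:.2f} per copy")
```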

Should all Kickstarters work this way?  Probably not…  The larger the game, the more cumbersome the calculations would become, particularly as stretch goals needed to be calculated, so using estimation and factoring in a reserve to cover the uncertainty would probably suffice.  In his case, the upgrades were in cardboard only, which made it much easier.

So how would this relate to software development? As I wrote in my post “When I’ve Skipped the Estimates”, one could use team size as a constraint and then measure throughout.  Once the constraining bottleneck is understood and all worthwhile options for increasing throughput there have been exhausted, you could increase capacity.  This works really well for software maintenance.

One could also use something akin to what this gentleman did for his Kickstarter game: establish a fair market value for the cost of what you are building (i.e., how much someone is willing to pay to have something by a particular point in time).  Once you have this, you have both a time and a budget constraint, and now you can see how much that would pay for in terms of people and other infrastructure one may need; i.e., what capacity can it purchase?  Let’s say we got enough money that it would pay for 7 people for 6 months (plus servers, desktops, software licenses, etc.). We can then execute and develop based on that; a rough sketch of the arithmetic follows.
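
A minimal sketch of that capacity arithmetic, with an assumed budget, burdened rate, and infrastructure cost (none of these figures come from a real project), might look like this:

```python
# Hypothetical sketch: turning a budget-and-time constraint into capacity.
# All figures are invented for illustration.

budget = 1_200_000          # what the customer is willing to pay
months = 6                  # when they need it by

burdened_cost_per_person_month = 25_000   # salary, benefits, overhead (assumed)
infrastructure = 50_000                   # servers, desktops, licenses (assumed)

people = (budget - infrastructure) / (burdened_cost_per_person_month * months)
print(f"The constraint buys roughly {people:.1f} people for {months} months")
# With these assumed numbers: roughly 7.7 people for 6 months.
```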

One may ask at this point: how do you know if you will make what is needed? You actually don’t. What you do know is that this is what the person or people who set the constraint said they are willing to pay. Like a venture capitalist, they have in their mind: I am willing to risk this amount of money to see if I can get what I want.  Yep, no guarantee. But then, an estimate doesn’t produce one either.

Should you do this in all cases? Absolutely not. In fact, I would say estimation is needed more often than not when deciding to fund a project (or program), and for those cases, we as an industry need to improve at estimation. However, there are cases where estimation doesn’t necessarily help us. The more novel the project (and thus its approach), the greater the uncertainty, and at some point it may be best to establish a cost (and perhaps schedule) constraint and see what you get at the end of that.  Got something valuable? Perhaps continue forward (and perhaps now introduce estimation). What you got isn’t valuable? Then you can use the knowledge you gained to decide whether or not to continue (and perhaps add in estimation or not).  Either way, you can use the knowledge you have to make a decision.

At least those are the cases I have for when I would go a #noestimates route… What are yours?

I’m interested in exploring each side; if this interests you, I hope you will consider joining me at the first Agile Dialogs unconference I am putting together.

Agile Coach Camp US – Neat Learnings

I attended several sessions at Agile Coach Camp; I was really impressed by the topics proposed this year. I went to some on business/organizational agility, some on improving feedback/listening skills, one on creating Joy at work, and several related to using games to teach various Agile concepts. I’ll have to admit, I got lighter on the subject matter as the Camp wore on… Anyone who knows me usually knows I have no fear in proposing 2-3 topics.  This year I proposed none.  I was a bit too brain dead to host one given all the distractions and effort that went into running the Camp itself.

Before I jump into my key learnings/highlights: I was very glad to see one of the emerging themes be invitation over imposition. So many organizations are now jumping onto the Agile bandwagon and imposing Agile from above as opposed to helping it emerge; and then we wonder why there is resistance! I also really liked that there was good discussion on various technical topics as well; I often feel these get forgotten.  It’s important for us as a coaching community to understand how we can help organizations adopt things that matter, and for software development they ummm… seem… to be technical in nature.

So, my highlights. I would be remiss if I did not say one highlight was our extremely energetic facilitator Trica Chirumbole.  I think she brought great energy to the Camp from opening to closing circles.

I was glad that my first session was one that Ryan Ripley ran to clear up some of the misperceptions people have about why an organization should adopt Agile. We seemed to come up with some great clarifying points to help our organizations or clients understand what to expect as an end result, as well as various interim improvements to expect along their journey. Here were some of the key takeaways:

  • a focus on improving organizational adaptability/responsiveness
  • use of data to make decisions, but not without regard for what the organization’s people will be undertaking
  • more transparency into organizational performance; risks more visible so better decisions can be made
  • better trust within the organization
  • containing failure and learning from it
  • improved employee engagement and retention

The title of the session was “It’s NOT about being Better, Faster, Cheaper”, though we rearranged it to mean this by stating: Better = more predictability and customer focus, Faster = time to market, not just meeting a schedule, and Cheaper = a focus on producing more value, not on reducing costs.  The hard part we found in measuring organizational performance on these is that few organizations have a baseline measurement for any of them; in fact, we came up with the hashtag #nobaseline to tweet about these instances. Which reminds me, I could use that with my current client 🙂

Ryan later ran a follow-on discussion from a session we had in the Open Jam at Path to Agility in Columbus on creating Joy at work.  It was complementary to the earlier session, as it focused on the human aspects of making those things happen. Since we had a new crowd, we really spent a third of the session bringing them up to speed on our thoughts (at least it felt that way); I have an earlier post to help you. Once there, though, we explored why Joy is more important than happiness, though several people still thought they were synonymous.  Quite a bit of the conversation focused on how NOT imposing choices on people (what Daniel Pink would refer to as Autonomy) is key to this.  Some others also related it to accomplishment (there’s Mastery) toward a purpose. I mentioned that I like Jurgen Appelo’s CHAMPFROGS; it feels more complete.  Since then, after reading Frédéric Laloux’s book, Reinventing Organizations, I might also say Joy is the integral of Wholeness from time = 0 to the present.  I still stand by our earlier equation from Path to Agility as well.

I’m going to go quickly over some of the rest, as I feel I have been rambling a bit. I went to a games session hosted by Declan Whelan and George Dinwiddie on games they had come across or developed.  Declan presented Tom Grant’s tech debt game; everyone played it differently and got results that demonstrated WHY we should make investments in things like automated testing and continuous integration. George showcased a game that he has been slowly evolving to show how refactoring works – it demonstrated more how software is malleable and that we should treat it as such.  This is of course very valuable on its own.

I attended two other sessions I want to highlight, both also ‘games’-oriented. Mark Sheffield held a sort of games round-up; I learned several new games to research, plus variants of games that would prove useful for helping teams and managers understand things better.  Andrew Annett ran a session on the Empathy Toy, which is all about common cognitive empathy (aka developing shared mental models).  This toy is fantastic; every coach should have to play it – you are always trying to find ways to bridge the gap in understanding.  My cohort Ken Furlong and I are already developing new ways to use it.

We had two happy hours before and during Camp, as well as some food shared in various locations – it was awesome catching up with Diana Larsen, Daniel Mezick, Aaron and Brian Kopel, Jeremy Willets, Kevin Goff, Faye Thompson, Declan Whelan, Tim Ottinger, and Ellen Grove at length (during Agile2015, I also had the chance to spend some time with my friends Woody Zuill, Pawel Brodzinski, and Chuck Suscheck).

When I’ve Skipped the Estimates…

While the debate carries on whether one must have estimates or not, I thought I’d provide a viewpoint on when I found them no longer needed.

However, before we go there, let’s start off with a bit of a story about a time when estimates were not useful but were required, so I took the *EASIEST* path out.

Let’s go back to 2008; I had just been hired as a software development Branch Chief at USDA and was asked to prepare the budget for the next fiscal year.  Of course, the first thing I did was poll around on what upcoming work there was. No one knew, except that there would be the same amount of maintenance as last year. That was easy: apply an inflation factor to what we had this year, add a management reserve, and we’re done.

Now onto the harder problem: what about the unknown new projects looming?  So I investigated how these normally got funded; any estimate done is simply reported up the chain (as requested Development monies), but the funds are actually provided by the programs that need the work done for them. These estimates are used as a projection for the branch and nothing more. Any work actually done goes through its own process of requesting funds, and then the actual money is provided.

So I asked: how many projects did we do the year prior, and how much did they cost? And the year before that? And the year before that? 4 projects, 4 projects, and 6 projects were the answers. (I won’t go into the money numbers, but I’ll note this branch did not develop super huge applications, but small to medium sized applications with some complexity – a GIS app, an analytical app, several tracking-type apps, a loan package development application; that may give you the picture.)  I didn’t need to know the number of apps for the reporting, but I used that number to calculate the average cost per app we developed, projected into 2009 dollars; adding a standard deviation gave me some more certainty, then a 15% management reserve.  Once I had those numbers, the process was literally a half hour of running through the math a couple of times to ensure I was on target.
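
A minimal sketch of that math, using invented cost figures (only the project counts come from the story above), might look like this:

```python
# Hypothetical sketch of the budget math: average cost per project plus one
# standard deviation plus a 15% management reserve. The cost figures are
# invented; only the project counts (4, 4, 6) come from the story.
from statistics import mean, stdev

# Assumed historical cost per project, already projected into 2009 dollars
historical_costs = [310_000, 280_000, 420_000, 350_000, 300_000,
                    390_000, 260_000, 330_000, 450_000, 340_000,
                    370_000, 290_000, 310_000, 400_000]  # 4 + 4 + 6 projects

avg = mean(historical_costs)
sd = stdev(historical_costs)
expected_projects = round(mean([4, 4, 6]))      # roughly 5 projects next year

per_project = avg + sd                           # average plus one std dev
budget = per_project * expected_projects * 1.15  # plus 15% management reserve
print(f"Requested development budget: ${budget:,.0f}")
```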

My project managers could not believe I was going to use that number; they always went around to each potential customer and asked them to conjecture about applications or upgrades they wanted. Most never got funded, and something else came up and got funded instead, so why spend time estimating what never happened?

This was a very low-precision estimate, but it got me to a reasonable and justifiable target number. (If the system had allowed for ranges, I would have provided those, but alas it didn’t.)

I’m guessing you are wondering how ‘correct’ I was with that… We had 5 projects and it was fairly close to the average.  The next year we did the same thing, but it was off – much higher as the Recovery Act kicked into high gear, but as I pointed out before, it didn’t matter.

OK, that was budgets built using the least painful method of estimation possible.  (Sometime in the future, ping me on how I executed the real work within the branch… The spoiler hint is that I limited the WIP of projects going on at any one time so that I could keep my team close to a constant size; the increase only meant a contractor headcount increase of about 2 people.)

So now onto some maintenance estimation I did away with…

When I took over running the maintenance team at the Office of Pesticide Programs, every Software Change Request (SCR) that came in went into a queue where it was examined in a meeting and the contractor was told to go estimate it.  When the contractor came back with their estimate, usually a week later, the work was approved.  They estimated in time; they could then quote the money once they figured out who was going to do the work and applied their labor rate. This singular meeting was at least an hour long every week and consisted of telling the contractor to go estimate the amount of work to do and of reporting out on estimates already made.  This never went anywhere; no one did anything with these estimates. We never said no to the SCRs for the legacy systems we maintained, mostly because no one worked with the business well enough to know whether the work should happen or not. On top of that, there were 20-some legacy apps with at least that many stakeholders to try to satisfy. Perhaps at some point this estimation process was used to say no, but with the mostly low-complexity work coming in, there was no drive to say no.

We set budgets based on annual contractor headcount. Perhaps at some point this estimation exercise was used for this, but it wasn’t any longer.

So I did a couple of things. I killed the meeting. I put the onus on the government application maintenance staff to work with the business to prioritize the work from their viewpoint. I set up a rule set that took these priorities, along with a quick technical assessment (which set severity) and the date the request came in, to establish a prioritization across all apps (a hypothetical sketch of such a rule set follows).  I got these stakeholders to agree to this scheme so I didn’t have to fight over each app. We still never said no; we just constantly re-prioritized the work not yet started.
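
The post doesn’t spell out the actual rule set, so here is a purely hypothetical sketch of what such a cross-application prioritization rule might look like; the fields, weights, and ordering are invented for illustration:

```python
# Hypothetical sketch of a cross-application prioritization rule set.
# The fields and ordering are invented for illustration; the actual rules
# used at the Office of Pesticide Programs are not described in the post.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeRequest:
    app: str
    business_priority: int   # 1 (highest) .. 3 (lowest), set by the business
    severity: int            # 1 (critical) .. 4 (cosmetic), from tech assessment
    submitted: date          # date the SCR came in

def rank_key(scr: ChangeRequest):
    # Lower tuples sort first: severity, then business priority, then age (FIFO)
    return (scr.severity, scr.business_priority, scr.submitted)

queue = [
    ChangeRequest("GIS app", 2, 3, date(2011, 3, 1)),
    ChangeRequest("Loan package app", 1, 1, date(2011, 4, 12)),
    ChangeRequest("Tracking app", 3, 3, date(2011, 2, 20)),
]

for scr in sorted(queue, key=rank_key):
    print(scr.app, scr.severity, scr.business_priority, scr.submitted)
```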

And I eliminated the estimates.  I decided on contractor staffing based on how much work I could get through; I concentrated on further process improvements before I thought of increasing headcount. (You can read about the Kanban system that was set up on GovLoop if you so desire.)

To come full circle, the point where I once again found an estimate helpful in this environment was a potential regulatory change that was going to require a rather large piece of work on our legacy PowerBuilder app. I was asked how long it would take; upper management was interested in ensuring that we had enough lead time to get it done. Not having it done had a financial impact on the Agency.

Since I had a Kanban system implemented in Trac, I filtered that legacy app’s past enhancements down to similar ones and calculated the average and two standard deviations.  I gave them that range, stating that the high number gave us 95% confidence we’d fit within it (a sketch of the calculation follows). They deeply appreciated the accuracy and precision in this case. This is a form of estimation, of course, but the real point is that day-to-day we never estimated; there was zero value in it.  We did capture actual data using our system, though, which made predictability possible, just as I mentioned above.
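
A minimal sketch of that calculation, using invented cycle-time data (only the average-plus-two-standard-deviations idea comes from the story), might look like this:

```python
# Hypothetical sketch: forecasting from the cycle times of similar past
# enhancements, using mean plus two standard deviations as the high end
# (roughly the 95% level mentioned above, assuming the times are not too
# far from normally distributed). The data below are invented.
from statistics import mean, stdev

similar_enhancements_days = [18, 25, 22, 30, 27, 21, 35, 24, 29, 26]

avg = mean(similar_enhancements_days)
sd = stdev(similar_enhancements_days)

low, high = avg, avg + 2 * sd
print(f"Forecast range: {low:.0f} to {high:.0f} calendar days")
```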

Hopefully this will help others at least understand one context where estimates weren’t needed, and another where low-fidelity estimates were good enough to establish a reasonable target. I consider myself a no-estimates guy only because I look at the assumptions behind why I need to estimate, and if I don’t need to and can derive a more suitable answer in some other manner, I’ll probably use that.  It’s all a matter of context.

Yin and Yang in Change Management: Appreciative Inquiry and the Power of Habit

There are many change management approaches out there. Most focus on weaknesses you need to change; several others focus more on things to keep the same and build upon. Most change agents then further target using one approach or another, perhaps based on context, or perhaps as a ‘goto’ tool (you know what they say about goto statements – don’t use them!).

My preference is to balance between these approaches; two I have found really useful are Appreciative Inquiry and the Power of Habit.

I have always tried to help people and organizations find their strengths and build on those. This is the basis of Appreciative Inquiry, something I learned by reading the Thin Book of Appreciative Inquiry.  I’m looking forward to reading more on this, to be honest, as I think it is undervalued as an approach (I almost said under-appreciated…). Being able to identify – really, to help others identify – the core strengths they have and harness those for the changes they want to see is really powerful.

Here’s an example of how you might use Appreciative Inquiry: have the leadership of an organization (preferably with a sprinkling of people from lower down the totem pole) create a KrisMap of where they want to be, personifying what the future organization will become. Then have the people identify the strengths they can leverage toward the resulting characteristics and build action plans to achieve them.  Very powerful, and quite motivating, since you are using core strengths.

Yet only applying strengths does not help you eliminate weaknesses.  When talking about change such as what may occur in an Agile transformation, another approach to look at is the Power of Habit. When habits work against where the organization’s people want to be, then one needs to look at changing the habit to a new one while keeping the reward the same.

The habit loop can be defined as follows: a cue triggers a decision that needs to be made; once similar decisions are made repeatedly, following the same formula and providing some form of benefit or pain avoidance (the reward), that routine becomes the preferred one and a craving gets established.  This is true regardless of whether we’re talking about an individual, an organization, or a society. Keeping the reward the same while changing the habit allows new habits to become positively reinforced.  This can take some work, and I always recommend breaking the habit down into a causal loop showing all the steps being taken.  This helps in identifying leverage points that you can use (strengths, in Appreciative Inquiry speak) and possible side loops that could derail the change; essentially, risks to mitigate.

Lastly, remember that introducing change, regardless of approach, can be overwhelming. Limit the number of changes you are introducing at any particular point in time.  This gives you a chance to better sense the effect the changes are making and to respond accordingly.

For more information, see the latest version of my Taking Flight presentation.