Improv Games for Innovation – #InfoCamp Session

This was originally posted 2 Oct 2011.

At InfoCampSC, I decided to host a session on using Improv Games for Innovation to share some of what I learned from Mike Sutton at Agile Coach Camp.  It was a hit!

We played 3 games; I used the machine as a warm-up.  With 12 people, we had a nice machine running.  This machine was dubbed “The Library Stamper”.

We then moved into a game where I had 4 folks describe what they disliked or found problematic about using MS-Excel.  Teams of two would cherry-pick one of these problems and create a potential solution for it, then create a further idea off that one, and so on until they had 4 ideas.  Then they would go cherry-pick another problem.  We really got some creative answers.  Improv focuses you on just creating ideas, not judging them.

We next used teams of two, taking turns to create a story line.  They had 30 seconds for their first round and then 15 seconds for the second.  The story line was quite creative (I feel sorry for Google) and fun.

(The photo of this seems to have gotten lost; if I can find it, I’ll re-add it…)

I summarized how the concept of Yes And… is the way to create new ideas that build on each other.

We closed with creating a picture of how folks thought the session went. Each person got to add one line.  Here is the result…

[Image: maori_dragon-improvgame_result]

We concluded it was a dragon (perhaps with a Maori bent)…

What’s the Matter w/Unconferences

I’m sitting at an unconference and really feel compelled to write a note about what is wrong with MOST unconferences I attend…. Here is a definition so we can focus attention on what’s wrong:

An unconference is a conference organized, structured and led by the people attending it. Instead of passive listening, all attendees and organizers are encouraged to become participants, with discussion leaders providing moderation and structure for attendees.

Definition from http://whatis.techtarget.com/definition/unconference

The disappointing thing I am finding is that almost all of these have too many presenters or panels discussing ‘at me’. There is no true peer-to-peer discussion and/or hands-on learning. And more and more of them have their sessions planned in advance.

One session at the event I am currently attending had a title stating that the audience would decide what would be discussed, yet the speaker had slides! How in the heck could this person know what was going to be proposed? Rather, it was a case of twisting the proposals into what they desired to present.

I want folks planning these to be more conscious of this; please do not call your conference an unconference if you are having people talk at me.

Of course not all unconferences fall into this trap, but most of those that don’t seem to be open space events; I love open space, but a good unconference doesn’t have to use this format.

If anyone has a way of finding out beforehand which unconferences actually fall more into a typical conference format, let me know…

ACC US Session: How to Practice Agile Scaling w/o a Condom: un-SAFe Agility

I held a provocatively titled session to explore some of the issues surrounding SAFe, DAD, and other Agile scaling frameworks that prescriptively define the structure of how they work.  I personally don’t find it useful to attack frameworks; rather, I’d understand the context around them so that one can judge for oneself whether they will apply or not.  The path I took was to explore the problems, then why we are scaling, then finally the assumptions these hierarchical and rigid frameworks make.  I did this through time-boxed brain-writing portions followed by discussion of the results.  This was to eliminate the possibility of undue influence between participants.

We had several people come and go, but my core group was Karen Spencer, Kristen Tavelli, Dave Rooney, Brett Palmer, Susan Strain, Diana Wiliams, Darren Terrell, Sameli Zeid, Patrick Wojtak, and myself of course.  Brett in particular stated he found the session useful; he had just completed his SPC and was struggling with some of the rationale behind how to apply it.

Problems created when introducing SAFe (or other hierarchical scaling approach)

The following are the problems folks have seen when implementing these hierarchical approaches. (NOTE: we are talking about actual implementations seen and/or the way the framework is being prescribed to apply.)

  • unknown needs for why certain measures or metrics are required – most of the metrics these frameworks roll up seem intended to ensure that the same items are measured across teams, not necessarily what may be needed by the individual team itself
  • these metrics also seem to be used to compare team performance in a negative manner (and thus lead to teams gaming their metrics to keep from being viewed negatively)
  • it also seems to prescribe the same process across all teams (mostly around Scrum rituals)
  • often times the organization begins implementing SAFe without executive buy-in, in particular from the business side of the organization
  • it tends to make too many decisions upfront, with the product management/program level making decisions not just about the what but also about how work will be done, sometimes well before any team(s) begin solving it
  • it also removes decisions from the team around cadences and architecture, and constrains what improvements or experiments a team may do
  • after getting agile teams to drop formal roles and promote T-shaped people (generalizing specialists), most of these frameworks, and SAFe in particular, reintroduce unneeded roles
  • SAFe seems to focus on driving a release (train – get that damn train off my runway, says the aviator!); at the team level, teams are still left to simply struggle on their own (in fact, there seems to be no one they can truly turn to for impediment removal either)
  • with all these new roles and early decisions, this introduces unneeded coordination overhead
  • it reintroduces big upfront requirements, sometimes using lighter models, but sometimes falling back towards heavier ones
  • it reinvents Gantt charts with post-its
  • for teams starting their implementation of this sort of framework, it begins to force common processes and tools onto teams that may have evolved to a different set
  • also when beginning implementation, there seems to be a lack of communication to the teams about why such changes are needed; they are simply imposed without explaining the rationale behind them
  • and there may be some possible unsound assumptions being made as to the need to scale in the first place

Whew! That’s a lot of problems, but there must be a reason for scaling?

Why Are We Scaling? (What do organizations want…?)

So we turned our attention to why we are doing this in the first place.  Understanding the reason(s) may help us make more useful decisions on approaches and such.  Here’s our take on the whys of the organizations we’ve encountered…

  • there’s a silver bullet mentality; there must be one right way to get consistent results from all teams
  • there is a desire to help large programs adopt Agile across the enterprise with an approach that can be easily visualized
  • the above two reasons also seem to be a means for simply trying to organize a large number of teams and the people within them
  • for programs with large technical products, it can help them coordinate their activities, dependencies, and constraints
  • there may be multiple teams with multiple dependencies
  • often ‘programs’ are defined by the organizational structure that already exists or the budget that is provided to fund the work (it’s easier to sell large programs for large budgets that will produce large benefits than a collection of smaller products that may collectively and more loosely accomplish the same results)
  • there is a belief it will remove impediments more easily (removing an impediment for one team removes it for all teams)
  • there is a desire on management’s part to see consistency and predictability across all teams
  • and lastly in many Agile approaches, middle managers don’t see where they fit; they feel a loss of power – these hierarchical approaches show where they retain power and control

In particular, we discussed how some of these desires to retain hierarchy for coordination produce results that follow Conway’s Law.  The resulting development may have rigid and brittle interfaces.

Underlying Assumptions

So lastly we turned our attention to the assumptions these approaches seem to be based on…

  • a belief that management has limited insight into what teams are doing; our discussion revealed two parts to this – management expects information to be pushed to them as opposed to pulling for it, and secondly management believes all data coming to them should be identical for easy consumption
  • a fundamental belief that process is more important; essentially it is process that helps interactions between people
  • a belief that this is how one should scale/coordinate Scrum teams and not through simpler mechanisms such as Scrum of Scrums
  • along with the organization and budget point above, BIG Budgets = Importance = Easy Approval, versus having to ponder each smaller need/budget request on its merits
  • that management believes they will be able to see better productivity and identify where teams need to improve their performance
  • there is a fundamental belief that all development work is the same and thus should follow the same process
  • it assumes that organizations will customize the approach and not adopt it as-is
  • for organizations where they have removed these roles, introducing new roles will be easy (or having people swap from one role to a new one will be easily done)
  • it assumes all teams can standardize on a cadence
  • it assumes we must manage complexity from a central location; I mentioned that a wonderful book that explores where complexity can be managed in a decentralized fashion is Organizing for Complexity by Niels Pflaeging
  • there is a belief that effectiveness is derived via consistency or that efficiency yields effectiveness (or that they are the same)
  • the trade space (the trade-offs being made) between autonomy and measurement of effectiveness is assumed to be obvious
  • management should be able to continue as-is; as Agile moves out from teams, management should not be expected to change in its role
  • the architectural stuff that needs to be done does not equate to business value or is too hard to equate to business value, so we’ll manage it as separate items of work
  • and lastly hierarchies are a natural way for people to organize; people coming together for common purpose would naturally choose it as their preferred structure
  • one I mentioned at the end was that organizations (management) must start with an end structure in sight, rather than simply evolving toward a structure

Sameli also raised an additional assumption: that organizations can easily change their structure to match what SAFe has.

I hope you found what we discussed useful and that it will help guide your decisions on whether SAFe is right for you and/or how to customize it.  Start with the assumptions above to help you avoid the problems that may arise, regardless of whether you use SAFe or a similar approach, customize it, or decide on another approach altogether.

Game Mechanics Session – ACC Games Day

At Agile Games Day, I hosted a session where we took various game mechanics (mostly from boardgames, but some from video games) and then explored where these might be used to simulate or improve various things done in software development.

Here are the mechanics we explored (not exactly in the order we explored them):

Worker Placement

First was one I have explored extensively; the Worker Placement mechanic is useful to represent any time people or some form of resource are assigned to do something. It is a fairly hot mechanic in the boardgame world.

I’ve seen this play out specifically in good stand-ups where people state they are committing to work on specific stories or tasks. A few others noted that it could be useful for other commitment actions. I’ve used this in several of my simulation games; the most meaningful one is my OPTIMUS Prime game.

Event Deck

We also discussed that during a game/simulation we may want specific events to occur (either positive or negative). These are best captured with some randomization (shuffling, for example), or if a specific order is needed, they can be ordered (see deck-building). A form Doug Alcorn mentioned was cards used to alter the rules – much like Fluxx; this could be useful for the effects of, say, a CI server or automated tests being put into play (a positive effect) or management interference (a negative effect).

Some events may only take effect if a player has a specific knowledge (or lacks a specific knowledge).

Another person mentioned (I didn’t catch who) that the 8 Lean Wastes, as temptations, might be incorporated into an event deck… I’m planning on noodling on this as it sounds intriguing.
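As a rough sketch of how an event deck might work in code, here is a hypothetical Python model (the card texts, effects, and function names are all invented for illustration, not from the session):

```python
import random

# Hypothetical event cards; the positive/negative effects are assumed examples
EVENTS = [
    "CI server comes online (positive: builds verify every commit)",
    "Automated tests added (positive: defects caught earlier)",
    "Management interference (negative: lose a turn)",
    "Key teammate pulled away (negative: capacity drops)",
]

def build_event_deck(events, rng=random):
    """Shuffle for random events; skip the shuffle if a scripted order is needed."""
    deck = list(events)
    rng.shuffle(deck)
    return deck

def draw(deck):
    """Draw the top card of the deck."""
    return deck.pop()
```

A Fluxx-style rule-changing card would then alter the game state when drawn, rather than just reporting an event.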

Role-Playing

Another analogy I like to use is that software development is like a (cooperative) role-playing game. Team members are like characters with certain skills. From release to release is like playing a campaign, while each individual release itself is like an adventure. In these release ‘adventures’ you learn new skills or acquire new special ‘gear’ like CI, automated tests, etc. that will help you along future releases.

There seemed to be some common agreement that having character sheets might be a fun way of gamifying learning.

Power Ups

You can view these new skills or gear as Power-Ups also; a common element in video games. Mostly these are permanent in nature (which is similar to levels). Some temporary power-ups could be the temporary removal of constraints or impediments or the use of swarming to temporarily increase capacity.

Tech Tree

Much of the ‘gear’ acquired by teams to improve performance follows a progression of sorts; this is similar to what is known as a Tech Tree in board games. For example, a team needs a source code repository and build scripts before a CI server can be used effectively.

Variable Player Powers

When each player has a different set of powers or skills, this is known as variable player powers; again, this is something that could be useful to simulate. The GetKanban game does this in reverse, by allowing team members to work in different swimlanes but reducing their ability to do work there.

Role Selection

When you want to allow people to consciously choose a role or job that is distinct from others’, this is known as role selection. This could be useful, for example, when a person takes on the role of, say, a tester, even if they may be a developer. I personally could see using this mechanic (combined with power-ups) in a game to help show the usefulness of developing T-shaped people, and we discussed developing an awareness of other roles. We also discussed using it along the lines of 6 Hats Thinking.

Simultaneous Action

When people make decisions at the same time, this is known as simultaneous action (or selection). In Planning Poker, each team member selects a story point value and all reveal them simultaneously to see where people think the complexity is. Effectively, when people state what they are committing to work on during stand-up, this is also a simultaneous action (of worker placement).

Other ways this plays out on Agile teams: using a Fist of Five or Roman Vote to gauge commitment or understanding, or silent brainwriting exercises so that people aren’t biased by answers already given (often done in retrospectives). Simultaneous surveys also simulate simultaneous action.

Hidden Information/Perfect Information

Some information is hidden (example: secret orders in Diplomacy) and some is available in plain sight (example: Chess). In most cases we want to help hidden information become perfect information, particularly if it is important to the team (known as transparency). Some areas we explored were the discovery of acceptance criteria and developing people’s journey lines or journey maps to further understanding of each other. We also discussed that this might be combined with an event deck to expose additional information as events unfold.

Deck-Building

Deck-building is creating an ordered deck for play; this is very similar in nature to creating prioritized backlogs.  We also discussed where this may be useful for ordering strategy actions to possibly counter risks.

Dice Rolling

When you need to simulate a random element, rolling dice can be an effective means to do so. Multiple dice will produce bell-shaped probability curves, while a single die gives discrete outcomes, each with the same probability of occurrence. If you plan to use dice in a game, I recommend the highly useful site http://anydice.com
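To make the curve-versus-flat point concrete, here is a small Python sketch (my own illustration, not from the session) that computes the exact distribution of totals for any dice combination:

```python
from collections import Counter
from itertools import product

def distribution(sides, num_dice):
    """Exact probability of each total when rolling num_dice dice of a given size."""
    totals = Counter(sum(roll) for roll in
                     product(range(1, sides + 1), repeat=num_dice))
    outcomes = sides ** num_dice
    return {total: count / outcomes for total, count in totals.items()}

one_d12 = distribution(12, 1)  # flat: every total from 1 to 12 has probability 1/12
two_d6 = distribution(6, 2)    # peaked: a 7 is six times as likely as a 2 or a 12
```

This is the same calculation anydice performs; for a game you would usually just roll, but the distribution tells you which totals to attach rare events to.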

Quest Mechanic

Last, either Doug or Ryan Ripley (I forget who) mentioned that the Quest mechanic is very useful for the actions decided on in a retrospective.

Let me know if you have either useful analogies or uses of mechanics!

Agile Coach Camp: Exhergizing

So it’s now roughly 24 hours since the circle was closed and, after a good night’s sleep, I am physically recharged.  I was mentally recharged the whole time…

Our Canadian friend Bryan Beecham (@billygarnet) tweeted that he was simultaneously inspired, refueled and energized, and Mike Bowler (@mike_bowler) tweeted that he was exhausted.  I feel this way after just about every Agile Coach Camp, so I’ve coined a Boosism (think strategery, only better): Exhergized – the state of being simultaneously exhausted and reenergized – usually the exhaustion is physical and the reenergizing is cognitive in nature, but I suppose they could be reversed.

I’ll be posting more about the Camp in the upcoming days, but thought I would get that tidbit out there…

Helping Managers Become Personally Agile

This post originally appeared on 5 October 2011. I thought it fitting to repost this as I go into the next Agile Coach Camp.

I held a session at Agile Coach Camp for folks to discuss how to get management to become agile on a personal level as a means to help them understand at least some of the aspects of being Agile as a team.  This was based on my experience of using Personal Kanban and the Pomodoro technique. ( See http://agilescout.com/how-to-be-agile-now-with-tomatoes/ )

I was interested not only in spreading this ‘gospel’ so that those coaches who worked with management could begin utilizing these techniques in their management coaching, but also in finding other techniques I could use.  The following is a summary of my session:

We explored some things to consider:

  • mid-level management may be more open to adopting various techniques than senior management
  • start with helping managers understand what the last responsible moment is
  • encourage face-to-face communication over email and other written forms
  • set-up regular standing management meetings (preferably as stand-ups)

The first big highlight for me was using a timebox for the meeting and a Meeting Kanban for managing the agenda items.  Every manager places items in the ‘to be discussed’ column as they come in; late? You don’t get to add to the agenda. As items are discussed, they move into the ‘being discussed’ column (which has a WIP limit of 1) and then finally to the ‘discussed’ column.  A separate action-items Kanban is where action items go. Undiscussed items roll to the next meeting, going into the ‘to be discussed’ column again.  This gets updated at the next meeting. I’ve done this now twice with my boss and once at our weekly Branch meeting.  It seemed to move us along more efficiently.  Thanks to @topsurf for this suggestion.
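A minimal sketch of this Meeting Kanban as code (hypothetical Python; the class and method names are mine, modeled on the columns described above):

```python
from collections import deque

class MeetingKanban:
    def __init__(self):
        self.to_be_discussed = deque()  # agenda items, in arrival order
        self.being_discussed = []       # WIP limit of 1
        self.discussed = []

    def add_item(self, item, meeting_started=False):
        if meeting_started:
            return False                # late? you don't get to add to the agenda
        self.to_be_discussed.append(item)
        return True

    def start_next(self):
        if self.being_discussed:
            raise RuntimeError("WIP limit of 1: finish the current item first")
        if self.to_be_discussed:
            self.being_discussed.append(self.to_be_discussed.popleft())

    def finish_current(self):
        self.discussed.extend(self.being_discussed)
        self.being_discussed.clear()
```

Undiscussed items simply stay in to_be_discussed and roll into the next meeting’s board.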

We brainstormed a little about how to ask the questions that identify the last responsible moment.  Do we have to make that decision now?  Do we know enough to responsibly make that decision now? What if we wait on making that decision?

Another item we discussed was playing the Elimination of Waste game to help create an understanding of a win-win.  The idea is that time spent on wasteful activities is usually demoralizing to team members and costs the organization money.

Back to meetings…

Use a speaking token? Use it for granting someone the opportunity to speak on a subject, but not as something to force someone to speak (at least most of the time).  The idea is that everyone has an opinion, but not the same one; help those opinions be heard.

Also encourage the use of idea cards, question cards, and topic cards at meetings as ways to get information and opinions out in the open.  Dot voting encourages group participation.  Perhaps do this online?  That way no one can see how the boss votes. ( See http://www.dotpoll.info )

We discussed some questions about what Agile may need in order to succeed and be sustained in the organization – the real reason we care that management picks this up…

We concluded that without both top down support and grass roots motivation, Agile will not succeed in the organization.

We concluded the session with some other quick-hit items we discussed around helping Agile succeed:

  • adopt the terms of the organization rather than forcing those of Agile (or of a particular approach)
  • perhaps try out some of the Agile strategy mapping Dave Sharrock discussed in a prior session

Introducing The Facilitation Kernel

Now that I’ve reposted a few older posts, I’ll give a new one…

One of the things I often get called upon to do is facilitate; meetings, workshops, retrospectives and other occasional agile ceremonies are all sessions I get called upon to facilitate.  I also find myself facilitating teams talking to one another (which actually extends to encouraging them to get together, not just the resulting meeting session) and sometimes what normally would be one-on-one sessions.

A few years back, I took one of the ICAgile certified courses on Facilitation; they presented what they called the Facilitation Stance.  It’s useful.  (Because of possible IP ownership issues, I won’t present it here…) One thing that didn’t feel right was the treatment of maintaining neutrality as a facilitator; it wasn’t treated as core.  As I trained others on facilitation, they also seemed to question that lack of centrality.  Another area that I personally got, but others struggled with, was “standing in the storm”. So I began rethinking how to depict the concepts and came up with what I think is something easier to understand.

[Image: Facilitation_Kernel_Final]

I call this the Facilitation Kernel.  It places Maintain Neutrality at the center of the entire concept.  This is important because, if I am asked to render an opinion and am no longer a neutral party, the entire rest of the Kernel can be sacrificed.  This is particularly true if I am asked to give insight from experience or observations.  The Facilitation Stance doesn’t make this as explicit as I would like (though it does acknowledge it).

My personal feeling is that the ‘Stance’ overcomplicates itself with the internal “being” and external “doing” (of which maintaining neutrality is an external “doing”).  This may be just me, but I find neutrality at the core.  In the “doing” circle, I place Modeling Servant Leader Behaviors, Leading the Group’s Agenda, Promoting Dialog, Decisions, and Actions, and Harnessing Conflict. Let’s dissect these one by one:

Modeling Servant Leader behaviors is very important to exhibit as a facilitator; you are there for the team and to serve them.  You are not there to serve someone else or yourself.

By Leading the Group’s Agenda you are not just doing the Stance’s Holding the Group’s Agenda; you are also leading the group through their agenda, whether explicitly or implicitly, through the design of the session or by keeping a watchful eye and ear on what is occurring and needed.

In Promoting Dialog, Decisions, and Actions (which encompasses the Stance’s Upholding the Wisdom of the Group), you are gently nudging the group to a bias of action versus inaction and making assumptions explicit so that good decisions can be made.

And lastly by Harnessing Conflict you are doing more than simply “Standing in the Storm”, but are helping people through their differences to a positive outcome.

To do this, you need to maintain three states of “being”: self management (which IMHO encompasses self-awareness), group awareness, and situational awareness (this may be my aviation background talking).  The alignment I have chosen in the model is important.  In order to Model Servant Leader Behaviors, I need mostly to manage myself; the situation and group awareness are far less important.  To Harness Conflict, I need to be aware of where the group currently is (in terms of emotional state and energy) and of the situation at hand (in terms of positions and opinions).

I placed the Lean and Agile Values & Principles outside this Kernel because, if I weren’t facilitating in this realm, they could be replaced by some other set.  I think this makes the Kernel fully aligned with what any general facilitator may provide.  I know I have found this useful when considering facilitating more generalized sessions such as Open Space (which I have had the opportunity to do twice) and various workshops.

What do you think? Is this congruent with your thinking on facilitation?

The Economics of Agile Communications for Requirements

This post originally appeared on my BoosianSpace blog on 28 October 2011. Some minor updates were made.

I’ve been reading Democratizing Innovation by Eric von Hippel.  One of the items he talks about is the cost of information transfer from innovation user to innovation creator.  In his context he’s demonstrating why uniqueness causes innovations to be grown internally by organizations as opposed to being bought off the shelf.

This got me to thinking on a challenge we see in Agile Adoption, explaining the reason we want lighter-weight documentation and more face-to-face collaboration.  I got a small inspiration on the economics of the two opposite ends of the spectrum.

Let’s start with the sequential phased-gate (aka ‘Waterfall’ or as I prefer to call it a ‘Canal’) approach.  Here’s what typically happens:

A set of business or systems analysts creates a document. This gets approved by the business user(s) and often management.  This then gets distributed to the development team.  They theoretically read the whole thing through once and understand it perfectly (a single set of communication paths: one document to N people reading it).  So here’s what the formula looks like for communicating that information throughout the entire team:

Xfer$W = Labor$avg x [ [(Ncreators x CreationHrsavg) + (Napprovers x ApprovalReadingHrsavg)] x Cyclesapproval + [(Nteam – Ncreators) x ComprehensionReadingHrsavg] ]

In words: the transfer cost equals the creation labor hours (number of creators x average creation time for the documents, as this is what communicates the information to the analysts creating it) plus the approval labor hours (number of approvers x average time to read the resulting documents, as this is the communication to the business representative(s)), multiplied by the number of approval cycles, plus the comprehension hours (number of remaining team members who need to read the approved document x average time to read), all finally multiplied by the average labor cost per hour.

Let’s see this in action as an example with a team of 6 and 1 business user that has to approve the requirements on a small application development effort:

Xfer$W = $100 avg hourly rate x [ [(1 analyst x 120 hours creation time) + (1 approver x 4 hours reading to approve)] x 1 cycle + (5 remaining team members x 40 hours to read and fully understand the requirements) ] = $100 x [(120 + 4) x 1 + 200] = $100 x 324 = $32,400
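The formula and example can be sketched in Python (the function and variable names are my own paraphrase of the terms above):

```python
def waterfall_transfer_cost(labor_rate, n_creators, creation_hrs,
                            n_approvers, approval_hrs, cycles,
                            n_team, comprehension_hrs):
    """Transfer cost for the phased-gate ('canal') approach."""
    # creation plus approval reading, repeated for each approval cycle
    creation_and_approval = (n_creators * creation_hrs
                             + n_approvers * approval_hrs) * cycles
    # the rest of the team reads the approved document to comprehend it
    comprehension = (n_team - n_creators) * comprehension_hrs
    return labor_rate * (creation_and_approval + comprehension)

cost = waterfall_transfer_cost(100, 1, 120, 1, 4, 1, 6, 40)  # → 32400
```

The same function covers the change scenario later on: swap in the correction hours and re-reading hours and it yields the change cost.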

Two primary assumptions here: an approver won’t be as interested in reading it in detail, as they supposedly know the requirements, and thus will not pay as much attention to the document he or she is signing off on… AND, more importantly, the team can read the document ONCE and it contains EVERYTHING they need to know.  It is perfect, nothing is missing.  These numbers aren’t exactly realistic of course; most projects would take longer and would involve more signatories and more cycles to get sign-off.  I’ll be discussing the costs of change using this model in a bit.

Now let’s look at the same communications using an Agile approach…

In the Agile approach, the entire team is involved in the creation, which now includes the business owner/manager.  There is no need for a sign-off, as he or she is directly involved.  There is also no need for the development side of the team to spend time reading the documentation, since they are directly involved in creating it.  To reflect this in time, the effort to communicate the knowledge (and artifacts) equals the number of communication paths in the team multiplied by the amount of effort (the average creation time) each person puts in, divided by the number of people assisting in the communications (i.e. the number on the team).  Also, since the business owner is involved throughout the development process, there is only one cycle (for the life of this project).   Thus, our equation becomes the following:

Xfer$A = Labor$avg x [(CommPaths / Nteam) x CreationHrsavg]

Where CommPaths = (Nteam – 1) + (Nteam – 2) + … + 1 = Nteam x (Nteam – 1) / 2

The assumption here is that the average creation time per person is the same as the creation time in a canal environment; i.e. the scope is the same.  Since this is done throughout development by all members of the team, we know this will not be one solid time block and will involve more people.  The effort to distribute the information, however, is represented by the number of paths involved divided by the people trying to move the information along those paths.  This is why the communication paths variable is the numerator and team members the denominator.

For our example of a team of 7 (since the business owner is now a part of the team),

CommPaths = (7 – 1) + (7 – 2) + … + 1 = 21

Xfer$A = $100 x [(21/7) x 120] = $100 x [3 x 120] = $100 x 360 = $36,000
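The same in Python, with the communication paths collapsed to the usual n(n – 1)/2 pair count (function names are mine):

```python
def comm_paths(n_team):
    """Each pair of team members is one communication path: n(n-1)/2."""
    return n_team * (n_team - 1) // 2

def agile_transfer_cost(labor_rate, n_team, creation_hrs):
    """Transfer cost when the whole team (business owner included) creates together."""
    return labor_rate * (comm_paths(n_team) / n_team) * creation_hrs

cost = agile_transfer_cost(100, 7, 120)  # → 36000.0, matching the $36,000 above
```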

You’re probably wondering where the savings are…  This looks like a wash. It isn’t.  What comes into play is the cost of change as it occurs over the project.  To truly understand the costs, though, we need to discuss what happens over the life of the project.  In the ‘canal’ project, if we make a change, we have to go through the same expensive communications path as the initial development.

Xfer$W = Labor$avg x [ [(Ncreators x CorrectionHrsavg) + (Napprovers x ApprovalReadingHrsavg)] x Cyclesapproval + [(Nteam – Ncreators) x ComprehensionReadingHrsavg] ]

Let’s use our example and say we had a change that requires roughly a quarter of people’s time to produce version 1.1 of the requirements specification:

Xfer$W = $100 x [ [(1 x 30 hours) + (1 x 1 hour sign-off)] x 1 + (5 x 10 hours comprehension) ] = $100 x [(30 + 1) x 1 + 50] = $100 x 81 = $8,100

So now the total cost is $32,400 + $8,100, or $40,500; each time we go through a change the cost goes up by some amount.

Going back to the Agile side: because we perform the requirements communication throughout development and defer to discussing only the requirements needed for the next piece of work, changes and, more importantly, the associated communications are already baked in.  We haven’t defined it all upfront and then distributed it for use once.  Thus, the additional costs for the next distribution are near zero.

We expect requirements to change.  We defer unknown things as late as we responsibly can (possibly the last iteration, if the work can be done in one Sprint) so that the risk of needing to change them is minimized. Thus our costs are not going up with changes; they remain basically flat.  In the sequential phased-gate scenario, one significant change could ‘wipe out’ the supposed savings you saw in the simple calculation, which optimistically presumed that everything worked perfectly the first time.

 

Note: I am not an accounting type by nature; this just seemed like a logical fit, and I am trying to find empirical evidence that supports or contradicts it.  If you know of some, it would be appreciated!  Just post below with the sources you are using.

BTW, I have also toyed with the idea that requirements (stories) that need to change during the development cycle carry the cost of the original one, but multiplied by the probability that they are still in the backlog and not yet done.  If you add up the percentages as buckets of 10% along the project and divide by 10, the average likelihood this occurs is 50%, and the cost would then look like the following example:

CommPaths = (7 – 1) + (7 – 2) + … + (7 – 6) = 21

Xfer$A = $100 x 50% of [(21/7) x 40] = $100 x 50% of [3 x 40] = $100 x 50% of 120 = $100 x 60 = $6,000, so the total cost of changes accumulates at a slower rate.

During project execution, you could actually use a real rolling percentage of stories closed over total stories.
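That rolling-percentage weighting could be sketched like this, where the weight is the fraction of stories still open when the change arrives (all numbers and names are hypothetical illustrations, matching the example above):

```python
def weighted_change_cost(labor_rate: float, n_team: int, hours: float,
                         stories_open: int, stories_total: int) -> float:
    """Change cost weighted by the probability the story is still
    in the backlog (rolling % of stories not yet done)."""
    paths_per_person = (n_team * (n_team - 1) // 2) / n_team
    p_still_open = stories_open / stories_total
    return labor_rate * p_still_open * paths_per_person * hours

# 50% of stories still open, 40 hours of rework communication, team of 7
print(weighted_change_cost(100, 7, 40, 50, 100))  # 6000.0
```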

Recommendations for a PM to start using an Agile Approach

NOTE: This post originally appeared on my former BoosianSpace blog on 2 Nov 2011.

I have a colleague here at EPA who was very interested in getting started using an Agile approach to help her produce a better project (or really the software application they were to build).  I’m going to repeat and expand our conversation in the hope that it may prove useful for others.  This is the set of recommendations I had for her based on her context, and I’ll start with a brief overview of that context.

Project Context

This will be a greenfield development project; i.e., there is no legacy code to worry about or legacy data to migrate. It is intended to be a public-facing web application.  The infrastructure is fairly open, but will need to incorporate GIS services that deal with watersheds; if she finds it useful, she may utilize a cloud service to host this.  As a public site, it needs to comply with Section 508.  There is some consideration for a mobile app as well…

The development work will be done by an offsite contractor; to her knowledge, this contractor has not yet done any Agile development projects.  The GIS services portion will most likely be developed by a specific subcontractor on that team, and another may provide UX.  Her biggest constraint will most likely be available funds; schedule and scope (hopefully) can both be in play.

Finally, she has a very interested Govt product owner and a group that is interested in participating that represents stakeholders of urban watersheds.  The target goal is to have people represent the interests and activities occurring at particular local, small watersheds and then utilize GIS services to identify the larger watersheds these are a part of so the relevant groups of interest (e.g. Chesapeake Bay Foundation) can be portfolio managers of these watersheds and provide upper level guidance.

Initiation

She was interested in getting started.  I had previously recommended some books/directions for learning, and I’ll repeat some of those here as they apply.  But she was interested in some specifics.

My first recommendation was to develop a project charter; this should have the following:

  • A description of the project goals and risks
  • A ranking of these goals, each worded as a simple measurement; I recommended having 4-6 of these in the Project Success Sliders format.  This allows people to understand that trade-offs must be made.  How she gets there should be a facilitated project charter discussion.
  • If needed, a description of roles and responsibilities of the organizations contributing to/participating on the project.  I’m not sure this is necessary given she has an engaged stakeholder.

Finally, I’d suggest that the project charter have a high-level prioritized set of functional areas/epics as a roadmap for what will be developed (essentially it should include a release plan).  If she is able, she can get high-level time estimates for these from the potential contractor, add some management reserve, and then calculate the funds needed for development plus a maintenance estimate, presuming a product life-cycle of 5 years.  The roadmap should have the highest-business-value and riskiest items first, then simply high business value, then simply risky, and lastly other items that are desired.  This will ensure that risky items have less opportunity to hold up the project in its delivery of high-business-value items. Once funded, only a subset of the roadmap’s activities can be met; this subset becomes the release plan.

Contract Considerations

I’d recommend doing this as time and materials with an award fee.  The bid should be in two parts: the initial development to release 1.0, and the long-term software maintenance of the resulting application as an option, perhaps at a fixed yearly cost.  I recommended a warranty period (perhaps 60 days) to assess how well the application is doing from a quality standpoint. Depending on how good (or bad) the application turns out to be, you can execute this option: if it’s really good, execute the option if it is a good deal; it also potentially gives you a point for renegotiating.  If it is really bad, the team that developed it is probably still the best team to maintain it, but there would need to be some incentives around improvement.

I recommended that the contract call for a dedicated team and that team’s full participation along the entire development project AND the optional maintenance component if executed.

I’d make the award two-fold, the execution of the maintenance option is one.  The bigger one though is that if the contractor delivers under budget (the contract ceiling on development) and the quality is an acceptable level; then the remainder of the funds get split in half – the contractor gets that as pure profit, the agency can deobligate the other half and use it for something else.  It’s a win-win.

These above recommendations need to be worked out with the Contracts Office.

Recommended Agile Approach

I recommended starting with Scrum as an Agile Project Management framework. I made this recommendation based on a few things:

  • It is lightweight and supports rolling wave planning so that detailed tasks can be articulated just-in-time
  • It looks as though there will be an engaged product owner and a set of actual users that can be tapped to provide rapid feedback
  • Given she and the product owner will get a set amount of funding, which will determine which prioritized epics can be accomplished, she will need to be able to measure progress; Scrum’s velocity technique is useful for this.  As an initial start, I recommended 2-week Sprints; if the team finds it consistently cannot complete what it pulls in, regardless of how much is pulled in, perhaps shorten the iteration to one week.
  • Given the stakeholder audience’s unfamiliarity with techniques such as planning poker, I recommended the concept of inch-pebbles: all tasks/stories should be broken down in Sprint Planning to a work estimate of no more than 2 normal workdays.
  • The initial Sprint planning session should be expected to be about 4 hours.  All remaining ones could be planned to be 2 hours.  The Sprint Plan should be the prioritized backlog of stories/tasks and also identify when subject matter expertise is needed.  This will allow an estimate of when these people need to be available to provide information as requirements.
  • Sprint Reviews should be scheduled for about 2 hours and consist of a demo of ‘done’ software; the definition of ‘done’ should be very clear and agreed upon by all parties.  I’d recommend ensuring it is deployment-ready software: it’s been coded, tested, added to the build, and had some amount of regression testing done.  Again, because of the lack of expertise, I’m not counting on any continuous integration or automated test suite being activated; regression testing will in reality be smoke tests.
  • Retrospectives should be scheduled for about 2 hours and directly follow Sprint Review.   This needs to be sold to the product owner as how the team can improve AND possibly deliver more or deliver what can be done at lower cost while still maintaining quality.  It won’t guarantee it, but it will improve the chance it will happen. I recommend conducting the Retrospectives using the format described in Esther Derby and Diana Larsen’s book Agile Retrospectives.
  • Try to make the Retro immediately follow the Review and the next Planning session immediately follow the Retro.
  • Plan for a daily stand-up of 15 minutes max.  Investigate some form of teleconference/video conference capability for this, and do the same with some online white-boarding, mind-mapping, etc. for the Sprint Planning/Review and Retros, since the team will be distributed.  If possible, try to bring as much of the team together as possible for the Review/Retro/Planning sessions.

Recommended Technical Practices

I’ll conclude with the set of technical practices I recommended.  Due to the team’s lack of experience, I didn’t recommend too many; I tried to focus on a few key items that ensure the team delivers high-quality software and that whatever is delivered meets the requirements specified.  The entire scope may not be completed, but you want what is delivered working correctly and properly.

Good SOLID principles will be the foundation.

Use of Specification by Example (see the book of the same title by Gojko Adzic), whether automated with Cucumber, Lettuce, FitNesse, or JBehave, or performed manually, will ensure the software meets requirements.  It makes it easy to iterate over the requirements as well.

Develop one scenario at a time.  Use unit testing and develop tests before coding.  Once a test has been written, check it into the source code repository.  Once the code has been written and passes the test, check it into the source code repository. Iterate until the scenario test passes, then move on to the next scenario.  Once all scenarios have passed, move to the next requirement/example set.
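As a tiny illustration of that test-first rhythm (the scenario, function, and names here are hypothetical examples of mine, not from the project), in Python:

```python
# Scenario (hypothetical): "a watershed name is trimmed and title-cased".
# The test is written first; it fails until the function is implemented.

def test_title_cases_and_trims():
    assert normalize_watershed_name("  chesapeake bay ") == "Chesapeake Bay"

# Implementation written only after the test above existed and failed ("red"):
def normalize_watershed_name(raw: str) -> str:
    return raw.strip().title()

test_title_cases_and_trims()  # now passes ("green"); check in, next scenario
```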

At least every 2-3 days, have someone execute selected sets of the specs fully against the entire app to date to see if any bugs have crept in as new features get implemented.

Use an issue tracker for the tasks/stories and also to track any bugs that show up during regression tests; I recommend Trac.  It’s OSS and works well.  It allows implementation of a pull process: as developers pull stories from the backlog to work on, they immediately get assigned as owners.  This makes it easy for the PM (Scrum Master) to see what is being worked at any point in time.

I hope folks find this useful for how to ease into being Agile if they have a similar context.  There is no one-size-fits-all approach though, so consider this as just one approach.  Because there is a clear product owner, interested user representation, and it is a greenfield project, this was my set of recommendations to try initially.  Using Retrospectives, hopefully the team will adapt what I described above to meet their needs.

Welcome…

This is my new blog, replacing my older one, BoosianSpace, which will remain on Tumblr for a short while.  I’ll be replicating some content from that one over time, updating my articles as I do.

On this blog, I’ll explore various Product/Portfolio Management, Agility, UX, and Leadership concepts and how to implement them.

So why Nimblicious? Well, I want to Make Agility Taste Great