Facilitative Leadership Overview


In my last post I brought up the concept of a facilitative leader; so what do facilitative leaders do, and how do they effectively lead?

What facilitative leaders do

I won’t go into exhaustive detail here, as this could itself fill several posts; however, it is important to have some idea of what makes a facilitative leader distinct, and that is the behaviors they exhibit. We’ll discuss these behaviors as if they sit in the upper right of the Leadership Quadrant.

So in this space, a facilitative leader exhibits a desire to serve others, much like the servant leader described by Robert Greenleaf. They are also participatory in nature: rather than, say, define a plan for a group to work towards a goal, she or he helps the people create the plan so that it is theirs. Thus a facilitative leader is one who helps the group collectively solicit and select creative ideas for the work and commit to completing it.

They also help individuals cope with their ever-changing roles and responsibilities as the team organizes and executes the work. They act as outside observers and offer improvements to the group and to the organization at large. They help the group gain clarity on the goal. They lead through influence.

How facilitative leaders effectively lead

As we explored in the last post, in order to be an effective leader, particularly when using influence as your primary mechanism, you must maintain good will with those you are leading.

[Image: the Will equation]

When your actions are the opposite of what you say you will do, they work against each other and your will approaches zero. Since influence is based on will, this reduces your leadership effectiveness.

Here are a few examples. I say I have an open-door policy and will listen and attend to people’s needs. If people bring their needs to me and I never listen, perhaps always finding ways to dismiss them, or I never take action when I say I will, I am undermining my will and thus my ability to influence behaviors, my primary mechanism to lead.

If, on the other hand, I state I will observe where people appear to have roadblocks and help them through them, and then attend stand-ups, hear of impediments outside a team’s control, and visibly take action on them, I gain will to get things done.

Side note: for most of this article I referred to people as a group. That was to emphasize two aspects: 1) this can be done in a non-team environment, particularly if you are a leader who has authority, and 2) you don’t actually need to have authority to influence folks through will. This is generally not true when you are directive in nature; there you need to have been granted authority in some manner.

The Dimensions of Choosing a Scaling Approach

[Image: boy climbing a ladder into the clouds, from Issue 26 of Compute! magazine, July 1982]

As many folks know, I have been exploring what scaling means in various discussions. I am no fan of the Scaled Agile Framework (SAFe); I got, and let lapse, my SAFe Agilist certification. I chose to get it mostly because it was offered to me at extremely low cost AND it allowed me to hear what it was about straight from someone certified in it. I am not here to bash it, though; it has its place. It is not for EVERYWHERE you need to scale, though, as it is sometimes portrayed. This post will explore the scaling approaches available to you and when to apply them.

Let’s start with some “definition” of what is meant by scaling first…

Generally, when folks say they want to scale something, it means they want to expand its use or its capacity. So now, to fill in the ‘it’: to expand Agile’s use is to replicate Agile teams across more of the organization, and to expand Agile’s capacity is to allow what is currently working to do more. These are different classes of needs. The act of replicating Agile teams (and more importantly their benefits) is solved by “scaling out” and requires thought on cultural change, choices of approaches and practices, and how these teams should be instantiated and organized. The act of expanding the capacity of current Agile teams to accomplish more is “scaling up”. Here the choices are about how to help existing teams work together effectively and gain more product throughput. The confusion over which of these applies stems from the fact that both expand overall organizational capacity.

So I’d like to provide a means of thinking along a couple of dimensions to determine which one applies as you decide how to scale. People in an organization may choose different approaches based on what their needs are at any one time, but I want folks to understand when and why to choose specific approaches. I plan to apply the Cynefin framework to classify problem spaces, simply as a means of determining what types of approaches may be more effective. Lastly, I want this to focus on the end result of your organization’s value stream and what it needs, not on simply making choices for your organization in a vacuum.

To do this, let’s look at the following graph; it has two dimensions, both attributes of the end product lines of a value stream. I use the term product lines (these could also be service lines) to indicate that the products have an inter-relationship. The vertical dimension is one of interdependency among products, which together form a product line. Let’s take a concrete example: an Enterprise Resource Planning system. There is a core product and a set of product modules; this is a product line produced by a value stream. The company may have a totally unrelated product line, say machinery control software used in factories, which is the end result of a different value stream.

The second dimension is how responsive a value stream (and the products produced by it) may need to be to the market. (For organizations not driven by a market, say government agencies, replace market with mission.) As the market changes, so do the needs of the resulting products (or services) the organization is providing.

[Figure: Scaling dimensions]

These two dimensions can define the ‘space’ for our organization’s business agility needs. So let’s explore this space now to understand how and when to apply scaling….

If our market (or mission) is slow to change (i.e. our demand for market responsiveness is low) AND we have few products with interdependencies, then we are in the Obvious domain (this used to be called the Simple domain in Cynefin terminology). In this space, we have a simple, stable product line. We are probably the market leaders with little competition to worry about. If this is our domain, we don’t need to worry about scaling; if we are transitioning to use Agile/Lean approaches, we are probably doing this to remain ahead of our competition as a proactive component in our strategy (or maybe we have always been Agile or Lean). The key here is few products and the need to respond to external market forces is low; our need for agility is low.

So what if the market is rapidly changing or the mission is rapidly evolving? Our need to respond is high… This is where start-ups generally find themselves; they are constantly reacting. This is the Chaotic domain. If our organizations are still exploring how to fit customer needs, there may be several competitors trying to do this as well. There are no market leaders yet. Or maybe we have a small product line and have found ourselves facing new and stiff competition. This demands agility, but not a need to scale as the product line inter-relatedness is low.

As the number of our product lines increases, so does our need to scale our Agile capacity. At first, this may be accomplished simply by using lightweight activities like a Scrum of Scrums to help coordinate interdependencies. Eventually, though, we’ll need to think more formally about how we want to scale; when we look at interdependent product lines, we must decide whether a scale-up or a scale-out approach is more appropriate.

So let’s return to where our now interdependent product lines have market stability; the demand to respond to market changes is low. The primary driver for any agility now is to coordinate product line activities into cohesive releases. We may be a market leader across most, if not all, of the interdependent products that make up our line. A scale-up approach can handle this need for cohesiveness via coordination. We can roll the products into a program and use it to coordinate activities, so a hierarchical approach to organizing will work. Approaches like the Scaled Agile Framework (SAFe), Disciplined Agile Delivery (DAD), and the lesser-known Enterprise Agility framework can be applied. We can take time to analyze the situation and provide a means for gathering product needs and rolling them out to the product teams; this is the Complicated domain.

If the need to respond to the market is high, though, each individual product (within the product line) needs to evolve fairly rapidly so that it can meet customer needs. This does not mean that there should not be some form of congruency among teams. Our scaling approach should be one of scaling out teams that are networked together to maintain this congruency; there need to be allowable deviations so that we can keep pace with the market (or mission) needs. Each deviation needs evaluation to ensure it isn’t a new path for the entirety of the interdependent product line. This is where probe-sense-respond comes into play, the Complex domain.
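To make the quadrant logic concrete, here is a minimal sketch in Python. The mapping follows the four domains described above; the function name, input encoding, and threshold labels are my own illustration, not part of Cynefin itself.

```python
def scaling_domain(interdependency: str, responsiveness: str) -> str:
    """Map the two value-stream dimensions onto a Cynefin domain.

    interdependency: 'low' or 'high' coupling among products in the line
    responsiveness: 'low' or 'high' demand to respond to market/mission
    """
    domains = {
        ("low", "low"):   "Obvious: stable product line, little need to scale",
        ("low", "high"):  "Chaotic: demands agility, but not scaling",
        ("high", "low"):  "Complicated: scale up (hierarchical coordination, e.g. SAFe/DAD)",
        ("high", "high"): "Complex: scale out (networked teams, probe-sense-respond)",
    }
    return domains[(interdependency, responsiveness)]

# Example: an interdependent product line in a fast-moving market
print(scaling_domain("high", "high"))
```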

One thing to remember here is that the organizational structure and its communication paths will create the coupling of the products. This is Conway’s Law. Most hierarchical approaches will result in tightly coupled product lines, while most networked organizations will produce loosely coupled ones.

I’ll close this post with a final thought: regardless of the scaling need, who is choosing it? Is it the people in your organization, or is someone dictating how and when you need to scale? The difference I see happening is that people are not exploring how they themselves can scale; they are being told how to do it. Use this as a tool to help your people figure out what will work for them…

ACC US Session: How to Practice Agile Scaling w/o a Condom: un-SAFe Agility

I held a provocatively titled session to explore some of the issues surrounding SAFe, DAD, and other Agile scaling frameworks that prescriptively define the structure in which they work.  I personally don’t find it useful to attack frameworks, but rather to understand the context around them so that one can judge for oneself whether they apply or not.  The path I took was to explore the problems, then why we are scaling, and finally the assumptions these hierarchical and rigid frameworks make.  I did this through time-boxed brain-writing portions and then discussion of the results.  This was to eliminate the possibility of undue influence between participants.

We had several people come and go, but my core group was Karen Spencer, Kristen Tavelli, Dave Rooney, Brett Palmer, Susan Strain, Diana Wiliams, Darren Terrell, Sameli Zeid, Patrick Wojtak, and myself of course.  Brett in particular stated he found the session useful; he had just completed his SPC and was struggling with some of the rationale behind how to apply it.

Problems created when introducing SAFe (or other hierarchical scaling approach)

The following are the problems folks have seen when implementing these hierarchical approaches. (NOTE: we are talking about actual implementations we have seen and/or the way the framework is prescribed to be applied.)

  • unknown needs for why certain measures or metrics are required – most of the metrics these frameworks roll up seem intended to ensure that the same items are measured across teams, not necessarily what the individual team itself may need
  • these metrics also seem to be used to compare team performance in a negative manner (and thus lead to teams gaming their metrics to keep from being viewed negatively)
  • it also seems to prescribe the same process across all teams (mostly around Scrum rituals)
  • oftentimes the organization begins implementing SAFe without executive buy-in, in particular from the business side of the organization
  • it tends to make too many decisions upfront, with the product management/program level making decisions not just about what work will be done but about how, sometimes well before any team begins solving it
  • it also removes decisions from the team around cadences and architecture, and constrains what improvements or experiments a team may do
  • after getting agile teams to drop formal roles and promote T-shaped people (generalizing specialists), most of these frameworks, and SAFe in particular, reintroduce unneeded roles
  • SAFe seems to focus on driving a release (train – get that damn train off my runway, says the aviator!); at the team level, teams are still left to simply struggle on their own (in fact, there seems to be no one they can truly turn to for impediment removal either)
  • with all these new roles and early decisions, this introduces unneeded coordination overhead
  • it reintroduces big upfront requirements again, sometimes using lighter models, but sometimes favoring back towards heavier ones
  • it reinvents Gantt charts with post-its
  • for teams starting their implementation of this sort of framework, it begins to force common processes and tools onto teams that may have evolved a different set
  • also when beginning implementation, there seems to be a lack of communication to the teams about why such changes are needed; the organization just begins to impose them without explaining the rationale behind them
  • and there may be some possible unsound assumptions being made as to the need to scale in the first place

Whew! That’s a lot of problems, but there must be a reason for scaling, right?

Why Are We Scaling? (What do organizations want…?)

So we turned our attention to why we are doing this in the first place.  Understanding the reason(s) may help us make more useful decisions on approaches and such.  Here’s our take on why the organizations we’ve encountered are doing so…

  • there’s a silver bullet mentality; there must be one right way to get consistent results from all teams
  • there is a desire to help large programs adopt Agile across the enterprise with an approach that can be easily visualized
  • the above two reasons also seem to be a means for simply trying to organize a large number of teams and the people within them
  • for programs with large technical products, it can help them coordinate their activities, dependencies, and constraints
  • there may be multiple teams with multiple dependencies
  • often ‘programs’ are defined by the organizational structure that already exists or the budget that is provided to fund the work (it’s easier to sell large programs for large budgets that will produce large benefits than a collection of smaller products that may collectively, and more loosely, accomplish the same results)
  • there is a belief it will remove impediments more easily (removing an impediment for one team removes it for all teams)
  • there is a desire on management’s part to see consistency and predictability across all teams
  • and lastly in many Agile approaches, middle managers don’t see where they fit; they feel a loss of power – these hierarchical approaches show where they retain power and control

In particular, we discussed how some of these desires to retain hierarchy for coordination produce results that follow Conway’s Law.  The resulting development may have rigid and brittle interfaces.

Underlying Assumptions

So lastly we turned our attention to the assumptions these approaches seem to be based on…

  • a belief that management has limited insight into what teams are doing; our discussion revealed two parts to this – management expects information to be pushed to them as opposed to pulling it, and management believes all data coming to them should be identical for easy consumption
  • a fundamental belief that process is more important; essentially, that it is process that enables interactions between people
  • a belief that this is how one should scale/coordinate Scrum teams and not through simpler mechanisms such as Scrum of Scrums
  • along with the organization and budget above, BIG Budgets = Importance = Easy Approval over having to ponder each smaller need/budget request on its merit
  • that management believes they will be able to see better productivity and identify where teams need to improve their performance
  • there is a fundamental belief that all development work is the same and thus should follow the same process
  • it assumes that organizations will customize the approach and not adopt it as-is
  • for organizations that have removed these roles, a belief that introducing new roles will be easy (or that having people swap from one role to a new one will be easily done)
  • it assumes all teams can standardize on a cadence
  • it assumes we must manage complexity from a central location; I mentioned that a wonderful book that explores where complexity can be managed in a decentralized fashion is Organizing for Complexity by Niels Pflaeging
  • there is a belief that effectiveness is derived via consistency or that efficiency yields effectiveness (or that they are the same)
  • that the trade space (the trade-offs being made) between autonomy and measurement of effectiveness is obvious
  • management should be able to continue as-is; as Agile moves out from teams, management should not be expected to change in its role
  • the architectural stuff that needs to be done does not equate to business value or is too hard to equate to business value, so we’ll manage it as separate items of work
  • and lastly hierarchies are a natural way for people to organize; people coming together for common purpose would naturally choose it as their preferred structure
  • one I mentioned at the end was that organizations (management) must start with an end structure in sight, rather than simply evolving to a structure

Sameli also raised one that came up: that organizations can easily change their structure to match what SAFe prescribes.

I hope you found what we discussed useful and that it will help guide your decisions on whether SAFe is right for you and/or how to customize it.  Start with this last part, the assumptions, to help you avoid the problems that may arise, whether you use SAFe or a similar approach, customize it, or decide on another approach altogether.

Game Mechanics Session – ACC Games Day

At Agile Games Day, I hosted a session where we took various game mechanics (mostly from boardgames, but some from video games) and then explored where these might be used to simulate or improve various things done in software development.

Here are the mechanics we explored (not exactly in the order we explored them):

Worker Placement

First was one I have explored extensively; the Worker Placement mechanic is useful to represent any time people or some form of resource are being assigned to do something. It is a fairly hot mechanic in the boardgame world.

I’ve seen this played out specifically in good stand-ups, where people state they are committing to work on specific stories or tasks. A few others noted that it could be useful for other commitment actions. I’ve used this in several of my simulation games; the most meaningful one is my OPTIMUS Prime game.

Event Deck

We also discussed that during a game/simulation we may want specific events to occur (either positive or negative). These are best captured with some randomization (shuffling, for example) or, if a specific order is needed, they can be ordered (see deck-building). A form Doug Alcorn mentioned was cards used to alter the rules – much like Fluxx; this could be useful for the effects of, say, a CI server or automated tests being put into play (a positive effect) or management interference (a negative effect).

Some events may only take effect if a player has a specific knowledge (or lacks a specific knowledge).

Another person mentioned (I didn’t catch who it was) that the 8 Lean Wastes, as temptations, could be incorporated into a deck and used as an event deck… I plan on noodling on this, as it sounds intriguing.
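As a sketch of how such an event deck might work in a simulation, here is one possible shape in Python. The card names and capacity effects are invented purely for illustration; a real deck would come out of the noodling above.

```python
import random

# Hypothetical event cards: (name, effect on team capacity this round)
EVENT_CARDS = [
    ("CI server comes online", +2),      # positive, rule-altering event
    ("Automated tests adopted", +1),
    ("Management interference", -2),     # negative event
    ("Production outage", -3),
    ("Waste: waiting on approvals", -1), # a Lean Waste as temptation
]

def build_event_deck(rng: random.Random) -> list:
    """Shuffle the events so each playthrough unfolds differently."""
    deck = EVENT_CARDS.copy()
    rng.shuffle(deck)
    return deck

rng = random.Random(42)  # seeded for a repeatable demo
deck = build_event_deck(rng)
name, effect = deck.pop()  # draw an event at the start of a round
print(f"Event: {name} (capacity {effect:+d})")
```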

Role-Playing

Another analogy I like to use is that software development is like a (cooperative) role-playing game. Team members are like characters with certain skills. Going from release to release is like playing a campaign, while each individual release is like an adventure. In these release ‘adventures’ you learn new skills or acquire new special ‘gear’ – like CI, automated tests, etc. – that will help you in future releases.

There seemed to be some common agreement that having character sheets might be a fun way of gamifying learning.

Power Ups

You can also view these new skills or gear as Power-Ups, a common element in video games. Mostly these are permanent in nature (which is similar to levels). Some temporary power-ups could be the temporary removal of constraints or impediments, or the use of swarming to temporarily increase capacity.

Tech Tree

Much of the ‘gear’ acquired by teams to improve performance follows a progression of sorts; this is similar to what is known as a Tech Tree in board games. For example, a team needs a source code repository and build scripts before a CI server can effectively be used.

Variable Player Powers

Where each player has a different set of powers or skills is known as variable player powers; again this is something that could be useful to simulate. The GetKanban game does this in reverse by allowing team members to work in different swimlanes, but reducing their ability to do work.

Role Selection

When you want to allow people to consciously choose a role or job that is distinct from others, this is known as role selection. This could be useful, for example, when a person takes on the role of, say, a tester, even if they are a developer. I personally could see using this mechanic (combined with power-ups) in a game to help show the usefulness of developing T-shaped people, and we discussed using it to develop awareness of other roles. We also discussed using it along the lines of 6 Hats Thinking.

Simultaneous Action

Where people make decisions at the same time is known as simultaneous action or selection. In Planning Poker, each team member selects a story point value and all reveal them simultaneously to see where people think the complexity is. Effectively, when people state what they are committing to work on during stand-up, this is also a simultaneous action (of worker placement).

Other ways this plays out on Agile teams are when using a Fist of Five or Roman Vote to gauge commitment or understanding, or silent brainwriting exercises so that people aren’t biased by answers being given (often done in retrospectives). Simultaneous surveys also simulate simultaneous action.

Hidden Information/Perfect Information

Some information is hidden (example: secret orders in Diplomacy) and some information is available in plain sight (example: Chess). In most cases we want to help hidden information become perfect information, particularly if it is important to the team (known as transparency). Some areas we explored were the discovery of acceptance criteria, and developing people’s journey lines or journey maps to further understanding of each other. We also discussed that this could be combined with an event deck to expose additional information as events unfold.

Deck-Building

Deck-building is creating an ordered deck for play; this is very similar in nature to creating prioritized backlogs.  We also discussed where this may be useful for ordering strategy actions to possibly counter risks.

Dice Rolling

When you need to simulate a random element, rolling dice can be an effective means to do this. Multiple dice will produce averaged probability curves, while a single die will give discrete possibilities with the same probability for each occurrence. If you plan to use dice in a game, I recommend the highly useful site http://anydice.com
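A quick Monte Carlo sketch shows the difference between the two distributions (this just approximates by sampling what anydice.com computes analytically):

```python
import random
from collections import Counter

rng = random.Random(1)
rolls = 100_000

# One d6: every face is equally likely (a flat, discrete distribution)
one_die = Counter(rng.randint(1, 6) for _ in range(rolls))

# Sum of two d6: results cluster around 7 (a triangular curve)
two_dice = Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(rolls))

print("1d6:", {k: round(v / rolls, 3) for k, v in sorted(one_die.items())})
print("2d6:", {k: round(v / rolls, 3) for k, v in sorted(two_dice.items())})
```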

Quest Mechanic

Last, either Doug or Ryan Ripley (I forget who) mentioned that the Quest mechanic is very useful for the actions decided on in a retrospective.

Let me know if you have either useful analogies or uses of mechanics!

Introducing The Facilitation Kernel

Now that I’ve reposted a few older posts, I’ll give a new one…

One of the things I often get called upon to do is facilitate: meetings, workshops, retrospectives, and other occasional agile ceremonies are all sessions I get called upon to facilitate.  I also find myself facilitating teams talking to one another (which actually extends to the encouragement to get together, not just the resulting meeting session) and sometimes what would normally be one-on-one sessions.

A few years back, I took one of the ICAgile certified courses on facilitation; they presented what they called the Facilitation Stance.  It’s useful.  (Because of possible IP ownership issues, I won’t present it here…) One thing that didn’t feel right was the treatment of maintaining neutrality as a facilitator; it wasn’t treated as core.  As I gave training to others on facilitation, they also seemed to question that lack of centrality.  Another area that I personally got, but others struggled with, was “standing in the storm”. So I began rethinking how to depict the concepts and came up with what I think is something easier to understand.

[Figure: The Facilitation Kernel]

I call this the Facilitation Kernel.  It places Maintain Neutrality central to the entire concept.  This is important because if I am asked to render an opinion, I am no longer a neutral party, and the entire rest of the Kernel can be sacrificed.  This is particularly true if I am asked to give insight from experience or observations.  The Facilitation Stance doesn’t make this as explicit as I would like (though it does acknowledge it).

My personal feeling is that the ‘Stance’ overcomplicates itself with the internal “being” and external “doing” (of which maintaining neutrality is an external “doing”).  This may be just me, but I find neutrality at the core.  In the “doing” circle, I place Modeling Servant Leader Behaviors, Leading the Group’s Agenda, Promoting Dialog, Decisions, and Actions, and Harnessing Conflict. Let’s dissect these one by one:

Modeling Servant Leader behaviors is very important to exhibit as a facilitator; you are there for the team and to serve them.  You are not there to serve someone else or yourself.

By Leading the Group’s Agenda, you are doing more than the Stance’s Holding the Group’s Agenda; you are also leading the group through its agenda, whether explicitly, through the design of the session, or implicitly, by keeping a watchful eye and ear on what is occurring and needed.

In Promoting Dialog, Decisions, and Actions (which encompasses the Stance’s Upholding the Wisdom of the Group), you are gently nudging the group toward a bias for action over inaction and making assumptions explicit so that good decisions can be made.

And lastly by Harnessing Conflict you are doing more than simply “Standing in the Storm”, but are helping people through their differences to a positive outcome.

To do this, you need to maintain three states of “being”: self-management (which IMHO encompasses self-awareness), group awareness, and situational awareness (this may be my aviation background talking). The alignment I have chosen in the model is important.  In order to Model Servant Leader Behaviors, I mostly need to manage myself; the situational and group awareness are far less important.  To Harness Conflict, I need to be aware of where the group currently is (in terms of emotional state and energy) and of the situation at hand (in terms of positions and opinions).

I placed the Lean and Agile Values & Principles outside this Kernel because, if I weren’t facilitating in this realm, they might be replaced by some other set.  I think this makes the Kernel fully aligned with what any general facilitator may provide.  I know I have found this useful when considering facilitating more generalized sessions such as Open Space (which I have had the opportunity to do twice) and various workshops.

What do you think? Is this congruent with your thinking on facilitation?

The Economics of Agile Communications for Requirements

This post originally appeared on my BoosianSpace blog on 28 October 2011. Some minor updates were made.

I’ve been reading Democratizing Innovation by Eric von Hippel.  One of the items he talks about is the cost of information transfer from innovation user to innovation creator.  In his context he’s demonstrating why uniqueness causes innovations to be grown internally by organizations as opposed to being bought off the shelf.

This got me thinking about a challenge we see in Agile adoption: explaining why we want lighter-weight documentation and more face-to-face collaboration.  I had a small inspiration on the economics of the two opposite ends of the spectrum.

Let’s start with the sequential phased-gate (aka ‘Waterfall’ or as I prefer to call it a ‘Canal’) approach.  Here’s what typically happens:

A set of business or systems analysts creates a document. This gets approved by the business user(s) and often management.  It then gets distributed to the development team, who theoretically read the whole thing through once and understand it perfectly (a single set of communication paths: one document to N people reading it).  So here’s what the formula would look like for the communication of that information throughout the entire team:

Xfer$_W = Labor$_avg × [ (N_creators × CreationHrs_avg + N_approvers × ApprovalReadingHrs_avg) × Cycles_approval + (N_team – N_creators) × ComprehensionReadingHrs_avg ]

In words: the transfer cost equals the creation labor hours (number of creators × average creation time for the documents, as this is what communicates the information among the analysts creating it), plus the approval labor hours (number of approvers × average time to read the resulting documents, as this is the communication to the business representative(s)), multiplied by the number of approval cycles, plus the comprehension hours (number of remaining team members who need to read the approved document × average time to read), all finally multiplied by the average labor cost per hour.

Let’s see this in action as an example with a team of 6 and 1 business user that has to approve the requirements on a small application development effort:

Xfer$_W = $100 avg hourly rate × [ (1 analyst × 120 hours creation + 1 approver × 4 hours approval reading) × 1 cycle + 5 remaining team members × 40 hours to read and fully understand ] = $100 × [ (120 + 4) × 1 + 200 ] = $100 × 324 = $32,400

Two primary assumptions here: an approver won’t be as interested in reading the document in detail, as they supposedly already know the requirements, and thus won’t pay as much attention to what they are signing off on… AND, more importantly, the team can read the document ONCE and it contains EVERYTHING they need to know.  It is perfect; nothing is missing.  These numbers aren’t exactly realistic, of course; most projects would take longer and would involve more signatories and more cycles to get sign-off.  I’ll discuss the costs of change using this model in a bit.

Now let’s look at the same communications using an Agile approach…

In the Agile approach, the entire team is involved in the creation, which now includes the business owner/manager.  There is no need for a sign-off, as he or she is directly involved.  There is also no need for the development side of the team to spend time reading the documentation, since they are directly involved in creating it. To reflect the time: the effort of the people creating the knowledge (and artifacts) equals the number of communication paths in the team, multiplied by the amount of effort (the average creation time) each person has to put in, divided by the number of people assisting in the communications (i.e., the number on the team).  Also, since the business owner is involved throughout the development process, there is only one cycle (for the life of this project).  Thus, our equation becomes the following:

Xfer$_A = Labor$_avg × [ (CommPaths / N_team) × CreationHrs_avg ]

Where CommPaths = (N_team – 1) + (N_team – 2) + … + (N_team – (N_team – 1)) = N_team × (N_team – 1) / 2

The assumption here is that the average creation time per person is the same as the creation time in a canal environment; i.e., the scope is the same.  Since this is done throughout development by all members of the team, we know it will not be one solid time block and will involve more people.  The effort to distribute the information, however, is represented by the number of paths involved divided by the people trying to move the information along those paths. This is why the communication paths variable is the numerator and team size the denominator.

For our example of a team of 7 (since the business owner is now a part of the team),

CommPaths = (7 – 1) + (7 – 2) + … + (7 – 6) = 21

Xfer$_A = $100 × [(21 / 7) × 120] = $100 × [3 × 120] = $100 × 360 = $36,000

You’re probably wondering where the savings is…  This looks like a wash. It isn’t.  What comes into play is the cost of change as it occurs over the project.  To truly understand the costs, though, we need to discuss what happens over the life of the project.  In the ‘canal’ project, if we make a change, we have to go through the same expensive communication paths as the initial development.

Xfer$_W = Labor$_avg × [ (N_creators × CorrectionHrs_avg + N_approvers × ApprovalReadingHrs_avg) × Cycles_approval + (N_team – N_creators) × ComprehensionReadingHrs_avg ]

Let’s use our example and say we had a change requiring roughly a quarter of the original effort to produce version 1.1 of the requirements specification:

Xfer$_W = $100 × [ (1 × 30 hours correction + 1 × 1 hour sign-off reading) × 1 cycle + 5 × 10 hours comprehension ] = $100 × [ (30 + 1) × 1 + 50 ] = $100 × 81 = $8,100

So now the total cost is $32,400 + $8,100, or $40,500; each time I go through a change, the cost goes up by some amount.
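To keep the arithmetic honest, here is the whole model as a small Python sketch. The variable names follow the formulas above, and the figures are the post’s illustrative numbers, not real project data:

```python
LABOR_RATE = 100  # average labor cost per hour, $

def xfer_waterfall(creators, creation_hrs, approvers, approval_hrs,
                   cycles, team, comprehension_hrs):
    """Transfer cost for the sequential phased-gate ('canal') approach."""
    return LABOR_RATE * (
        (creators * creation_hrs + approvers * approval_hrs) * cycles
        + (team - creators) * comprehension_hrs
    )

def comm_paths(team):
    """Number of communication paths in a team: n(n-1)/2."""
    return team * (team - 1) // 2

def xfer_agile(team, creation_hrs):
    """Transfer cost for the Agile approach: (paths / people) x creation effort."""
    return LABOR_RATE * (comm_paths(team) / team) * creation_hrs

# Initial requirements communication
print(xfer_waterfall(1, 120, 1, 4, 1, 6, 40))  # 32400
print(xfer_agile(7, 120))                      # 36000.0

# A change costing a quarter of the original effort in the 'canal' model
print(xfer_waterfall(1, 30, 1, 1, 1, 6, 10))   # 8100
```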

Going back to the Agile side: because we perform the requirements communication throughout development, and we defer to discussing only the requirements needed for the next piece of work, changes – and more importantly the associated communications – are already baked in.  We haven’t defined it all upfront and then distributed it for use once.  Thus, the additional costs for the next distribution are near zero.

We expect requirements to change.  We defer unknown things as late as we responsibly can (possibly the last iteration, if the work can be done in one Sprint) so that the risk of needing to change them is minimized. Thus our costs are not going up with changes; they remain basically flat.  In the sequential phased-gate scenario, one significant change could ‘wipe out’ the supposed savings you saw in the simple calculation, which optimistically presumed that everything worked perfectly the first time.

 

Note: I am not an accounting type by nature; this just seemed like a logical fit, and I am trying to find empirical evidence that supports or contradicts it.  If you know of some, it would be appreciated!  Just post below with the sources you are using.

BTW, I have also toyed with the idea that requirements (stories) that need to change during the development cycle have the cost of the original, multiplied by the probability that they are still in the backlog and not yet done.  If you add up the percentages in buckets of 10% across the project and divide by 10, the likelihood that this occurs is 50% (on average), and the cost would be akin to the following example:

CommPaths = (7 – 1) + (7 – 2) + … + (7 – 6) = 21

Xfer$_A = $100 × 50% × [(21 / 7) × 40] = $100 × 50% × [3 × 40] = $100 × 50% × 120 = $100 × 60 = $6,000, so the total cost of changes accumulates at a slower rate.

During project execution, you could actually use a real rolling percentage of stories closed over total stories.
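A sketch of that rolling adjustment, continuing the model above; the 50% figure is the post’s average, and the `closed` and `total` counts are hypothetical values that would come from your tracker:

```python
def agile_change_cost(team, change_hrs, closed, total, rate=100):
    """Expected cost of communicating a change on an Agile project,
    discounted by the chance the story is still open in the backlog."""
    paths = team * (team - 1) // 2
    p_still_open = 1 - closed / total  # rolling percent of stories remaining
    return rate * p_still_open * (paths / team) * change_hrs

# With half the stories still open (the 50% average from the example):
print(agile_change_cost(7, 40, closed=50, total=100))  # 6000.0
```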

Recommendations for a PM to start using an Agile Approach

NOTE: This post originally appeared on my former BoosianSpace blog on 2 Nov 2011.

I have a colleague here at EPA who was very interested in getting started using an Agile approach to help her produce a better project (or really, the software application they were to build).  I’m going to repeat and expand our conversation in the hope that it may prove useful for others.  This is the set of recommendations I had for her, based on her context.  I’ll start with a brief overview of that context.

Project Context

This will be a greenfield development project; i.e., there is no legacy code to worry about or legacy data to migrate. It is intended to be a public-facing web application.  The infrastructure is fairly open, but will need to incorporate GIS services that deal with watersheds; if she finds it useful, she may utilize a cloud service to host this.  As a public site, it needs to comply with Section 508.  There is some consideration for a mobile app as well…

The development work will be done by an offsite contractor; this contractor, to her knowledge, has not done any Agile development projects yet.  The GIS services portion will most likely be developed by a specific subcontractor on that team, and another may provide UX.  Her biggest constraint will most likely be available funds; schedule and scope (hopefully) can both be in play.

Finally, she has a very interested Govt product owner and a group of interested participants representing stakeholders of urban watersheds.  The target goal is to have people represent the interests and activities occurring at particular small, local watersheds, and then utilize GIS services to identify the larger watersheds these are a part of, so that the relevant groups of interest (e.g., the Chesapeake Bay Foundation) can act as portfolio managers of these watersheds and provide upper-level guidance.

Initiation

She was interested in getting started.  I had previously recommended some books/directions for learning, and I’ll repeat some of those here as they apply.  But she was interested in some specifics.

My first recommendation was to develop a project charter; this should have the following:

  • A description of the project goals and risks
  • A relevant ranking of these goals from a simply worded measurement perspective; I recommended having 4-6 of these in the Project Success Sliders format.  This allows people to understand that trade-offs must be made.  How she gets there should be a facilitated project charter discussion.
  • If needed, a description of roles and responsibilities of the organizations contributing to/participating on the project.  I’m not sure this is necessary given she has an engaged stakeholder.

Finally, I’d suggest that the project charter have a high-level prioritized set of functional areas/epics as a roadmap for what will be developed (essentially, it should include a release plan).  If she is able, she can get high-level time estimates for these from the potential contractor, add some management reserve, and then calculate the funds needed to develop, plus a maintenance estimate, with a presumed product life-cycle of 5 years.  The roadmap should have the highest-business-value and riskiest items first, then simply high business value, then simply risky, and lastly the other items that are desired.  This ensures that risky items have less opportunity to hold up the project in its delivery of high-business-value items. Once funded, the roadmap will have only a subset of activities that can be met; this becomes the release plan.
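A minimal sketch of that ordering rule in Python; the epic names and high/low scores are invented for illustration:

```python
# Hypothetical epics scored high/low on business value and risk
epics = [
    {"name": "Report export",    "value": "low",  "risk": "low"},
    {"name": "Watershed lookup", "value": "high", "risk": "high"},
    {"name": "Mobile app",       "value": "low",  "risk": "high"},
    {"name": "Public map view",  "value": "high", "risk": "low"},
]

def roadmap_rank(epic):
    """High-value + risky first, then high-value, then risky, then the rest."""
    order = {("high", "high"): 0, ("high", "low"): 1,
             ("low", "high"): 2, ("low", "low"): 3}
    return order[(epic["value"], epic["risk"])]

for epic in sorted(epics, key=roadmap_rank):
    print(epic["name"])
```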

Contract Considerations

I’d recommend doing this as time and materials with an award fee.  The bid should be in two parts: the initial development to release 1.0, and the long-term software maintenance of the resulting application as an option; this perhaps could be a fixed yearly cost.  I recommended a warranty period (perhaps 60 days) to assess how well the application is doing from a quality standpoint. Depending on how good (or bad) the application turns out to be, you can execute this option.  If it is really good, execute it if it is a good deal; it also could potentially give you a point for renegotiating.  If it is really bad, the team that developed it is probably still the best team to maintain it, but there would need to be some incentives around improvement.

I recommended that the contract call for a dedicated team and that team’s full participation along the entire development project AND the optional maintenance component if executed.

I’d make the award two-fold; the execution of the maintenance option is one part.  The bigger one, though, is that if the contractor delivers under budget (the contract ceiling on development) and the quality is at an acceptable level, then the remaining funds get split in half – the contractor gets one half as pure profit, and the agency can deobligate the other half and use it for something else.  It’s a win-win.

These above recommendations need to be worked out with the Contracts Office.

Recommended Agile Approach

I recommended starting with Scrum as an Agile Project Management framework. I made this recommendation based on a few things:

  • It is lightweight and supports rolling wave planning so that detailed tasks can be articulated just-in-time
  • It looks as though there will be an engaged product owner and a set of actual users that can be tapped to provide rapid feedback
  • Given that she and the product owner will get a set amount of funding, which will then lay out which prioritized epics can be accomplished, she will need to be able to measure progress; Scrum’s velocity technique is useful for this.  As an initial start, I recommended 2-week Sprints; if the team finds it isn’t consistently completing what it pulls in, regardless of how much is pulled in, perhaps shorten the iteration to one week.
  • Given the stakeholder audience and non-familiarity with techniques such as planning poker, I recommended the concept of inch-pebbles: all tasks/stories should be broken down in Sprint Planning to a work estimate of no more than 2 normal workdays.
  • The initial Sprint Planning session should be expected to take about 4 hours; all remaining ones can be planned for 2 hours.  The Sprint plan should be the prioritized backlog of stories/tasks and should also identify when subject matter expertise is needed.  This allows an estimate of when these people need to be available to provide information as requirements.
  • Sprint Reviews should be scheduled for about 2 hours and consist of a demo of ‘done’ software; the definition of ‘done’ should be very clear and agreed upon by all parties.  I’d recommend ensuring it is deployment-ready software: it’s been coded, tested, added to the build, and had some amount of regression testing done.  Again, because of the lack of expertise, I’m not counting on any continuous integration or automated test suite being activated; regression testing will be smoke tests in reality.
  • Retrospectives should be scheduled for about 2 hours and directly follow Sprint Review.   This needs to be sold to the product owner as how the team can improve AND possibly deliver more or deliver what can be done at lower cost while still maintaining quality.  It won’t guarantee it, but it will improve the chance it will happen. I recommend conducting the Retrospectives using the format described in Esther Derby and Diana Larsen’s book Agile Retrospectives.
  • Try to make the Retro immediately follow the Review, and the next Planning session immediately follow the Retro.
  • Plan for a daily stand-up of 15 minutes max.  Investigate some form of teleconference/video conference capability for this; do the same with some online white-boarding, mind-mapping, etc. for the Sprint Planning/Review and Retros, as the team will be distributed.  If possible, try to bring as much of the team together as possible for the Review/Retro/Planning sessions.

Recommended Technical Practices

I’ll conclude with a set of technical practices I recommended.  Due to the team’s lack of experience, I didn’t recommend too many; I tried to focus on a few key items that ensure the team delivers high-quality software and that whatever is delivered meets the requirements specified.  The entire scope may not be completed, but you want what is delivered working correctly and properly.

Good SOLID principles will be the foundation.

Use of Specification by Example (see the book of the same title by Gojko Adzic), whether automated with Cucumber, Lettuce, FitNesse, or JBehave, or performed manually, will ensure the software meets requirements.  It makes it easy to iterate over the requirements as well.

Develop one scenario at a time.  Use unit testing and write tests before coding.  Once a test has been written, check it into the source code repository.  Once the code has been written and passes the test, check it into the source code repository. Iterate until the scenario test passes, then move on to the next scenario.  Once all scenarios have passed, move to the next requirement/example set.
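As a minimal illustration of that test-first rhythm in Python, here is one scenario’s cycle. The function, scenario, and watershed names are hypothetical, not from the actual project:

```python
import unittest

# Step 1: write the test for one scenario first; it fails until the code exists.
class TestWatershedLookup(unittest.TestCase):
    def test_local_watershed_maps_to_parent(self):
        # Hypothetical scenario: a small local watershed resolves to its parent
        self.assertEqual(parent_watershed("Rock Creek"), "Potomac")

# Step 2: write just enough code to make the scenario's test pass.
def parent_watershed(local_name: str) -> str:
    # Illustrative stand-in for a real GIS service call
    known = {"Rock Creek": "Potomac"}
    return known[local_name]

# Step 3: run the tests; when green, check in and move to the next scenario.
if __name__ == "__main__":
    unittest.main()
```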

At least every 2-3 days, have someone fully execute selected sets of the specs against the entire app to date, to see if any bugs have crept in as new features get implemented.

Use an issue tracker for the tasks/stories and also to track any bugs that show up during regression tests; I recommend Trac.  It’s OSS and works well.  It allows implementation of a pull process, so that as developers bring stories into work from the backlog, they immediately get assigned as owners.  This is useful for the PM (Scrum Master) to see what is being worked on at any point in time.

I hope folks find this useful for easing into being Agile if they have a similar context.  There is no one-size-fits-all approach, though, so consider this just one approach.  Because there is a clear product owner, interested user representation, and a greenfield project, this was my set of recommendations to try initially.  Using Retrospectives, hopefully the team will adapt what I described above to meet their needs.