Agile Coach Camp US – Neat Learnings

I attended several sessions at Agile Coach Camp; I was really impressed by the topics proposed this year. I went to some on Business/Organizational Agility, one on improving feedback/listening skills, one on creating Joy at work, and several related to using games to teach various Agile concepts. I'll admit, I got lighter on the subject matter as the Camp wore on… Anyone who knows me knows I usually have no fear in proposing 2-3 topics. This year I proposed none. I was a bit too dain bread to host one given all the distractions and effort that went into running the Camp itself.

Before I jump into my key learnings/highlights, I was very glad to see one of the emerging themes be invitation over imposition. So many organizations are now jumping onto the Agile bandwagon and imposing Agile from above as opposed to helping it emerge; and then we wonder why there is resistance! I also really liked that there was good discussion on various technical topics as well; I often feel these get forgotten. It's important for us as a coaching community to understand how we can help organizations adopt things that matter, and for software development they ummm… seem… to be technical in nature.

So, my highlights. I would be remiss if I did not say one highlight was our extremely energetic facilitator Trica Chirumbole. I think she brought a great energy to the Camp from opening to closing circles.

I was glad that my first session was one that Ryan Ripley ran to clear up some of the misperceptions people have about why an organization should adopt Agile. We seemed to come up with some great clarifying points to help our organizations or clients understand what to expect as an end result, as well as various interim improvements to expect along the journey. Here were some of the key takeaways:

  • a focus on improving organizational adaptability/responsiveness
  • use of data to make decisions, but not without regard for what the organization’s people will be undertaking
  • more transparency into organizational performance; risks more visible so better decisions can be made
  • better trust within the organization
  • containing failure and learning from it
  • improved employee engagement and retention

The title of the session was “It’s NOT about being Better, Faster, Cheaper”; though we rearranged it to mean this by stating: Better = more predictability and customer focus, Faster = time to market, not just meeting a schedule, and Cheaper = a focus on producing more value, not on reducing costs.  The hard part we found in measuring organizational performance on these is that few organizations have a baseline measurement for any of them; in fact we came up with the hashtag #nobaseline to tweet about these instances. Which reminds me, I could use that with my current client 🙂

Ryan later ran a follow-on discussion from a session we had in the Open Jam at Path to Agility in Columbus on creating Joy at work.  It complemented the earlier session, as it focused on the human aspects of making those results happen. Since we had a new crowd, we spent roughly a third of the session bringing them up to speed on our thoughts (at least it felt that way); I have an earlier post to help you catch up. Once there, though, we explored why Joy is more important than happiness, though several people still thought they were synonymous.  Quite a bit of the conversation focused on how NOT imposing choices on people (what Daniel Pink would refer to as Autonomy) is key to this.  Some others also related it to accomplishment (there’s Mastery) in service of a purpose. I mentioned that I like Jurgen Appelo’s CHAMPFROGS; it feels more complete.  Since then, after reading Frédéric Laloux’s book, Reinventing Organizations, I might also say Joy is the integral of Wholeness from time = 0 to the present.  I still stand by our earlier equation from Path to Agility as well.

I’m going to go quickly over some of the rest, as I feel I have been rambling a bit. I went to a games session hosted by Declan Whelan and George Dinwiddie on games they had come across or developed.  Declan presented Tom Grant’s tech debt game; everyone played it differently and got results that demonstrated WHY we should make investments into things like automated testing and continuous integration. George showcased a game that he has been slowly evolving to show how refactoring works – it mostly demonstrated how software is malleable and that we should treat it as such.  That is, of course, very valuable on its own.

I attended two other sessions I want to highlight, both also ‘games’-oriented. Mark Sheffield held a sort of games round-up; I learned several new games to research and variants of games that would prove useful for helping teams and managers understand things better.  Andrew Annett ran a session on the Empathy Toy, which is all about common cognitive empathy (aka developing shared mental models).  This toy is fantastic; every coach should have to play this – you are always trying to find ways to bridge the gap in understanding.  My cohort Ken Furlong and I are already developing new ways to use it.

We had two happy hours before and during Camp, as well as some food shared in various locations – it was awesome catching up at length with Diana Larsen, Daniel Mezick, Aaron and Brian Kopel, Jeremy Willets, Kevin Goff, Faye Thompson, Declan Whelan, Tim Ottinger, and Ellen Grove (during Agile2015, I also had the chance to spend some time with my friends Woody Zuill, Pawel Brodzinski, and Chuck Suscheck at length too).

When I’ve Skipped the Estimates…

While the debate carries on whether one must have estimates or not, I thought I’d provide a viewpoint of when I found them no longer needed.

However, before I go there, let’s start off with a bit of a story about when estimates were not useful, but required, so I took the *EASIEST* path out.

Let’s go back to 2008; I was just hired on as a software development Branch Chief at USDA and asked to prepare the budget for the next fiscal year.  Of course, the first thing I did was poll around on what upcoming work there was. No one knew, except that there would be the same amount of maintenance as last year. That was easy: apply an inflation factor to what we had this year, add a management reserve, and we’re done.

Now onto the harder problem: what about the unknown new projects looming?  So I investigated how these normally got funded; any estimate done is simply reported up the chain (as requested Development monies), but the funds are actually provided by the programs that need the work done for them. These estimates are used as a projection for the branch and nothing more. Any work actually done goes through its own process of requesting funds, and then actual money is provided.

So I asked: how many projects did we do the year prior, and how much did they cost? And the year before that? And the year prior to that? 4 projects, 4 projects, and 6 projects were the answers. (I won’t go into the money numbers, but I’ll note this branch did not develop super huge applications, but small to medium sized applications with some complexity – a GIS app, an analytical app, several tracking-type apps, a loan package development application; that may give you the picture.)  I didn’t need to know the number of apps for the reporting, but I used that number to calculate the average cost per app we developed, projected into 2009 dollars; adding a standard deviation gave me some more certainty, then a 15% management reserve.  Once I had those numbers, the process was literally a half hour to run through the math a couple of times to ensure I was on target.
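
For the curious, here is a minimal sketch of that arithmetic in Python. The dollar figures are made-up placeholders (I’m not publishing the real ones), and reading the math as “average plus one standard deviation per app, times the expected project count, plus a 15% reserve” is my reconstruction from memory, not an exact formula I still have on file:

```python
import statistics

# Made-up per-project costs, already projected into 2009 dollars.
# The counts mirror the 4 / 4 / 6 projects from the prior three years.
project_costs = [
    310_000, 280_000, 350_000, 295_000,                    # year 1: 4 projects
    330_000, 300_000, 290_000, 320_000,                    # year 2: 4 projects
    270_000, 340_000, 305_000, 315_000, 285_000, 360_000,  # year 3: 6 projects
]

avg_cost = statistics.mean(project_costs)   # average cost per app
std_dev = statistics.stdev(project_costs)   # spread, for a bit more certainty

expected_projects = round(statistics.mean([4, 4, 6]))  # about 5 projects expected

# One reading of the arithmetic: (average + one standard deviation) per app,
# times the expected number of apps, plus a 15% management reserve.
budget = (avg_cost + std_dev) * expected_projects * 1.15

print(f"average per app: ${avg_cost:,.0f}  (stdev ${std_dev:,.0f})")
print(f"projected new-development budget: ${budget:,.0f}")
```

Running the numbers a couple of times, as I mentioned, really was a half-hour exercise.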

My project managers could not believe I was going to use that number; they had always gone around to each potential customer and asked them to conjecture on applications or upgrades they wanted. Most never got funded – something else came up and got funded instead – so why spend time estimating what never happened?

This was a very low precision estimate, but it got me to a reasonable and justifiable target number. (If the system had allowed for ranges, I would have provided those, but alas it didn’t.)

I’m guessing you are wondering how ‘correct’ I was with that… We had 5 projects, and the cost was fairly close to the average.  The next year we did the same thing, but it was off – much higher, as the Recovery Act kicked into high gear – but as I pointed out before, it didn’t matter.

OK, that was budgets built using the least painful method of estimates possible.  (Sometime in the future, ping me on how I executed on real work within the branch… The spoiler hint is that I limited the WIP of projects going on at any one time so that I could keep my team close to constant size; the increase meant I experienced a contractor headcount increase of only about 2 people.)

So now onto some maintenance estimation I did away with…

When I took over running the maintenance team at the Office of Pesticide Programs, every Software Change Request (SCR) that came in went into a queue where it was examined in a meeting and the contractor was told to go estimate it.  When the contractor came back with their estimate, usually a week later, the work was approved.  They estimated in time, and they could then quote the money once they figured out who was going to do the work and applied their labor rate. This singular meeting was at least an hour long every week and consisted of telling the contractor to go estimate the amount of work to do and reporting out on estimates made.  This never went anywhere; no one did anything with these estimates. We never said no to the SCRs for the legacy systems we maintained, mostly because no one worked with the business well enough to know whether the work should happen or not. On top of that, there were 20-some legacy apps with at least that many stakeholders to try and satisfy. Perhaps at some point this estimation process was used to say no, but with the mostly low complexity work coming in, there was no drive to say no.

We set budgets based on annual contractor headcount. Perhaps at some point this estimation exercise was used for this, but it wasn’t any longer.

So I did a couple of things. I killed the meeting. I put the onus on the government application maintenance staff to work with the business to prioritize the work from their viewpoint. I set up a rule set for taking these priorities, along with a quick technical assessment (which set severity) and the date the SCR came in, to establish a prioritization across all apps.  I got the stakeholders to agree to this scheme so I didn’t have to fight it out app by app.  We still never said no; we just continually prioritized the work not yet started.
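
The exact rule set isn’t something I still have written down, but the shape of the scheme was roughly this. The sketch below is a hypothetical reconstruction in Python; the field names and the tie-break order are my illustration, not a record of the actual rules:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SCR:
    app: str
    title: str
    business_priority: int   # 1 = most important, set with the business
    severity: int            # 1 = most severe, from the quick technical assessment
    received: date           # older requests win ties

def prioritize(scrs):
    """Sequence SCRs across all apps; nothing is rejected, only ordered."""
    return sorted(scrs, key=lambda s: (s.business_priority, s.severity, s.received))

backlog = prioritize([
    SCR("App A", "Fix report export", 2, 1, date(2012, 3, 1)),
    SCR("App B", "Add search filter", 1, 3, date(2012, 2, 20)),
    SCR("App C", "Upgrade shared library", 2, 2, date(2012, 1, 15)),
])
for item in backlog:
    print(item.app, "-", item.title)
```

The point of the ordering rules was simply that every stakeholder could see why their item sat where it did, so the prioritization didn’t require an hour-long meeting every week.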

And I eliminated the estimates.  I decided on contractor staffing based on how much work we could get through; I concentrated on further process improvements before I thought of increasing headcount. (You can read about the Kanban system that was set up on GovLoop if you so desire.)

To come full circle to where I once again found an estimate helpful in this environment: a potential regulatory change was going to require a rather large piece of work on our legacy PowerBuilder app. I was asked how long it would take; upper management was interested in ensuring that we had enough lead time to get it done. Not having it done would have had a financial impact on the Agency.

Since I had a Kanban system implemented in Trac, I filtered that legacy app’s history down to similar enhancements and calculated the average and two standard deviations.  I gave them that range, stating that the high number gave us 95% confidence we’d fit within it. They deeply appreciated the accuracy and precision in this case. This is a form of estimation of course, but the real point is that day-to-day we never estimated; there was zero value in it.  We did capture actual data using our system, though, which made predictability possible, just as I mentioned above.
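
In code terms, the calculation was no more than the following (the lead times are made-up placeholders, and treating mean plus two standard deviations as a ~95% upper bound quietly assumes the lead times are roughly normally distributed):

```python
import statistics

# Made-up lead times (days) for past enhancements to the same legacy app
# that looked similar in size to the regulatory change being asked about.
similar_lead_times = [34, 41, 29, 52, 38, 45, 33, 47]

mean_lt = statistics.mean(similar_lead_times)
two_sigma = 2 * statistics.stdev(similar_lead_times)

# Mean plus two standard deviations as the "high confidence" end of the range.
print(f"typical: ~{mean_lt:.0f} days; "
      f"~95% confidence of finishing within {mean_lt + two_sigma:.0f} days")
```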

Hopefully this helps others understand at least one context where day-to-day estimates weren’t needed, and where low-fidelity estimates were good enough when a number was required. I consider myself a no-estimates guy only because I look at the assumptions behind why I need to estimate, and if I can derive a more suitable answer in some other manner, I’ll probably use that.  It’s all a matter of context.

Yin and Yang in Change Management: Appreciative Inquiry and the Power of Habit

There are many change management approaches out there. Most focus on weaknesses you need to change; several others focus more on things to keep the same and build upon. Most change agents then further target using one approach or another, perhaps based on context, or perhaps as a ‘goto’ tool (you know what they say about goto statements – don’t use them!).

My preference is to balance between these approaches; two I have found really useful are Appreciative Inquiry and the Power of Habit.

I have always tried to help people and organizations find their strengths and build on those. This is the basis of Appreciative Inquiry, something I learned about by reading the Thin Book of Appreciative Inquiry.  I’m looking forward to reading more on this, to be honest, as I think it is undervalued as an approach (I almost said under-appreciated…). Being able to identify – really, to help others identify – the core strengths they have and harness those for the changes they want to see is really powerful.

Here’s an example of how you might use Appreciative Inquiry: have the leadership of an organization (preferably with a sprinkling of people from lower down the totem pole) create a KrisMap of where they want to be, personifying what the future organization will become. Then have the people identify the strengths they can leverage towards the resulting characteristics and build action plans to achieve them.  Very powerful, and quite motivating, since you are using core strengths.

Yet only applying strengths does not help you eliminate weaknesses.  When talking about change such as what may occur in an Agile Transformation, another approach to look at is the Power of Habit. When habits work against where the organization’s people want to be, then one needs to look at changing the habit to a new one while keeping the reward the same.

The Habit Loop can be described like this: a cue triggers a decision, and once similar decisions are made repeatedly, following the same routine, and provide some form of benefit or pain avoidance (a reward), that routine becomes the preferred one; a craving gets established.  This is regardless of whether we’re talking individual, organizational, or societal habits. Keeping the reward the same while changing the routine allows new habits to become positively reinforced.  This can take some work, and I always recommend breaking down the habit into a causal loop showing all the steps being taken.  This helps in identifying leverage points that you can use (strengths, in Appreciative Inquiry speak) and possible side loops that could derail the change; essentially risks to mitigate.

Lastly, remember introducing change, regardless of approach, can be overwhelming. Limit the number of changes you are introducing at any particular point in time.  This gives you a chance to better sense the effect the changes are making and respond accordingly.

For more information, see the latest version of my Taking Flight presentation.

Using a Business Canvas in a Government Environment

At least some of you know I worked at the Environmental Protection Agency in the Office of Pesticide Programs (OPP).  At one point a colleague and I created a Business Canvas for our office; the concept comes from Alex Osterwalder’s book, Business Model Generation.  Below is what I can remember of our canvas (we did this about 5 years ago and I did not take it with me, so this was reproduced from memory; it’s mostly correct).

[Image: the OPP business model canvas, reproduced from memory]

These high level items allowed us to identify quite a few useful things. I’m not going to go through every box at the moment, but what we found we could do with this was identify weak spots (our IT contractor at the time was a weakness for us) and the primary activities to leverage to create our value propositions.  We did some postulating on possible new customer segments and thought specifically targeting farmers (one of the largest users of pesticides) might be a good thing to call out.

We then did an analysis on various trends. One trend stuck out: while we were a monopoly, we were still subject to market forces. The economy at the time had been in recession for a couple of years, a pretty severe one at that.  PRIA registrant fees funded much of our work. If the economy is tanking, fewer pesticides will be purchased (farmers in particular will try to get by with less to lower costs). This in turn normally lowers the amount companies will invest in R&D. Without R&D, fewer new pesticides will be rolling out for registration, meaning less funding and work for OPP. There isn’t anything magic here, but the canvas had us postulating on it.  We went to talk with our IT Director, as we wanted to find a way of testing this hypothesis because it would have a severe impact on the work we do; he showed little interest.

Later that year, the Office Director for OPP announced we were going to have the lowest number of registrations on record since the Office was founded. I can only imagine that, had we tested our hypothesis, we would have had a leading indicator as opposed to the lagging indicator of watching the number of registrations trend significantly lower than expected.

Most Government organizations are funded only by appropriations.  Even so, thinking in terms of the value propositions being delivered to customer segments, and the activities and partners needed to do this, can be really advantageous.

Agile Dialogs – Why We Need It

Recently I have noticed conversations in the Agile Community getting increasingly hostile.  Whether it be about scaling, self-organization, estimation, or a variety of other topics, there seems to be some reason one side or the other has to be ‘right’. I’ve personally been in the crossfire, and not once was there any inquiry about why I held my opinion, only some circumspect attribution as to my opinion being off the mark.

Perhaps it was… Perhaps not… Who is the judge?

So something I and a colleague (@Ryan Ripley) have decided to try is putting together an unconference to bring people together to discuss these thorny conversations. And by discussion, I mean dialog, not debate.  In other words, the point is not to prove someone wrong or right, but rather to understand their position and whether it is valid for your context.  Using a philosophy espoused by Peter Senge, we need to expose and elevate our assumptions so that we can find what works and doesn’t between the positions. We call this Agile Dialogs and have set up a website (rudimentary at the moment).  Our first dialog will be about how to predict value with or without estimates. If you have an opinion for, against, or somewhere in the middle, we hope you will join us. You can find more info at the Agile Dialogs website; please consider taking the short survey at the end and, of course, joining us on November 13th at the Navy League Building in Arlington, VA.

T-Shaped/H-Shaped Contracting Officers

Recently the US Digital Service and the Office of Federal Procurement Policy issued an OMB Challenge; in it they discuss how contracting officers need to be more knowledgeable in digital services procurements. (Digital Services seems to be the new 18F-ish buzzword for user-centric software development, though they also reference cloud-based services…)

In this challenge, they mention creating depth of knowledge in digital services procurement; however, they also suggest a desire to increase contracting officers’ business savviness, though they don’t express exactly what is meant.

[Image: T-shaped people have both depth and breadth of expertise]

This prompts me to simply point out that contracting officers and specialists (as well as any acquisition-related professional) need to aspire to become generalizing specialists, or T-shaped people.  What do I mean by this?  For a contracting officer, this means becoming not only steeped in contracting services, but knowing enough about information technology to understand what may or may not apply to procurements. I’d also suggest that getting more knowledgeable in their department’s or agency’s mission, and understanding its needs earlier on, will also aid them in becoming better at digital services procurements.

The challenge asks for a CORE-Plus curriculum; IMHO this indicates that the government is interested in beginning to create contracting officers who have more breadth.  As their knowledge better aligns with the services being procured, their contributions become more valuable.  In some ways, the desire to have contracting officers undergo a CORE-Plus certification means they will be more like H-shaped people, with some deeper knowledge of digital services technologies as well.

Contracting, particularly in the government, is a complex undertaking.  As someone who maintained several DAWIA (Defense Acquisition Workforce Improvement Act) certifications myself, I can attest to how valuable it is for personnel to have a broader understanding of what they are acquiring and how it fits into the needs of the organization that will utilize it.

For an excellent general write-up on what T-shaped people are, drop by Darren Negraeff’s post The Importance of T-Shaped Individuals.  It contains links to further reading and is also where the T-shaped image above comes from…

A Short Essay on Using Models – Why Should You Use Them & Why You Should Create Some

I use many models in my thinking, whether they are mine or someone else’s, yet I don’t think of myself as a theorist. I thought it might be helpful to explain to some of you why models are so valuable to a pragmatist. Another word for model is framework…

“essentially, all models are wrong, but some are useful”

George E.P. Box

This quote is the first thing to remember when you begin using any model; you need to remember that at some point a model will break down and no longer support what you were using it for…  Like a lean startup idea, create and use models passionately, but stop using them the moment evidence shows they are no longer helpful.  (The nice thing about a model, though, is that generally this means you have crossed an edge case where the model doesn’t work any longer, but it may still be useful in the long run.)  If the model consistently doesn’t work, then perhaps the model has some invalid assumptions.  Exploring these assumptions may then help you refine the model into something that once again works, or help you find or develop a model that does work under the broader circumstances.

This brings me to the next point – ALWAYS realize models have a set of assumptions.  Explore how the model works under these assumptions.  This helps you understand when the model may be useful and when it may not. With that, why do you need them if you are simply someone (particularly a coach or manager) who needs to help people get things done?

Models help you understand systems; they may not provide a means to achieve an answer, but may simply provide a means for organizing your thoughts.  The Cynefin model by David Snowden is one of these latter ones – it can help you understand the problem space you are exploring for decision-making. Finding models that can represent systems, or at least significant and important portions of a system, is mostly useful for helping you organize your thoughts.  The act of thinking through when and how these apply, including valid and invalid assumptions about variables, algorithms, or organization (for more pictorial models), really helps you determine which things to pay attention to.  Even if you find the model doesn’t work, the amount of thinking you went through will serve you well.

And I invite you, particularly when you don’t find a model that seems to represent what you need, to try to think through creating one.  Don’t worry about it being perfect; you can always adapt the model after inspecting how it works.  Again, you are using this to organize your thoughts.  Creating a model could be as simple as combining models; Jurgen Appelo’s CHAMPFROGS model about motivation does this.  It appears Jurgen saw gaps, overlaps, and some inconsistencies in representation and blended a new model to make it clearer to him.

It’s also extremely useful to find where different models connect in explaining the same observations (data) differently.  This helps you understand where options may be found and where the thinking on these has many dimensions, which again exposes assumptions about the models.

Going back to the usefulness, one huge benefit for applying or creating a model is stepping back from tactical thinking to a more strategic layer.  This helps in prioritizing based on importance over simple urgency.

People serving as coaches and managers are there to help people improve the system; you can do this best when you have your own thoughts organized. Models can be an essential tool in selecting and organizing the particular tools and techniques you need to apply.

Locking Cadences to Optimize the Whole Scaled System – Not Really…

I had a reminder through some recent comments that people view locking cadences in step as a means for optimizing the whole of the system rather than individual teams (who would otherwise choose the cadences at which they wish to deliver).  This is used as justification when you have a need for a program.  I think this is missing the point, so I am going to go through some explanations.  I really like how Jurgen Appelo has applied David Snowden’s Cynefin framework to work systems, so I am going to illustrate my rationale using some of his work.

So let us start with a team:

Teams are simple to understand, predominantly because of their simple structure with few people; however, they are complex in nature because we are dealing with humans.  Sometimes we can’t even predict our own behavior, much less a whole team’s.  Next, let’s think of where most policies and processes wind up… Good processes (and their accompanying policies) try to add order into systems; this is particularly true of many of the scaling systems out there (SAFe, DAD, to some degree LeSS) where structure and process is imposed on the ‘program’ system in order to achieve more predictability.  Unfortunately most of these are quite complicated in nature; some have helped by providing well diagrammed (some even animated) pictures, but there is still no denying the complicated nature of their arrangements to attempt to get predictability.

This is very well intentioned, yet what happens in reality is the following:

The complicated-ordered process thrown on a simple-complex team yields a complicated-complex result.  This isn’t achieving what we wanted… and we’re just talking about a single team! If we expand this to many teams, such as we would have in a program, this is the best case we can hope to achieve.  It may become complicated and chaotic as the additive results yield less predictable outcomes. So why is this happening?

It’s because we humans create complex social systems.  There’s a reason why we value individuals and interactions over processes and tools; the latter may be complicated, yet they are ordered in nature, while people systems can be either simple or complicated in structure, yet are always at best complex.  People aren’t robots, so our behavior is never entirely predictable.

And yet… we try to put systems in place that have unintended consequences, such as imposing cadences on teams to get more order (predictability) out of them.  Think about the last time you had something forced on you that you disdained; it probably had you, at best, working at less than full motivation – it sucked the motivation out of you, so you didn’t perform as predictably as desired. At worst, you went and found a new job, and now the team was thrown into reforming and restorming to get back to renorming and performing.

Each team and its individuals will be different; perhaps some won’t care that much about the ‘normalization’ of cadence.  But for some, deep negative impacts will occur.

So I ask you: what ‘system’ are we trying to optimize? The process or the people?  Imposing a process that de-optimizes how humans perform seems to me to have many potential negative long-term effects; besides losing good people or demotivating them, even if this happens to only one team out of ten, it sends a signal that people don’t control their work system at all, that any element can be changed on a whim. Basically, apply the pants principle and let teams adopt as simple a process as possible, including the orchestration.  As Saint-Exupéry said, simplicity is achieved not when there is nothing more to add, but when there is nothing more to take away.

Does this mean that locking cadences can’t ever be adopted? Not at all… Facilitate teams to select a good cadence within themselves first, and then collaborate with other teams to find how to best orchestrate delivery.  This may result in lockstep cadences or perhaps a creative branching and merging strategy. This could be done during team chartering by holding a futurespective. Regular intra-team retrospectives could help teams identify when changes need to occur. Simply installing a locked cadence at the beginning may result in a sub-optimal approach, as it overlooks the people part of the equation.

Calculating Joy and Fulfillment

At the Path to Agility, several of us got together and had an open discussion about what possible relationships Happiness, Joy, Purpose, and Passion have with each other.  In attendance were Ryan Ripley, Faye Thompson, Joe Astolfi, Jeremy Willets, and Kevin Goff.  Others dropped in from time to time as well and provided some input.  Ryan kickstarted it with the premise that focusing solely on creating a happy team (or teams) destroys long-term joy and fulfillment.  The discussion is captured in this photo of all the stickies (semi-organized into various areas):

[Photo: the stickies from our session, semi-organized into areas]

We discussed many different things; I’ll mostly focus on my takeaways and contributions.  We’re still discussing this (primarily via Twitter currently), so it’s an ever-evolving concept and I am not sure we’re all in agreement yet.

Happiness was felt to be a short-term thing, while Joy yielded a long-term gratification. I started my (useful) input by showing how the happiness of a team over time can be captured via a Niko-Niko calendar and that this is useful in understanding whether a team is working well together.  Ryan still said that a happy team may not be joyous, but we did at one point all seem to agree that if a team had little to no happiness occurring, then it was unlikely they would feel joy.  This got me to drawing a stacked area chart: the x-axis is time and the y-axis is the amount of joy felt by the team.  The areas that add up to this are happiness (which is more volatile), passion (mildly volatile), and purpose (little volatility).
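
If you want to picture the chart, here’s a quick sketch of its shape in Python; the weekly numbers are entirely made up just to show the relative volatility of the three areas, nothing more:

```python
import matplotlib.pyplot as plt

# Made-up weekly scores: happiness swings the most, passion drifts a little,
# purpose barely moves. Their stacked sum is the joy the team feels.
weeks = range(1, 13)
happiness = [6, 4, 7, 3, 8, 5, 6, 2, 7, 6, 4, 8]
passion   = [5, 5, 6, 6, 5, 6, 6, 5, 6, 7, 6, 6]
purpose   = [7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8]

plt.stackplot(weeks, happiness, passion, purpose,
              labels=["Happiness", "Passion", "Purpose"])
plt.xlabel("Time (weeks)")
plt.ylabel("Joy felt by the team (stacked)")
plt.legend(loc="upper left")
plt.title("Joy as stacked happiness, passion, and purpose")
plt.show()
```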

I wish we had spent more time coming to some form of agreement on Passion and what it means; I defined Passion as the collective sum of the motivations I have.  I really like the CHAMPFROGS set that Jurgen Appelo has created.  I think some of the inconsistencies showing up in our Twitter convos come from each of us having a different mental model around passion.  These passions can change somewhat over time.

Alignment on a purpose is very important, and so is alignment of this purpose with my long-term passion. It’s this latter part that gives me the motivation to pursue it, yet if the environment is continuously unhappy then I will also find it difficult to stay focused. We ultimately settled on this equation, which I wrote onto the whiteboard under the advertisement for our session:

[Image: the Joy equation from the whiteboard]

The equation states that Joy is a function of the length of Happiness I feel multiplied by the Purpose I am pursuing, plus the Passion I bring to it.  Thus if I spend little time feeling happy as I pursue a purpose, then I will not feel long-term Joy.  Likewise, if I have no purpose, I get no Joy either; this is because Joy is the feeling of Fulfillment we get (Joy = Fulfillment).  Lastly, it needs to be aligned with passion: if my passion would rather pursue something else, then I will likewise have little Joy.  If we bump this from an I to a We for a team or organization, it means getting alignment on purpose and passion while having a supportive environment that increases happiness.
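
Written out symbolically, my reconstruction of the whiteboard version looks roughly like this; the exact symbols are mine, a sketch rather than a faithful transcription of the photo:

```latex
\text{Joy} = \text{Fulfillment} = f\bigl(\text{Happiness}_{\text{duration}} \times \text{Purpose} + \text{Passion}\bigr)
```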

The important aspect to me, though, is the role of leadership in this; when exercising leadership, our job is to discover people’s passions and help them see how they align with a collective purpose.  It also means that I want to create this supportive environment, but not pursue short-term hygienic treatments to make people happy; they need to be factors that create long-term possibilities for team members to be happy.  An example of a longer-term factor would be safety, as one would find in Anzeneering. Creating Joy, for leaders (which in a self-organizing team can actually be any team member), is the application of the Antimatter Principle: attend to people’s needs.

An addition (that I forgot to mention, but it was discussed and is very relevant): Tobias Mayer has an excellent post on how, if you attempt to encode someone’s values, you kill that person’s spirit.  This can be true even when what is being imposed is happiness; this will not create long-term Joy.

An Alternative for Identifying Classes of Service


In most Kanban systems, classes of service refer to an assessment of impact to the business.  While I personally like this approach, this assessment technique often doesn’t fit well for some teams or organizational issues.  It may also not be very informative for some work items being managed.  I have always believed in using Kanban, and particularly its associated metrics, for identifying areas to improve.  Sometimes we need the ability to slice by items that are similar as far as impact, but that vary along other dimensions.  So I’d like to present a few other styles for identifying classes of service.  I’ll start at the team level and move upward toward something more organization-wide.

Maintenance Activities

I often find teams performing maintenance activities (upgrades, defect/bug fixes, small to large enhancements, etc.) struggling to find ways of understanding the metrics that will be useful to them.  While an Expedite class of service, with its own identifiable swimlane and corresponding WIP limit, is invaluable, a single standard class of service is not when the timeframe or scope tends to skew the metrics.  I want to be able to predict when an activity may be done with some confidence.  If I lump all of the activities into one standard class of service, the larger items will skew the average lead time higher than my smaller activities warrant, and my variability will be very high.

A concrete example is an ERP upgrade versus an important (but perhaps not critical enough to go into the Expedite lane) bug fix.  The ERP upgrade may fix numerous (just as important) bugs as well.  ERP upgrades often can’t be broken into apples-to-apples comparisons, as the tasks are entirely different even though the lifecycle managed through the Kanban process may be identical.  Additionally, the items that must be completed for the definition of done (which become cumulative entry/exit criteria along my columns) may also be different. A tiny numeric sketch of the skew follows.
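
Here is that sketch in Python, with made-up lead times purely to show how lumping the two work item types into one class of service distorts both the average and the variability:

```python
import statistics

# Made-up lead times in days: small bug fixes vs. ERP upgrades that follow
# the same board columns but are wildly different in size.
bug_fixes    = [3, 5, 4, 6, 2, 5, 4]
erp_upgrades = [45, 60, 52]

def describe(name, lead_times):
    print(f"{name:>16}: mean {statistics.mean(lead_times):5.1f} days, "
          f"stdev {statistics.stdev(lead_times):5.1f}")

describe("bug fixes", bug_fixes)
describe("ERP upgrades", erp_upgrades)
describe("lumped together", bug_fixes + erp_upgrades)
# The lumped-together numbers predict neither the next bug fix nor the next
# upgrade well; separate classes of service keep each forecast meaningful.
```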

BTW, these types of items may be tracked within a higher level Kanban and not necessarily a team based one…

Portfolio Items

Definitely moving a level or two upward: if I have portfolio items that need to follow an identical process, but have varying entry/exit criteria or varying typical timelines, they may also be worth tracking as separate classes of service, even though each may be more or less equally important to the business (i.e. close to the same prioritization in the backlog).  Here are some examples: reorganizing a particular function, redesigning a business process, implementing a new application (at the highest level).  Each may follow a similar process: Backlog -> Analyze -> Implement -> Measure Performance -> Done.  The definition of done and timelines may be quite different for each of these items.  Wouldn’t it be nice if, for the next reorganization being proposed, I understood how long my last set took in terms of average lead time and its variability, so I can predictably give an answer to the board?  I don’t want to skew that data with the data from my last network upgrade.

One could argue that we could (or should) use separate Kanban boards for these, but I think that is less useful. I can think of two reasons to have these on the same board.

  1. I want to understand how my organizational WIP of change affects cycle-time overall.  This would be very difficult to do if these were spread across multiple boards. (This is not to say that each effort may not have its own more detailed board.)
  2. If I want to think through alternative approaches and compare cycle-times as a proxy for cost (since time is often money) and benefit (time to market), having these on the same board makes this much easier.  I can use this information as input to my decision-making on which approach to take – for example, to analyze whether I redesign my current business process or automate the existing one.  Knowing the cycle-time can become part of the analysis in terms of both cost and benefit.

A Quick Analysis View

So how do we determine these different classes of service? Well, I have already hinted at the dimensions we will use.  We’re going to categorize the work item types by the time it takes to get them done (just a gut feel) and by differences in the scope of the definition of done, looking for vastly large differences.  You can place these on a grid such as the one below.

[Image: Classes of Service matrix – gut-feel time to done vs. scope of definition of done]

Even items with a similar definition of done may have vastly different timelines; knowing this keeps us from skewing the data when we want to compare like items.  Additionally, not lumping together things that have vastly different definitions of done (column exit criteria) yet follow an identical process at the level we are looking at can also be very helpful.  The bottlenecks that occur can be different, which also makes this a useful distinction.  Lastly, I can now view all of these dissimilar items on the same board and yet have a means of distinguishing them and their corresponding metrics.

When one is stuck on identifying classes of service, or the classes of service between the items appears meaningless, give this a shot and see if it helps.  I’d be interested in other viewpoints.