SAFe does not equal Safety

So I am working with a client that has decided to implement SAFe across its teams. They want to make their management of multiple applications easier. I have no doubt that from their perspective it is well intentioned. They do not understand it fully, and the non-inquisitive 'coaches' they have hired have no problem imposing it. And this is why I hate SAFe (sorry, rant for a bit).

It really has nothing to do with having good ideas. SAFe has lots of them (I wish they gave credit to everyone they pulled from – most, if not all, of their ideas are not original). They have plenty of good ideas baked into the framework. What kills SAFe from a safety standpoint is how much it is a practice-mandatory approach. They talk about principles (I'll get to one that I think is misused momentarily), but in the end, when the people they have certified execute this stuff, it is all about practices and implementing them as stated.

So I have a big qualm with how transparency is described. That principle is used to let management look into a team. No problems there. So what about the reverse? Can a team know why a decision was made by a product owner/manager or other stakeholder? This is usually not very clear, and the default favors not letting a team know; it is presumed the external authority is wiser than the team.

I know, you're thinking, what is the big deal… Well, I am working with an organization that tried SAFe a few years back and dumped it because it lowered their ability to release on demand. It has some minimal 'gates' it has imposed; it's a government organization, so it needs some for compliance requirements external to the organization. Yet here comes SAFe and here is how it will do things. The consultants show no inquisitiveness about what is currently in place. When they decide to implement something, they also don't ask what teams are currently doing. They plough ahead with what is mandated by SAFe.

Given I was asked to coach coaches (who were recognized as being too much by the book), I find it interesting how unresponsive they are to different perspectives on how SAFe ideas can be implemented. Since we're going fully virtual, they planned to set up WebEx breakout rooms by team for PI planning. I pointed out that if teams could independently plan in their own room, then we have no need to 'scale'; perhaps the breakout rooms should instead be set up as teams identify dependencies. This was brushed aside. Same for how POs should interact with teams. I expect coaches to recognize the need for adaptation and to listen to feedback.

Unfortunately, most SAFe consultants (BTW, technically I am one – so, disclaimer) seem not to consider the context of the organization, the work systems in place, the actual organizational need that is calling for scaling, and/or the various needs of teams. Why not? SAFe subordinates all of this to the needs of its own system of operation.

If you want to implement SAFe successfully, first consider organizational context. Skip their 'patterns', implement the minimal elements you think you need, and add or remove elements based on what you discover.

Second, think about the teams you are asking to change their work patterns. Some will go willingly; some won't. This feedback is a gift, not a pattern of stubbornness to overcome. Perhaps what you perceive as the ROI isn't as good as what they are offering. Skip the sales message; learn why they think their ROI is what is needed and how you can fit into it.

Third, quit treating teams as people you can’t entrust with organizational decisions. This happens predominantly with two things at play: 1) there are incentives to not play nice and 2) the overall organizational goals are not clear and prioritized.

And that brings me to the last point… If at any time you feel you need to do a sales message, ask yourself why. SAFe seems reliant on 'selling' its importance. It's part of the training, for god's sake… If the need is not self-evident, one should think about why an approach is being pushed. I'm not against using it; I'm for understanding true needs and letting them guide us.

See the Scaling Manifesto for more understanding…

Without these considerations, SAFe will not bring safety, something it proclaims to do.

Creating and Funding Pipelines of Decisions

In the Lean community, there is much talk about identifying Value Streams. For some organizations, I think this gets hard to apply in practice. Yes, they can identify the end point where a product or service comes out, and the first few steps backward are easy, but then the complexity of its construction takes over as the stream enters the more complicated structure of the hierarchy. This is especially true in knowledge work or the creative construction of a product, as opposed to a manufacturing line.

(Bear with me as I span two metaphors here… Pipelines, a construct often used around continuous delivery of a value stream. And rivers, which I will use as a construct for decisions that flow into one another.)

I would propose the fundamental difference is that what we think of as a pipeline is a set of decisions that get made. Using the decisions allows an earlier view into the pipeline. If you were to map this out for one pipeline of decisions in the current organization, it may look more like a river with a set of tributaries connected by some canals. This is because even though production has been pulled into teams, the earlier decisions are still spread across many functions. Simply put, each pipeline then would be its own river system. The linear model of a pipeline isn't all that linear when mapped against the current reality of the organization.

Some decisions though are connected within a single river (canals connecting within a single set of tributaries) and some go across multiple rivers. These canals are important to distinguish. We’ll return to them shortly.

Some of these decisions are made very close together in time, or simultaneously. Often these are not independent. So as a first act, let's think about where some of these simultaneous, dependent decisions could be grouped. This might become a 'team'; thus rather than having decisions shown as canals between tributaries, I now have just one stream where these decisions get jointly made.

An example? Sure – deciding on the exact vision, and thus its scope, for a proven need to be worked; this is very easily shaped by how much funding is available. This could be a team of business/mission (need), marketing (when it may be needed by…), IT (this team is the best fit and this is their capacity in terms of throughput), and financial personnel (we can shape the funding in this manner).  This team can set the scope based on funding, throughput, and team capacity and the business (with marketing perhaps), can establish a vision for this congruent with the organizational vision.

Where decisions truly go across "rivers", these become integration points where people need to work together. Some may even be in the early stages. Going back to the previous example: the team that is shaping this need has it as the organization's next #1 priority to be enabled; the resulting financial decision may impact how the #2 priority (worked by another team) gets shaped.

The resulting rivers (representing the full pipeline) align teams along the entire pipeline that results in services or products, and now better represent the sets of decisions being made.

If you hadn't guessed it yet, this is where many of the traditional managers fit; they get embedded in these teams to help make decisions that shape the flow of work, as opposed to directing groups in how to do their work. They use their broader knowledge of other value streams to know when new integration points (canals in our metaphor) may be needed between them. They receive retrospective input from teams downstream so they can improve how they make decisions. They also balance what teams are starting anew against what they may have to maintain.

The size of the portfolio the organization can maintain simultaneously will dictate how many of these teams are needed. It's possible a team such as this may shape work for a few pipelines or be a part of just one. The pipeline is based on downstream capacity as represented by throughput.

Views of Estimating and Not Estimating for an Executive

This post was developed in order to give a longer response to Henrik Ebbeskog's tweet:

https://twitter.com/henebb/status/896404981003296768

My personal response to this tweet is that it represents a one-sided, static, and unsophisticated view of what a CEO may want. I am going to use this question as a launch point to show that a range of approaches is possible. Which one may be 'correct' is highly dependent on the CEO's mental models (and motivations), the organization, and the environment in which the organization finds itself. In this post, I am only trying to disprove the hypothesis that the CEO must understand some estimate from a traditional point of view at the beginning of an initiative. I'm also going to explore this in a financial sense, not so much from what a team may do around story points, et cetera, though I will make a short mention of that at the end.

For context that we can work within, the scenario is a SaaS company that provides financial compliance services. The company already has revenues in the high tens of millions of dollars and thus is not a start-up. The CEO is interested in potentially expanding into a new market by launching new services that help clients monitor environmental compliance.

A Traditional Estimation Mindset

If the CEO is in a traditional estimation mindset, she or he will be interested in knowing as much about the iron triangle's values of cost, time, and scope as possible. The CEO will turn to marketing (the Chief Marketing Officer if they have one) and ask of them, "What are all the environmental compliance monitoring needs, who are our potential customers, and what is the potential revenue for these services?" Before marketing runs off and does this research, the CEO also asks, "How long will it take you to research these, and how much will this research cost?" These are of course fair enough questions; the CEO wants to know the potential cost of the information before giving a go-ahead and whether it can be done in a reasonable timeframe.

OK, an estimate is made on cost and time (hopefully using historical data if they have it), the answer sounds reasonable to the CEO, and so the green light to proceed is given. Marketing proceeds with the work to understand the market space, taking about one quarter to do so at roughly $150K; this is on schedule and on budget against the estimate they gave the CEO (one quarter and $150K +/- $10K; woot! win!). They may research the web on compliance needs, survey companies, see if competitors exist, et cetera. It looks promising; the revenue looks like it will be $10M for the first year, $20M the second year, and an estimated $30M the third year.

Now the CEO turns to the Chief Technical Officer, asking, "How long will it take you to build this and when will it be done?" as he hands marketing's findings on scope to him. Of course the CTO doesn't give her or him a flippant answer, so the CTO goes back and pulls together a cross-functional team, including an experienced product manager (let's assume they have been using Scrum/XP practices for years), and this team defines an MVP with a rough price tag of $225K +/- $50K to get there. They also come up with an estimate of a first marketable release a quarter after that, and (in talking with marketing) another 2 subsequent improvement releases based on prioritized environmental monitoring needs over the following two quarters, for a total cost of $900K +/- $200K. Cool beans! Let's go!

They execute, and for simplicity's sake they stay true to their estimate of $900K total and $225K per quarter. I want to state that once the team is pulled together, the cost over a time interval is known if it is a stable cross-functional team.

The mindset here is understanding risk before executing (and of course managing it during execution).

A Lean Start-up Mindset

The CEO is interested in exploring the same environmental compliance space. He talks with his other executives and they decide to form a cross-functional team of marketing (which includes an experienced product manager) and IT personnel. They pull together a hypothesis of customer, problem, and solution and identify a set of assumptions about it. The team and executives set a vision for the need and boundary constraints that ensure it stays aligned with the company's core vision and mission. Within these constraints is a set of questions that, once understood, state the criteria for transitioning to a development effort, as they provide enough detail to define an MVP and MMP as well as what the revenue stream for the MMP and other potential known releases of the SaaS product will be. The CMO is appointed as the oversight on this effort and agrees that after each assumption is tested he will review whether to proceed, pivot, or kill. It's worth noting that at this point none of the iron triangle is known. Costs per week of this assigned team are known (just as they were AFTER marketing provided the estimate above and was told to execute its research).

The first assumption is that monitoring a particular environmental compliance aspect is unserved in the marketplace. The team tries to find evidence of this through market research and a survey of their financial services customers in the same space. It is not disproved, so the CMO gives a proceed (or, in start-up terms, persevere) signal. They test the next assumption, and then the next. Sometimes they build a quick prototype to see if a particular compliance rule can be enabled (it was the riskiest assumption). At the end of a quarter and $150K, they know what the MVP and MMP look like and what the next 2 releases look like. They have the same revenue stream predicted as the traditional team.

At this point the team is reconfigured to look like the Scrum/XP team above and proceeds (however, no estimate is asked for), and we'll say that they deliver just as the team above did, costing $900K. The important point is that the same stable cost over the time intervals operates as above. At each quarter a proceed or kill decision is made based on the throughput of work completed compared to what remains. This is a form of estimation – yes, I realize it. What is different in this case is when the estimate is made; I didn't start with an estimate. A rougher approach is simply eyeballing the remaining backlog against what has been completed so far. I could also evaluate the marketplace at this point using the competitive analysis I had done and see if I want to continue (basing a decision on expected value in terms of a change in potential revenue – another form of estimate).
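To make that quarterly check concrete, here is a minimal sketch of the kind of throughput-based proceed/kill decision described above; the backlog, throughput, and deadline numbers are all hypothetical, not from the scenario.

```python
# Quarterly proceed/kill check based on observed throughput (all numbers hypothetical).
remaining_items = 120         # backlog items left for the planned releases
last_quarter_throughput = 70  # items actually completed in the most recent quarter
quarters_left = 2             # quarters remaining until the planned final release

quarters_needed = remaining_items / last_quarter_throughput
if quarters_needed <= quarters_left:
    print(f"Proceed: about {quarters_needed:.1f} quarters of work fits in {quarters_left}.")
else:
    print(f"Re-plan or kill: about {quarters_needed:.1f} quarters needed, only {quarters_left} left.")
```

The estimate here comes from data the team already generated by delivering, which is the point: it is made late, after real throughput exists, rather than up front.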

Other Alternatives

I could choose to do a traditional approach to marketing and then build without an estimate of how much it will cost. I could reverse this and do a lean start-up approach to understanding the market demand and then transition to an estimated approach as well.

One thing to note: the lean start-up approach could be modified so that once the MVP (or MMP) was defined, the company could actually start building, knowing they have some revenue stream that will come in; they may not know whether the company will reach the revenue stream as predicted, though. This would decrease time to market for the MMP and may even allow the company to expand into the other markets, or it could decide not to and offer a complementary service that companies purchase.

Take-Aways for the Reader

First, I painted a rosy picture of delivery. It is very likely delivery will not go as smoothly as this. Thus when I reach the end of a quarter where a release is defined, I need to decide whether to continue, stop, or release with what I have. In a traditional, estimated viewpoint, I am deciding whether to add more time to the schedule (and potentially run over budget as a result) to release with the fully expected scope, or to release with less scope on time and at budget. Regardless of whether I estimated the length of time it would take me or not, I can use my actual throughput of work as the predictor of whether to continue. Again, as I mentioned before, this is a form of estimation; I am just choosing to do it later.

Second, I didn't get into estimation that may (or may not) occur at the team level.

The primary use of story points (or another team estimation method on stories) is to know whether a story is small enough to be completed easily within an iteration. Some teams get really good at understanding their sizing and can stop using story points. (Lunar Logic's estimation cards are a good insight into this: every story is either a 1 – we can take it on, TFB – too f-ing big, or NFC – no f-ing clue.) I encourage teams to examine a story's independence and testability to gain this understanding, as these two parts of the INVEST criteria are what feed the complexity thinking one needs in applying story points. Teams can still measure throughput and lead time, as these can be useful for later questions when an estimate may be needed about 'how much' or 'how long'.

Third, I'd like to change the conversation a little bit. I personally think value and cost are decoupled. Net value (value minus cost, a.k.a. ROI) is where they become coupled. A great place to get a sense of this is Reinventing Project Management by Shenhar and Dvir. In this book, the authors describe two scenarios where the cost of the project had nothing to do with the end value of what was produced. Another change is ridding ourselves of project thinking when we are doing product efforts. Product life-cycles extend beyond initial delivery, and when we use project thinking we often short-change our understanding of both costs and value in the long term, whether we estimate or not.

Fourth, I hope this gives some insight into when choices can be made about estimation; it is not a simple binary answer, but one of fidelity. In some cases, one will want to run several detailed simulations in order to understand whether an undertaking should be done. In other cases, maybe we can just get started with none whatsoever. Humans never actually escape mental models of estimation, however; even a zero on this range assumes we will get some learning insight that has value, and that in itself is an intuitive estimate. We certainly discovered this at the first Agile Dialogues unconference. (My biggest personal disappointment at this unconference is that the person who helped shape the theme then chose not to come after indicating they would.) What the thinking in the #noestimates 'movement' is trying to do is change the nature of this and question what our assumptions and beliefs are about what and when to estimate.

I'll close by saying that there are people who add well to this conversation – they bring in well-formulated opinions. There are others who prefer to provoke – this occurs on both sides, unfortunately. I personally seek actual dialogue so we can get out of binary thinking on this (see the Agile Bramble). I point out circumstances where not estimating worked, not to argue that not estimating is the path to follow (I've never said 'never estimate'), but to have more dialogue about when we should undertake it or not and what we should estimate. Notice I didn't say 'if'. I've had someone state I evidently had no evidence when I have given some. I'm also not interested in endless debate – ask yourself whether you feel you need to 'win' an argument. If so, you are not in a mindset for dialogue or learning, but to prove a point.

With this, I hope I have shown that the Rule of 3 applies 🙂

 

Observing Human Work Systems – A Coaching Fundamental

The topic of observing people working has caught my interest even more ever since I attended the Problem Solving Leadership workshop run by Gerald (Jerry) Weinberg and Esther Derby. At this year's Agile Coach Camp, I ran a session on this to learn more about what other coaches do to see humans at work, as well as to share areas I had learned to start paying attention to…

Our first step was to identify behaviors we observe. Some of the more common ones showed up in our list: work artifacts, conflict, movement, body language, noise levels from people and the environment, and patterns of communication between members (who talks to whom). Others are a bit less common: when people take on leadership, where focus is, and what Jerry calls 'jiggling', an interaction or event that gets a stuck system to change in a meaningful direction.

I then had a few people volunteer to observe and select what about the system they wanted to observe. The remainder of the people did an exercise I gave them. Afterwards, we debriefed what was observed. One person chose to watch people's body language and, during the exercise, focused on when people had eye contact. Another chose to watch who had control of the pen, since it was the primary method for getting work done. Lastly, I chose to focus on who took leadership roles at what time.

(Image: communication patterns mapping)

This led to a discussion about how one can observe communications in a meeting and get an idea of who dominates the discussion by drawing lines between who talks to whom, weighted by how much. Roughly equal lines among everyone show little domination, while lots of lines between just a few may show others being ignored. This continued with some discussion around Google's Project Aristotle and the work of Alex 'Sandy' Pentland; here's a really good paper measuring face-to-face communications.
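As a rough illustration of that mapping, here is a small sketch that tallies who talks to whom from a list of observed exchanges; the names and data are made up, and a real observer would simply mark tallies on paper the same way.

```python
from collections import Counter

# Hypothetical observations of who spoke to whom during a meeting.
exchanges = [
    ("Ana", "Ben"), ("Ben", "Ana"), ("Ana", "Ben"),
    ("Ana", "Cam"), ("Ben", "Cam"), ("Ana", "Ben"),
]

# Tally each unordered pair; heavier counts would be drawn as thicker lines on the map.
pair_counts = Counter(frozenset(pair) for pair in exchanges)
for pair, count in pair_counts.most_common():
    print(" <-> ".join(sorted(pair)), count)
```

In this made-up data, Ana and Ben dominate the exchange while Cam barely participates, which is exactly the kind of imbalance the line-drawing exercise makes visible.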

As a follow-on, I had the team do a cluster exercise I learned in PSL. I asked the people in the exercise to stand next to those with whom they most closely worked. This is very revealing; I've done it in a few retrospectives, and it can have a team self-reflect on whether they may be isolating others in their work system or whether the connections are wrong for the work.

We closed by sharing what made observing work systems difficult and how we may be able to improve this important skill. Below is the photo we took of the flip chart on this…

(Photo: flip chart from the last ACCUS session)

It was great seeing the variety of answers for improving our skills as coaches in this domain. Several mentions of humility were made, as well as the core Scrum value of Focus. My favorite comment, though, was the metaphorical "Listen with Your Eyes"; another, hinting at cognitive empathy, was "have multiple views", reminding us of the Elephant and the Blind Men.

Leadership in Agile Transformations: A Haiku

In keeping with my thoughts on transformation, I wrote a haiku on the good leadership that is needed in Agile transformations.

Farmers cultivate

Burros make furrows in minds

More emerge to join

Can you see what leadership is happening in the above? How has leadership been happening in your organization?

 

A bit of Agile Transformation Haiku

One of the Agile Retroflections of the Day I submitted is this one, “What haiku can you write that reflects on your transformation efforts thus far?”

I’m a huge fan of haiku; it uses powerful imagery to convey messages. This was the one I came up with for my customer’s transformation thus far:

Strong burro works hard

Bureaucratic harnesses

But it’s pull, not push

I was inspired to use a burro as they are real workhorses… ummm, really burros… and because I know a fellow Agilista, Lisa Crispin, raises them.

But upon additional thought, I’d rewrite the haiku slightly to become:

Burro of strong heart

Bureaucratic harnesses

But it’s pull, not push

What would your haiku be?

 

 

Symptom of Anti-Agility: CRM Groups

(Image: collapsed bridge)

I haven't written in a bit (well, I have actually, but on Excella's Insight blog), but I wanted to write about an anti-pattern I am seeing at one of our clients. Many have written that management seems to think of "Agile" as applying only within development teams, forgetting that this Lean and Agile thinking needs to permeate everywhere. It's a dysfunction to continue working on…

So let me give you the example I am seeing so we can talk in more concrete terms.

The organization has had a rocky past between central IT and the business. Software development is distributed across various business units as well as a central IT shop (I'm staying away from the actual terms the customer uses, BTW). Central IT also does considerably more, though, such as managing the network, running a server farm or three, managing cloud providers, and providing email and content management services, etc.

The complaints from the business have been articulated as (in no particular order):

  • I have to know who to call in order to get good service; it's difficult to know what the official channel for requests is.
  • Service is often slow and paper/form intensive.
  • When I use any official channel, it doesn’t necessarily get routed to the right location.
  • I’m not told everything I need to do to get my request fulfilled, resulting in delays.
  • I don't know the status of requests that take longer to fulfill, nor have I been told or do I know where to go to find out my status.
  • I’m uncertain if any feedback I provide is acted upon.

I’m certain everyone has seen these before in some mix.  IT also has a set of complaints:

  • People go to whoever they happen to know that is related to the request they want to make in order to get service. This eats into people's time without management having much knowledge of it (perhaps the immediate supervisor knows, but no one else).
  • IT needs to begin a charge-back model due to budget restructuring that is removing a great deal of central IT's funding, so more knowledge of requests and services is needed. The manner in which requests come in does not provide a means to track them.
  • The business side doesn't see the great work we are doing.
  • The right people in IT don't receive the needed feedback.
  • IT has a blemished image/brand and is thus not viewed as a valuable partner.
  • The CIO can't keep up with the requests given to him by his peers; too many are bubbling up and being made directly to him.

If you work in an IT organization, particularly as a senior manager type, how many of these have you seen? Before I go into how to handle these with some Agility, let’s look at what this organization is doing that I consider an anti-pattern; literally anti-Agility thinking.

The senior manager tasked with solving this sees this primarily as a communications problem in 3 areas:

  • Routing to the right people to do the work
  • Taking in feedback on IT's performance
  • Improving the communications on good performance (improving the “brand”)

To solve this, they are standing up a Customer Relations Group that will take in initial requests/requirements, receive feedback about performance during and at the end of the work and provide it to the group(s) that performed it, and get the word out on the good stuff IT does.

They are starting by creating a master catalog of services IT provides (which on the back-end will have costs associated with them). It’s unclear whether these costs and/or the parts of the IT organization that will perform them will be a part of this catalog.  So far not bad… They also will stand up a single intake service (along with an online form) to process these needs, then ‘interview’ the customer to get any additional information, and route this request to the proper parts of IT.  It seems at this point there will be a hand-off; what is unclear is how this hand-off will work if multiple parts of the organization are involved simultaneously. If software development is the primary concern, it will be handed off to a development team (or a new team will be stood up).

This new Customer Relations Group will also query the customers periodically to find out how well the various parts are doing and give this to the right part of the organization. It will also give any responses back to the customer. And finally it will create some marketing material and positive ‘press’ to help sell IT services.

To keep this from being a burden on the current IT organization, the bulk of this work will be contracted out to a big name firm that everyone respects. Sounds great doesn’t it!? Every organization needs this, right?

As well intentioned as this group (and its contractors) may be, this creates several dysfunctions.

  1. This is adding an additional step that will actually slow requests getting to the right people. This will increase the time to service, probably decreasing happiness, perhaps not initially, but certainly in the long run.
  2. It adds a hand-off of information so that the people needing it are getting it second-hand. This will lead to greater misunderstandings between the groups that need to work together.
  3. Feedback and working communications are at least partly being removed from the group that needs them directly.
  4. There is an assumption that a one-size-fits-all intake process can capture all of the unique needs a customer has to convey to a particular supplier. A request for an email distribution list would be vastly different from one for a software development project, or even just business analysis services for a development project.
  5. Messaging (branding) will be coming from a third party not responsible for any of the results.

Even worse, though, there are fundamental root-cause problems being masked over. For example, why does the business feel the feedback they give is not listened to and acted upon? This solution isn't going to address that root-cause concern. What is causing the long lead time for IT to respond to business needs? Why are we trying to create a standard process for intake and routing, as opposed to simply better connecting people to those who would supply the services, giving proper visibility into the incoming work, and allowing a self-routing approach? Why do we need to do a charge-back for the services internally, as opposed to getting the budget reallocated properly to pay for them?

So how would I approach these items? I'd start by challenging the fundamental way things are done. I would learn where the bottlenecks causing the unacceptable lead time are. I'd investigate the root causes of the image/branding problem and see how to solve those. I'd see how I could make a catalog of services attached to the people who provide them, and work with those people to create mechanisms that give me an understanding of the allocation of requests. And I would at least talk with the budget personnel to find out how I could simply get the budget allocated properly; if that couldn't be done, I'd make my service costs transparent to people when they look up my services. If I wanted something Customer Relationship-like, I'd perhaps think about deploying customer relationship software to all the groups directly, if evidence showed it would help.

Bottom line: I'd reduce hand-offs and keep with the spirit of individuals and interactions over processes and tools. I'd do the simplest thing I think is in the right direction and then retrospect on how that is working for me.

 

When I’ve Skipped the Estimates…

While the debate carries on about whether one must have estimates or not, I thought I'd provide a viewpoint from when I found them no longer needed.

However, before we go there, let's start off with a bit of a story about when estimates were not useful but required, so I took the *EASIEST* path out.

Let's go back to 2008; I had just been hired as a software development Branch Chief at USDA and asked to prepare the budget for the next fiscal year. Of course, the first thing I did was poll around on what upcoming work there was. No one knew, except that there would be the same amount of maintenance as last year. That part was easy: apply an inflation factor to what we had this year, add a management reserve, and we're done.

Now onto the harder problem: what about the unknown new projects looming? So I investigated how these normally got funded; any estimate done is simply reported up the chain (as requested Development monies), but the funds are actually provided by the programs that need the work done for them. These estimates are used as a projection for the branch and nothing more. Any work actually done goes through its own process of requesting, and then actual money is provided.

So I asked, how many projects did we do the year prior and how much did they cost? And the year before that? And the year before that? 4 projects, 4 projects, and 6 projects were the answers. (I won't go into the money numbers, but I'll note this branch did not develop super huge applications, but small to medium sized applications with some complexity – a GIS app, an analytical app, several tracking-type apps, a loan package development application; that may give you the picture.) I didn't need to know the number of apps for the reporting, but I used that number to calculate the average cost per app we developed, projected into 2009 dollars; adding a standard deviation gave me some more certainty, and then a 15% management reserve. Once I had those numbers, the process was literally a half hour to run through the math a couple of times to ensure I was on target.
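For illustration, here is roughly what that half hour of math looks like in code. This is a minimal sketch of one way to read the description above; the per-project costs and the inflation factor are invented, since the post deliberately leaves the real figures out.

```python
from statistics import mean, stdev

# Hypothetical per-project costs for the prior three years (the real figures aren't in the post).
costs = [310_000, 280_000, 350_000, 300_000,                    # 4 projects
         260_000, 330_000, 295_000, 340_000,                    # 4 projects
         270_000, 320_000, 305_000, 285_000, 315_000, 290_000]  # 6 projects

inflation_to_2009 = 1.03             # assumed inflation adjustment factor
projects_per_year = mean([4, 4, 6])  # average number of new projects per year

# Average cost plus one standard deviation, in 2009 dollars, then a 15% management reserve.
per_project = (mean(costs) + stdev(costs)) * inflation_to_2009
budget = projects_per_year * per_project * 1.15
print(f"New-development budget request: ${budget:,.0f}")
```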

My project managers could not believe I was going to use that number; they had always gone around to each potential customer and asked them to conjecture on applications or upgrades they wanted. Most never got funded and something else came up and got funded instead, so why spend time estimating what never happened?

This was a very low-precision estimate, but it got me to a reasonable and justifiable target number. (If the system had allowed for ranges, I would have provided those, but alas it didn't.)

I'm guessing you are wondering how 'correct' I was with that… We did 5 projects and the cost was fairly close to the average. The next year we did the same thing, but it was off – much higher as the Recovery Act kicked into high gear; but as I pointed out before, it didn't matter.

OK, that was budgets built using the least painful method of estimating possible. (Sometime in the future, ping me on how I executed on real work within the branch… The spoiler hint is that I limited the WIP of projects going on at any one time so that I could keep my team close to a constant size; the increase in work meant I only experienced a contractor headcount increase of about 2 people.)

So now onto some maintenance estimation I did away with…

When I took over running the maintenance team at the Office of Pesticide Programs, every Software Change Request (SCR) that came in went into a queue where it was examined in a meeting and the contractor was told to go estimate it. When the contractor came back with their estimate, usually a week later, the work was approved. They estimated in time, and they could then quote the money once they figured out who was going to do the work and applied their labor rate. This singular meeting was at least an hour long every week and consisted of telling the contractor to go estimate the amount of work to do and of reporting out on estimates already made. This never went anywhere; no one did anything with these estimates. We never said no to the SCRs for the legacy systems we maintained, mostly because no one worked with the business well enough to know whether the work should happen or not. On top of that, there were 20-some legacy apps with at least that many stakeholders to try to satisfy. Perhaps at some point this estimation process was used to say no, but with the mostly low-complexity work coming in, there was no drive to say no.

We set budgets based on annual contractor headcount. Perhaps at some point this estimation exercise was used for this, but it wasn’t any longer.

So I did a couple of things. I killed the meeting. I put the onus on the government application maintenance staff to work with the business to prioritize the work from their viewpoint. I set up a rule set that took these priorities, along with a quick technical assessment (which set severity) and the date the request came in, to establish a prioritization across all apps. I got these stakeholders to agree to this scheme so I didn't have to fight over each app. We still never said no; we just constantly re-prioritized the work not yet started.
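The post doesn't spell out the exact rule set, but a minimal sketch of that kind of cross-application ordering might look like the following; the field names and the ordering (severity first, then business priority, then request date) are my assumptions for illustration.

```python
from datetime import date

# Hypothetical SCRs: business priority (1 = highest), severity (1 = most severe), and date received.
scrs = [
    {"app": "TrackerA", "priority": 2, "severity": 1, "received": date(2011, 3, 4)},
    {"app": "LoanApp",  "priority": 1, "severity": 3, "received": date(2011, 2, 20)},
    {"app": "GISApp",   "priority": 1, "severity": 2, "received": date(2011, 3, 1)},
]

# One possible rule set: most severe first, then highest business priority, then oldest request.
work_queue = sorted(scrs, key=lambda s: (s["severity"], s["priority"], s["received"]))
for scr in work_queue:
    print(scr["app"], "severity", scr["severity"], "priority", scr["priority"], scr["received"])
```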

And I eliminated the estimates. I decided on contractor staff based on how much work I could get through; I concentrated on further process improvements before I thought of increasing headcount. (You can read about the Kanban system that was set up on GovLoop if you so desire.)

To come full circle to where I once again found an estimate helpful in this environment: a potential regulatory change was going to require a rather large piece of work on our legacy PowerBuilder app. I was asked how long it would take; upper management was interested in ensuring that we had enough lead time to get it done. Not having it done had a financial impact on the Agency.

Since I had a Kanban system implemented in Trac, I filtered that legacy app's past enhancements down to ones similar to this work and calculated the average and two standard deviations. I gave them that range, stating that the high number had 95% confidence we'd fit within it. They deeply appreciated the accuracy and precision in this case. This is a form of estimation of course, but the real point is that day-to-day, we never estimated; there was zero value in it. We did capture actual data using our system though, which made predictability possible, just as I mentioned above.
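Here is a minimal sketch of that calculation, assuming lead times in days pulled from the Kanban tool for similar past enhancements; the numbers themselves are invented.

```python
from statistics import mean, stdev

# Hypothetical lead times (days) of past enhancements similar to the regulatory change.
lead_times = [34, 41, 29, 55, 38, 47, 33, 60, 42, 36]

avg = mean(lead_times)
high = avg + 2 * stdev(lead_times)  # upper end of the range, per the approach described above
print(f"Expect roughly {avg:.0f} days on average, about {high:.0f} days at the high end.")
```

The work is in keeping the historical data clean and comparable; once that exists, producing the range takes minutes.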

Hopefully this will help others at least understand one context where estimates weren't needed and also where low-fidelity estimates were good enough to establish a reasonable number. I consider myself a no-estimates guy only because I look at the assumptions behind why I need to estimate, and if I don't need to and can derive a more suitable answer in some other manner, I'll probably use that. It's all a matter of context.

Yin and Yang in Change Management: Appreciative Inquiry and the Power of Habit

There are many change management approaches out there. Most focus on weaknesses you need to change; several others focus more on things to keep the same and build upon. Most change agents then further target using one approach or another, perhaps based on context, or perhaps as a 'goto' tool (you know what they say about goto statements – don't use them!).

My preference is to balance between these approaches; two I have found really useful are Appreciative Inquiry and the Power of Habit.

I have always tried to help people and organizations find their strengths and build on those. This is the basis of Appreciative Inquiry, something I learned by reading the Thin Book of Appreciative Inquiry. I'm looking forward to reading more on this, to be honest, as I think it is undervalued as an approach (I almost said under-appreciated…). Being able to identify (really, to help others identify) the core strengths they have and harness those for the changes they want to see is really powerful.

Here's an example of how you might use Appreciative Inquiry: have the leadership of an organization (preferably with a sprinkling of people from lower down the totem pole) create a KrisMap of where they want to be, personifying what the future organization will become. Then have the people identify the strengths they can leverage towards the resulting characteristics and build action plans to achieve these. Very powerful, and quite motivating, since you are using core strengths.

Yet only applying strengths does not help you eliminate weaknesses. When talking about change such as what may occur in an Agile Transformation, another approach to look at is the Power of Habit. When habits work against where the organization's people want to be, then one needs to look at changing the habit to a new one, while keeping the reward the same.

The Habit Loop can be described this way: a cue triggers a decision, and once similar decisions are made repeatedly following the same routine and provide some form of benefit or pain avoidance (the reward), that routine becomes the preferred one; a craving gets established. This is regardless of whether we're talking individual, organizational, or societal habits. Keeping the reward the same while changing the routine allows new habits to become positively reinforced. This can take some work, and I always recommend breaking down the habit into a causal loop showing all the steps being taken. This helps in identifying leverage points that you can use (strengths, in Appreciative Inquiry speak) and possible side loops that could derail the change; essentially, risks to mitigate.

Lastly, remember introducing change, regardless of approach, can be overwhelming. Limit the number of changes you are introducing at any particular point in time.  This gives you a chance to better sense the effect the changes are making and respond accordingly.

For more information, see the latest version of my Taking Flight presentation.

A Short Essay on Using Models – Why You Should Use Them & Why You Should Create Some

I use many models in my thinking, whether they are mine or someone else's, yet I don't think of myself as a theorist. I thought it may be helpful to explain why models are so valuable to a pragmatist. Another word for model is framework…

“essentially, all models are wrong, but some are useful”

George E.P. Box

This quote is the first thing to remember when you begin using any model; you need to remember that at some point a model will break down and no longer support what you were using it for… Like a lean start-up idea, create and use models passionately, but stop using them the moment evidence points to them no longer being helpful. (The nice thing about a model, though, is that generally this means you have crossed an edge case where the model doesn't work any longer, but it may still be useful in the long run. If the model consistently doesn't work, then perhaps the model has some invalid assumptions. Exploring these assumptions may then help you refine the model into something that once again works, or help you find or develop a model that does work under the broader circumstances.)

This brings me to the next point – ALWAYS realize models have a set of assumptions.  Explore how the model works under these assumptions.  This helps you understand when the model may be useful and when it may not. With that, why do you need them if you are simply someone (particularly a coach or manager) who needs to help people get things done?

Models help you understand systems; they may not provide a means to achieve an answer, but may simply provide a means for organizing your thoughts. The Cynefin model by David Snowden is one of these latter ones – it can help you understand the problem space you are exploring for decision-making. Finding models that can represent systems, or at least significant and important portions of a system, is mostly useful for helping you organize your thoughts. The act of thinking through when and how these apply, including valid and invalid assumptions about variables, algorithms, or organization (for more pictorial models), really helps you determine which things to pay attention to. Even if you find the model doesn't work, the amount of thinking you went through will serve you well.

And I invite you, particularly when you don't find a model that seems to represent what you need, to try and think through creating one. Don't worry about it being perfect; you can always adapt the model after inspecting how it works. Again, you are using this to organize your thoughts. Creating a model could be as simple as combining models; Jurgen Appelo's CHAMPFROGS model about motivation does this. It appears Jurgen saw gaps, overlaps, and some inconsistencies in representation and blended a new model to make it clearer to him.

It’s also extremely useful to find where different models connect in explaining the same observations (data) differently.  This helps you understand where options may be found and where the thinking on these has many dimensions, which again exposes assumptions about the models.

Going back to the usefulness, one huge benefit for applying or creating a model is stepping back from tactical thinking to a more strategic layer.  This helps in prioritizing based on importance over simple urgency.

People serving as coaches and managers are there to help people improve the system; you can do this best when you have your own thoughts organized. Models can be an essential tool in selecting and organizing the particular tools and techniques you need to apply.