Incorporating Security into User Stories

One of the biggest initial resisters I have run across among Federal government employee stakeholders is information security personnel (and their supporting contractors). This is often because, when they learn how requirements are managed with user stories, they don’t see a fit for their own requirements; in the Federal space these are guided by FIPS 200 and NIST 800-53 (currently at rev 4). Other writings on the subject do little to help them. Most advocate a separate and distinct type of story, such as this paper by SafeCode, or rely wholly on Dark or Evil stories written from the point of view of someone trying to gain access to a system or deny access to it.

There is no reason software-centric security controls can’t become user stories and/or acceptance criteria to user stories. This post is going to attempt to show you a bit of how to think about this. There is value in using dark stories as well, but I advocate first getting stories that incorporate NIST controls into the backlog.

First up, one must understand that the controls in NIST 800-53 cover a large number of dimensions including physical space control, configuration management, training, etc.  When attempting to convert the controls into user stories and/or acceptance criteria, focus only on the ones that are software-centric.  Controls that deal with authentication, authorization, audit logging, system monitoring, or encryption are great candidates.

Second, per NIST guidance, the organization is expected to establish the baseline needs of applications/systems and then select and tailor the controls needed. This responsibility can fall to the people managing the product features (in Scrum this would be the Product Owner) in concert with IT Security staff. By articulating the controls as user stories or acceptance criteria to user stories, they now carry business value. The person managing what gets done no longer has to make a leap of faith or be told by someone external that they just have to have it.

In the following examples, we’re going to follow the Specification by Example format and use a generic records review system as an example that places records in a queue for authoritative storage (a separate system); it handles Personally Identifiable Information. Let’s start with a story…

User Login Story

As a User Requiring Authentication,
I want to Login to the DataReview application
So that I can review incoming survey data for quality

//Standard Login Scenario
Given the username jsmith // valid userid
When I attempt a login using “nS3cure” // correct password
Then the main home portal page is displayed for me

Given the username jsmith // valid userid
When I attempt a login using “123” // incorrect password
Then I see the login page again with the statement “Incorrect userid or password” along with a count of login attempts.

Given the username smithj // invalid userid
When I attempt a login using “nS3cure”     //password correctness is immaterial
Then I see the login page again with the statement “Incorrect userid or password” along with a count of login attempts.

// Multiple Login Failure Scenario
Given two failed login attempts and the username jsmith // valid userid – 3 attempts require a lock out for a period of time
When I attempt a login using “123” // incorrect password
Then I see the login page again with the statement “Third failed login attempt, your IP address has been locked out for 3 hours” // 3 hrs by policy

Given two failed login attempts and the username smithj // invalid userid – 3 attempts require a lock out for a period of time
When I attempt a login using “nS3cure”     //password correctness is immaterial
Then I see the login page again with the statement “Third failed login attempt, your IP address has been locked out for 3 hours” // 3 hrs by policy

Given the IP address 130.3.55.121 is locked and the password lockout timer is less than or equal to 3 hours // we’re testing using a static IP
When I attempt a login using any username or password
Then I see the login page again with the statement “This IP address has been locked out.” and the password lockout timer is reset to 3 hours.

There could be numerous additional acceptance criteria as well (password complexity, for example), but at least now one can see how these user stories can articulate some of the NIST requirements (in this case AC-7, Unsuccessful Logon Attempts).
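
For instance, a password complexity criterion (territory NIST addresses under IA-5, Authenticator Management) could be expressed in the same style; the specific wording below is illustrative, not drawn from any particular policy:

Given the authenticated username jsmith // valid userid
When I attempt to change my password to “abc” // fails complexity rules
Then I see the statement “Password does not meet complexity requirements” and my password remains unchanged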

Let’s look at how security controls for auditing significant events might show up as acceptance criteria.  In the following example from our records management system, post QC review, we are adding the person’s record into a queue for inclusion into the authoritative system. It is fundamental that we know when this data is placed in the queue for transmittal. This has been determined to be an auditable event per control AU-2.

Approve Record Story

As a QC,
I want to approve records
So that they can be queued for entry into the official records repository.

//Approval
Given authenticated user jsmith has a valid QC role and bjones submitted the record “Victoria Virginia” to jsmith for approval
When I approve the record “Victoria Virginia”
Then I see the message “Victoria Virginia record approved”, the record is appended to the awaiting-transmittal queue, and an entry is made in the security audit log with <date-time>, “jsmith”, “QC”, “approved Victoria Virginia” // policy requires approvals of official records to be logged

//Disapproval
Given authenticated user jsmith has a valid QC role and bjones submitted the record “Mary Maryland” to jsmith for approval
When I disapprove the record “Mary Maryland”
Then I see the message “Mary Maryland record disapproved and returned to bjones” and the record is returned as the first item in the review queue for bjones

Here we can see how the record gets logged (one form of auditing) when approved.  Because the record isn’t being readied to transition to the authoritative system when disapproved, it was determined that event wasn’t auditable.

Hopefully, this helps folks understand how NIST 800-53 security controls can be incorporated into user stories. By putting them into this format, the development team can now develop to them and hook them into acceptance testing via something like Cucumber.
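
As a minimal sketch of that hook (and only a sketch): the step definitions below assume cucumber-jvm with JUnit assertions and a hypothetical DataReviewDriver wrapping whatever UI automation the team uses. Note also that the inline // comments in the scenarios above would become # comments on their own lines in a real Gherkin feature file, with straight rather than curly quotes.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class LoginSteps {

    // Hypothetical test driver for the DataReview login page (e.g., wrapping Selenium)
    private final DataReviewDriver app = new DataReviewDriver();
    private String username;

    @Given("the username {word}")
    public void theUsername(String username) {
        this.username = username;
    }

    @When("I attempt a login using {string}")
    public void iAttemptALoginUsing(String password) {
        app.login(username, password);
    }

    @Then("the main home portal page is displayed for me")
    public void theMainHomePortalPageIsDisplayed() {
        assertTrue(app.isOnHomePortal());
    }

    @Then("I see the login page again with the statement {string} along with a count of login attempts")
    public void iSeeTheLoginPageAgainWith(String message) {
        assertTrue(app.loginPageShows(message) && app.showsAttemptCount());
    }
}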


When I’ve Skipped the Estimates…

While the debate carries on over whether one must have estimates or not, I thought I’d provide a viewpoint from a time when I found them no longer needed.

However, before we go there, let’s start off with a bit of a story about a time when estimates were not useful but were required, so I took the *EASIEST* path out.

Let’s go back to 2008; I had just been hired as a software development Branch Chief in USDA and was asked to prepare the budget for the next fiscal year. Of course, the first thing I did was poll around on what upcoming work there was. No one knew, except that there would be the same amount of maintenance as the year before. That part was easy: apply an inflation factor to this year’s figure, add a management reserve, and we’re done.

Now on to the harder problem: what about the unknown new projects looming? So I investigated how these normally got funded; any estimate done is simply reported up the chain (as requested Development monies), but the funds are actually provided by the programs that need the work done for them. These estimates are used as a projection for the branch and nothing more. Any work actually undertaken goes through its own request process, and then real money is provided.

So I asked: how many projects did we do the year prior, and how much did they cost? And the year before that? And the year before that? 4 projects, 4 projects, and 6 projects were the answers. (I won’t go into the money numbers, but I’ll note this branch did not develop super huge applications, but small to medium-sized applications with some complexity – a GIS app, an analytical app, several tracking-type apps, a loan package development application; that may give you the picture.) I didn’t need to know the number of apps for the reporting, but I used that number to calculate the average cost per app we developed, projected into 2009 dollars; adding a standard deviation gave me some more certainty, and then I added a 15% management reserve. Once I had those numbers, the process was literally a half hour to run through the math a couple of times to ensure I was on target.
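
The arithmetic is simple enough to sketch in a few lines of code; the historical cost figures here are made-up placeholders, not the actual USDA numbers:

import java.util.List;

public class BudgetProjection {
    public static void main(String[] args) {
        // Hypothetical per-project costs from prior years, projected into 2009 dollars
        List<Double> costs = List.of(450_000.0, 520_000.0, 380_000.0, 610_000.0);
        double mean = costs.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = costs.stream()
                .mapToDouble(c -> (c - mean) * (c - mean)).sum() / costs.size();
        double perProject = mean + Math.sqrt(variance); // one std dev above average for more certainty

        double expectedProjects = (4 + 4 + 6) / 3.0;    // average project count of the last three years
        double budget = expectedProjects * perProject * 1.15; // plus a 15% management reserve
        System.out.printf("Projected new-development budget: $%,.0f%n", budget);
    }
}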

My project managers could not believe I was going to use that number; they always went around to each potential customer and asked them to conjecture on the applications or upgrades they wanted. Most of those never got funded; something else would come up and get funded instead, so why spend time estimating work that never happened?

This was a very low-precision estimate, but it got me to a reasonable and justifiable target number. (If the system had allowed for ranges, I would have provided those, but alas it didn’t.)

I’m guessing you are wondering how ‘correct’ I was with that… We had 5 projects, and the cost was fairly close to the average. The next year we did the same thing, but it was off: much higher, as the Recovery Act kicked into high gear. As I pointed out before, though, it didn’t matter.

OK, that was budgets built using the least painful method of estimating possible. (Sometime in the future, ping me on how I executed on real work within the branch… The spoiler hint is that I limited the WIP of projects going on at any one time so that I could keep my team at a close-to-constant size; even with the Recovery Act increase, contractor headcount only grew by about 2 people.)

So now, on to some maintenance estimation I did away with…

When I took over running the maintenance team at the Office of Pesticide Programs, every Software Change Request (SCR) that came in went into a queue, where it was examined in a meeting and the contractor was told to go estimate it. When the contractor came back with their estimate, usually a week later, the work was approved. They estimated in time, and once they figured out who was going to do the work, they could apply their labor rate and quote the money. This singular meeting was at least an hour long every week and consisted of telling the contractor to go estimate the amount of work to do and reporting out on estimates already made. This never went anywhere; no one did anything with these estimates. We never said no to the SCRs for the legacy systems we maintained, mostly because no one worked with the business well enough to know whether the work should happen or not. On top of that, there were 20-some legacy apps with at least that many stakeholders to try and satisfy. Perhaps at some point this estimation process had been used to say no, but with the mostly low-complexity work coming in, there was no drive to say no.

We set budgets based on annual contractor headcount. Perhaps at some point this estimation exercise had been used for that, but it wasn’t any longer.

So I did a couple of things. I killed the meeting. I put the onus on the government application maintenance staff to work with the business to prioritize the work from their viewpoint. I set up a rule set that took these priorities, along with a quick technical assessment (which set severity) and the date in, to establish a prioritization across all apps. I got the stakeholders to agree to this scheme so I didn’t have to fight over each app. We still never said no; we just constantly re-prioritized the work not yet started.
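
A rule set like that is easy to express in code. The weighting below is a hypothetical scheme for illustration, not the actual rules we used: technical severity first, then the business priority the stakeholders set, then oldest first.

import java.time.LocalDate;
import java.util.Comparator;

public class ScrPrioritizer {

    // A software change request: businessPriority and severity use 1 = most urgent
    record Scr(String app, int businessPriority, int severity, LocalDate dateIn) {}

    // Hypothetical cross-application ordering: worst severity first, then the
    // business priority from each app's stakeholders, then first-in-first-out
    static final Comparator<Scr> RULES =
            Comparator.comparingInt(Scr::severity)
                      .thenComparingInt(Scr::businessPriority)
                      .thenComparing(Scr::dateIn);
}

Sorting the not-yet-started queue with something like this is what lets a single ordering span every app without re-arguing each case.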

And I eliminated the estimates. I decided on contractor staffing based on how much work I could get through; I concentrated on further process improvements before I thought of increasing headcount. (You can read about the Kanban system that was set up on GovLoop if you so desire.)

To come full circle to where I once again found an estimate helpful in this environment: a potential regulatory change was going to require a rather large piece of work on our legacy PowerBuilder app. I was asked how long it would take; upper management was interested in ensuring that we had enough lead time to get it done. Not having it done would have a financial impact on the Agency.

Since I had a Kanban system implemented in Trac, I filtered that legacy app’s past enhancements down to ones similar to the requested work and calculated the average and two standard deviations. I gave them that range, stating that the high number gave us 95% confidence we’d fit within it. They deeply appreciated the accuracy and precision in this case. This is a form of estimation, of course, but the real point is that day-to-day we never estimated; there was zero value in it. We did capture actual data using our system, though, which made predictability possible, just as I mentioned above.
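
The math behind that range is the same flavor as the budget sketch earlier; assuming roughly normally distributed cycle times, a sketch with made-up durations looks like this:

import java.util.List;

public class CycleTimeForecast {
    public static void main(String[] args) {
        // Hypothetical cycle times (in days) of similar past enhancements pulled from Trac
        List<Double> days = List.of(30.0, 42.0, 55.0, 38.0, 47.0);
        double mean = days.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double sd = Math.sqrt(days.stream()
                .mapToDouble(d -> (d - mean) * (d - mean)).sum() / days.size());
        // Mean plus two standard deviations is the high end of the quoted range
        System.out.printf("Expect roughly %.0f to %.0f days; ~95%% confident it fits under %.0f days%n",
                Math.max(0, mean - 2 * sd), mean + 2 * sd, mean + 2 * sd);
    }
}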

Hopefully this will help others at least understand one context where estimates weren’t needed, and where low-fidelity estimates were good enough to establish a reasonable number. I consider myself a no-estimates guy only because I look at the assumptions behind why I supposedly need to estimate; if I don’t need to and can derive a more suitable answer in some other manner, I’ll probably use that. It’s all a matter of context.

Using a Business Canvas in a Government Environment

At least some of you know I worked at the Environmental Protection Agency in the Office of Pesticide Programs (OPP). At one point a colleague and I created a Business Canvas for our office; this concept comes from Alex Osterwalder’s book, Business Model Generation. Below is what I can remember of our canvas (we did this about 5 years ago and I did not take it with me, so this was reproduced from memory; it’s mostly correct).

[Image: OPP Business Canvas]

These high-level items allowed us to identify quite a few useful things. I’m not going to go through every box at the moment, but what we found was that we could identify weak spots (our IT contractor at the time was a weakness for us) and the primary activities to leverage in creating our value propositions. We did some postulating on possible new customer segments and thought that specifically targeting farmers (among the largest users of pesticides) might be a good thing to call out.

We then did an analysis of various trends. One trend stuck out: while we were a monopoly, we were still subject to market forces. The economy at the time had been in recession for a couple of years, a pretty severe one at that. PRIA registrant fees funded much of our work. If the economy is tanking, fewer pesticides will be purchased (farmers in particular will try to get by with less to lower costs). This in turn normally lowers the amount companies will invest in R&D. Without R&D, fewer new pesticides will be rolling out for registration, meaning fewer fees and less work for OPP. There isn’t anything magic here, but the canvas had us postulating on it. We went to talk with our IT Director, as we wanted to find a way of testing this hypothesis, given it would have a severe impact on the work we do; he showed little interest.

Later that year, the Office Director for OPP announced we were going to have the lowest number of registrations on record since the Office was founded. I can only imagine that, had we tested our hypothesis, we would have had a leading indicator, as opposed to the lagging indicator of watching the number of registrations trend significantly lower than expected.

Most Government organizations have only appropriations. Even so, thinking in terms of the value propositions being delivered to customer segments, and the activities and partners needed to deliver them, can be really advantageous.

T-Shaped/H-Shaped Contracting Officers

Recently the US Digital Service and the Office of Federal Procurement Policy issued an OMB Challenge; in it they discuss how contracting officers need to be more knowledgeable about digital services procurements. (Digital Services seems to be the new 18F-ish buzzword for user-centric software development, though they also reference cloud-based services…)

In this challenge, they mention creating depth of knowledge in digital services procurement; however, they also suggest a desire to increase contracting officers’ business savviness, though they don’t express exactly what is meant by that.

[Image: T-shaped people have both depth and breadth of expertise]

This prompts me to simply point out that contracting officers and specialists (as well as any acquisition-related professionals) need to aspire to become generalizing specialists, or T-shaped people. What do I mean by this? For a contracting officer, this means becoming not only steeped in contracting services, but also knowing enough about information technology to understand what may or may not apply to procurements. I’d also suggest that getting more knowledgeable about their department’s or agency’s mission, and understanding its needs earlier on, will aid them in becoming better at digital services procurements.

The challenge wants a CORE-Plus curriculum; IMHO this indicates that the government is interested in beginning to create contracting officers who have more breadth. This helps their contributions become more valuable as their knowledge increases to better align with the services being procured. In some ways, the desire to have contracting officers undergo a CORE-Plus certification means they will be more like H-shaped people, with some deeper knowledge of digital services technologies as well.

Contracting, particularly in the government, is a complex undertaking. As someone who maintained several DAWIA (Defense Acquisition Workforce Improvement Act) certifications myself, I can attest to how valuable it is for personnel to have a broader understanding of what they are acquiring and how it fits into the needs of the organization that will use it.

For an excellent general write-up on what T-shaped people are, drop by Darren Negraeff’s post The Importance of T-Shaped Individuals.  It contains links to further reading and is also where the T-shaped image above comes from…