Like Wine and Cheese (Part 2)

In yesterday’s post, I proposed a model for assessing work and assigning an appropriate lifecycle. That’s great in theory, and for those of us who deliver framework workshops, it’s an excuse to spend a couple more hours diagramming on the whiteboard, adding lots of arrows and boxes and circles. But what does that mean at a more tactical level? How do multiple lifecycle models get us closer to answering the two questions that prompted this discussion:

Do you also think that all schedules can or should follow the *same* (arcane or standardized, old or new, blue or red, agile or fatcat) schedule model?

…and….

How far ahead are you able to do almost perfect, good and less great predictions in your schedule? How does your scheduling model affect your prediction capabilities?

I’d contend that yesterday’s post addressed question #1: not all work is created equal, and the lifecycle should be tailored to the work. That leaves the second question, which essentially boils down to this: how do we meet organizational estimating and control requirements while still maintaining a flexible lifecycle model? This post is intended to answer that question, i.e. to show how we can have our cake and eat it too, using iterative models while still meeting the organizational estimating requirements.

Hence, this post is more tactical in nature: my goal is to show how to actually structure a system around the models proposed yesterday.

Work Authorization Systems

Typically, as a project progresses through the business case development process, its risks are assessed and the appropriate model is assigned. That means our work authorization systems need to incorporate an assessment of which lifecycle model fits the work.

How would that impact the organization? Realistically, we might not wish to authorize a project that maps to a model we’re not familiar with. Perhaps we need to focus on hiring resources who can actually manage projects like this. Or we may wish to restructure the project to mitigate risk: shrink it by chartering each iteration separately and reducing the overall scope of the effort, or extend the business case development process to include the initial prototyping.

At the end of the day, process is inherently an exercise in risk mitigation, and the process applied to the work should be commensurate with the level of risk that has been identified in the work.  Flagging the project appropriately from the beginning allows us to route it through the appropriate estimating process.
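To make that flag concrete, here’s a minimal sketch of how a work authorization system might record the assigned lifecycle. The model names, the 1-10 risk scale, and the routing thresholds are all illustrative assumptions, not a prescription:

```python
from enum import Enum
from dataclasses import dataclass

class Lifecycle(Enum):
    WATERFALL = "waterfall"
    ITERATIVE = "iterative"
    HYBRID = "hybrid"

@dataclass
class ProjectCharter:
    name: str
    risk_score: int        # output of the business case risk assessment (illustrative 1-10 scale)
    lifecycle: Lifecycle   # assigned at authorization; drives the estimating process downstream

def assign_lifecycle(risk_score: int) -> Lifecycle:
    """Illustrative routing rule: higher scope uncertainty pushes toward iterative models."""
    if risk_score <= 3:
        return Lifecycle.WATERFALL
    if risk_score <= 6:
        return Lifecycle.HYBRID
    return Lifecycle.ITERATIVE

charter = ProjectCharter("CRM replacement", risk_score=7,
                         lifecycle=assign_lifecycle(7))
```

The point isn’t the thresholds; it’s that the flag exists in the system of record, so everything downstream can key off it.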

‘Ware the Bean Counters

Which brings us to the bean counters. You know who they are. They’re the folks who want to know how many resources will be used and when they’ll be required. They’re the bogeyman that project managers invoke to justify not developing detailed estimates, but simply peanut-buttering resource requirements across the lifetime of the project. “Bean counters,” it is argued, “do not understand iterative planning. We must give them standard CPM schedules, even if we know they’re wrong.”

This is a fallacy. The reality is that the bean counters need to know what resources are required and approximately when. They’re not the ones looking for near-term resource contention; that’s the functional managers. We need to differentiate between the two main goals of our estimate consumers:

  • Short-term resource contention – identifying potential resource conflicts in the next three months.
  • Long-term resource availability – forecasting the availability of specific roles over the next 3-12 months.

These are inherently two different goals, and we must review our estimating behavior to ensure that we can meet both. The implication, however, is that when building my detailed schedule, I need to focus on the resource assignments I can define in the near term, while remaining a bit more vague about the assignments required beyond that immediate planning horizon. That methodology will still meet the needs of my stakeholders.
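As a concrete illustration, here’s a minimal sketch of how one resource plan can answer both consumers: named assignments inside the planning horizon serve the functional managers, while role-based buckets beyond it serve the bean counters. The field names, dates, and three-month horizon are invented for illustration:

```python
from datetime import date, timedelta

# Illustrative assignment records: named resources inside the planning
# horizon, generic roles beyond it.
assignments = [
    {"who": "Alice", "role": "developer", "start": date(2025, 1, 6),  "hours": 80},
    {"who": "Alice", "role": "developer", "start": date(2025, 1, 20), "hours": 80},
    {"who": None,    "role": "developer", "start": date(2025, 6, 2),  "hours": 160},
    {"who": None,    "role": "tester",    "start": date(2025, 8, 4),  "hours": 120},
]

today = date(2025, 1, 1)
horizon = today + timedelta(days=90)

# Functional managers: named-resource contention in the next three months.
near_term = [a for a in assignments if a["who"] and a["start"] < horizon]

# Bean counters: role-based demand beyond the horizon; no names required.
long_term_demand: dict[str, int] = {}
for a in assignments:
    if a["start"] >= horizon:
        long_term_demand[a["role"]] = long_term_demand.get(a["role"], 0) + a["hours"]
```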

That being said, each step in the estimating and re-estimating process must also serve to validate the upstream budget.

Untangling Scope and Schedule Control

But organizations like estimates. Organizations like some sort of static prediction of the future, regardless of whether or not it’s correct. Hence, we need to make certain assumptions about the future. In IT, that typically comes down to the question of how well our scope must be defined before we can be confident in our estimates.

There’s no absolute answer to this question, but I would say that the scope must be defined to the point where I can confidently point to the upstream budget and either validate or invalidate it based on my refined definition of the project’s scope. In a waterfall project, I typically depict that process as follows, with the two lines representing the different levels of control over the project’s scope. Once the blue and red lines intersect, the scope can be said to be “under control”: the project manager has defined scope well enough that anything putting us outside the agreed-upon parameters of the project can be identified as a change.

The goal of the planning and estimating process is to move that intersection from left to right until the project manager has defined enough scope to determine whether the budget is adequate, and to ensure that any newly identified requirement registers as a change. If your schedule estimate can do that, can identify near-term resource contention, and can meet the organization’s long-term, role-based predictive needs, you’ve successfully estimated your project.

[Figure: Scope Management]

Critical Path, Shmitical Shmath

So can a critical path schedule be developed for a long-term project with only a partially defined scope? Probably not, although I can define a near-term critical path and couple it with some longer-term tasks for the more undefined future. Remember, the critical path is the beginning of the scheduling process, not the end: it is the foundation on which we perform risk analysis and then load uncertainty into the schedule through buffers and probabilistic analysis.
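As a minimal sketch of that idea, the near-term critical path can be computed with a simple forward pass over the task network, with the undefined future carried as a coarse, buffered bucket. The task names, durations, and 25% buffer factor are all invented for illustration:

```python
# Near-term tasks form a DAG: task -> (duration_days, predecessors).
near_term_tasks = {
    "kickoff":      (5,  []),
    "requirements": (10, ["kickoff"]),
    "prototype":    (15, ["requirements"]),
    "review":       (5,  ["prototype"]),
}

def earliest_finish(tasks):
    """Forward pass: earliest finish day of each task in the DAG."""
    finish = {}
    def visit(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max((visit(p) for p in preds), default=0)
        return finish[name]
    for name in tasks:
        visit(name)
    return finish

critical_length = max(earliest_finish(near_term_tasks).values())  # 35 days
long_term_bucket = 60          # coarse estimate for the still-undefined scope
buffered_total = critical_length + long_term_bucket * 1.25
# Uncertainty is loaded onto the vague portion as a buffer, not spread
# evenly across the well-defined near-term path.
```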

But what about iterative projects? Can I define the critical path on a project using an agile methodology? Should I? I prefer a different method for estimating agile projects. The trick to estimating these kinds of highly iterative projects is to define the path to get the project kicked off and started, along with some of the high-level design. After that, I continuously perform an assessment: do the costs of my dedicated development resources exceed the value and opportunity cost of the remaining development effort? If the answer at any point is yes, the project should be terminated. If the answer is no, the project is generating more value than cost and should be continued.

[Figure: Scope Management]

Hence, with iterative development we want to plan the project out just far enough that the inflection point is visible. If we assume a dedicated team, we can effectively predict our costs over the lifetime of the project. If we don’t have a dedicated team, then good luck: make your best guess and revise it weekly. Understand that inaccurate estimates are almost always the result of multitasking team members.
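Here’s a minimal sketch of that arithmetic, with assumed numbers: a dedicated team reduces cost prediction to run rate times duration, and the continue/terminate test from above becomes a simple comparison:

```python
def lifetime_cost(team_size: int, weekly_rate: float, weeks: int) -> float:
    """With a dedicated team, cost is simply run rate times duration."""
    return team_size * weekly_rate * weeks

def should_continue(remaining_value: float, remaining_cost: float) -> bool:
    """Go/no-go test: fund the next iteration only while the remaining
    value (including opportunity cost) exceeds the cost of the team."""
    return remaining_value > remaining_cost

# Worked example with assumed numbers: a 5-person team at $4,000 per
# person-week with 10 weeks of work remaining costs $200,000 to finish.
cost = lifetime_cost(5, 4000, 10)       # 200000
print(should_continue(350000, cost))    # True  -> keep going
print(should_continue(150000, cost))    # False -> terminate
```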

Reporting Concerns

Flagging the lifecycle applied to the project is also critical from a reporting standpoint. Reports typically roll up process compliance indicators to the PMO or executive staff, and they would have to be amended to call out the compliance points for each lifecycle; the milestone report that works for waterfall projects may not work for iterative ones. Reports need to be tailored by execution model.

From the resource management perspective, resources may also be allocated or estimated differently under each lifecycle model. For example, a waterfall methodology may incorporate task-based estimating, with or without rolling wave planning for long-term future tasks. A more iterative approach with dedicated resources would essentially allocate capacity operationally, committing the team until we reach the inflection point between cost and value. Some sophistication is required to roll up the data effectively across the various execution models.
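Here’s a rough sketch of what that rollup might look like, normalizing both estimating styles to a common unit (hours per month) so a single PMO view can span execution models. The structures and numbers are illustrative assumptions:

```python
# Waterfall projects arrive as task-based estimates; iterative projects as
# dedicated-team capacity. Normalizing both to hours-per-month lets one
# rollup serve the PMO.

def waterfall_demand(tasks):
    """Sum task-level estimates into monthly hours."""
    demand = {}
    for t in tasks:
        demand[t["month"]] = demand.get(t["month"], 0) + t["hours"]
    return demand

def iterative_demand(team_size, hours_per_month, months):
    """A dedicated team is flat capacity until the cost/value inflection point."""
    return {m: team_size * hours_per_month for m in months}

rollup = {}
for demand in (waterfall_demand([{"month": "2025-01", "hours": 320},
                                 {"month": "2025-02", "hours": 200}]),
               iterative_demand(5, 160, ["2025-01", "2025-02"])):
    for month, hours in demand.items():
        rollup[month] = rollup.get(month, 0) + hours
# rollup: {"2025-01": 1120, "2025-02": 1000}
```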

Going back to the comment that kicked off this entire diatribe…


How far ahead are you able to do almost perfect, good and less great predictions in your schedule? How does your scheduling model affect your prediction capabilities?

…the answer is: as far as I need to meet the organizational requirements, knowing that I can match my estimating methodology to the work’s lifecycle, just like pairing wine and cheese.
