Your Schedule is Totally Mental

Folks like me often get a lot of pushback from project managers as we work with their PMO to ramp up the quality of their schedules.  Most often, I see complaints not about the schedule itself, but about the seemingly arbitrary list of arcane rules required to ensure that the schedule prediction is underpinned by established modeling best practices.  Examples of such rules might be that every task should have at least one predecessor and successor, that tasks should not exceed a specific duration, or that the update methodology must be strictly adhered to.  You know, simple DCMA 14 Point Assessment stuff.
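
For the folks who like to see rules as something executable, here is a minimal sketch of what a couple of those checks might look like in Python.  The task fields and the duration threshold are assumptions for illustration – this is not the actual DCMA tooling, just the flavor of it.

```python
from dataclasses import dataclass, field

# Hypothetical task record – the field names here are illustrative assumptions.
@dataclass
class Task:
    task_id: str
    duration_days: int
    predecessors: list = field(default_factory=list)
    successors: list = field(default_factory=list)

MAX_DURATION_DAYS = 44  # whatever ceiling your PMO mandates

def check_schedule(tasks):
    """Return (task_id, rule) findings for a few DCMA-style checks."""
    findings = []
    for t in tasks:
        if not t.predecessors:
            findings.append((t.task_id, "missing predecessor"))
        if not t.successors:
            findings.append((t.task_id, "missing successor"))
        if t.duration_days > MAX_DURATION_DAYS:
            findings.append((t.task_id, "duration exceeds threshold"))
    return findings

findings = check_schedule([Task("1.1", duration_days=60)])
```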

A schedule developed without following these rules may still be accurate, insofar as the dates may actually be realized as reported, and the data all supports the organizational reporting requirements.  The model however, the underlying logic, is invisible.  It’s all happening in the PM’s head and then being reported through the schedule mechanism.

What we have at this point is a reporting schedule.  It’s a schedule created to meet the minimal organizational requirements and to document the key dates of the project.  It does not, however, capture the underpinning logic of the schedule.  It is missing the schedule model.

A couple of years ago, PMI introduced two related concepts: the “schedule model,” or the logical predictive model of a schedule, and the schedule itself, a static snapshot of the schedule model at a specific point in time.  For example, I create the schedule model in my favorite scheduling application, with all of the dependencies, sparing use of constraints, etc.  Then, every week, after updating my model, I generate my prediction of what the future will look like.  That prediction is my schedule.  The schedule is refreshed each week with the output of my updated schedule model.
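
To make the distinction a little more concrete, here is a minimal sketch (in Python, with invented task names and durations) of a schedule model as durations plus dependency logic, and the schedule as the dated snapshot generated from it each week.  Real tools add working calendars, constraints, and resource leveling; this only shows the forward pass.

```python
from datetime import date, timedelta

# The schedule model: durations plus dependency logic – the part that should
# live in the tool rather than in the PM's head.  Assumes no circular logic.
model = {
    "Design": {"duration": 5, "depends_on": []},
    "Build":  {"duration": 8, "depends_on": ["Design"]},
    "Test":   {"duration": 4, "depends_on": ["Build"]},
    "Deploy": {"duration": 1, "depends_on": ["Test"]},
}

def snapshot(model, status_date):
    """Forward-pass the model from the status date; the output is this week's schedule."""
    schedule = {}
    remaining = dict(model)
    while remaining:
        for name, task in list(remaining.items()):
            if all(dep in schedule for dep in task["depends_on"]):
                start = max((schedule[d]["finish"] for d in task["depends_on"]),
                            default=status_date)
                schedule[name] = {"start": start,
                                  "finish": start + timedelta(days=task["duration"])}
                del remaining[name]
    return schedule

# Every week: update the model, regenerate the snapshot, publish the schedule.
this_weeks_schedule = snapshot(model, date.today())
```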

Are these predictions correct?  I don’t know that anyone can ever say a prediction of the future is correct.  The more accurate question is whether these predictions are valid.  Are they an accurate reflection of everything we know about the work to date?  In fact, that’s my litmus test for validity.  Can I look at your schedule, and ask you, point blank, “Is this the most accurate prediction of the future based on what you know today?”  If the answer is anything other than yes, I would consider the schedule to be invalid.

Let’s take that and apply it to a typical audit scenario.  In this scenario, you, the project manager, are telling me that the dates are all correct and valid per your latest understanding of the project.  That statement is something I cannot challenge – with the possible exception of calling out tasks completed in the future or incomplete work still scheduled in the past.  What I can do is ask how the model was developed, to which the response is invariably, “It’s all up here,” with a finger tapping the temple.

In essence, what you’re telling me is that your schedule model is all in your head.  It may be valid or it may be invalid, but I, as an external observer, have no way of identifying that.  Your schedule model is hidden from me, and therefore, unless I trust you implicitly, I can’t trust your model.

This is why we have schedule audits.  This is why we have DCMA checkpoints.  Because while it would be nice if we all just had a little more trust in each other, given the high cost of projects in the world today, that’s a luxury that many organizations simply cannot afford.  And all of those audits and quality assurance processes come to nothing when the schedule model is hidden and we’re only allowed to see the schedule.

So in the end, while you can show me a schedule that’s resource loaded and has all of the key organizational milestones attached to it, you can’t show me your schedule model.  You see, it’s all in your head, it’s all mental.  And that’s why I can’t make a judgment about whether or not you have a valid schedule.


The Schedule is the Thing

Organizations are often unable or unwilling to estimate project effort or cost.  I get it.  Detailed estimating is hard and requires work.  A lot of times, our project has already been approved to enter execution – and we had to do estimates to get through the approval process.  What would be the point of preparing any further estimates?  After all, isn’t the budget sufficient for executing the project?

Taking that one step further, I find myself having to explain the difference between a budget and an estimate – or the functional difference between a high-level estimate and a low-level estimate.  It puts me in mind of this great quote I saw the other day in Mintzberg, Ahlstrand, and Lampel’s rock-skip over the surface of strategy, Strategy Safari:

“Somewhere…between the specific that has no meaning and the general that has no content, there must be…for each purpose and each level of abstraction, an optimal degree of generality.”

– Kenneth Boulding, “General Systems Theory: The Skeleton of Science,” Management Science (1956): 197-198 – reprinted in Strategy Safari.

From an academic standpoint, budgets are developed from the top down and define the limits within which you can spend.  As I discussed in The Importance of Cost Abstraction, estimating is required for project control.  But how do you know when you’ve estimated enough?  How do you know when the estimate is thorough enough for project control purposes?   Sometimes we go through several iterations of budgeting – until the final product is actually quite developed.  How do I know when I’m done?

The answer is that you know the estimate is thorough enough when it is detailed enough to support your control purposes.  Huh?  To paraphrase the PMBOK, “The process is detailed enough when it generates an output sufficient to support the next process.”  This means that the estimate is sufficient when it supports the next process downstream, or the project control process.


Now, here’s the trick.  One process’s estimate is the next process’s budget.  Meaning that the estimate I developed in pre-approval, which might allocate work by role and by month, is insufficient in detail for project execution, where I am focused on elements of near-term resource contention.  Hence, I need to take my estimates by role and by month, feed them into my scheduling process, and develop a detailed estimate of resources by name and by day.  Repeat as required.
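
A rough sketch of that hand-off, with made-up numbers: the role-by-month estimate from pre-approval becomes the budget that the day-by-day, named-resource estimate from the scheduling process gets rolled back up against.

```python
# Estimate from the pre-approval process: (role, month) -> hours.
# Downstream, this same table serves as the budget for detailed scheduling.
role_month_estimate = {("Developer", "2014-02"): 160, ("Analyst", "2014-02"): 80}

# Detailed estimate from the scheduling process: named resources, by day.
daily_assignments = [
    ("Alice", "Developer", "2014-02-03", 8),
    ("Alice", "Developer", "2014-02-04", 8),
    ("Bob",   "Analyst",   "2014-02-03", 6),
]

def rollup(assignments):
    """Aggregate the day-level detail back to (role, month) for comparison."""
    totals = {}
    for _name, role, day, hours in assignments:
        key = (role, day[:7])  # YYYY-MM
        totals[key] = totals.get(key, 0) + hours
    return totals

for key, detailed in rollup(daily_assignments).items():
    print(key, "detailed:", detailed, "hours vs. budgeted:", role_month_estimate.get(key, 0))
```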

With each successive process, I take the estimate from the last process and refine it even further.  When it gets to the schedule, specifically a schedule created to drive resource management, the estimate is key.  At this point, the estimate should no longer be driven by resource allocation requirements; it needs to be driven by the work.  That work may then be mapped back to the allocation to determine project feasibility.

The estimate needs to be driven by the thing.  The specific deliverables of the project must be decomposed into a WBS, and then the tasks required to generate those deliverables must be defined (using an SDLC) and mapped to each of them.  Work estimates may then be placed on each of those tasks.  If you took another approach, you are wrong.  You’ve created a reporting schedule, i.e. a simple construct that shoehorns the expected data into the expected shape – not something that accurately models the future.
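
As a sketch of what that looks like in data – deliverables first, then tasks mapped to each deliverable, then work estimates hung on the tasks – with names and numbers invented for illustration:

```python
# Deliverable-driven WBS: every task maps to a deliverable, every estimate hangs on a task.
wbs = {
    "Requirements Document": [
        {"task": "Gather requirements",        "work_hours": 40},
        {"task": "Review and sign off",        "work_hours": 16},
    ],
    "Trained Users": [
        {"task": "Develop training materials", "work_hours": 24},
        {"task": "Deliver training sessions",  "work_hours": 16},
    ],
}

# A task with no parent deliverable – or a deliverable with no tasks – is a sign
# you're building a reporting schedule rather than a model of the work.
total_estimate = sum(t["work_hours"] for tasks in wbs.values() for t in tasks)
print("Total work estimate:", total_estimate, "hours")
```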

Where is a problem identified in this estimating chain?  A problem is identified when we can map the detailed process output back to the less-detailed process input and determine that there’s an issue.  Either the detailed estimate exceeds the amount allocated at the beginning of the process, or the detailed estimate comes in significantly below the amount allocated at the beginning.  Either of those results indicates that the process is working – and that we’re developing our estimates properly.  If any step in the project approval process cannot either validate or invalidate the original allocation, it should be revisited.
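
Continuing the sketch above, mapping the detailed estimate back to the allocation is just a roll-up and a comparison.  The allocation and tolerance below are invented; use whatever your governance process considers significant.

```python
ALLOCATION_HOURS = 80    # the amount allocated at the beginning of the process
TOLERANCE = 0.10         # flag anything more than 10% off, in either direction

detailed_estimate = 96   # rolled up from the deliverable-driven WBS sketch above

variance = (detailed_estimate - ALLOCATION_HOURS) / ALLOCATION_HOURS
if abs(variance) > TOLERANCE:
    print(f"Allocation challenged: detailed estimate is {variance:+.0%} versus the allocation")
else:
    print("Detailed estimate validates the original allocation")
```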

How do I know when I’m done?  When the estimate is driven by the work.  When I’ve decomposed the deliverables of the project into specific tangible things.  And when I can tie each task back to one of those things, and then to a specific work estimate.  That’s when I know I’ve reached the end of the estimating road: when the schedule is the thing.


UMT in the News

Don’t know if you saw this, but on behalf of everyone at UMT, I am proud to announce that we’ve been placed in Gartner’s Magic Quadrant for Integrated IT Portfolio Analytics.

Curious what that might actually mean to you?  Check out our upcoming Webinar on January 29:

http://www.umt.com/en-us/education/upcoming-events/222/umt-webinar-get-more-value-from-your-application-portfolio-and-control-100-of-it-spend.aspx

Get More Value from Your Application Portfolio and Control 100% of IT Spend

With IT budgets reaching hundreds of millions of dollars, CIOs find themselves under increasing pressure to reduce operating costs, improve performance and better communicate the value of IT to the business. IT leaders often inherit a complex environment comprised of systems that have been homegrown, purchased or added through various mergers. They are then faced with the daunting task of evolving the IT portfolio to align with changing business priorities and emerging technical trends.

The UMT IT 360 platform combines application portfolio management (APM), IT financial management and project portfolio management (PPM) best practices to provide a comprehensive Application Centric Planning framework. UMT IT 360 consolidates asset, project, resource and financial data to provide a dynamic blueprint of the entire IT environment. The solution helps CIOs, IT Portfolio Managers, PMOs and CFOs effectively collaborate to identify, model and implement transformation strategies to deliver current and future state architectures that align with business priorities and deliver a competitive advantage.

During this session we will discuss how UMT IT 360’s Application Centric planning framework can help you:
– Visualize relationships across IT Domains to create One IT Portfolio
– Standardize and streamline data collection across the Application Portfolio
– Use the five principles of enduring APM success
– Effectively analyze the portfolio and communicate transformation roadmaps
– Derive and control higher quality IT Demand
– Proactively measure performance and take corrective action


EPM Bingo, or A Structured Approach to PPM Analysis

One of my colleagues used to explain to me that there are three kinds of people who attend any meeting:

  • There are those who feel an emotional need to share how they’re feeling about the topic.
  • There are those who need a specific list of assigned tasks to come out of the meeting.
  • And lastly, there are those people who don’t feel the meeting is complete until they’ve walked up to the white board and drawn a couple of circles and squares and have put an intellectual framework around the topic at hand.

This post is for the latter group.  It comes out of some recent discussions I’ve had where we’ve stepped back and taken a more macro view of the current enterprise work management system.  In fact, the diagrams below grew out of a couple of doodles I ended up drawing in my notes after a recent discussion.

Before going much further, let’s borrow a couple of terms from the ITIL lexicon.  Love it or hate it, at least it does provide us with some commonly accepted terminology.  The first day of ITIL training usually begins with the definition of these two terms (from the official ITIL glossary):

  • Function: A team or group of people and the tools they use to carry out one or more Processes or Activities.
  • Process: A structured set of Activities designed to accomplish a specific Objective. A Process takes one or more defined inputs and turns them into defined outputs.

I’ll add my own term to this:

  • Work Type – A specific type of work that shares a common lifecycle and set of processes.  (Yes, I know it kind of sounds like PMI’s definition of a portfolio, but since a work portfolio comprises multiple work types, I figured this would be less confusing.  In technical terms, think Enterprise Project Types.)

So we’ve got functions that utilize multiple processes.  Hence one of the goals of process definition is to define the outputs required to support the functions.  At the enterprise level, we’re talking about the outputs required from local processes to support enterprise functions – such as resource management and allocation.


So far so good, but how does that apply to the EPM context?  Typically, you take the project lifecycle, break it down into processes, and then map the output of the processes to the required functions.  Most often, those outputs take the form of reports on process data.

Applying this to the enterprise as a whole, here’s a simple three(ish) step process to perform analysis on an existing or nascent PPM system*:

  1. Identify the inventory of work types within the system.
  2. Identify the processes required to support each work type.
  3. Identify the required enterprise functions to support governance.
  4. Map the outputs of the processes to the required inputs of the enterprise functions.


At the end, it should look something like this – with each line representing an organizational control process, data flow, or report.

[Diagram: work types broken into processes, with each process output mapped to an enterprise function]
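
And for the circles-and-squares crowd who would rather see it as data than as a drawing, here is a minimal sketch of the same mapping.  The work types, processes, outputs, and functions are invented placeholders; the point is the step 4 mapping at the bottom.

```python
# Steps 1 and 2: work types and the processes that support them.
work_types = {
    "Project":          ["Initiation", "Planning", "Execution", "Closeout"],
    "Operational Work": ["Intake", "Fulfillment"],
}

# Outputs produced by each process.
process_outputs = {
    "Planning":    ["resource forecast", "cost estimate"],
    "Execution":   ["timesheet actuals", "status report"],
    "Fulfillment": ["timesheet actuals"],
}

# Step 3: enterprise functions and the inputs they require.
function_inputs = {
    "Resource Management": ["resource forecast", "timesheet actuals"],
    "Portfolio Reporting": ["status report", "cost estimate"],
}

# Step 4: map process outputs to function inputs; each hit is a line on the diagram,
# and each miss is a control point that still needs to be designed into a process.
for function, needs in function_inputs.items():
    for need in needs:
        sources = [p for p, outs in process_outputs.items() if need in outs]
        print(function, "<-", need, "from", sources if sources else "MISSING")
```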

…and since the data all needs to be pulled from the processes, this can be turned around as input into the process design, ensuring that the appropriate control points have been inserted into the process so that the function has what it needs to, well, function.

*Not to be confused with my patented 3 step process to implement portfolio management tools:

  1. Identify all of the work in the organization.
  2. Identify the total work capacity of the organization.
  3. Identify how the organization makes decisions and codify this into a set of automated rules.

See, it’s actually quite easy to implement PPM tools…
