Folks like me often get a lot of pushback from project managers as we work with their PMO to ramp up the quality of their schedules. Most often, the complaints I see are not about the schedule itself, but about the seemingly arbitrary list of arcane rules required to ensure that the schedule prediction is underpinned by established modeling best practices. Examples of such rules might be that every task should have at least one predecessor and one successor, that no task should exceed a specific duration, or that the update methodology must be strictly adhered to. You know, simple DCMA 14-Point Assessment stuff.
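The point of these rules is that they are mechanically checkable. A minimal sketch of two of them, missing logic and excessive duration, might look like this; the `Task` record and the duration threshold are illustrative assumptions, not any scheduling tool's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical task record; field names are assumptions for this sketch,
# not the API of any real scheduling application.
@dataclass
class Task:
    task_id: str
    duration_days: int
    predecessors: list = field(default_factory=list)
    successors: list = field(default_factory=list)

MAX_DURATION_DAYS = 44  # illustrative cap; pick whatever your standard dictates

def logic_check(tasks):
    """Flag tasks that break the basic logic rules: no predecessor,
    no successor, or a duration beyond the agreed threshold."""
    findings = []
    for t in tasks:
        if not t.predecessors:
            findings.append((t.task_id, "missing predecessor"))
        if not t.successors:
            findings.append((t.task_id, "missing successor"))
        if t.duration_days > MAX_DURATION_DAYS:
            findings.append((t.task_id, "duration exceeds threshold"))
    return findings
```

In practice the start and finish milestones are legitimate exceptions to the predecessor/successor rule, so a real check would exclude them; the point is only that these rules can be verified by anyone, without access to the PM's head.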
A schedule developed without following these rules may still be accurate, insofar as the dates may actually be realized as reported, and the data may support all of the organizational reporting requirements. The model, however, the underlying logic, is invisible. It's all happening in the PM's head and then being reported through the schedule mechanism.
What we have at this point is a reporting schedule. It’s a schedule created to meet the minimal organizational requirements and to document the key dates of the project. It does not, however, capture the underpinning logic of the schedule. It is missing the schedule model.
A couple of years ago, PMI introduced two distinct concepts: the "schedule model," the logical, predictive model of the project, and the "schedule" itself, a static snapshot of the schedule model at a specific point in time. For example, I create the schedule model in my favorite scheduling application, complete with all of the dependencies and only sparing use of constraints. Then, every week, after updating my model, I generate my prediction of what the future will look like. That prediction is my schedule. The schedule is refreshed each week with the output of my updated schedule model.
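The distinction can be made concrete with a toy forward pass. In this sketch the model is just durations plus finish-to-start dependencies, and "the schedule" is the dated snapshot the model emits when it is run; the input structures are assumptions for illustration, and calendar days stand in for proper working-day calendars:

```python
from datetime import date, timedelta

def compute_schedule(durations, predecessors, start):
    """Forward pass over a tiny schedule model: earliest start/finish for
    each task under finish-to-start logic. Assumes no circular dependencies
    and uses calendar days rather than a working-day calendar."""
    finish = {}
    schedule = {}
    resolved = set()
    while len(resolved) < len(durations):
        for task, dur in durations.items():
            if task in resolved:
                continue
            preds = predecessors.get(task, [])
            if all(p in resolved for p in preds):
                es = max((finish[p] for p in preds), default=start)
                ef = es + timedelta(days=dur)
                finish[task] = ef
                schedule[task] = (es, ef)
                resolved.add(task)
    # This returned snapshot is "the schedule"; the durations and
    # dependencies that produced it are "the schedule model."
    return schedule
```

Rerunning the function after each weekly update is the refresh described above: the model persists, and each run produces a new static schedule.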
Are these predictions correct? I don’t know that anyone can ever say a prediction of the future is correct. The better question is whether these predictions are valid. Are they an accurate reflection of everything we know about the work to date? In fact, that’s my litmus test for validity. Can I look at your schedule and ask you, point blank, “Is this the most accurate prediction of the future based on what you know today?” If the answer is anything other than yes, I would consider the schedule to be invalid.
Let’s take that and apply it to a typical audit scenario. In this scenario, you, the project manager, are telling me that the dates are all correct and valid per your latest understanding of the project. That is a statement I cannot challenge, with the possible exception of calling out tasks completed in the future or incomplete work still scheduled in the past. What I can do is ask how the model was developed, to which the response is invariably, “It’s all up here,” with a finger tapping the temple.
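Those two exceptions are worth spelling out, because they are the rare checks an auditor can run against the dates alone. A minimal sketch, assuming simple dictionary records with hypothetical field names (`actual_finish`, `forecast_start`, `percent_complete`):

```python
from datetime import date

def status_date_anomalies(tasks, status_date):
    """Flag the two date anomalies an external auditor can catch:
    work reported complete after the status date, and incomplete
    work still scheduled to start before it."""
    findings = []
    for t in tasks:
        # An actual finish later than the status date is work
        # "completed in the future."
        if t.get("actual_finish") and t["actual_finish"] > status_date:
            findings.append((t["id"], "completed in the future"))
        # Remaining work forecast to start before the status date is
        # incomplete work "still scheduled in the past."
        if (t.get("percent_complete", 0) < 100
                and t.get("forecast_start")
                and t["forecast_start"] < status_date):
            findings.append((t["id"], "incomplete work in the past"))
    return findings
```

Everything else about the model's validity, why a dependency exists, why a constraint was used, remains invisible to this kind of check.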
In essence, what you’re telling me is that your schedule model is all in your head. It may be valid or it may be invalid, but I, as an external observer, have no way of telling which. Your schedule model is hidden from me, and therefore, unless I trust you implicitly, I can’t trust your model.
This is why we have schedule audits. This is why we have DCMA checkpoints. Because while it would be nice if we all had a little more trust in each other, given the high cost of projects in the world today, that’s a luxury many organizations simply cannot afford. And all of those audits and quality assurance processes come to nothing when the schedule model is hidden and we’re only allowed to see the schedule.
So in the end, while you can show me a schedule that’s resource loaded and has all of the key organizational milestones attached to it, you can’t show me your schedule model. You see, it’s all in your head, it’s all mental. And that’s why I can’t make a judgment about whether or not you have a valid schedule.