“I am struggling to see the overall value, or benefit for the investment, especially considering the effort required to build, manage and maintain the overall system,” he writes. “Having used these systems for many years, I don’t believe they are the right fit or the right answer for any enterprise. Project, program, and portfolio reporting should be handled more simply. Proper governance is the right answer, which doesn’t require sophisticated aggregation of data across the enterprise using EPM, or PPM technologies. In a nutshell, it’s overkill.”
Couldn’t agree more. In fact, as I was just pointing out to someone the other day, one of the main failure modes in an EPM tool rollout is that the initial implementation is overengineered and way too complex. A tool like that invariably falls apart after go-live… only to be replaced a couple of years later by a kinder, gentler implementation.
That’s why I actually prefer to work with organizations that have failed to do this a couple of times. They’re the organizations that know what they don’t know. They’re the ones that realize complexity doesn’t work for them, and simplicity is the order of the day.
The Work Taxonomy, or ‘Waxonomy’
The comment also hit right on another theme that I’ve spent some time noodling about as of late, i.e. the work lifecycle. More on that in a bit. First let’s unpack this statement:
Proper governance is the right answer, which doesn’t require sophisticated aggregation of data across the enterprise using EPM, or PPM technologies. In a nutshell, it’s overkill.
From a portfolio level, I will agree with this statement provided we have gone through the effort of creating a standard data taxonomy throughout the portfolio. This is a topic that will undoubtedly be on my soapbox going forward. What I see as a requirement for successful end-to-end work and financial management is a common taxonomy of work, i.e. all work within the organization can be tagged with the following metadata:
- Resource performing the work.
- Business unit/cost center to which the resource belongs.
- The asset that the work is supporting (this is the hard one).
- Whether the work is building new stuff or maintaining existing stuff.
- Whether the work is planned or unplanned.
- Other really important metadata…
As long as all work (and investment if the organization is mature enough for it) is mapped to a common taxonomy, all of the data can be rolled up to the portfolio level so that I can get an overview of what my assets cost – and where that cost is coming from within the organization. That’s the same story whether I’m managing drilling platforms or IT applications.
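The roll-up described above can be sketched in a few lines of code. This is a minimal illustration, not any particular EPM tool’s data model; the field names and sample figures are assumptions for the example.

```python
from collections import defaultdict
from dataclasses import dataclass

# A minimal sketch of the work taxonomy described above.
# Field names are illustrative, not from any specific EPM tool.
@dataclass
class WorkItem:
    resource: str      # who performed the work
    cost_center: str   # business unit the resource belongs to
    asset: str         # the asset the work supports (the hard one)
    is_build: bool     # building new stuff vs. maintaining existing
    is_planned: bool   # planned vs. unplanned work
    cost: float        # fully loaded cost of the work

def asset_tco(items):
    """Roll tagged work up to the portfolio level: total cost per
    asset, broken down by the cost center it came from."""
    totals = defaultdict(lambda: defaultdict(float))
    for item in items:
        totals[item.asset][item.cost_center] += item.cost
    return {asset: dict(by_cc) for asset, by_cc in totals.items()}

items = [
    WorkItem("Alice", "Drilling Ops", "Platform A", False, True, 1200.0),
    WorkItem("Bob", "IT", "Platform A", True, False, 800.0),
    WorkItem("Alice", "Drilling Ops", "Platform B", False, True, 500.0),
]
print(asset_tco(items))
# {'Platform A': {'Drilling Ops': 1200.0, 'IT': 800.0},
#  'Platform B': {'Drilling Ops': 500.0}}
```

The point is that once every work item carries the common tags, the aggregation itself is trivial; the hard part is getting the tags applied consistently at the source.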
Since the likelihood of all of that source data being stored in a single location is effectively nil, I would say that any inquiry into the TCO of an asset would almost certainly require at least a high-level aggregation of data across the enterprise – whether it be in an EPM tool or in an overall data warehouse.
Authorization vs. Allocation
But let’s focus down to the project level, which is what I was discussing in that post – originally prompted by the new Microsoft Project Server – TFS demo image. Do we require sophisticated aggregation of data across the enterprise at the project level? The answer here, I would posit, is it depends. It depends on what the organization is (rightly or wrongly) looking for. It depends on the organizational viewpoint on where in the lifecycle of work authorizations happen.
There’s an old sales story that gets bandied about. It’s a bit cliché at this point, but in the interest of being thorough, I will repeat it here:
When you have breakfast (assuming you have no religious or ethical issues with eating pork or other animal products), you have bacon and eggs. The chicken was involved in the making of the breakfast. The pig was committed.
The general gist is that the sales prospect is just thinking about committing, but they need to make that mental leap and commit entirely. Let’s think about that in the context of work. Let’s assume that work can be identified and traced through a lifecycle:
- Work begins its life as capacity. That capacity appears on paper the minute a functional manager commits next year’s annual staffing plan to the staffing system.
- The work gets closer to realization when a resource is hired into the organization. Now we don’t have planned capacity; we have actual capacity.
- Along comes a project business case. The business case may not specify the exact resource, but once it is authorized and becomes a project, that work is now converted to demand. The work begins to take shape and be associated to the organizational work taxonomy.
- The project is authorized.
- The work is assigned to an individual. The proposed work has become an allocated task. In a traditional project, this might occur when the project is authorized to enter execution. In an agile project, this allocation would occur long after the project was authorized and only after the appropriate user story has been prioritized into the next iteration. This is putting us pretty close to the chicken/pig tipping point.
- The individual then performs the task. This is what I would call the “work conversion event,” i.e. the planned work has now been unalterably converted to historical work. The work has become the pig. (This also raises the question of whether some sort of conversion rate metric could be applied to identify how much planned work actually becomes historical work – which would be a post for a different time.)
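The lifecycle above is really an ordered set of states that a unit of work moves through one way. Here’s a sketch of that progression, plus the speculative conversion-rate metric; the state names and sample data are my own illustrations, not from any real PPM system.

```python
from enum import IntEnum

# Hypothetical states for the work lifecycle sketched above; the
# ordering matters, since work only moves forward through them.
class WorkState(IntEnum):
    PLANNED_CAPACITY = 1   # staffing plan committed to the system
    ACTUAL_CAPACITY = 2    # resource hired into the organization
    DEMAND = 3             # business case authorized, project created
    ALLOCATED = 4          # task assigned to an individual
    HISTORICAL = 5         # task performed -- the "work conversion event"

def conversion_rate(work_units):
    """The speculative metric from above: what fraction of tracked
    work units actually became historical (performed) work."""
    if not work_units:
        return 0.0
    done = sum(1 for state in work_units if state is WorkState.HISTORICAL)
    return done / len(work_units)

states = [WorkState.HISTORICAL, WorkState.ALLOCATED,
          WorkState.DEMAND, WorkState.HISTORICAL]
print(conversion_rate(states))  # 0.5
```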
So essentially, we have had that work unit progress from planned capacity to actual executed work. That is the work lifecycle. That is what drives PPM system complexity in organizations intent on managing resources within the system. The farther down the work lifecycle that the organization tries to push before authorizing the project, the more complex the tool becomes.
The Road to Kanban
Let me try a different tack: how about these two statements?
- In an agile planning methodology, availability drives the work.
- In a traditional planning methodology, work drives the availability.
There are at least two things about agile that significantly simplify the planning process:
- We allocate dedicated resources.
- We don’t define the detailed tasks until they’re ready to be performed.
The inevitable result of this is that planning for an agile project indeed requires a lot less detail. I won’t get into the debate of whether or not it is less rigorous, or where the quality focus is, but at the point when the project is authorized, or even in the planning stages, there is a lot less detail developed. As a result, we can employ a kanban approach at the project level and pull tasks into the next iteration as availability allows. Availability drives the work.
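The kanban pull described above can be sketched as a toy: tasks stay on the backlog until capacity opens up, and are pulled into the iteration in priority order. Task names and hour estimates here are illustrative assumptions.

```python
from collections import deque

# A toy kanban pull, assuming dedicated resources: tasks are only
# pulled into the iteration as the team's capacity allows.
def plan_iteration(backlog, capacity_hours):
    """Pull prioritized (task, hours) pairs from the front of the
    backlog until available capacity for the iteration is used up."""
    iteration = []
    remaining = capacity_hours
    while backlog and backlog[0][1] <= remaining:
        task, hours = backlog.popleft()
        iteration.append(task)
        remaining -= hours
    return iteration

backlog = deque([("story-1", 8), ("story-2", 5), ("story-3", 13)])
print(plan_iteration(backlog, 16))  # ['story-1', 'story-2']
```

Note that nothing here asks when each resource will be free on which project – that question simply doesn’t arise when the team is dedicated, which is exactly why the planning stays simple.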
Now let’s take a look at a traditional waterfall planning process:
- We generally assume multithreaded resources working on multiple projects.
- We define the detailed tasks long before they’re ready to be performed.
Those statements are both different aspects of the same issue. If we assume that our resources will be working on multiple projects, then we must plan their tasks out to the nth degree to ensure the work can be performed on schedule. This pushes our planning farther down into the work lifecycle. The further down the lifecycle we get from a planning perspective, the more complicated our system ends up being. Work drives the availability.
The inverse statement is also true. If we assume that our resources are only working on a single project, then we only have to plan their commitment to the project at a high level. As a result, systems embracing this planning methodology can be much simpler in structure.
It’s our old forecasting bogeyman, multitasking, that drives the complexity.
Back to the Portfolio Level
Going back to the statement I saw in Mike Cottmeyer’s presentation about enabling a kanban approach to PPM, it’s all now starting to come together for me. If we revisit these two observations:
- In an agile planning methodology, availability drives the work.
- In a traditional planning methodology, work drives the availability.

…and apply them to the portfolio level of planning, then almost any conventional portfolio planning methodology would essentially follow the agile approach:
- We’re not planning named resources and therefore effectively are using a dedicated resource model.
- We’re only allowing work into the pipeline when it corresponds to available (or forecast) capacity.
Portfolio optimization essentially becomes an exercise in constraint identification and optimization: estimating capacity and demand, then building a backlog of work that makes the best use of our resource constraints. The complexity lies in the data that supports the constraint analysis, and the fact that this data collection has been pushed far down the work lifecycle.
At this point, I’m willing to concede that I probably totally misread Mike K.’s original comment, but it did launch me on some interesting tangents, specifically around kanban at the PPM level and why EPM systems become so complex. So definitely, thanks Mike. It’s the comments that make blogging less like work and more like an exercise in long-form improv.
I’ll also point out that, ironically, the TFS integration that launched the original blog post is actually designed to remove complexity from an EPM system and push it into a more appropriate tool.