I’m gradually working my way through Charles Betz’s excellent book, Architecture and Patterns for IT Service Management, and parts of it are resonating with me. I’ve only gotten to the section on Demand Management, but it’s definitely starting to correlate with what I’m observing in our clients. Let me see if I can put my own spin on it….
The basic gist is that demand comes in many different forms. In IT, demand shows up as formal project requests, service requests (i.e. minor changes that don’t rise to the level of projects), and the output of various systems monitoring services. The latter category in an IT setting would include all of the outputs of availability and capacity monitoring. In a non-IT setting such as capital equipment, I would extend that to include the output of our maintenance scheduling systems, which spit out tickets for required maintenance. As ITIL is really the extension of capital equipment management best practices to the relatively immature field of IT, that logic seems to make sense.
So that’s the work intake process – or, if you like, the sensing mechanism that determines what work the organization could do. Let’s go to the other side of the intake process: the work queuing mechanism. This is the viewpoint of the technician in the field, the person who must actually perform the work coming through the intake funnel. In a perfect world, it all funnels into the same assignment queue. That way, I can look at my queue and see all of the work assigned to me, whether it originated as a project task, a service request, or the output of a maintenance system.
In a perfect world, all of that work is prioritized – either by the work itself or through some sort of prearranged prioritization mechanism – and every time I finish a task, I can go back into my queue and determine which task comes next in order of priority. I might also throw other characteristics into the task selection, such as how long the task would take to perform. If I only have an hour, I’ll pick the next task that can be completed within that hour.
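As a rough sketch of that ideal single queue, here’s what the pick-the-next-task logic might look like in Python. The `Task` fields, the priority scale, and the time estimates are all hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    priority: int                        # lower number = higher priority
    estimate_hours: float = field(compare=False)
    name: str = field(compare=False)

def next_task(queue, hours_available):
    """Pop the highest-priority task that fits the time available."""
    skipped = []
    chosen = None
    while queue:
        task = heapq.heappop(queue)
        if task.estimate_hours <= hours_available:
            chosen = task
            break
        skipped.append(task)
    for task in skipped:                 # restore the tasks we passed over
        heapq.heappush(queue, task)
    return chosen
```

With an hour available, this skips a four-hour top-priority task and returns the highest-priority task that actually fits, leaving the rest in the queue.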
The reality is that this rarely happens. In the organizations we see, intake and queuing mechanisms are fragmented. As a result, different work streams end up in different queues, tracked with different metrics, but assigned to the same individual. As a member of IT, I will have tasks assigned in the ticketing queue, the release queue, and the project task queue. Each of those tasks is competing for my bandwidth.
This takes us to the holy grail of enterprise work management. In essence, the goal of an organization, IT or otherwise, whether it realizes it or not, is to centralize all of that demand in one place, prioritize it, and then spit it out the other end into an appropriate work management system. I won’t say a consolidated work management system, as that may not make sense – especially when I have resources that are dedicated to performing preventive maintenance and can easily live within a single queue. When I have resources that are pulled across multiple queues, however, that requires a more rationally designed work management system. (More on that in another post.)
Which leads us to the logical fallacy that has shaped the world of portfolio management for years. There’s been this underlying assumption that we can take all of the work that comes through the demand intake and prioritize it against other work. That’s the fundamental premise of many PPM systems, i.e. that I can throw all of my projects into a single bucket, define their priorities, and then map the priorities against the cost to optimize my portfolio. As long as I am prioritizing similar work streams, this more or less works.
The problem comes when I try to compare apples and oranges – when I try to compare a project to support Application A to a ticket to support Application B. At that point, the portfolio optimization breaks down. The inevitable result is either a logjam of work that kills the adoption of the PPM system, or a regression to multiple siloed work intake and management systems, each with its own built-in prioritization and fragmented queuing.
Enter the world of portfolio analytics. In portfolio analytics, we’re not looking to prioritize individual work, but instead, we’re looking to tie that work to a specific organizational asset. In IT, each project, ticket, or release supports an application, which in turn, supports a service. In a non-IT scenario in Oil and Gas, for instance, each project or ticket or release supports an asset such as a rig or well, which then can be quantified in terms of production and profit. If I can identify the business criticality of the service, then I can assess the priority of each element of work in supporting the service, and therefore derive a cohesive, comprehensive framework for work prioritization. I don’t look at the individual work items, but instead at the work in aggregate in terms of the assets it supports.
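A minimal sketch of that association chain – work item to application to service to business criticality – might look like this in Python. The identifiers, names, and criticality values are entirely made up for illustration:

```python
# Hypothetical mapping chain: each work item supports an application,
# which in turn supports a service whose business criticality is known.
work_to_app = {"TICKET-101": "App A", "PROJ-7": "App B"}
app_to_service = {"App A": "Billing", "App B": "Reporting"}
service_criticality = {"Billing": 1, "Reporting": 3}   # 1 = most critical

def work_priority(work_id):
    """Derive a work item's priority from the service it ultimately supports."""
    application = work_to_app[work_id]
    service = app_to_service[application]
    return service_criticality[service]
```

The point is that no individual work item carries its own priority; every item inherits it through the asset it supports.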
The first step in performing this analysis is to map the costs of the work to the assets. While that sounds simple, it gets complicated when you throw in the fact that we have to model shared services, work that supports multiple assets, outsourced and insourced models, and so on. By mapping the relationship between our logical work entities and our logical assets, we can identify the total cost of ownership of each asset.
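To illustrate the cost-mapping step, here’s a toy rollup in Python. The work items, the cost figures, and the even split of shared-service costs across assets are all assumptions for illustration; real allocation models are usually more nuanced:

```python
from collections import defaultdict

# Hypothetical work items: each carries a cost and the assets it supports.
# A shared service lists several assets, so its cost is split among them
# (an even split here, purely for simplicity).
work_items = [
    {"name": "upgrade project", "cost": 120_000, "assets": ["App A"]},
    {"name": "support ticket",  "cost": 2_000,   "assets": ["App B"]},
    {"name": "shared storage",  "cost": 60_000,  "assets": ["App A", "App B"]},
]

def total_cost_of_ownership(items):
    """Roll work costs up to the assets they support."""
    tco = defaultdict(float)
    for item in items:
        share = item["cost"] / len(item["assets"])
        for asset in item["assets"]:
            tco[asset] += share
    return dict(tco)
```

Once every work entity is mapped this way, the sum per asset is that asset’s total cost of ownership.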
The next step is to define the value of the asset, whether in quantitative profitability terms or in qualitative benefits. Once the benefit-cost ratio can be determined, that prioritization can be fed back into our demand intake structure – provided each of the demand entities can be coded back to a prioritized asset, whether in financial or material terms. This gets us that much closer to being able to prioritize all work that comes into the system: prioritization through association.
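Continuing the toy example, this sketch ranks assets by benefit-cost ratio and then feeds that ranking back into demand prioritization. The value and cost figures are hypothetical:

```python
# Hypothetical figures: annual value delivered per asset, and the total
# cost of ownership produced by the cost-mapping step.
asset_value = {"App A": 500_000, "App B": 80_000}
asset_tco   = {"App A": 150_000, "App B": 32_000}

def rank_assets(value, tco):
    """Rank assets by benefit-cost ratio, highest first."""
    return sorted(value, key=lambda a: value[a] / tco[a], reverse=True)

def prioritize_demand(work_items, ranking):
    """Order incoming demand by the rank of the asset each item is coded to."""
    rank = {asset: i for i, asset in enumerate(ranking)}
    return sorted(work_items, key=lambda item: rank[item["asset"]])
```

Every incoming demand entity coded to an asset inherits that asset’s rank – prioritization through association, in miniature.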