In PPM, Situational Awareness Trumps Operational Excellence

Consider, for a moment, a cautionary tale of two organizations: Organization A and Organization B.  Each organization takes a significantly different approach to delivering the project portfolio management (PPM) capability.

Organization A has a relatively immature approach to project portfolio management and performs annual planning on a, well, annual basis.  Each business unit compiles a list of project requests that are submitted to a central repository.  Months of back-channel negotiations and positioning ensue.  At the end of the process, the organization has a list of approved initiatives for the next fiscal year…which may then be summarily ignored in favor of last-minute project requests thrown in by the business.  This is what I would classify as a typical “pull” approach to PPM, where each group submits a request to pull the project through the approval cycle and into execution.

Organization B, on the other hand, takes a different approach to portfolio execution and starts a bit farther back in the value chain with an exercise in defining what organizational performance actually means.  This is broken down into strategic programs composed of projects that are identified by the program management structure.  This is more of a push model where the PMO organization is responsible for identifying projects and pushing them into the portfolio.  If we apply this to the traditional V-model validation concept, we would end up with something like the graphic below, where the traditional PPM measures actually fall at the lowest level, i.e. the “Execution Plan.”

image

Where do these goals come from?  These days, those goals are increasingly being driven by gains in the field of analytics.  As analytics platforms mature and the volume of data generated by the Internet of Things expands, analytics are expected to drive ever more decisions as to what projects need to be executed and where capital should be prioritized.  Are assets providing the expected return?  Is equipment performing as it should?  Increasingly, analytics will drive these assessments.

Once gaps are identified, i.e. once we identify that reality is not performing to specification, what is left but to charter an intervention in the form of a project?  Now, however, the project is not driven by a request or saddled with an inability to track benefits.  The project is born of analytics and will support a specific measurable goal.

image

This then becomes the key indicator for the maturity of a PPM system….not how many strategic drivers are leveraged to assess projects, or how structured the process is.  Instead, the single indicator that will drive PPM maturity into the future is how much data, or situational awareness at the highest level, is actually being used to drive the project identification process.


Balancing Speed and Efficiency: Effectively Segmenting the Portfolio (Part 2)

In the portfolio management space, we seem to live on a continuum.  On one side, we segment the portfolio and treat the different portfolios as separate entities.  On the other side, we combine all of our work and prioritize against the overall organizational goals.

image

In the last post, I presented a framework for creating at least three different work portfolios within an organization:

  1. Mandatory – work required to keep our executives out of prison…generally considered a solid reason for any project.
  2. Discretionary – work that someone thought would be a good idea that may or may not be evaluated within a specific silo and/or against the needs of the overall organization.
  3. Baseline – work that may be rationalized against assets to ensure it supports multiple silos within the organization.  In this case, the asset is prioritized, and the work inherits the prioritization at the direction of the asset management team.  (Note that I know of at least one organization where “baseline” work is referred to as “mandatory,” which may add to some confusion.)

In this post, I’d like to present a framework for understanding when we want to simply lump everything into a single portfolio – or break it out into multiple silos.  A concept from the Gartner IT PPM Summit a couple of weeks ago comes to mind: the bimodal IT department (or, as we jokingly have been referring to it, “bipolar IT”).  Per Gartner, in the next several years, up to 50% of IT spend will be directly controlled by the business.  In short, IT will be split between two operating models:

  1. Mode 1 – the “traditional” approach of planned, risk-averse projects.  Projects are planned, approved, and executed to support the needs of the business, as interpreted by IT.
  2. Mode 2 – the accelerated approach of shrinking the request-to-fulfillment lifecycle and getting projects done in the fastest way possible to directly respond to the needs of the business.

The reason I bring this up is because it’s absolutely applicable to the portfolio discussion.  The reality is that segmenting our portfolio allows us to do two things:

  1. Accelerate the project approval lifecycle.  I no longer have to review the benefits of the project against all of the other projects – which in turn allows us to avoid an inevitable formal cadence where projects are suggested each year or quarter and then approved as part of a structured review process.  (Try enforcing something like that in an Exploration & Production company and you’ll see how quickly the business will revolt).  When the portfolio is segmented, we don’t need all of the approvals and formal review schedules.
  2. Implement a learning strategy.  As Henry Mintzberg writes in his many books on the topic, strategy is a learning process, a series of continuous experiments in which ideas are tried out, feedback is gained, and the results are fed back into the strategy of the organization.  In this case, segmenting the portfolio allows us to build that feedback loop into the project sensing and prioritization mechanism.  If all projects within an organization are prioritized against all other projects, that may impair the learning process – as each step toward understanding the strategy is hampered by interdependence with the projects serving every other strategy.

Hence, my initial stab at capturing this thought process in a visual model yields something like this (which I am sure will evolve over time).

image

A quick definition of terms:

  1. Learning Strategy – the ability to work on a strategy and prioritization mechanism that has not been perfectly articulated, i.e. to build an accelerated feedback loop into the strategy to assess how well we’re hitting our goals – and in fact whether or not we’re working towards the right goals.  (Note that I considered swapping this out for market volatility, i.e. how quickly the organization needs to adapt to changing market conditions – but figured that was essentially saying the same thing.  Think the stability of large utility IT vs. the short term timelines of exploration IT – and then throw in a sprinkling of how utilities are attempting to accommodate new technologies such as smart grids and solar generation.)
  2. Defined Strategy – a strategy that is well articulated and measured.  For example, this year, our goal may be cost cutting overall.  Hence, we will charter projects targeting established mechanisms of cutting costs (consolidation, rationalization, divestment, etc.)
  3. Speed – the need to shorten the request-to-fulfillment value chain and generate value in the form of deliverables in the absolute shortest time possible.  Often, this results in inefficiencies as the organization sprints to keep up with market demand – but those inefficiencies are tolerated.  The alternative, after all, would be losing out to the competition.  Segmenting the portfolio enhances speed as it essentially pre-authorizes the funding decision….as long as the project falls within the purview of the specific portfolio segment, the portfolio owners can make their own decisions.
  4. Efficiency – the need to make the best use of a limited set of enterprise constraints.  When efficiency reigns supreme, it’s important to slow down the approval cycle and validate all projects in the context of all other projects.
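
To make the trade-off a little more concrete, here is a minimal sketch in Python of how that decision might be encoded.  The axes come from the definitions above; the scoring thresholds and portfolio names are hypothetical illustrations, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class PortfolioSlice:
    """A candidate grouping of work, scored on the two axes defined above."""
    name: str
    strategy_clarity: float  # 0 = still learning the strategy, 1 = fully defined and measured
    need_for_speed: float    # 0 = efficiency dominates, 1 = speed dominates

def recommend_structure(p: PortfolioSlice) -> str:
    """Rough decision rule: segment when speed or learning dominates;
    consolidate when the strategy is defined and efficiency dominates."""
    if p.need_for_speed >= 0.5 or p.strategy_clarity < 0.5:
        return (f"{p.name}: segment into its own portfolio - pre-authorize the funding "
                "decision and let the portfolio owners decide, accepting some inefficiency.")
    return (f"{p.name}: keep in the consolidated portfolio - slow the approval cycle "
            "down and validate every project in the context of every other project.")

print(recommend_structure(PortfolioSlice("Exploration IT", strategy_clarity=0.3, need_for_speed=0.9)))
print(recommend_structure(PortfolioSlice("Utility back-office IT", strategy_clarity=0.8, need_for_speed=0.2)))
```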

Hence, when I try to plot the conversations we’re having in the energy sector this year (depressed prices) against the conversations last year (higher prices), I end up with something like this picture.

image

The conclusion then is that while there is an appropriate portfolio structure for each market condition, that structure needs to shift when the market does.  If we lock ourselves into a specific portfolio model and ignore external change, we won’t be able to perform our role as an effective investment advisor to the organization.


Balancing Speed and Efficiency: Effectively Segmenting the Portfolio (Part 1)

(Catching up on some thoughts after getting through a couple of recent client interactions, conferences, and books.)  In the portfolio management space, we seem to live on a continuum.

image

On one side of the continuum, we have this vision of the perfectly optimized portfolio.  In this vision, every project has a business case and can be prioritized against every other project – perhaps in the form of an evolving backlog that gets released into execution on a quarterly basis.

On the other side of the continuum, each stakeholder group is allowed to manage its own funding, i.e. the siloed funding approach.  Prioritization occurs within the silo, and little if any attention is paid to how the projects of one group align with the projects of another.

So the question comes up….when does it make sense to segment and when does it make sense to share?  What are the hybrid scenarios where it makes sense to have both models?  This blog post is my initial attempt to answer that question.

Before we get too far into that discussion, however, let’s take a look at this diagram we came up with recently for an IT PMO.  Essentially, we analyzed the work in a “standard” IT department and segmented it out into three proposed portfolios.  Note that this schematic probably applies outside of IT as well, e.g. oilfield management, facility management, etc.

image

Per our analysis, we had three main portfolios.  In this case, we use the word portfolio to indicate a specific governance process, i.e. a process to capture the request, approve it, pass it into execution, and then manage the benefits realization – the request-to-fulfillment value chain.

Portfolio #1: Mandatory – these projects are required due to regulatory reasons.  Hence they must be performed.  I am still working through the funding model for regulatory projects, but let’s simply assume these projects are approved from the initial definition stage – and then funded either from a shared funding pool, from a specific business unit, or in some sort of cost allocation against each of the relevant BUs.  (Note that in our experience, the definition of “mandatory” varies significantly from organization to organization.)

Portfolio #2: Discretionary – these projects are what typically fall into the model of shared prioritization.  These are projects that don’t necessarily have to be performed, but do have a valid business case that ties back to the goals of the organization.  This is the portfolio I will primarily be focusing on in the next post, i.e. when and how to break this into separate funding streams.

Portfolio #3: Baseline – Baseline work in this sense is defined as the work required to maintain the asset portfolio.  In IT, we would define the asset portfolio as a series of capabilities or applications.  In energy, we might define an asset portfolio as a series of pipelines, wells and facilities.  Essentially, the concept is that for every asset, I will have performance metrics, and a set of projects required for maintenance and enhancements.  For an application, I will have a project to build it, enhance it every year, and then upgrade or retire it.

The importance of classifying something within the baseline portfolio is that this portfolio typically plays the part of utility services, i.e. providing shared services to support multiple business units.  In this case, we don’t want to have to perform a detailed strategic business case on every project that comes through the pipeline – as typically the projects are too small and too focused on a specific asset to justify spending the time of tying it back to the overall organizational goals.  Instead, we rationalize the portfolio, and identify the capabilities that are shared across multiple business units.  Then we identify the assets that support the capabilities, prioritize the assets, and defer the project approval process to the asset management teams – what in ITIL parlance would be the Application Owner and Manager.
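
As a rough illustration of that deferral, here is a minimal Python sketch (with hypothetical assets, priorities, and requests) of baseline work inheriting its priority from a rationalized asset rather than carrying its own strategic business case.

```python
# Hypothetical asset register: each asset has already been prioritized by the
# asset management team based on the business capabilities it supports.
asset_priority = {
    "CRM application": 1,        # supports capabilities shared by several BUs
    "Field data historian": 2,
    "Legacy reporting tool": 5,  # slated for retirement, low priority
}

# Baseline requests carry no business case of their own - only the asset they touch.
baseline_requests = [
    {"request": "Upgrade CRM to the next major version", "asset": "CRM application"},
    {"request": "Patch historian collectors", "asset": "Field data historian"},
    {"request": "Add a report template", "asset": "Legacy reporting tool"},
]

# Each request simply inherits the priority of its asset; the asset owner
# (the ITIL Application Owner/Manager) approves work within that envelope.
for req in sorted(baseline_requests, key=lambda r: asset_priority[r["asset"]]):
    print(asset_priority[req["asset"]], req["request"])
```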

So that’s the overall framework we are going to start with.  The next question (and the next post) is how we determine when to treat a portfolio as a single entity across the enterprise and when to break it out and make prioritization (and funding) decisions at the BU level.


Does Your Project Matter?

At the recent PMO Symposium, I attended a couple of sessions on reporting and metrics, and each session mentioned something in passing that I thought was actually quite relevant.  The basic gist was that at the strategic level of the organization, only 10 or so projects really matter.  Everything else is just noise.

Not only that, but people within the organization should be able to list those top ten projects – and they should know that if there is any resource contention between those 10 projects and anything not on the list, the top projects will win.  If you can achieve this within your organization, you’re doing well in terms of communicating your values.

From an executive dashboard perspective, incorporate a mechanism to flag those projects, and provide a high level overview of how they’re doing.  As for all of the other projects within the organization…..aggregate them up to a more meaningful entity: service, asset, program, or strategic theme.


Programs as Benefit Measurement Frameworks

There’s this myth pervading the portfolio management space that there’s a single benefit measurement model to rule them all.  Whether it be NPV, ROI, IRR, user satisfaction surveys, or any other measure of benefits, the reality is that any project may be measured according to any number of benefit measurement models stretching along a spectrum from objective to subjective data.

image

The trick is to identify the project type, and map that project type to an appropriate benefit measurement model.  For example, an IT project to support a specific application may be designed to reduce downtime, to enhance user satisfaction, or to increase the speed of operations.  Each of those benefits is measurable, insofar as they can be assessed through service metrics or satisfaction surveys.
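
As a minimal sketch of that mapping (hypothetical project types and benefit models, not a canonical taxonomy), it might look something like this in Python:

```python
# Hypothetical mapping of project type to the benefit measurement model it
# should be tracked against - not a canonical taxonomy.
benefit_model_by_project_type = {
    "availability improvement": "service metrics: downtime / MTTR, baseline vs. actual",
    "user experience": "satisfaction survey: pre- and post-release scores",
    "performance tuning": "service metrics: transaction time, baseline vs. actual",
    "cost reduction": "financial: run-rate cost, baseline vs. actual",
}

def benefit_model(project_type: str) -> str:
    return benefit_model_by_project_type.get(
        project_type, "no benefit model mapped - resolve before approval")

print(benefit_model("user experience"))
```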

The challenge, however, is that oftentimes we have never implemented the monitoring systems to track these metrics – despite the fact that they’re in the gospel of ITIL as part of Service Design.  Hence, if we want to track against these metrics, we need to incorporate the creation of the baseline and tracking system into the budget of the project.

A PMO can slip that monitoring into the budget of one project, but sooner or later, sheer inertia negates this, as the stakeholders begin to rebel against repeatedly approving extra budget for monitoring project success.  Due to simple budgeting constraints, we can’t incorporate a metrics monitoring system on the front end of every project.  Nor does it work from a logistical standpoint, as it typically takes a fair amount of time to build up meaningful baseline performance data, and by the time we put everything on pause to collect that data, the project would have slipped far past the most forgiving stakeholder deadline.

The solution is not to incorporate metrics tracking into every project that passes through the approval process, but into the front-end design of every program that passes through our portfolio.  If we treat an application as a program, we can set a threshold, i.e. any application with X number of users or that we will use for longer than Y years will need a metrics tracking system incorporated into the program design.  The metrics tracking system will generate performance data to assess whether we are hitting our service performance and financial goals.
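
A minimal sketch of that threshold rule might look like the following; since the actual values of X and Y are left open above, the numbers below are purely illustrative placeholders.

```python
# The post leaves the thresholds ("X" users, "Y" years) open, so the values
# below are purely illustrative placeholders.
MIN_USERS_FOR_TRACKING = 500         # hypothetical "X"
MIN_LIFESPAN_YEARS_FOR_TRACKING = 3  # hypothetical "Y"

def needs_metrics_tracking(user_count: int, expected_lifespan_years: float) -> bool:
    """Does this application-as-program warrant a metrics tracking system
    designed in up front?"""
    return (user_count >= MIN_USERS_FOR_TRACKING
            or expected_lifespan_years >= MIN_LIFESPAN_YEARS_FOR_TRACKING)

print(needs_metrics_tracking(user_count=1200, expected_lifespan_years=1))  # True
print(needs_metrics_tracking(user_count=50, expected_lifespan_years=1))    # False
```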

The metrics defined at the program level then become the performance metrics for every project that is incorporated into the program.  This gives us another answer to the question of how we define programs within our portfolio.  A program is a framework for benefits measurement that may be applied to the projects that reside within the program.

This also makes our investment decisions easier as we can budget to the program, and then allow the metrics within the program to dictate the actual composition of projects within the program.  How to assess the value of the program?  By identifying how it ties into the overall operations of the company – which is far easier than doing so at the project level.

Taking this concept one more level, a properly designed program should be tied to ongoing business conditions.  A periodic assessment should be conducted of the business environment to confirm that the program itself remains viable based on certain predefined measures of program success.  When the business conditions change, the program funding spigot can be turned off, effectively killing all projects within that program.  Alternately, additional funding can be added to the program, which in turn is directed to the proposed projects that meet the programmatic success measures.


Everything is Discretionary

On my last(ish) post, The Road to Portfolio Analytics, fellow Buckeye and overall great guy Prasanna asked the following question:

“Are you saying that by arriving at the costs of assets, and corresponding benefits, and a ratio of both, one could somehow arrive at common ground for prioritization?…..if yes, I would like to disagree.  For example, if I am trying to decide whether to spend money buying apples or oranges, cost (my expense), benefit (hunger taken care of), form only part of the equation. I think the real driver there is ‘Value’ (which is Satisfaction/Joy in this case). However, if it can be done, then that’s what the driver should be for project selection. And the prioritization should be based on the ‘Speed of value attainment’.”

I started responding, and it turned into this blog post.

Let Me Explain; No, There Is Too Much; Let Me Sum Up….

To rehash a bit of last week’s post, there were a couple of points I made there:

  1. Work is associated with a specific asset or program. The priority of the work is then inherited (to an extent) from the asset…..as are potential risks.
  2. The next question is how to rank specific assets or programs against each other. The easiest way is to quantify the value somehow, and then to map the value against the TCO for the asset.
  3. The real question is how to quantify a mandatory program such as Hunger, which in this case would be considered equivalent to a regulatory-mandated program, i.e. things we do to keep our executives out of jail.

Classifying the Asset And/Or Program

Thinking through this, the real question is how to prioritize within the pool of mandatory work versus within the pool of discretionary work. For example, suppose I have non-mandatory work that supports an asset. The asset drives financial benefits or value. I also can calculate the cost of the asset. This allows me to identify which assets merit further investment versus those that do not.

Mandatory work is different. In the example, we used “Hunger,” which I would say is equivalent to a regulatory driven program. Sure, at this point the program may be considered mandatory, and the prioritization occurs a bit further down the work chain, i.e. if we can define our minimum requirements well enough (per identified business drivers), we can then assess new work as it comes through the pipeline to see if it gets us to the minimum acceptable standards, or if it exceeds those standards. If it exceeds the basic standards, great…..but we don’t want to pay for that if it takes investment away from other assets.
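
As a rough sketch of that screening step, assuming the minimum requirements can be expressed as measurable targets (the figures below are hypothetical), the check might look something like this:

```python
# Hypothetical minimum requirements derived from the identified business drivers.
minimum_requirements = {"emissions monitoring points": 10, "audits per year": 2}

def assess_scope(proposed: dict) -> str:
    """Screen proposed work in a mandatory program against the minimum standard."""
    gaps = {k: v for k, v in minimum_requirements.items() if proposed.get(k, 0) < v}
    excess = {k: proposed[k] - v for k, v in minimum_requirements.items()
              if proposed.get(k, 0) > v}
    if gaps:
        return f"Below the minimum standard - scope up: {gaps}"
    if excess:
        return f"Meets the standard but gold-plates it - challenge the extra scope: {excess}"
    return "Meets the minimum standard - approve within the mandatory envelope"

print(assess_scope({"emissions monitoring points": 14, "audits per year": 2}))
```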

Assessing Mandatory Work

If we step back a level though, everything is discretionary. If the costs of the regulatory-mandated program exceed our ability to make a profit or to raise capital (say, in the case of a non-profit), then that would drive a decision to get out of the industry entirely and focus on more viable areas of business.  Mandatory programs need to be assessed against a higher level of benefits aggregation, i.e. the line of business which they specifically support.

Hence, while mandatory programs may not require the same level of rigor in prioritizing against discretionary programs, they still sum up to the unavoidable costs of doing business, which must then be assessed against our overall strategic goals of whether or not we want to be in that business. In a sense, it’s the same benefit-cost strategy, but as part of a more holistic approach.  Arguably, we would still need to track the TCO of these mandatory programs in order to drive any continual improvement initiatives such as reducing the overall cost of meeting regulatory requirements.

image

I would also contend that there’s no definite ruling as to what constitutes mandatory vs. discretionary. In the Texas oil patch, there are plenty of examples of companies flouting regulations and accepting fines instead of investing in being compliant in the first place. Arguably, that’s driven by the same cost-benefit calculations discussed above.

In fact, I would further posit that simply labeling work as “mandatory” immediately prompts many organizations to cease doing due diligence and good governance on the work in question.  A mandatory designation is not a free pass to avoid governance….it’s merely a label that implies a higher level of benefit aggregation in the benefit-cost ratio.

…And Now For Some Navel Gazing

Of course, we could extend this metaphor to the breaking point.  If discretionary work can be measured against other discretionary work, and mandatory work can be assessed against the overall benefits of being in the business we are in, then if we truly reviewed “Hunger” as an example, that would imply there’s a third category of work with its own unique prioritization mechanism: existential.  Needless to say, I’d prefer not to go down that philosophical rabbit hole.


The Road to Portfolio Analytics

I’m gradually working my way through Charles Betz’s excellent book, Architecture and Patterns for IT Service Management, and parts of it are resonating with me.  I’ve only gotten to the section on Demand Management, but it’s definitely starting to correlate with what I’m observing in our clients.  Let me see if I can put my own spin on it….

The basic gist is that demand comes in many different forms.  In IT, demand shows up as formal project requests, service requests (i.e. minor changes that don’t rise to the level of projects), and the output of various systems monitoring services.  The latter category in an IT setting would include all of the outputs of availability and capacity monitoring.  In a non-IT setting such as capital equipment, I would extend that to include the output of our maintenance scheduling systems, which spit out tickets for required maintenance.  As ITIL is really the extension of capital equipment management best practices to the relatively immature field of IT, that logic seems to make sense.

So that’s the work intake process, or, if you will, the sensing mechanism that determines what work the organization could do.  Let’s go to the other side of the intake process, the work queuing mechanism.  This is the viewpoint of the technician in the field, the person who must actually perform the work that’s coming through the intake funnel.  In a perfect world, this is all funneled into the same assignment queue.  That way, I can see my queue, and see all of the work assigned to me, whether it originated as a project task, a service request, or the output of a maintenance system.

In a perfect world, all of that work is prioritized – either by the work itself, or through some sort of prearranged prioritization mechanism – and every time I finish a task, I can go back into my queue and determine what the next task in order of priority is.  I might also throw other characteristics into the task selection, such as how long it would take to perform the task.  If I only have an hour, I’ll pick the next task that can be completed within a single hour.
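
Here is a minimal Python sketch of that “perfect world” queue: one consolidated list, a prearranged priority, and a simple rule for pulling the highest-priority task that fits the time available.  The tasks, priorities, and durations are invented for illustration.

```python
# One consolidated queue; tasks are invented for illustration.
my_queue = [
    {"task": "Close incident ticket", "priority": 1, "hours": 0.5},
    {"task": "Deploy release to staging", "priority": 2, "hours": 3.0},
    {"task": "Project task: write migration script", "priority": 3, "hours": 2.0},
    {"task": "Preventive maintenance on pump", "priority": 1, "hours": 1.0},
]

def next_task(queue, hours_available):
    """Highest-priority task (lowest number) that fits in the available window."""
    candidates = [t for t in queue if t["hours"] <= hours_available]
    return min(candidates, key=lambda t: t["priority"], default=None)

print(next_task(my_queue, hours_available=1.0))
```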

The reality is that this rarely happens.  In the organizations we see, there are fragmented intake and queuing mechanisms.  As a result, we have different work streams that end up in different queues tracked with different metrics but assigned to the same individual.  As a member of IT, I will have tasks assigned in the ticketing queue, the release queue, and the project task queue.  Each of those tasks is competing for my bandwidth.

This takes us to the holy grail of enterprise work management.  In essence, the goal of an organization, IT or otherwise, whether it realizes it or not, is to centralize all of that demand into a central location, prioritize it, and then spit it out the other end into an appropriate work management system.  I won’t say a consolidated work management system, as that may not make sense – especially when I have resources that are dedicated to performing preventive maintenance and can easily live within a single queue.  However, when I have resources that are pulled across multiple queues, that requires a more rationally designed work management system.  (More on that in another post.)

image

Which leads us to the logical fallacy that has shaped the world of portfolio management for years.  There’s been this underlying assumption that we can take all of the work that comes through the demand intake and prioritize it against other work.  That’s the fundamental premise of many PPM systems, i.e. that I can throw all of my projects into a single bucket, define their priorities, and then map the priorities against the cost to optimize my portfolio.  As long as I am prioritizing similar work streams, this more or less works.

The problem comes when I try to compare apples and oranges, when I try to compare a project to support Application A to a ticket to support Application B.  At that point, the portfolio optimization breaks down.  The inevitable result is either a logjam of work that kills the adoption of the PPM system, or a regression to the multiple siloed work intake and management systems with their own on board prioritization and fragmented queuing systems.

Enter the world of portfolio analytics.  In portfolio analytics, we’re not looking to prioritize individual work, but instead, we’re looking to tie that work to a specific organizational asset.  In IT, each project, ticket, or release supports an application, which in turn, supports a service.  In a non-IT scenario in Oil and Gas, for instance, each project or ticket or release supports an asset such as a rig or well, which then can be quantified in terms of production and profit.  If I can identify the business criticality of the service, then I can assess the priority of each element of work in supporting the service, and therefore derive a cohesive, comprehensive framework for work prioritization.  I don’t look at the individual work items, but instead at the work in aggregate in terms of the assets it supports.

The first step in performing this analysis is to map the costs of the work to the assets.  While that sounds simple, it gets complicated when you throw in the fact that we have to model shared services, work that supports multiple assets, outsourced and insourced models, etc.  By mapping the relationship between our logical work entities and our logical assets, we can identify the total cost of ownership of the asset.  That’s the first step in portfolio analysis.

The next step is defining the value of the asset, whether it be in quantitative profitability terms or in qualitative benefits.  Once the benefit-cost ratio is determined, that prioritization can be fed back into our demand intake structure – provided each of the demand entities can be coded appropriately back to a prioritized asset – either in financial or material terms.  This gets us that much closer to being able to prioritize all work that comes into the system….prioritization through association.
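
Pulling those steps together, here is a minimal Python sketch of prioritization through association: roll work costs up to the asset to get TCO, compare against the asset’s value for a benefit-cost ratio, and push the resulting ranking back down to each work item.  All names and figures are hypothetical.

```python
# Hypothetical work items and assets; all figures are invented for illustration.
work_items = [
    {"id": "PRJ-101", "asset": "Well pad A", "cost": 400_000},
    {"id": "TKT-552", "asset": "Well pad A", "cost": 25_000},
    {"id": "PRJ-207", "asset": "CRM application", "cost": 150_000},
    {"id": "TKT-610", "asset": "CRM application", "cost": 10_000},
]
asset_value = {"Well pad A": 2_000_000, "CRM application": 320_000}  # annual value

# Step 1: roll the cost of work up to the asset to get total cost of ownership.
tco = {}
for w in work_items:
    tco[w["asset"]] = tco.get(w["asset"], 0) + w["cost"]

# Step 2: benefit-cost ratio per asset, then rank the assets.
bcr = {asset: asset_value[asset] / cost for asset, cost in tco.items()}
ranked_assets = sorted(bcr, key=bcr.get, reverse=True)

# Step 3: each work item inherits the priority of the asset it supports.
for w in work_items:
    w["priority"] = ranked_assets.index(w["asset"]) + 1

for w in sorted(work_items, key=lambda x: x["priority"]):
    print(w["priority"], w["id"], w["asset"], f"BCR={bcr[w['asset']]:.2f}")
```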


The Ends Dictate the Means: Defining a Program Management Lifecycle

Bob is managing five projects.  Each of those projects supports a different aspect of a specific enterprise application.  Several of those projects are pretty advanced in execution and have entered a deployment stage.  Other projects have just barely gotten off the runway and are still onboarding their staff.  One of Bob’s projects is still in the early proposal stage and hasn’t had any serious funding authorized.  Every week, Bob’s management asks him to fill out a single status report for this program.  Every week, Bob looks at the “Stage” drop-down option in his status report and has no clue how to fill it out.  He eventually closes his eyes, spins the mouse wheel not unlike the arm of a slot machine, and randomly selects whatever stage sounds good at the time.

Anyone who’s ever tried to shoehorn a collection of projects into a single workflow has gone through the realization that programs require their own lifecycle, effectively serving as a macro version of the lifecycle for each project.  If I am building a pipeline, I might split the pipeline into multiple sections.  Each section would have a planning, permitting and construction stage.  Some sections might require horizontal drilling.  Other sections may include relatively even terrain.  Each of those sections will be managed by a different project manager and may be built in sequence or in parallel.

The question then comes up….what stage is the overall program in?  Is it in planning?  Construction?  Permitting?  None of the answers make sense because the question doesn’t make sense.

A Simple Program Lifecycle

The solution is to step back and look at the program lifecycle.  In the PMI standards, that’s depicted as a simple three-stage model – a starter workflow, if you will.

  • Planning – where we define the program goals and how those will be tied to the authorized projects.
  • Execution – where we identify, authorize and execute the work.  Note that this is a catch-all stage that rolls up many of the standard project lifecycle stages.
  • Closeout – where we identify if our program met its goals and whether or not we achieved the anticipated benefits.

image
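
One minimal way to address Bob’s problem, sketched in Python with hypothetical names, is to track the program in its own stage from the three-stage lifecycle above and let the project stages roll up as supporting detail rather than being forced into a single drop-down.

```python
# The program reports its own stage; project stages roll up as supporting detail.
PROGRAM_STAGES = ("Planning", "Execution", "Closeout")

program = {
    "name": "Enterprise application program",
    "stage": "Execution",  # set at the program level, not derived from the projects
    "projects": [
        {"name": "Module A rollout", "stage": "Deployment"},
        {"name": "Module B build", "stage": "Onboarding"},
        {"name": "Module C proposal", "stage": "Proposal"},
    ],
}

assert program["stage"] in PROGRAM_STAGES
print(f"{program['name']}: {program['stage']}")
for p in program["projects"]:
    print(f"  - {p['name']}: {p['stage']}")
```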

A More Nuanced Lifecycle Model

Clearly, this lifecycle wouldn’t work for all programs.  What about open-ended programs such as Health, Safety and Environment (HSE) initiatives?  An organization will charter an HSE program and staff it with resources looking for ways to continually improve and maintain HSE performance by limiting the number of reportable incidents.  These are the sort of programs that never really end; they just keep going on and on, spawning an endless series of projects.

So how does one come up with an appropriate program lifecycle?  I don’t have all of the answers, but it seems to me that the correct approach would be to start at the end.  What is the end goal of the program?  Once that’s identified, we can work backwards from there to create a management model.

In my world, that pretty much means we have the following potential program lifecycle models:

  • Asset Lifecycles
  • Continual Improvement Lifecycles
  • Contracted or Defined Lifecycles
  • Product Development

An asset lifecycle is pretty much our standard lifecycle in the IT domain, where an asset is almost always defined as a service, which is then supported by applications, which are then supported by infrastructure.  Projects either add or remove elements from this asset, and operations ensure the asset performs its functions.  In this model, the end of the asset dictates the end of the program; hence the projects and priorities spawned by the program will vary based on where the asset is in its lifecycle and on the investment analysis of when to improve it.  As near as I can tell, wells and drilling platforms follow roughly the same model.

I see continual improvement lifecycles more in the business domain, where we might have continuing initiatives to increase efficiency, or reduce HSE incidents.  These lifecycles will almost never end and often have (or should have) clearly defined metrics of success that were identified during the planning stages.

Contracted or defined lifecycles have definite ends.  Our goal is to decommission this site, or to develop a complicated solution to a pressing problem.  These programs may also exist as subprograms within an overall program framework.

Finally, we have the product development lifecycle, which takes a product to market and then sustains it there.  This lifecycle might be considered a subset of the asset management lifecycle insofar as there are a number of similarities, but there are also significant differences related to the relative newness of the product’s underpinning elements and the challenge of supporting a market of external consumers as opposed to a more controlled group of internal consumers.

That’s not intended to be an exhaustive list of program management lifecycles – more an off the cuff analysis of the ones I come into contact with most frequently.

The Context Provides the Purpose

What do we do with this knowledge?  The next time you’re forced to answer the question of “which single stage are all five of these projects in?” take a step back.  Look at whether or not these projects are all related.  If someone’s asking you that question, chances are that they are, in fact, related.  Then look at what program goal these projects are supporting.  Finally, develop an appropriate lifecycle for the program.

Only after you’ve done that can you begin to assess the reporting requirements of your individual projects, and provide that information in a meaningful and relevant manner.
