5 Steps to Application Portfolio Management [Webinar]

Ben Chamberlain will be presenting on one of the hottest topics we see in the IT portfolio management world these days: Application Portfolio Management.

Ad-hoc, event-driven approaches to Application Portfolio Management (APM) simply aren’t delivering results. Important application data and metrics can’t be maintained effectively in spreadsheets and homegrown tools, which makes it a challenge to sustain a continuous APM process that supports annual IT planning and budgeting.

Tune in to this webinar to see how APM can help you:

  1. Consolidate applications in one inventory
  2. Derive key assessment metrics (Value, Risk, Architectural Fit, etc.)
  3. Calculate Total Cost of Ownership
  4. Collaborate to record lifecycle decisions
  5. Build and execute multi-year transformation roadmaps

For more details, or to register…


Does Your Project Matter?

At the recent PMO Symposium, I attended a couple of sessions on reporting and metrics, and each session mentioned something in passing that I thought was quite relevant.  The basic gist was that at the strategic level of the organization, only ten or so projects really matter.  Everything else is just noise.

Not only that, but people within the organization should be able to list those top ten projects – and they should know that if there is any resource contention between those ten projects and anything not on the list, the top projects will win.  If you can achieve this within your organization, you’re doing well in terms of communicating your values.

From an executive dashboard perspective, incorporate a mechanism to flag those projects, and provide a high-level overview of how they’re doing.  As for all of the other projects within the organization, aggregate them up to a more meaningful entity: service, asset, program, or strategic theme.
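
If your project inventory lives in a database somewhere, a first cut at this is almost trivial. Here’s a minimal T-SQL sketch – hypothetical table and columns, not any particular product’s schema – that surfaces the flagged projects individually and rolls everything else up by program:

-- Hypothetical schema: one row per project, flagged if it makes the "top ten" list.
DECLARE @Project TABLE (
    ProjectName NVARCHAR(100),
    ParentProgram NVARCHAR(100),
    TopTenFlag BIT,
    HealthScore INT   -- e.g. a 0-100 rollup of schedule and cost indicators
);
INSERT INTO @Project VALUES
    (N'ERP Replacement',  N'Finance Transformation', 1, 72),
    (N'Data Center Move', N'Infrastructure',         1, 90),
    (N'Intranet Refresh', N'Workplace Services',     0, 65),
    (N'Patch Automation', N'Infrastructure',         0, 88);

-- Flagged projects appear individually; everything else aggregates by program.
SELECT ProjectName AS DashboardLine, HealthScore
FROM @Project
WHERE TopTenFlag = 1
UNION ALL
SELECT ParentProgram + N' (all other work)', AVG(HealthScore)
FROM @Project
WHERE TopTenFlag = 0
GROUP BY ParentProgram
ORDER BY HealthScore;   -- worst health floats to the top of the dashboard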

Does Your Project Matter?

IT Portfolio Optimization Leads to…the Cloud?

I’ve spent some blogging cycles this week talking about the evolution of IT portfolio selection processes – specifically, that there is a natural maturity curve from what I am calling a tightly coupled model of fragmented IT delivery to a decoupled model of consolidated IT delivery.  I’ve also pointed out that there’s almost a chicken-and-egg relationship between IT portfolio optimization and the IT funding model.

Illustrating the Coupled Model

Let’s take a look, shall we?  To illustrate this point, let’s say that our software vendor, in this case Microsoft, follows a tightly coupled funding model.  In this model, the customer may have a need, say, to manage agile projects in Microsoft Project Professional.  The customer goes to his Microsoft account representative and says, “Can you create an agile function, specifically tailored to our requirements?”

The Microsoft account representative goes back to Redmond and chats with the Product Team.  She then returns to our customer and, with pinky firmly planted next to her lips, does her best Dr. Evil impression by telling the customer that this can be done “for one million dollars!” Chortle, chortle.

Now, our customer is no dummy.  He knows that while he needs this agile functionality, he doesn’t “need need” it.  He can wait.  He knows some other customer will step up and pay to develop the functionality he needs, and until then he’ll just make do.  In fact, he’s read books on Game Theory, and knows that if he just waits long enough, he can reap the benefits of someone else’s investment.  In the meantime, maybe he goes and buys something off the shelf that sort of meets his needs and doesn’t need to hit the IT radar.

What’s the moral of this story? 

  • New features won’t get implemented until long after they’re actually desired. 
  • Business units play games to try to push the costs off on other units. 
  • The net result is that IT is playing catch up to the business.

…And the Alternative

What about the alternative?  In this case, and this is purely hypothetical, Microsoft has a large customer base for the product and bills by the user, say in some sort of arcane licensing scheme that only the high priests of the Temple of LAR understand.  Microsoft can predict licensing revenue for the next year with relative accuracy, and strategically decide to set aside a percentage of that revenue for further investment in the product.  That sets the stage for the investment portfolio.

Our customer again needs agile functionality, and again approaches the Microsoft account rep.  This time, when she goes back to Redmond, the team agrees to throw the request into the feature backlog.  When the annual planning cycle rolls around, they survey their top clients and prioritize the feature according to the results.  The priority dictates where the feature falls in terms of absorbing a portion of the available investment portfolio.  Assuming the feature is ranked high enough, the team develops it and makes it available through the SA ritual practiced by the priests of LAR.

This latter example is what I am calling a decoupled model.  Features are developed and prioritized within the constraints of an investment portfolio, and not individually as requested by the business. 
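
To make the mechanics of that planning cycle concrete, here’s a hedged sketch in T-SQL (SQL Server 2012+ for the windowed SUM) – the revenue figure, the 10% set-aside, and the backlog table are all invented for illustration, not anything Microsoft actually runs:

-- Hypothetical planning inputs: a revenue forecast and a surveyed feature backlog.
DECLARE @PredictedRevenue MONEY = 30000000;
DECLARE @InvestmentPool MONEY = @PredictedRevenue * 0.10;   -- set aside 10% for product investment

DECLARE @Backlog TABLE (Feature NVARCHAR(100), SurveyScore INT, EstCost MONEY);
INSERT INTO @Backlog VALUES
    (N'Reporting pack',        92,  400000),
    (N'Agile project support', 87, 1000000),
    (N'Mobile timesheets',     64, 2500000);

-- Fund features in survey-score order until the investment pool is exhausted.
WITH Ranked AS (
    SELECT Feature, SurveyScore, EstCost,
           SUM(EstCost) OVER (ORDER BY SurveyScore DESC
                              ROWS UNBOUNDED PRECEDING) AS CumulativeCost
    FROM @Backlog
)
SELECT Feature, SurveyScore, EstCost,
       CASE WHEN CumulativeCost <= @InvestmentPool
            THEN 'Funded' ELSE 'Deferred' END AS Decision
FROM Ranked
ORDER BY SurveyScore DESC;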

Here are a couple of characteristics of a decoupled IT portfolio selection model:

  • Projects are bundled into logical programs that leverage economies of scale to deliver capabilities across multiple business units.
  • The program bundling supports consolidated investment in the platforms upon which processes are enabled.
  • IT proactively identifies what the business will need in the future, and incorporates that into all ongoing service design activities.

To the Cloud

…which means that to accommodate this economy of scale, we need to find a more mature IT funding model.  Slicing and dicing our portfolio at the project level and funding at that level from our business units doesn’t enable us to get the consolidated benefits that a decoupled portfolio can deliver. 

Perhaps a user-based revenue model doesn’t apply, as many users end up not using the solution at all, or use it far less than the advanced power users do.  In this world, to make things more equitable, we may need to find a different funding model, something like transactional costing.  In transactional costing, the cost of developing and delivering the service is recovered through metering.  The respective business units pay per use, and in return get a solid service without having to worry about more complex cost models.
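
The arithmetic behind transactional costing is simple enough to sketch. Assuming a hypothetical metering log and an invented service cost (neither drawn from any real system), the per-transaction rate just recovers the full cost of the service across all metered use:

-- Hypothetical inputs: annual service cost and metered transactions per business unit.
DECLARE @ServiceCost MONEY = 1200000;

DECLARE @Usage TABLE (BusinessUnit NVARCHAR(50), Transactions INT);
INSERT INTO @Usage VALUES
    (N'Sales',      420000),
    (N'Operations', 150000),
    (N'HR',          30000);

-- The per-transaction rate recovers the full cost of the service.
DECLARE @Rate MONEY;
SELECT @Rate = @ServiceCost / SUM(Transactions) FROM @Usage;

-- Each business unit pays per use; the rate sheet is the whole cost model.
SELECT BusinessUnit,
       Transactions,
       Transactions * @Rate AS Chargeback
FROM @Usage
ORDER BY Chargeback DESC;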

Sound familiar?  The logical conclusion of that decoupled model is the cloud-based model.  IT portfolio optimization maturity inevitably leads to the cloud, or at least to a cloud-like decoupled service model.


Strategically Decoupling Your Second Tier Portfolio (Part 2)

In yesterday’s post, I introduced the concept of a tier 2 portfolio and described how strategic coupling works in a tiered portfolio.  This post continues that discussion with an overview of a decoupled portfolio.


A decoupled portfolio means that IT understands where the business is going and, instead of reacting to it, plans hand in hand with the business to achieve those goals.  In a decoupled portfolio, the IT organization identifies a series of strategic drivers that guide investment programs to support the business.  These investment programs, in turn, provide the governance framework for a series of projects that better position IT to deliver effective, efficient services to the business.

Some characteristics of a decoupled portfolio are that:

  • IT is proactively identifying what the business will need, and preparing those capabilities in advance.
  • These proactive efforts end up being bundled into programs, where each program is designed to enhance specific capabilities.
  • As a secondary benefit of bundling efforts into programs, IT is better positioned to invest in options assessment, i.e. to invest in alternatives analysis to determine the optimal approach to developing a specific capability.  The budget for such efforts may be attached to the program as opposed to waiting for business funding.
  • There is significant investment in shared platforms to support business projects.  Coordinated investment replaces piecemeal investment in shared platforms.

The Role of Programs in Decoupled Environments

In the absence of a defining business program structure, a decoupled portfolio naturally lends itself to the bundling of projects into logical programs defined by IT.  These programs, in turn, provide a structure to the IT portfolio that enables it both to define funding priorities and to approve those priorities within the program – thus providing a more agile governance structure.

The challenge inherent in this rebundling of IT project work is that it necessitates a reassessment of the traditional IT funding model.  In a tightly coupled IT environment, each project is owned by a business unit, and hence funding is easy to determine.  In a decoupled portfolio, investments tend to be made in an entire platform, which is then leveraged by multiple business units to achieve economies of scale.

Thus, as IT organizations move to this decoupled model, a new funding model is necessary – either splitting the costs equally across business units or, more logically, moving to a transaction-based costing model.  This is the inevitable result of moving away from a coupled IT portfolio.

Yes, but…

Of course, the issue with any framework is that it is overly simplistic.  I’d no sooner finished writing this post than I started poking holes in it.  For example, if tier 1 portfolios are defined in terms of what makes us a profit, and tier 2 portfolios support the people who provide the tier 1 services, then where does new product development (NPD) fit in?  In alignment with the proposed framework, NPD is another tier 2 portfolio, one designed to generate the grist that goes into the tier 1 portfolio and makes a profit.

The line between tier 1 and tier 2 can be a bit hazy at times.  For example, what if I have an IT consulting shop that shares resources between internal and externally facing projects?  Are those projects all one portfolio?  Do I have two portfolios?

I’ll defer that question to later posts.


Strategically Decoupling Your Second Tier Portfolio (Part 1)

Frameworks can be very useful things.  They allow folks like me and my colleagues to walk into almost any organization and quickly size it up in terms of maturity, capabilities, and, well, let’s face it, consulting opportunities.  The risk, of course, is that we size up the organization incorrectly – or, worse yet, that we don’t bother to size up the organization at all and instead apply some sort of standard best practice drawn straight from the latest consulting memeplex.  (That never happens.)

Many times, we don’t even realize that we’re applying a classification scheme – until we take a step back and realize that we’re taking a totally different approach to the current organization than we did to the last organization.  When those sorts of epiphanies happen, I like to sit back and identify intellectually why I adapted the approach and see if I can turn that into a theoretical framework.  This post is the result of such an exercise.

Portfolio Types

I tend to cross operational domains about once a year, typically bouncing back and forth between pipeline construction and IT portfolio management.  Needless to say, these two domains seem to operate in entirely different worlds – with different priorities, different operating goals, and differing concepts of velocity.  When was the last time you heard an IT shop talk about dropping a developer into a project via helicopter to make sure it hits its deadlines?

When you talk to pipeliners about missions and portfolio optimization, you tend to get a blank stare.  That’s usually because project decisions are made by the Marketing folks.  They work with customers to identify whether a new pipeline would be profitable, conduct feasibility studies, and then, after all of that, issue the marching orders to go build the pipeline.

IT tends to struggle with its fundamental mission, often unable to articulate why, in fact, a project has been selected.  That doesn’t mean, though, that all of IT operates significantly differently than a pipeline company.  For example, try going to an IT consulting shop one day and asking them how they select the projects they do.  You’ll probably get the same answer as at the pipeline company above: the salesperson made the sale.  Delivery did a feasibility estimate.  The consulting shop has chartered the project and expects to make a profit on it.

Tiered Portfolios

The reason IT (support) has a tougher time articulating why it selects projects than IT (consulting) does stems, to some extent, from the fact that support IT is typically regarded as a cost center.  In other words, if I go back to my ITIL training days, the two portfolios represent different service tiers within an organization:


A tier 1 organization is responsible for providing services directly to the organization’s customers, the people who pay money to the organization for the services it provides.  Tier 2 organizations provide services to the tier 1 group, i.e. tier 2 provides the tools to enable tier 1 to generate value for the customers.

I submit that a tier 1 portfolio, being directly responsible for the provision of profitable services to the external customers of the organization, has a much easier time defining strategy.  Essentially, strategy is whatever’s profitable and fits within the long-term goals of the firm.

It’s the tier 2 portfolios that are a bit more challenged.  The tier 2 portfolios are charged with supporting the mission of the organization, i.e. facilitating the job of the tier 1 service provider.  Hence, tier 2 portfolio optimization tends to suffer when the organizational strategy is poorly articulated.  When you look at a traditional project portfolio selection process such as the Analytic Hierarchy Process (AHP), you’ll see that it almost invariably targets the tier 2 portfolios.  You almost never see portfolio selection guidance applied to tier 1 portfolios.
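
Full AHP involves pairwise comparison matrices and eigenvector math, which I won’t inflict on a blog post; but the flavor of driver-based tier 2 selection can be sketched with a simple weighted-scoring query.  The drivers, weights, and ratings below are all hypothetical:

-- Hypothetical business drivers, with weights that sum to 1.0.
DECLARE @Driver TABLE (Driver NVARCHAR(50), Weight DECIMAL(4,2));
INSERT INTO @Driver VALUES
    (N'Reduce cost to serve', 0.50),
    (N'Improve compliance',   0.30),
    (N'Enable growth',        0.20);

-- How strongly each candidate project supports each driver, on a 0-10 scale.
DECLARE @Rating TABLE (Project NVARCHAR(50), Driver NVARCHAR(50), Rating INT);
INSERT INTO @Rating VALUES
    (N'Service desk consolidation',  N'Reduce cost to serve', 9),
    (N'Service desk consolidation',  N'Improve compliance',   3),
    (N'Service desk consolidation',  N'Enable growth',        2),
    (N'Identity management rollout', N'Reduce cost to serve', 4),
    (N'Identity management rollout', N'Improve compliance',   9),
    (N'Identity management rollout', N'Enable growth',        5);

-- The weighted score per project drives the tier 2 ranking.
SELECT R.Project,
       SUM(R.Rating * D.Weight) AS WeightedScore
FROM @Rating R
INNER JOIN @Driver D ON R.Driver = D.Driver
GROUP BY R.Project
ORDER BY WeightedScore DESC;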

To Couple or Not to Couple…?

Now that we have the concept of a tiered portfolio defined, let’s take a look at another concept: the coupled portfolio vs. the decoupled portfolio.  Many IT portfolios are tightly coupled with the business (tier 1) portfolio.  This means that IT projects are chartered to directly support a business project.  If the business plans Project A as part of the annual planning, IT plans an IT Project A.  If the business plans Project B, IT plans an IT Project B.

I’m almost certainly oversimplifying here, but the idea is that IT has a poorly defined strategy, and is simply following the tier 1 lead.  That’s not necessarily a bad thing, but it does have a few implications:

  1. IT is always reacting to the business needs and not proactively driving to meet forecasted needs.
  2. IT tends to not be able to invest in options analysis, i.e. it’s more difficult to set aside money to identify alternatives to achieve a specific goal, as the business is often not interested in investing in these optimization exercises.
  3. IT programs are poorly defined and often fragmented.  There is often only a transitory acknowledgement that projects belong to programs, and those programs are the ones defined by the business.
  4. There is minimal investment in shared platforms to support business projects.  Coordinated investment is replaced with piecemeal investment where the shared platform is only upgraded when the business is willing to attach the funding to a specific project.

That’s what a coupled portfolio looks like.  In my next post, I’ll talk about the implications of a decoupled portfolio.


UMT 360: It’s Here and It’s Awesome

We’re thrilled to announce the Release to Manufacturing and General Availability of UMT 360, the latest iteration of our flagship product offering.

For more information, please check out our official blog:

http://umtblog.com/2013/08/06/release-announcement-umt-360/

Watch this and other UMT spaces for more information over the coming months.  If you’re in the Houston neighborhood, come on by our shindig on August 22 to personally kick the tires.


Everything is Discretionary

In my last(ish) post, The Road to Portfolio Analytics, fellow Buckeye and overall great guy Prasanna asked the following question:

“Are you saying that by arriving at the costs of assets, and corresponding benefits, and a ratio of both, one could somehow arrive at common ground for prioritization? … If yes, I would like to disagree.  For example, if I am trying to decide whether to spend money buying apples or oranges, cost (my expense) and benefit (hunger taken care of) form only part of the equation. I think the real driver there is ‘Value’ (which is Satisfaction/Joy in this case). However, if it can be done, then that’s what the driver should be for project selection. And the prioritization should be based on the ‘Speed of value attainment.’”

I started responding, and it turned into this blog post.

Let Me Explain; No, There Is Too Much; Let Me Sum Up…

To rehash last week’s post a bit, here are a couple of points I made there:

  1. Work is associated with a specific asset or program. The priority of the work is then inherited (to an extent) from the asset, as are potential risks.
  2. The next question is how to rank specific assets or programs against each other. The easiest way is to quantify the value somehow, and then to map that value against the TCO for the asset.
  3. The real question is how to quantify a mandatory program such as Hunger, which in this case would be considered equivalent to a regulatory-mandated program, i.e. things we do to keep our executives out of jail.

Classifying the Asset And/Or Program

Thinking this through, the real question is how to prioritize within the pool of mandatory work versus within the pool of discretionary work. For example, say I have work that supports an asset, and that work is not mandatory. The asset drives financial benefits or value. I can also calculate the cost of the asset. This allows me to identify which assets merit further investment versus those that do not.

Mandatory work is different. In the example, we used “Hunger,” which I would say is equivalent to a regulatory-driven program. Sure, at this point the program may be considered mandatory, and the prioritization occurs a bit further down the work chain. That is, if we can define our minimum requirements well enough (per identified business drivers), we can then assess new work as it comes through the pipeline to see whether it gets us to the minimum acceptable standards or exceeds them. If it exceeds the basic standards, great… but we don’t want to pay for that if it takes investment away from other assets.

Assessing Mandatory Work

If we step back a level though, everything is discretionary. If the costs of a regulatory-mandated program exceed our ability to make a profit or to raise capital (say, in the case of a non-profit), then that would drive a decision to get out of the industry entirely and focus on more viable areas of business.  Mandatory programs need to be assessed against a higher level of benefits aggregation, i.e. the line of business which they specifically support.

Hence, while mandatory programs may not require the same level of rigor in prioritization as discretionary programs, they still sum up to the unavoidable costs of doing business, which must then be assessed against the overall strategic question of whether or not we want to be in that business. In a sense, it’s the same benefit-cost strategy, but applied as part of a more holistic approach.  Arguably, we would still need to track the TCO of these mandatory programs in order to drive continual improvement initiatives such as reducing the overall cost of meeting regulatory requirements.
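
In data terms, that just means aggregating mandatory program costs one level up before asking the benefit-cost question. A rough T-SQL sketch, with a hypothetical line-of-business table and invented figures:

-- Hypothetical lines of business and the programs that support them.
DECLARE @LOB TABLE (LineOfBusiness NVARCHAR(50), AnnualMargin MONEY);
INSERT INTO @LOB VALUES
    (N'Retail Banking', 40000000),
    (N'Payday Lending',  1500000);

DECLARE @Program TABLE (LineOfBusiness NVARCHAR(50), Program NVARCHAR(50), Mandatory BIT, AnnualCost MONEY);
INSERT INTO @Program VALUES
    (N'Retail Banking', N'AML compliance',        1, 6000000),
    (N'Retail Banking', N'Branch modernization',  0, 3000000),
    (N'Payday Lending', N'New state regulations', 1, 2200000);

-- Mandatory costs aren't prioritized program by program; they roll up to the line
-- of business, where margin minus unavoidable cost answers "do we stay in this business?"
SELECT L.LineOfBusiness,
       L.AnnualMargin,
       SUM(P.AnnualCost) AS MandatoryCost,
       L.AnnualMargin - SUM(P.AnnualCost) AS MarginAfterMandatory
FROM @LOB L
INNER JOIN @Program P ON P.LineOfBusiness = L.LineOfBusiness
WHERE P.Mandatory = 1
GROUP BY L.LineOfBusiness, L.AnnualMargin;

A negative MarginAfterMandatory is exactly the “get out of the industry entirely” signal described above.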


I would also contend that there’s no definitive ruling as to what constitutes mandatory vs. discretionary. In the Texas oil patch, there are plenty of examples of companies flouting regulations and accepting the fines instead of investing in compliance in the first place. Arguably, that’s driven by the same cost-benefit calculations discussed above.

In fact, I would further posit that labeling work as “mandatory” immediately prompts many organizations to cease doing due diligence and good governance on the work in question.  A mandatory designation is not a free pass to avoid governance… it’s merely a label that implies a higher level of benefit aggregation in the benefit-cost ratio.

…And Now For Some Navel Gazing

Of course, we could extend this metaphor to the breaking point.  If discretionary work can be measured against other discretionary work, and mandatory work can be assessed against the overall benefits of being in the business we are in, then truly reviewing “Hunger” as an example would imply a third category of work with its own unique prioritization mechanism: existential.  Needless to say, I’d prefer not to go down that philosophical rabbit hole.


The Road to Portfolio Analytics

I’m gradually working my way through Charles Betz’s excellent book, Architecture and Patterns for IT Service Management, and parts of it are resonating with me.  I’ve only gotten to the section on Demand Management, but it’s definitely starting to correlate with what I’m observing in our clients.  Let me see if I can put my own spin on it…

The basic gist is that demand comes in many different forms.  In IT, demand shows up as formal project requests, service requests (i.e. minor changes that don’t rise to the level of projects), and the output of various systems monitoring services.  The latter category in an IT setting would include all of the outputs of availability and capacity monitoring.  In a non-IT setting such as capital equipment, I would extend that to include the output of our maintenance scheduling systems, which spit out tickets for required maintenance.  As ITIL is really the extension of capital equipment management best practices to the relatively immature field of IT, that logic seems to make sense.

So that’s the work intake process – or, if you will, the sensing mechanism that determines what work the organization could do.  Let’s go to the other side of the intake process, the work queuing mechanism.  This is the viewpoint of the technician in the field, the person who must actually perform the work coming through the intake funnel.  In a perfect world, all of it is funneled into the same assignment queue.  That way, I can look at my queue and see all of the work assigned to me, whether it originated as a project task, a service request, or the output of a maintenance system.

In a perfect world, all of that work is prioritized – either per work item or through some sort of prearranged prioritization mechanism – and every time I finish a task, I can go back into my queue and determine the next task in priority order.  I might also throw other characteristics into the task selection, such as how long the task would take to perform.  If I only have an hour, I’ll pick the next task that can be completed within a single hour.
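
In a consolidated queue, “what do I work on next?” collapses into a one-line query.  A minimal sketch, assuming a hypothetical unified queue table rather than any real ticketing schema:

-- Hypothetical consolidated queue: project tasks, tickets, and maintenance in one place.
DECLARE @Queue TABLE (WorkItem NVARCHAR(100), Source NVARCHAR(20), Priority INT, EstimatedHours DECIMAL(4,1));
INSERT INTO @Queue VALUES
    (N'Patch production server', N'Ticket',      1, 0.5),
    (N'Draft design document',   N'Project',     2, 6.0),
    (N'Quarterly pump check',    N'Maintenance', 3, 1.0);

-- The next task: highest-priority item that fits in the time I have available.
DECLARE @HoursAvailable DECIMAL(4,1) = 1.0;

SELECT TOP 1 WorkItem, Source, Priority, EstimatedHours
FROM @Queue
WHERE EstimatedHours <= @HoursAvailable
ORDER BY Priority;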

The reality is that this rarely happens.  In the organizations we see, there are fragmented intake and queuing mechanisms.  As a result, we have different work streams that end up in different queues, tracked with different metrics, but assigned to the same individual.  As a member of IT, I will have tasks assigned in the ticketing queue, the release queue, and the project task queue.  Each of those tasks is competing for my bandwidth.

This takes us to the holy grail of enterprise work management.  In essence, the goal of an organization, IT or otherwise, whether it realizes it or not, is to centralize all of that demand in one place, prioritize it, and then spit it out the other end into an appropriate work management system.  I won’t say a consolidated work management system, as that may not make sense – especially when I have resources that are dedicated to performing preventive maintenance and can easily live within a single queue.  When I have resources that are pulled across multiple queues, however, a more rationally designed work management system is required.  (More on that in another post.)


…which leads us to the logical fallacy that has shaped the world of portfolio management for years.  There’s been an underlying assumption that we can take all of the work that comes through the demand intake and prioritize it against other work.  That’s the fundamental premise of many PPM systems, i.e. that I can throw all of my projects into a single bucket, define their priorities, and then map the priorities against the cost to optimize my portfolio.  As long as I am prioritizing similar work streams, this more or less works.

The problem comes when I try to compare apples and oranges, when I try to compare a project to support Application A to a ticket to support Application B.  At that point, the portfolio optimization breaks down.  The inevitable result is either a logjam of work that kills the adoption of the PPM system, or a regression to the multiple siloed work intake and management systems with their own on board prioritization and fragmented queuing systems.

Enter the world of portfolio analytics.  In portfolio analytics, we’re not looking to prioritize individual work, but instead, we’re looking to tie that work to a specific organizational asset.  In IT, each project, ticket, or release supports an application, which in turn, supports a service.  In a non-IT scenario in Oil and Gas, for instance, each project or ticket or release supports an asset such as a rig or well, which then can be quantified in terms of production and profit.  If I can identify the business criticality of the service, then I can assess the priority of each element of work in supporting the service, and therefore derive a cohesive, comprehensive framework for work prioritization.  I don’t look at the individual work items, but instead at the work in aggregate in terms of the assets it supports.

The first step in performing this analysis is to map the costs of the work to the assets.  While that sounds simple, it gets complicated when you throw in the fact that we have to model shared services, work that supports multiple assets, outsourced and insourced models, etc.  By mapping the relationship between our logical work entities and our logical assets, we can identify the total cost of ownership of the asset.

The next step is defining the value of the asset, whether in quantitative profitability terms or in qualitative benefits.  Once the benefit-cost ratio can be determined, that prioritization can be fed back into our demand intake structure – provided each of the demand entities can be coded appropriately back to a prioritized asset, either in financial or material terms.  This gets us that much closer to being able to prioritize all work that comes into the system… prioritization through association.
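
Strung together, the two steps look something like the sketch below – a hypothetical asset register and work log, not a real reporting schema: roll work costs up to the asset to get TCO, divide value by TCO for a benefit-cost ratio, and let every work item inherit the ranking of the asset it supports.

-- Hypothetical asset register: each asset carries a quantified annual value.
DECLARE @Asset TABLE (Asset NVARCHAR(50), AnnualValue MONEY);
INSERT INTO @Asset VALUES
    (N'Well Pad 12',   5000000),
    (N'Application A',  800000);

-- Hypothetical work log: every project, ticket, or release is coded to an asset.
DECLARE @Work TABLE (WorkItem NVARCHAR(50), WorkType NVARCHAR(20), Asset NVARCHAR(50), Cost MONEY);
INSERT INTO @Work VALUES
    (N'Compressor upgrade', N'Project', N'Well Pad 12',   900000),
    (N'Valve ticket #4411', N'Ticket',  N'Well Pad 12',    15000),
    (N'v2.3 release',       N'Release', N'Application A', 400000),
    (N'Login defect',       N'Ticket',  N'Application A',   5000);

-- Step 1: TCO per asset.  Step 2: benefit-cost ratio.  Work inherits the asset's rank.
SELECT W.WorkItem,
       W.WorkType,
       A.Asset,
       SUM(W.Cost) OVER (PARTITION BY A.Asset) AS AssetTCO,
       A.AnnualValue / SUM(W.Cost) OVER (PARTITION BY A.Asset) AS BenefitCostRatio
FROM @Work W
INNER JOIN @Asset A ON W.Asset = A.Asset
ORDER BY BenefitCostRatio DESC;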


Taking Resource Plans to a Whole New Dimension

How does one track resource allocation?  What does resource “allocation” even mean?  These are the sorts of questions that project managers often struggle with when mandated to account for resources within an enterprise project management tool.  On the surface, this would seem like a simple problem: I define the tasks in sufficient detail to support resource estimates, slap a couple of resources on said tasks, and call it a day.

The question arises, however: what about consultants?  What if I have consultants dedicated to my project who charge a specific daily or hourly rate?  In that case, I can pretty much assume that I will be paying these consultants for 40 hours/week over the lifetime of the project.  The question these PMs typically ask me is how we should track these consultants.  Do we track them at a set 40 hours/week, because that’s what we’re budgeting them at?  Or do we track them at the actual number of hours we’ve got them assigned to tasks in any given week?


The correct answer is that you should be tracking both.  This is not an either-or situation.  Instead, we’re looking at two different dimensions of the resource puzzle.  One dimension is the budget for the resources, either in dollars or in hours.  The other dimension is the allocation of those resources, which is typically measured in hours.

Think of the question more in these terms: given that I’ve budgeted for 100% of this consultant, why do I only have them allocated for the next four weeks at 65% of their availability?  Does this mean that they’re really working on unplanned work for the remaining 35% of their time, or does it mean that they’ll be sitting around waiting for the work to proceed through the queue?  Why would an expensive consultant be working on unplanned work in the first place?

For employees who are not dedicated to a project, this may or may not be a problem, as it is typically assumed the employee is returning to their day job to support the organization.  Hence, their resource allocation really does represent the true cost of the project.  If the employee’s costs are carried in whole or in part by the project, though, this could become an issue.

The Resource Plan as the Budgeted Dimension

Project Server gives us the ability to track budgeted work through the much-maligned resource plan.  To use the resource plan, I simply navigate to the project within PWA and add resources.  (Microsoft has posted a lot of guidance on the use of resource plans, so I won’t rehash it here.)


In the resource plan, I am assigning resources by FTE by quarter.  That’s the cost that will hit my project budget.

The Project Plan as the Allocated Dimension

I then allocate the resources to specific tasks within the project plan.  This allocation may or may not reach 100% of availability.  In fact, I would generally recommend an 80% target, as that allows for some schedule buffer.


Comparing the Two

Unfortunately, there’s nothing native within Project Server that compares the resource plan and project plan values.  There are, however, some quick reports that can be generated with relatively simple SQL, such as the query below:

-- Compare budgeted hours (from the resource plan) to allocated hours
-- (from project plan task assignments) by project, resource, and day.
SELECT
    P.ProjectName,
    TBD.TimeByDay,
    R.ResourceName,
    ABD.AssignmentResourcePlanWork AS Budgeted,   -- hours budgeted in the resource plan
    ABD.AssignmentWork AS Allocated,              -- hours assigned to tasks in the project plan
    -- Hours budgeted but not yet assigned to any task
    CASE WHEN ABD.AssignmentResourcePlanWork > ABD.AssignmentWork
         THEN ABD.AssignmentResourcePlanWork - ABD.AssignmentWork
         ELSE 0
    END AS Unallocated
FROM dbo.MSP_EpmAssignmentByDay_UserView ABD
INNER JOIN dbo.MSP_EpmAssignment_UserView A ON ABD.AssignmentUID = A.AssignmentUID
INNER JOIN dbo.MSP_EpmResource_UserView R ON A.ResourceUID = R.ResourceUID
INNER JOIN dbo.MSP_EpmProject_UserView P ON A.ProjectUID = P.ProjectUID
RIGHT OUTER JOIN dbo.MSP_TimeByDay TBD ON ABD.TimeByDay = TBD.TimeByDay
WHERE R.ResourceName IS NOT NULL    -- drop calendar days with no assignments
  AND TBD.TimeByDay > GETDATE()     -- future days only

Throw that into an ODC and open up Excel, and you can generate a pivot of budgeted, allocated, and unallocated hours by resource and week…


…which, if I were managing that PMO, would indicate that we’re either underutilizing expensive external labor or that my project managers aren’t adequately planning the tasks those resources will be performing.

Throwing Financials into the Mix

That accounts for the hours assigned to a resource.  How do we convert all of that back into dollars for incorporation into the larger financial picture?  Check out this feature from UMT’s flagship product, UMT 360: I can import my resource plan back into my budget to bring my financial estimates that much closer to my resource estimates.



Financial Governance Made Simple

Readers of this blog are probably familiar, at least in concept, with UMT’s flagship product, Project Essentials.  You’ve probably heard the elevator pitch and are aware that it has something to do with financial governance.  Maybe you’ve seen the demos.  Maybe you’re currently using it.  But what does financial governance actually mean in the context of project portfolio management?

In a previous post, I talked about how Project Essentials can complement your corporate accounting system.  In this post, I want to introduce a couple of fundamental concepts related to financial governance as it is implemented within the Project Essentials tool.  My hope is that this will demystify some of what’s under the hood and give my readers a better entry point for conceptualizing the capabilities this software can enable.

Enterprise Financial Types

The fundamental building block of the solution is the Enterprise Financial Type, or EFT.  An EFT is a collection of financial settings and dimensions that an organization wants to track.  For example, Cost is an EFT.  Cost is typically tracked in terms of the approved budget, current estimates, and my actual cost to date.


But what about financial benefits?  Benefits can also be tracked in terms of the same dimensions, i.e. approved benefits, current benefit estimates, and my actual benefits to date.  Benefits are typically tracked along a different timeline than Costs, though, and as such we would treat Benefits as another EFT.

With Project Essentials, you can create as many EFTs as you require.  Cash flow, contracted costs, resource capacity planning… all of these can be created and tracked in terms of the original estimate, the current forecast, and actuals to date.

Financial Dimensions

How do we break down the EFT into more detail?  As discussed above, one of the ways we break down the EFTs is through the use of Financial Dimensions, the standard ones being original budget, current forecast, and actuals to date.


We can easily add dimensions as well.  For example, let’s say I have two levels of approved budget: the version that’s entered in SAP for work that is part of my annual budgeting cycle, and the version that is not in SAP, as the project was not allocated funds from the annual planning cycle.

In this case, I can create a new financial dimension.  I can have one that captures the allocated funds within SAP, and another that tracks the amount approved for the project, i.e. the unplanned approved budget.  For these projects, the Cost EFT now has four dimensions: Planned Budget, Unplanned Budget, Current Estimate, and Actuals to Date.


Financial Trees

So far, so good.  We have Financial Types and Financial Dimensions; how do we break things down even further?  I can capture each of those financial dimensions within each time period, per an assigned Cost Breakdown Structure.  For instance, I can capture my approved CapEx and OpEx for May 2013 in the budget dimension and compare them to the matching CapEx and OpEx entries in my forecast estimate for the same time period.


The above example is a simple one.  We regularly work with organizations that define their cost breakdown structure to a far more detailed level, with hundreds of specific nodes to track cost against.
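
To make the layering concrete, here’s a purely illustrative sketch of how time-phased financials might be modeled – hypothetical tables and figures, emphatically not Project Essentials’ actual data model.  One row per EFT, dimension, CBS node, and period makes the budget-versus-forecast comparison a simple pivot:

-- Illustrative only: EFT -> dimension -> CBS node -> time period -> amount.
DECLARE @Financials TABLE (
    EFT NVARCHAR(20),         -- e.g. Cost, Benefits
    Dimension NVARCHAR(20),   -- e.g. Budget, Forecast, Actuals
    CBSNode NVARCHAR(10),     -- e.g. CapEx, OpEx
    FiscalPeriod CHAR(7),     -- e.g. '2013-05'
    Amount MONEY
);
INSERT INTO @Financials VALUES
    (N'Cost', N'Budget',   N'CapEx', '2013-05', 120000),
    (N'Cost', N'Budget',   N'OpEx',  '2013-05',  45000),
    (N'Cost', N'Forecast', N'CapEx', '2013-05', 135000),
    (N'Cost', N'Forecast', N'OpEx',  '2013-05',  41000);

-- Budget vs. forecast by CBS node for a single period.
SELECT CBSNode,
       FiscalPeriod,
       SUM(CASE WHEN Dimension = N'Budget'   THEN Amount ELSE 0 END) AS Budget,
       SUM(CASE WHEN Dimension = N'Forecast' THEN Amount ELSE 0 END) AS Forecast
FROM @Financials
WHERE EFT = N'Cost' AND FiscalPeriod = '2013-05'
GROUP BY CBSNode, FiscalPeriod;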

Everything Else

…which brings us to workflow.  Project Essentials takes all of the above and ties it to your project development workflow, so we can define what level of granularity is required at each workflow stage, whether estimates should be entered in multiple currencies and aggregated into Euros, when the budget should be locked down and subject to a change approval process… or any of a host of other financial governance activities that take place routinely in a project lifecycle.


That, in a nutshell, is the financial governance we enable: simple concepts adapted to match the complexity of real-world operations.

For more information, or to catch one of our Webinars, check out the following link:

http://umt.com/en-us/solutionsandproducts.aspx
