Does Your Project Matter?

At the recent PMO Symposium, I attended a couple of sessions on reporting and metrics, and each session mentioned something in passing that I thought was quite relevant.  The basic gist was that at the strategic level of the organization, only 10 or so projects really matter.  Everything else is just noise.

Not only that, but people within the organization should be able to list those top 10 projects – and they should know that if there is any resource contention between those 10 projects and anything not on the list, the top projects will win.  If you can achieve this within your organization, you’re doing well in terms of communicating your values.

From an executive dashboard perspective, incorporate a mechanism to flag those projects, and provide a high-level overview of how they’re doing.  As for all of the other projects within the organization…aggregate them up to a more meaningful entity: a service, asset, program, or strategic theme.


Taking Resource Plans to a Whole New Dimension

How does one track resource allocation?  What does resource “allocation” even mean?  These are the sorts of questions that project managers often struggle with when mandated to account for resources within an enterprise project management tool.  On the surface, this would seem like a simple problem:  I simply define the tasks in sufficient detail to support resource estimates, slap a couple of resources on said tasks, and call it a day.

The question arises, however: what about consultants?  What if I have consultants dedicated to my project who charge a specific daily or hourly rate?  In that case, I can pretty much assume that I will be paying these consultants for 40 hours/week over the lifetime of the project.  The question these PMs typically ask me is: how do we track these consultants?  Do we track them at a set 40 hours/week – because that’s what we’re budgeting them at?  Or do we track them at the actual number of hours we’ve got them assigned to tasks in any given week?


The correct answer is that you should be tracking both.  This is not an either/or situation.  Instead, we’re looking at two different dimensions of the resource puzzle.  One dimension is the budget for the resources – either in dollars or in hours.  The other dimension is the allocation of those resources – which is typically measured in hours.

Think of the question more in these terms: given that I’ve budgeted for 100% of this consultant, why do I only have them allocated for the next four weeks at 65% of their availability?  Does this mean that they’re really working on unplanned work for the remaining 35% of their time, or does this mean that they’ll be sitting around waiting for the work to proceed through the queue?  Why would an expensive consultant be working on unplanned work in the first place?
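
To make the two dimensions concrete, here’s a minimal sketch of the comparison – the weekly figures are hypothetical, not pulled from any Project Server API:

```python
# Sketch: compare budgeted vs. allocated hours per week for a consultant.
# In practice, budgeted hours come from the resource plan and allocated
# hours from task assignments in the project plan.
BUDGETED_HOURS_PER_WEEK = 40  # 100% of a dedicated consultant

allocated = {"Week 1": 40, "Week 2": 26, "Week 3": 26, "Week 4": 26}

def unallocated_hours(allocated_by_week, budget=BUDGETED_HOURS_PER_WEEK):
    """Hours we're paying for but haven't planned against tasks."""
    return {week: max(budget - hours, 0)
            for week, hours in allocated_by_week.items()}

gaps = unallocated_hours(allocated)
# Weeks 2-4 are allocated at 65% (26/40), leaving a 14-hour gap each week.
```

The interesting output isn’t the number itself – it’s the conversation it forces about what the consultant is actually doing with those unplanned hours.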

For employees who are not dedicated to a project, this may or may not be a problem – as it is typically assumed the employee is returning to their day job to support the organization.  Hence, their resource allocation really does represent the true cost of the project.  If the employee’s costs are carried in whole or in part by the project, though, this could become an issue.

The Resource Plan as the Budgeted Dimension

Project Server gives us the ability to track budgeted work through the much-maligned resource plan.  To use the resource plan, I simply navigate to the project within PWA and add resources.  (Microsoft has posted a lot of guidance on the use of resource plans, so I won’t rehash it here.)

[screenshot]

In the example above, I am assigning resources by FTE by quarter.  That’s the cost that will hit my project budget.

The Project Plan as the Allocated Dimension

I then allocate the resources to specific tasks within the project plan.  This allocation may or may not reach 100%.  In fact, I would generally recommend an 80% target, as that allows for some schedule buffer.


Comparing the Two

Unfortunately, there’s nothing native within Project Server that compares the resource plan and project plan values.  There are, however, some quick reports that can be generated with relatively simple SQL, such as the query below:

SELECT
    P.ProjectName,
    TBD.TimeByDay,
    R.ResourceName,
    ABD.AssignmentResourcePlanWork AS Budgeted,
    ABD.AssignmentWork AS Allocated,
    CASE WHEN ABD.AssignmentResourcePlanWork > ABD.AssignmentWork
         THEN ABD.AssignmentResourcePlanWork - ABD.AssignmentWork
         ELSE 0 END AS Unallocated
FROM dbo.MSP_EpmAssignmentByDay_UserView ABD
INNER JOIN dbo.MSP_EpmAssignment_UserView A ON ABD.AssignmentUID = A.AssignmentUID
INNER JOIN dbo.MSP_EpmResource_UserView R ON A.ResourceUID = R.ResourceUID
INNER JOIN dbo.MSP_EpmProject_UserView P ON A.ProjectUID = P.ProjectUID
RIGHT OUTER JOIN dbo.MSP_TimeByDay TBD ON ABD.TimeByDay = TBD.TimeByDay
WHERE R.ResourceName IS NOT NULL
    AND TBD.TimeByDay > GETDATE()

Throw that into an ODC file and open it in Excel to generate something like the following report:

[screenshot]

…which, if I were managing that PMO, would indicate that we’re either underutilizing expensive external labor, or my project managers aren’t adequately planning the tasks that the resources will be performing.

Throwing Financials into the Mix

That accounts for the hours assigned to a resource.  How do we convert all of that back into dollars for incorporation into the larger financial picture?  Check out this feature from UMT’s flagship product, UMT 360: I can import my resource plan back into my budget to map my financial estimates that much more closely to my resource estimates.



More Timephased Reporting with OData

I realized that yesterday’s post may have given the incorrect impression that you need to go through all sorts of mind-bending contortions to get timephased reporting out of Project Online.  The reality is that you’d only have to limber up if you wanted to do reporting like I showed yesterday, i.e. a detailed view of all projects and capacity by resource and time.

[screenshot]

The reporting gets a lot simpler when you’re looking for less data.

[screenshot]

For example, if I don’t want to parcel out each project separately, I could use the same set of feeds I created in yesterday’s post:

Resources:

http://demo/pwa/_api/projectdata/Resources()?$select=ResourceId,ResourceName

Assignments (or Demand):

http://demo/pwa/_api/projectdata/AssignmentTimephasedDataSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=ResourceId,ProjectName,TimeByDay,AssignmentCombinedWork

Capacity:

http://demo/pwa/_api/projectdata/ResourceTimephasedDataSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=ResourceId,TimeByDay,Capacity

…from there, I would need to add an additional query to get the TimeByDay dimension:

http://demo/pwa/_api/projectdata/TimeSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=TimeByDay
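
If you’d rather not hand-edit those query strings, here’s a small sketch of how the feed URLs above could be assembled for a given reporting year – the base URL and feed names are the ones from this post’s demo environment:

```python
# Sketch: build the OData feed URLs used above for a given reporting year.
BASE = "http://demo/pwa/_api/projectdata"

def year_filter(year):
    """OData $filter clause restricting TimeByDay to one calendar year."""
    return ("TimeByDay ge datetime'{0}-01-01T00:00:00' "
            "and TimeByDay lt datetime'{1}-01-01T00:00:00'").format(year, year + 1)

def feed_url(feed, select, year=None):
    """Compose feed()?$filter=...&$select=... the way the feeds above do."""
    url = "{0}/{1}()".format(BASE, feed)
    params = []
    if year is not None:
        params.append("$filter=" + year_filter(year))
    params.append("$select=" + ",".join(select))
    return url + "?" + "&".join(params)

timeset = feed_url("TimeSet", ["TimeByDay"], year=2013)
```

Parameterizing the year also makes it trivial to regenerate all four feeds when the reporting window rolls forward.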

Put those together in PowerPivot, and you should have a data model that looks like this:

[screenshot]

To simplify the charting process, I’ll add a field to the TimeSet table summarizing the data by month:

[screenshot]

From there, it’s just a simple matter of creating a PivotTable and a bar chart:

[screenshot]

Moral of the story: simplify the reports to reduce the effort required to create them.  This report was pretty easy to develop, but doesn’t have all of the individual projects displayed in the stacked bar chart.

To simplify things, I can add that detail as a second chart on the same worksheet using the data feeds we already created.  Overall, I’d argue that we don’t lose a whole lot in terms of experience by reducing the information density of the overall dashboard.



Timephased Capacity Reporting with OData

I’m catching up on blog comments after a couple of weeks of vacation and post-vacation wheel spinning.  One of the comments that caught my eye was Bram de Vries’ comment on a previous reporting post:

Do you think I need to jump through all these kinds of hoops to create a report where timephased assignment work is compared with timephased capacity for (filtered) list of resources?

I admit, my initial response was that it should be quite easy: simply pull a couple of tables together in PowerPivot and work up a report.  As I progressed through the mental exercise though, I found it a bit harder than I anticipated.  So Bram, I agree with you: there are a couple of hoops to jump through to get a standard timephased capacity report out of Project Online.

That being said, it is possible.  I’m not sure I’d want to do this in a real production environment, but I suppose it’s a good lesson.

The Challenge

So the challenge I set for myself was to see if I could create a stacked bar chart showing projects and capacity using OData – essentially recreating the Resource Center view.

[screenshot]

The first question you should be asking is “Why? Why would I recreate out-of-the-box functionality?”  The general answer I would give is that the Resource Center is dumb.  That’s not dumb as in poorly designed; it’s dumb because it ignores nuance.  For instance, I might have a bunch of projects in discovery that I want to filter out of this view – or I might have cancelled and archived a couple of projects.  At present, there’s no real way to be selective about which projects appear in this view.

Hence, it’s actually quite common to develop custom views that emulate the out of the box Resource Center functionality.

Without OData…

Before OData, you would have written a query against the SQL databases to get this data.  There are a couple of ways to do this, but the easiest method I’ve found is to treat Capacity as a project in its own right, i.e. to use a UNION ALL command to join project and capacity data.

That query would look something like this:

[screenshot]

With OData…

The problem I found with OData is that it’s quite difficult to get the equivalent of a UNION ALL command to work.  Essentially, you have to create a master table with all of the possible rows – i.e. all of the resources and days – and then map either Capacity or Demand to each of the respective rows.

That master table is tough.  I tried it in Excel 2013 on a reasonably powered PC and kept running into performance limitations – probably because I was trying to create a table of 100 projects × 140 resources × 365 days (in 2013), yielding a master table of approximately 5 million rows.  No matter how I tried, I couldn’t get it to work.

Hence the hack.  What I realized is that I just needed to use Project Server and hack the source data by adding rows to the Assignments table that could hold capacity calculations for each resource on each of the days I’d like to analyze.  That reduces the total number of extra rows to 140 resources × 365 days = 51,100 rows – in addition to the existing assignment rows.
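
The arithmetic behind the two approaches is worth spelling out:

```python
# Quick arithmetic behind the two approaches described above.
projects, resources, days = 100, 140, 365

full_cross_join = projects * resources * days   # every project x resource x day
capacity_rows   = resources * days              # one "Capacity" row per resource per day

# full_cross_join is the ~5 million rows that choked PowerPivot;
# capacity_rows is the far more tractable hack.
```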

What’s the best way to create those rows in the Assignments table?  By creating fake assignments for each of the resources.  To do this, I created a new enterprise project called “Capacity.”  On this project, I added a single task and set its duration to one year.


I then assigned every resource in my resource pool to the task.


To keep these assignments out of the Resource Center, I go into the Resource Usage view and flag everything as Proposed.


Publish the project and fire up Excel.  Feel free to take a moment and peruse some of my other posts on reporting against OData if you’re not entirely comfortable with PowerPivot quite yet.

Building the Data Connections

At a minimum, we’re going to need the following three OData feeds:

  • Resources:

http://demo/pwa/_api/projectdata/Resources()?$select=ResourceId,ResourceName

  • Assignments (or Demand):

http://demo/pwa/_api/projectdata/AssignmentTimephasedDataSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=ResourceId,ProjectName,TimeByDay,AssignmentCombinedWork

  • Capacity:

http://demo/pwa/_api/projectdata/ResourceTimephasedDataSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=ResourceId,TimeByDay,Capacity

You’ll note that I am filtering the timephased data to return only 2013 data; with OData, it’s important to take performance into consideration and minimize the sheer amount of data imported.

Building the Data Model

To build the data model, I borrowed a trick from this blog post by Kaspar de Jonge and created my own merge column.

For the AssignmentTimePhasedDataSet, I created a new field called “Merge” and set it to calculate with the following formula:

=[ResourceId]&[ProjectName]&[TimeByDay]


For the ResourceTimephasedDataSet, I created a new field, also called “Merge” and set it to calculate with a different formula:

=[ResourceId]&"Capacity"&[TimeByDay]


From there, it’s a simple matter of building the relationships between the three tables:

[screenshot]

We need a custom column that swaps out the work calculation for our Capacity project and replaces it with the appropriate capacity values from the ResourceTimephasedDataSet.  I create a new column called “Work” in the AssignmentTimePhasedDataSet and assign it the following formula:

=IF([ProjectName]<>"Capacity",[AssignmentCombinedWork],RELATED(ResourceTimephasedDataSet[Capacity]))


This formula displays the appropriate work value – except for the Capacity project, where it pulls the corresponding row from the ResourceTimephasedDataSet.  The RELATED function is basically playing the role of a VLOOKUP.
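
To see what the Merge columns and RELATED are doing, here’s a sketch that emulates the lookup with plain Python dictionaries – the rows are hypothetical:

```python
# Sketch: emulate the "Merge" key and RELATED() lookup with dictionaries.
assignments = [
    {"ResourceId": "R1", "ProjectName": "Alpha",    "TimeByDay": "2013-06-03",
     "AssignmentCombinedWork": 6.0},
    {"ResourceId": "R1", "ProjectName": "Capacity", "TimeByDay": "2013-06-03",
     "AssignmentCombinedWork": 8.0},
]
# Keyed the same way as the ResourceTimephasedDataSet merge column:
capacity = {"R1" + "Capacity" + "2013-06-03": 7.5}

def work(row):
    """Mirror of the DAX: real projects keep their own work; the fake
    'Capacity' project looks up the related capacity row instead."""
    if row["ProjectName"] != "Capacity":
        return row["AssignmentCombinedWork"]
    key = row["ResourceId"] + row["ProjectName"] + row["TimeByDay"]
    return capacity[key]

works = [work(r) for r in assignments]
```

The concatenated key is doing exactly what the calculated Merge columns do in PowerPivot: manufacturing a single-column join where no natural key exists.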

To minimize the charting overhead, I’ll also simplify the date field to render by month.

=[TimeByDay]-Day([TimeByDay])+1
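
In date terms, the formula above subtracts the day-of-month and adds one, landing every date on the first of its month – a quick sketch:

```python
# Sketch of the month-rollup formula: =[TimeByDay]-Day([TimeByDay])+1
from datetime import date, timedelta

def month_bucket(d):
    """Roll any date back to the first of its month."""
    return d - timedelta(days=d.day - 1)
    # equivalently: d.replace(day=1)
```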


…And Now the Report….

Back in Excel, create a PivotTable using the following settings:

[screenshot]

…which will look like this:

[screenshot]

Turn that into a stacked bar chart, right-click on the Capacity series and convert it into a line chart, and you’ll get something like this:

[screenshot]

Add some slicers and you’ll get something a bit nicer:

[screenshot]

…although the Capacity line keeps reverting to a bar whenever a filter is applied – which is kind of annoying.  This goes back to something I read in someone’s blog post while researching this, which is starting to resonate with me – and to validate Bram’s comment from the beginning of this post: PowerPivot is really designed for the ad hoc analyst pulling data together periodically.  It still has some way to go before it’s an enterprise tool for creating stable, repeatable reports.

After going through this exercise, my modified response to the initial comment is: sure, it’s possible to do timephased capacity reporting, but you’d be much better off skipping capacity and sticking to the demand part of the equation.  That would be quite easy to do – and would require none of the hoop jumping I just detailed above.


The Release Manager and the Maintenance Scheduler

Teach scheduling to any class of IT project managers, and the number one request is invariably “How do I make my schedules more predictable?  How do I accurately predict important events such as application releases?”  There are really two answers to that question – and they’re not mutually exclusive:

  1. Only use dedicated resources on your projects.  It’s the interplay between break-fix work and new builds that dooms project schedules in your “normal” IT department.
  2. Incorporate buffer into your schedules.  Buffer reduces volatility in date predictions.

Needless to say, option #1 is a nonstarter in most organizations.  The challenge with the latter is that it requires an organizational shift.  It requires different communication techniques and different reporting.  Instead of reporting on a forecast release date, I report on a target release date and the probability that I might actually hit it.  That’s the cultural shift required to move towards more predictable scheduling.

Nowhere in IT is this cultural shift more evident than in the relationship between the project manager and the release manager…

What Do You Mean I Can’t Release Tomorrow?

Release managers are incredibly annoying with their change processes, their release reviews, and their insufferable insistence on having actual documentation.  They represent the worst of organizational process and present an obstacle to getting my application into production – to letting me finally wash my hands of my current project and move on to something new and interesting.


Not only that, but I think they only care about increasing the costs of any project I’m on – whether by requiring an inordinate number of hoops to jump through to get an approved release window, or by forcing my team to spin cycles performing extra validation and testing on our application.  I mean, we know it will work.  It worked in Dev, right?

These are natural sentiments for a project manager.  Over the months and years of a project, the project manager learns to look out for his project.  He becomes focused on his project.  And as he knows, his project is always far more important, far more critical to the organization at large, than any other project within the portfolio.

Look to Big Oil for the Answers

A couple of years ago, I had the opportunity to dip my toes into the waters of oilfield maintenance scheduling.  It was actually my first engagement upon moving to the Houston oil patch.  For those of you not familiar with maintenance scheduling, it looks something like this:

  1. Whenever a new facility or piece of equipment is commissioned, a series of maintenance routines are loaded into a Computerized Maintenance Management System (CMMS).  These routines might call for quarterly inspections, monthly cleaning procedures, filter changes, belt changes, lubrication, etc.  Think of your local Starbucks, and all of the equipment they must maintain – and then multiply that by all of the Starbucks in your market, and you’ll get the picture.
  2. The CMMS spits out tickets whenever they’re ready to be worked.  This feeds into the scheduled maintenance backlog.  Each time a ticket is created, a timestamp is recorded, and the age of the tickets begins to be tracked.  I don’t know what a “typical” backlog is, and assume that varies by industry, but figure a backlog of anywhere from 6 months to several years is reasonable – depending on the priority of the ticket and the industry.
  3. Those tickets go to the various work crews that do the work.  As they have a set of specialized skills and equipment, the same work crew would be responsible for supporting multiple facilities throughout the oilfield.  Going back to our Starbucks example, these are the tier 2 support team responsible for the hard-core equipment maintenance (assuming such a team exists – I know next to nothing about coffee house logistics).  It’s up to the work crew to optimize their schedule so that they can meet their performance metrics.
  4. Enter the facility operator role.  The facility operator is in charge of the safe and productive operation of his facility.  His job is to ensure the facility meets its production goals and reduces or eliminates any reportable health, safety or environmental incidents.  The facility operator has the authority to turn away a work crew that arrives on any given day if the work they are to perform is considered too dangerous – i.e. a simultaneous operations (SimOps) issue, where one crew is performing work in close physical proximity to another crew.  For example, one crew might be welding while another crew is venting flammable gasses right next door.  One crew might be slinging steel overhead while another crew is swapping out equipment belts below.  For obvious reasons, SimOps issues are to be closely monitored and avoided at all costs.  Turnaway metrics are also monitored, as they entail cost: the work group geared up, schlepped out to the facility, and then got turned away – potentially killing an entire morning or a full day of productivity.
  5. Hence a lot of organizations have moved to a system where the CMMS tickets are pushed into a scheduling optimization system.  Trained schedulers then review all of the tickets and assign target dates based on priority, geographic proximity, risk and other factors.  They “bundle” maintenance by the same crew in the same area to ensure increased productivity.  This system optimizes maintenance ticket throughput, reduces turnaways (inefficiencies), and mitigates risk both of individual harm and the systemic risk of a high maintenance backlog.

In essence, what we have here is a number of systems being optimized for their own goals:

  • The CMMS is identifying the tickets that must be performed and tracking the aging metrics.
  • The work teams are focusing on their own daily marching orders.
  • The facility operators are focusing on the safe operations of their facility.


The schedulers’ role is to balance the needs and wants of those groups with the overall priorities of the organization – which in an oilfield is typically mitigating risk of both reportable incidents and production stoppage.

And IT Folks Should Care Because….

Release management is essentially the same series of systems, writ small.  The project manager is focused on the local optima, getting that release out the door as soon as possible.  The release manager is looking at all of the releases coming down the pipeline and optimizing across the entire infrastructure.  Everyone is just performing their natural role in the system, and the release managers represent the organizational check on a project manager’s native optimism.

Generally, the way we see this playing out is that the project manager, through the development of the schedule model, creates a prediction of when the application will be ready for release.  That date is then negotiated with the stakeholders and the release manager to pin down an approved release window.


That release window is then inserted back into the schedule as a target or constraint on the actual release event.  Mechanically, within Microsoft Project, that looks something like this:

[screenshot]

See how I’ve inserted the release window as a Release Target – which is basically a repurposed Deadline field?  I then add a Start No Earlier Than (SNET) constraint on the successor tasks.  In effect, this adds buffer to my schedule, as I can now track the Total Slack on the Release Activity.  The lower the Total Slack, the greater the chance that I’ll miss my release window.
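
The buffer mechanics reduce to simple date arithmetic – a sketch with hypothetical dates:

```python
# Sketch of the buffer mechanics: the gap between the forecast finish and
# the negotiated release window is the schedule buffer, and the remaining
# slack tells you how close you are to missing the window.
from datetime import date

forecast_finish = date(2013, 9, 6)    # what the schedule model predicts
release_window  = date(2013, 10, 4)   # the window negotiated with the release manager

total_slack_days = (release_window - forecast_finish).days  # 28 days of buffer

# As the forecast slips, slack erodes; at or below zero, the release
# window is at risk and it's time to renegotiate with the release manager.
window_at_risk = total_slack_days <= 0
```

Tracking this one number over time is what turns a point-estimate release date into a probability-of-hitting-the-window conversation.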

I’d point out that adding buffer is nothing specific to IT project management.  I’ve recommended a similar approach to predicting pipeline completion dates in a drilling scenario.  That helps us avoid the dreaded WOPL, where the well is completed, but Waiting on Pipeline.

…Bringing it Full Circle…

So, what we have in the interplay between project and release management is the same interplay we see in oilfield maintenance.  We’ve got two different systems that interact but are focused on different optima.  The project system is focused on getting the release out as soon as possible.  The release system is focused on ensuring the enterprise infrastructure doesn’t break.  The goal then is to incorporate into our project scheduling the feedback to and from the release management process.



EPM Bingo, or A Structured Approach to PPM Analysis

One of my colleagues used to explain to me that there are three kinds of people who attend any meeting:

  • There are those who feel an emotional need to share how they’re feeling about the topic.
  • There are those who need a specific list of assigned tasks to come out of the meeting.
  • And lastly, there are those who don’t feel the meeting is complete until they’ve walked up to the whiteboard, drawn a couple of circles and squares, and put an intellectual framework around the topic at hand.

This post is for the latter group.  It comes out of some recent discussions I’ve had where we’ve stepped back and taken a more macro view of the current enterprise work management system.  In fact, the diagrams below grew out of a couple of doodles I ended up drawing in my notes after a recent discussion.

Before going much further, let’s borrow a couple of terms from the ITIL lexicon.  Love it or hate it, at least it does provide us with some commonly accepted terminology.  The first day of ITIL training usually begins with the definition of these two terms (from the official ITIL glossary):

  • Function: A team or group of people and the tools they use to carry out one or more Processes or Activities.
  • Process: A structured set of Activities designed to accomplish a specific Objective. A Process takes one or more defined inputs and turns them into defined outputs.

I’ll add my own term to this:

  • Work Type – A specific type of work that shares a common lifecycle and set of processes.  (Yes, I know it kind of sounds like PMI’s definition of a portfolio, but since a work portfolio comprises multiple work types, I figured this would be less confusing.  In technical terms, think Enterprise Project Types.)

So we’ve got functions that utilize multiple processes.  Hence one of the goals of process definition is to define the outputs required to support the functions.  At the enterprise level, we’re talking about the outputs required from local processes to support enterprise functions – such as resource management and allocation.


So far so good, but how does that apply to the EPM context?  Typically, you take the project lifecycle, break it down into processes, and then map the output of the processes to the required functions.  Most often, those outputs take the form of reports on process data.

Applying this to the enterprise as a whole, here’s a simple three(ish)-step process for analyzing an existing or nascent PPM system*:

  1. Identify the inventory of work types within the system.
  2. Identify the processes required to support each work type.
  3. Identify the required enterprise functions to support governance.
  4. Map the outputs of the processes to the required inputs of the enterprise functions.
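
For the whiteboard crowd, the four steps above can be sketched as a toy data model – the work types, processes, and functions below are purely illustrative, not a prescribed taxonomy:

```python
# Sketch: the analysis steps as a toy data model.
# Steps 1-2: inventory work types and the processes supporting each.
work_types = {
    "Project":     ["Initiation", "Planning", "Execution", "Closeout"],
    "Enhancement": ["Intake", "Build", "Release"],
}
# Steps 3-4: map process outputs to the enterprise functions that consume them.
outputs_to_functions = {
    ("Planning", "resource estimates"): "Resource Management",
    ("Execution", "status report"):     "Portfolio Governance",
    ("Intake", "demand forecast"):      "Resource Management",
}

# Functions with no inbound outputs are governance gaps worth flagging.
fed_functions = set(outputs_to_functions.values())
```

Each entry in the mapping corresponds to one of the lines in the diagram – an organizational control process, data flow, or report.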


At the end, it should look something like this – with each line representing an organizational control process, data flow, or report.

[screenshot]

…and since all of the data needs to be pulled from the processes, this can be turned around as input into process design – verifying that the appropriate control points have been inserted so that the function has what it needs to, well, function.

*Not to be confused with my patented 3 step process to implement portfolio management tools:

  1. Identify all of the work in the organization.
  2. Identify the total work capacity of the organization.
  3. Identify how the organization makes decisions and codify this into a set of automated rules.

See, it’s actually quite easy to implement PPM tools…


Filtering Excel WebParts in Office 365 (Part 3)

I just wanted to close out this discussion of filtered Excel reports in Project Online with one last use case.  In yesterday’s post, I talked about how to create an automatically filtered project dashboard.  In this post, I’ll cover another common example: creating an Excel report that filters by user.

I’ll use one of the reports I created in yesterday’s post.  This one shows project work, and allows the user to slice by resource name.

[screenshot]

I’ll add this via an Excel Web Part to a page in my PWA site collection – pretty much the same as the last exercise.  The main exception is that we have a dependency on the User Profile Service being provisioned, which is the case by default in Project Online.

When passing the parameter from the Current User Filter Web Part, then, we need to select the Name field – not the user account field – to be passed into the Excel report.


Throw that onto a page and log in with a valid user account to confirm it all works…


…and well that’s pretty much it, but stay tuned for how we can have a bit more fun with filtered reports….and VBA…
