Timephased Capacity Reporting with OData

Catching up on the blog comments after a couple of weeks of vacation and post-vacation wheel spinning.  One of the comments that caught my eye was Bram de Vries’ comment on a previous reporting post:

Do you think I need to jump through all these kinds of hoops to create a report where timephased assignment work is compared with timephased capacity for (filtered) list of resources?

I admit, my initial response was that it should be quite easy.  Simply pull a couple of tables together into PowerPivot and work up a report.  As I progressed through the mental exercise though, I found it to be a bit harder than I anticipated.  So Bram, I agree with you: there are a couple of hoops to jump through to get the standard timephased capacity report out of Project Online.

That being said, it is possible.  I’m not sure I’d want to do this in a real production environment, but I suppose it’s a good lesson.

The Challenge

So the challenge I set before myself was to see if I could create a stacked bar chart showing projects and capacity using OData.  Essentially, I am recreating the Resource Center view.

image

The first question you should be asking yourself is “Why? Why would I recreate out of the box functionality?”  The general answer I would give is because the Resource Center is dumb.  That’s not dumb as in it has a poor design.  No, it’s dumb because it ignores nuances.  For instance, I might have a bunch of projects in discovery that I want to filter out of this view – or I might have cancelled and archived a couple of projects.  At present, there’s no real way to be selective about which projects appear in this view.

Hence, it’s actually quite common to develop custom views that emulate the out of the box Resource Center functionality.

Without OData…

Before OData, you would have had to write a query against the SQL databases to get this data.  There’re a couple of ways to do this, but the easiest method I’ve found is to basically treat Capacity as a project in its own right, i.e. to use a UNION ALL command to join project and capacity data.

That query would look something like this:

image
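Since the query screenshot may not survive every feed reader, here’s a rough Python analogue of the UNION ALL trick (the column layout and sample values are my own assumptions, not the actual Reporting database schema):

```python
# Rough analogue of the UNION ALL trick: relabel capacity rows as a
# pseudo-project named "Capacity" so demand and capacity share one schema.
# Column layout (ResourceName, ProjectName, Date, Hours) is an assumption.

demand_rows = [
    ("Alice", "Project A", "2013-01-07", 8.0),
    ("Alice", "Project B", "2013-01-07", 2.0),
]

capacity_rows = [
    ("Alice", "2013-01-07", 8.0),
]

# The UNION ALL step: every capacity row becomes one more "assignment"
# on a project literally named "Capacity."
unioned = demand_rows + [
    (res, "Capacity", day, hours) for (res, day, hours) in capacity_rows
]

for row in unioned:
    print(row)
```

The SQL version does the same thing server-side; the point is simply that both row sets end up in one table with a shared shape.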

With OData…

The problem I found with OData is that it’s quite difficult to get the equivalent of a UNION ALL command to work.  Essentially what you have to do is to create a master table with all of the possible rows in your table, i.e. all of the resources and days, then map either Capacity or Demand to each of the respective rows.

That master table is tough.  I tried it in Excel 2013 with a reasonably powered PC, and kept running into performance limitations – probably because I was trying to create a table consisting of 100 projects X 140 resources X 365 days (in 2013), yielding a master table of approximately 5 million rows.  No matter how I tried it, I couldn’t get it to work.

Hence the hack.  What I realized is that I just needed to use Project Server and hack the source data by adding rows to the Assignments table that could be used to hold capacity calculations for each of the resources during each of the days I would like to analyze.  That reduces the total number of extra rows I’d have to create to 140 resources X 365 days = 51,100 rows – in addition to the existing assignment rows.
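The row arithmetic is easy to sanity-check:

```python
# Quick sanity check on the table sizes discussed above.
projects, resources, days = 100, 140, 365

master_table_rows = projects * resources * days   # the approach that choked Excel
capacity_only_rows = resources * days             # the hack: one fake row per resource-day

print(master_table_rows)   # 5,110,000 -- roughly 5 million
print(capacity_only_rows)  # 51,100
```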

What’s the best way to create those rows in the Assignment table?  Well, by creating fake assignments for each of the resources.  To do this, I created a new enterprise project called “Capacity.”  On this project I assigned 1 task and set it to 1 year in duration.

image

I then assigned every resource in my resource pool to the task.

image

To keep these assignments out of the Resource Center, I go into the Resource Usage view and flag everything as Proposed.

image

Publish the project and fire up Excel.  Feel free to take a moment and peruse some of my other posts on reporting against OData if you’re not entirely comfortable with PowerPivot quite yet.

Building the Data Connections

At a minimum, we’re going to need the following three OData feeds:

  • Resources:

http://demo/pwa/_api/projectdata/Resources()?$select=ResourceId,ResourceName

  • Assignments (or Demand):

http://demo/pwa/_api/projectdata/AssignmentTimephasedDataSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=ResourceId,ProjectName,TimeByDay,AssignmentCombinedWork

  • Capacity:

http://demo/pwa/_api/projectdata/ResourceTimephasedDataSet()?$filter=TimeByDay ge datetime'2013-01-01T00:00:00' and TimeByDay lt datetime'2014-01-01T00:00:00'&$select=ResourceId,TimeByDay,Capacity

You’ll note that I am filtering the timephased data to only give me 2013 data, as with OData, it’s important to take performance into consideration and minimize the sheer amount of data imported.
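For what it’s worth, if you’re generating several of these URLs, the pattern is mechanical enough to script.  A minimal sketch, assuming the same demo host and feed names as above:

```python
# Assemble ProjectData feed URLs with $filter and $select query options.
# The host ("demo") and PWA path come straight from the post.
BASE = "http://demo/pwa/_api/projectdata"

def feed_url(feed, select, date_filter=None):
    # $filter limits rows (here, to a date range); $select limits columns.
    query = []
    if date_filter:
        start, end = date_filter
        query.append(
            f"$filter=TimeByDay ge datetime'{start}T00:00:00' "
            f"and TimeByDay lt datetime'{end}T00:00:00'"
        )
    query.append("$select=" + ",".join(select))
    return f"{BASE}/{feed}()?" + "&".join(query)

url = feed_url(
    "AssignmentTimephasedDataSet",
    ["ResourceId", "ProjectName", "TimeByDay", "AssignmentCombinedWork"],
    date_filter=("2013-01-01", "2014-01-01"),
)
print(url)
```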

Building the Data Model

To build the data model, I borrowed a trick from this blog post by Kaspar de Jonge and created my own merge column.

For the AssignmentTimePhasedDataSet, I created a new field called “Merge” and set it to calculate with the following formula:

=[ResourceId]&[ProjectName]&[TimeByDay]

image

For the ResourceTimephasedDataSet, I created a new field, also called “Merge” and set it to calculate with a different formula:

=[ResourceId]&"Capacity"&[TimeByDay]
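To see why the two formulas line up, here’s the same key-building logic as a quick Python sketch (field values invented):

```python
# Composite "Merge" keys mirroring the two calculated columns.
def assignment_key(resource_id, project_name, time_by_day):
    # =[ResourceId]&[ProjectName]&[TimeByDay]
    return f"{resource_id}{project_name}{time_by_day}"

def capacity_key(resource_id, time_by_day):
    # =[ResourceId]&"Capacity"&[TimeByDay]
    return f"{resource_id}Capacity{time_by_day}"

# A capacity row only matches an assignment row belonging to the fake
# "Capacity" project -- which is exactly the point of the hack.
print(assignment_key("R1", "Capacity", "2013-01-07")
      == capacity_key("R1", "2013-01-07"))
```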

image

From there, it’s a simple matter of building the relationships between the three tables:

image

We need to build a custom column to swap out the work calculation for our Capacity project, and replace it with the appropriate Capacity calculations from the ResourceTimephasedDataSet.  I create a new column called “Work” in the AssignmentTimePhasedDataSet and assign it the following formula:

=IF([ProjectName]<>"Capacity",[AssignmentCombinedWork],RELATED(ResourceTimephasedDataSet[Capacity]))

image

This formula displays the appropriate work value – except for the Capacity project, where it pulls the matching row from the ResourceTimephasedDataSet.  The RELATED function is basically playing the role of a VLOOKUP.
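A rough Python analogue of that IF/RELATED logic, with invented data standing in for the two tables:

```python
# Rough analogue of the IF/RELATED formula: for the fake "Capacity"
# project, swap in capacity hours from the related table.
capacity_by_merge = {  # plays the role of ResourceTimephasedDataSet
    "R1Capacity2013-01-07": 8.0,
}

def work(project_name, combined_work, merge_key):
    if project_name != "Capacity":
        return combined_work
    # RELATED(ResourceTimephasedDataSet[Capacity]) as a keyed lookup
    return capacity_by_merge[merge_key]

print(work("Project A", 6.0, "R1Project A2013-01-07"))  # normal demand row
print(work("Capacity", 0.0, "R1Capacity2013-01-07"))    # capacity row
```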

To minimize the charting overhead, I’ll also simplify the date field to render by month.

=[TimeByDay]-Day([TimeByDay])+1
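In other words, subtracting the day-of-month and adding one snaps every date to the first of its month.  The same idea in Python:

```python
from datetime import date

def month_start(d):
    # =[TimeByDay]-Day([TimeByDay])+1
    return d.replace(day=1)

print(month_start(date(2013, 7, 19)))  # 2013-07-01
```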

image

…And Now the Report….

Back in Excel, create a PivotTable using the following settings:

image

..which will look like this:

image

Turn that into a stacked bar chart, right click on the Capacity bar and convert into a line chart, and you’ll get something like this:

image

Add some slicers and you’ll get something a bit nicer:

image

…although the Capacity line keeps reverting to a bar whenever a filter is applied – which is kind of annoying.  This goes back to something I read in a blog post while researching this, which is starting to resonate with me – and to validate Bram’s comment from the beginning of this post: PowerPivot is really designed for the ad hoc analyst pulling data together periodically.  It still has some way to go before it’s an enterprise tool for creating stable, repeatable reports.

After going through this exercise, my modified response to the initial comment: sure, it’s possible to do timephased reporting, but you’d be much better off skipping capacity and just sticking to the demand part of the equation.  That would be quite easy to do – and would require none of the hoop jumping I just detailed above.


The Release Manager and the Maintenance Scheduler

Teach scheduling to any class of IT project managers, and the number one request is invariably “How do I make my schedules more predictable?  How do I accurately predict important events such as application releases?”  There’re really two answers to that question – neither of which are mutually exclusive:

  1. Only use dedicated resources on your projects.  It’s the interplay between break-fix work and new builds that dooms project schedules in your “normal” IT department.
  2. Incorporate buffer into your schedules.  Buffer reduces volatility in date predictions.

Needless to say, option #1 is a nonstarter in most organizations.  The challenge with the latter is that it requires an organizational shift.  It requires different communication techniques and different reporting.  Instead of reporting on a forecast release date, I report on a target release date and the probability that I might actually hit it.  That’s the cultural shift required to move towards more predictable scheduling.

Nowhere in IT is this cultural shift more evident than in the relationship between the project manager and the release manager….

What Do You Mean I Can’t Release Tomorrow?

Release managers are incredibly annoying with their change processes, their release reviews and their insufferable insistence on having actual documentation.  They represent the worst of organizational process and present an obstacle to getting my application in production and letting me finally wash my hands of my current project and move on to something new and interesting.

image

Not only that, but I think they only care about increasing the costs of any project I’m on – whether it be by requiring an inordinate number of hoops to jump through to actually get an approved release window, or forcing my team to spin cycles performing extra validation and testing on our application.  I mean, we know it will work.  It worked in Dev, right?

These are natural sentiments for a project manager.  Over the months and years of a project, the project manager learns to look out for his project.  He becomes focused on his project.  And as he knows, his projects are always far more important, far more critical to the organization at large than any other project within the portfolio.

Look to Big Oil for the Answers

A couple of years ago, I had the opportunity to dip my toes into the water of oilfield maintenance scheduling.  It was actually my first engagement upon moving to the Houston oil patch.  For those of you not familiar with maintenance scheduling, it looks something like this:

  1. Whenever a new facility or piece of equipment is commissioned, a series of maintenance routines are loaded into a Computerized Maintenance Management System (CMMS).  These routines might call for quarterly inspections, monthly cleaning procedures, filter changes, belt changes, lubrication, etc.  Think of your local Starbucks, and all of the equipment they must maintain – and then multiply that by all of the Starbucks in your market, and you’ll get the picture.
  2. The CMMS spits out tickets whenever they’re ready to be worked.  This feeds into the scheduled maintenance backlog.  Each time a ticket is created, a timestamp is recorded, and the age of the tickets begins to be tracked.  I don’t know what a “typical” backlog is, and assume that varies by industry, but figure a backlog of anywhere from 6 months to several years is reasonable – depending on the priority of the ticket and the industry.
  3. Those tickets go to the various work crews that do the work.  As they have a set of specialized skills and equipment, the same work crew would be responsible for supporting multiple facilities throughout the oilfield.  Going back to our Starbucks example, these are the tier 2 support team responsible for the hard-core equipment maintenance (assuming they exist.  I know next to nothing about coffee house logistics).  It’s up to the work crew to optimize their schedule so that they can meet their performance metrics.
  4. Enter the facility operator role.  The facility operator is in charge of the safe and productive operation of his facility.  His job is to ensure the facility meets its production goals and reduces or eliminates any reportable health, safety or environmental incidents.  The facility operator has the authority to turn away a work crew that arrives on any given day if the work they are to perform is considered too dangerous….i.e. a simultaneous operations (SimOps) issue, where one crew is performing work in close physical proximity to another crew.  For example, one crew might be welding while another crew is venting flammable gasses right next door.  One crew might be slinging steel overhead while another crew is swapping out equipment belts below.  For obvious reasons, SimOps issues are to be closely monitored and avoided at all costs.   Turnaway metrics are also monitored, as they entail cost: the work group geared up, schlepped out to the facility, and then got turned away – which could potentially kill an entire morning or a full day of productivity.
  5. Hence a lot of organizations have moved to a system where the CMMS tickets are pushed into a scheduling optimization system.  Trained schedulers then review all of the tickets and assign target dates based on priority, geographic proximity, risk and other factors.  They “bundle” maintenance by the same crew in the same area to ensure increased productivity.  This system optimizes maintenance ticket throughput, reduces turnaways (inefficiencies), and mitigates risk both of individual harm and the systemic risk of a high maintenance backlog.

In essence, what we have here is a number of systems being optimized for their own goals:

  • The CMMS is identifying the tickets that must be performed and tracking the aging metrics.
  • The work teams are focusing on their own daily marching orders.
  • The facility operators are focusing on the safe operations of their facility.

image

The schedulers’ role is to balance the needs and wants of those groups with the overall priorities of the organization – which in an oilfield is typically mitigating risk of both reportable incidents and production stoppage.

And IT Folks Should Care Because….

Release management is essentially the same series of systems, writ small.  The project manager is focused on the local optima, getting that release out the door as soon as possible.  The release manager is looking at all of the releases coming down the pipeline and optimizing across the entire infrastructure.  Everyone is just performing their natural role in the systems, and the release managers represent the organizational check of a project manager’s native optimism.

Generally, the way we see this playing out is that the project manager, through the development of the schedule model, creates a prediction of when the application will be ready for release.  That date is then negotiated with the stakeholders and the release manager to pin down an approved release window.

image

That release window is then inserted back into the schedule as a target or constraint on the actual release event.  Mechanically, within Microsoft Project, that looks something like this.

image

See how I’ve inserted the release window as a Release Target – which is basically a repurposed Deadline field?  I then add a Start No Earlier Than (SNET) constraint on the successor tasks.  In effect, this adds buffer to my schedule, as I can now track the Total Slack on the Release Activity.  The lower the Total Slack, the greater the chance that I’ll miss my release window.
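The underlying arithmetic is trivial but worth making explicit; a hedged sketch (the dates are invented):

```python
from datetime import date

def total_slack_days(forecast_finish, release_target):
    # Positive slack = buffer remaining before the release window;
    # zero or negative = the window is at risk.
    return (release_target - forecast_finish).days

slack = total_slack_days(date(2013, 5, 10), date(2013, 6, 1))
print(slack)        # days of buffer remaining
print(slack <= 0)   # True would mean the release window is in jeopardy
```

As the schedule slips, forecast_finish moves right, slack shrinks, and the probability of hitting the window drops – which is exactly the conversation the release manager wants to have.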

I’d point out that adding buffer is nothing specific to IT project management.  I’ve recommended a similar approach to predicting pipeline completion dates in a drilling scenario.  That helps us avoid the dreaded WOPL, where the well is completed, but Waiting on Pipeline.

…Bringing it Full Circle…

So, what we have in the interplay between project and release management is the same interplay we see in oilfield maintenance.  We’ve got two different systems that interact but are focused on different optima.  The project system is focused on getting the release out as soon as possible.  The release system is focused on ensuring the enterprise infrastructure doesn’t break.  The goal then is to incorporate into our project scheduling the feedback to and from the release management process.

image


EPM Bingo, or A Structured Approach to PPM Analysis

One of my colleagues used to explain to me that there are three kinds of people that attend any meeting:

  • There are those who feel an emotional need to share how they’re feeling about the topic.
  • There are those who need a specific list of assigned tasks to come out of the meeting.
  • And lastly, there are those people who don’t feel the meeting is complete until they’ve walked up to the white board and drawn a couple of circles and squares and have put an intellectual framework around the topic at hand.

This post is for the latter group.  It comes out of some recent discussions I’ve had where we’ve stepped back and taken a more macro view of the current enterprise work management system.  In fact, the diagrams below grew out of a couple of doodles I ended up drawing in my notes after a recent discussion.

Before going much further, let’s borrow a couple of terms from the ITIL lexicon.  Love it or hate it, at least it does provide us with some commonly accepted terminology.  The first day of ITIL training usually begins with the definition of these two terms (from the official ITIL glossary):

  • Function: A team or group of people and the tools they use to carry out one or more Processes or Activities.
  • Process: A structured set of Activities designed to accomplish a specific Objective. A Process takes one or more defined inputs and turns them into defined outputs.

I’ll add my own term to this:

  • Work Types – A specific type of work that shares a common lifecycle and set of processes.  (Yes, I know it kind of sounds like PMI’s definition of a portfolio, but since a work portfolio comprises multiple work types, I figured this would be less confusing.  In technical terms, think Enterprise Project Types.)

So we’ve got functions that utilize multiple processes.  Hence one of the goals of process definition is to define the outputs required to support the functions.  At the enterprise level, we’re talking about the outputs required from local processes to support enterprise functions – such as resource management and allocation.

image

So far so good, but how does that apply to the EPM context?  Typically, you take the project lifecycle, break it down into processes, and then map the output of the processes to the required functions.  Most often, those outputs take the form of reports on process data.

Applying this to the enterprise as a whole, here’s a simple three(ish) step process to perform analysis on an existing or nascent PPM system*:

  1. Identify the inventory of work types within the system.
  2. Identify the processes required to support each work type.
  3. Identify the required enterprise functions to support governance.
  4. Map the outputs of the processes to the required inputs of the enterprise functions.
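The four steps above boil down to building an inventory and then checking coverage; a toy Python sketch, with every work type, process output, and function name invented purely for illustration:

```python
# Toy model of the EPM analysis: processes produce outputs, and
# enterprise functions consume them.  All names are illustrative.

process_outputs = {
    "Project":     {"status report", "resource plan"},
    "Maintenance": {"ticket backlog"},
}

function_inputs = {
    "Resource Management": {"resource plan", "ticket backlog"},
    "Portfolio Reporting": {"status report"},
}

# Step 4: verify every function input is produced by some process.
produced = set().union(*process_outputs.values())
gaps = {fn: needs - produced
        for fn, needs in function_inputs.items()
        if needs - produced}
print(gaps)  # an empty dict means every function has what it needs
```

A non-empty gaps dict is precisely where a control point is missing from a process – which is the feedback into process design described below.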

image

At the end, it should look something like this – with each line representing an organizational control process, data flow, or report.

image

…and since the data all needs to be pulled from the processes, this can be turned around as input into the process design, ensuring that the appropriate control points have been inserted so that the function has what it needs to, well, function.

*Not to be confused with my patented 3 step process to implement portfolio management tools:

  1. Identify all of the work in the organization.
  2. Identify the total work capacity of the organization.
  3. Identify how the organization makes decisions and codify this into a set of automated rules.

See, it’s actually quite easy to implement PPM tools…


Filtering Excel WebParts in Office 365 (Part 3)

Just wanted to close out this discussion of filtered Excel reports in Project Online with a final use case.  In yesterday’s post, I talked about how to create an automatically filtered project dashboard.  In this post, I’ll talk about another common example, creating an Excel report that filters by user.

I’ll use one of the reports I created in yesterday’s post.  This one shows project work, and allows the user to slice by resource name.

image

I’ll add this via an Excel Webpart to a page in my PWA site collection.  Pretty much the same as the last exercise.  The main exception is that we have a dependency on the User Profile Service being provisioned – which is the case by default in Project Online.

When passing the parameter from the Current User Filter Webpart, we need to select the Name field, not the user account field, to be passed into the Excel report.

image

Throw that onto a page and log in with a valid user account to confirm it all works…

image

…and well that’s pretty much it, but stay tuned for how we can have a bit more fun with filtered reports….and VBA…


Filtering Excel WebParts in Office 365 (Part 2)

In this post, I’ll continue discussing how to promote parameters into SharePoint and then automatically filter reports using the SharePoint Filter Webparts.  Note that this solution should pretty much work both on-prem and online – with the caveat that you might want to construct your data model differently based on the solution you’re using.  Check the last post for guidance on that (and check the bottom of this post to see if things might have changed since I wrote it).

Creating the Data Model (Option 2 – With Filters)

See the chart below to (hopefully) alleviate any confusion…

image

 

Picking up from the last post, I have two OData connections that I will add to Excel.

http://demo/PWA//_api/ProjectData/Assignments()?$select=ProjectName,ProjectId,ResourceName,AssignmentWork

…and this…

http://demo/PWA//_api/ProjectData/Projects()?$select=ProjectId,EnterpriseProjectTypeName

This time, when I add them to the Excel workbook, I select the option to simply add them as tables.

image

..which yields a workbook with two tables.

image

Instead of using PowerPivot, I can then move fields between the tables using a vlookup formula.

image

…then I create a PivotTable from the table itself by selecting the option to “Summarize in a PivotTable.”  From here, it’s simply a matter of turning the table into a couple of bar charts to get something that looks like this…

image

I’m now going to add a slicer to each chart.  I’ll add a ProjectID slicer to the top chart and a Resource Name slicer to the bottom chart.  Click on the slicers to validate the results.  At this point, I’d also name the charts to ensure we display the correct ones.

image

I load this report to SharePoint.  When I save to SharePoint, I select the browser options button and configure both of the slicers as parameters.

image

Displaying in SharePoint

After uploading to SharePoint, we can now display it.  In the first case, we’ll add our Resource Report to a PDP that I’ve created called the Project Dashboard.  I add two webparts: one Query String Webpart and one Excel Webpart.

image

I configure the Excel Webpart to display the Resource Report.

image

…then I configure the Query String Webpart to pull the ProjUID parameter from the URL.  The ProjUID is a 36-character identifier for each of the projects in the system.

image

Add a connection from the Query String filter to the Excel Webpart.

image

…and I now have a filtered report showing project specific data.

image

…in tomorrow’s post….the same, but filtered by resource name.


Filtering Excel WebParts in Office 365 (Part I)

Following Mike McLean’s and my triumphant debut as Project Server 2013 BI masters at the recent SharePoint conference, I wanted to document one of the demos – specifically the one where we filtered Excel reports automatically based on external parameters.  I’ve more or less covered this before in Project Server 2010 and PerformancePoint, so if you’re looking for more information, please feel free to check out this link.

This post is focused specifically on implementing the same solution with the Excel Services Webparts, which are pretty much your obligatory go-to option when working in Project Online as PerformancePoint Webparts are not available.

On OData and OData Refresh

The main issue I ran into when trying to filter data sets was OData refresh.  Specifically, as of the time of this writing, OData refresh is not supported in Excel Services.  That’s not as big a deal as some might think, as I can always just open the Excel document in the desktop to refresh it….or if all else fails, write some VBA to cycle through my reports and refresh them on a daily basis.

Since I’ve already seen some discussion of this in the online forums, I’d point out that I’m pretty confident (as a non-Microsoft employee) that the fact that OData will not refresh in Excel Services is a temporary issue that will get resolved in the future.  Needless to say, lack of online OData refresh is a significant obstacle to establishing parity between the online and on-prem reporting world.

Now how does that impact the specific use case we’re talking about today, i.e. the ability to filter Excel charts dynamically within SharePoint?  As near as I can tell, we can still filter OData based reports – provided they’re not using a PowerPivot data model.  My current hypothesis is that the filter mechanism somehow involves an interaction with PowerPivot which is not at present supported within Excel Services.  Hence, to support this scenario, I’ll use vlookups and other Excel trickery rather than resorting to using PowerPivot and DAX expressions to build my data model.

As a disclaimer, I’ll point out that this post is all based on observations of how the tool works, not on in-depth technical analyses of the underpinning code.  I’ll defer to the readers of this blog for that.  I’d also stress that if you’re reading this post more than several months after it was posted, check the documentation to confirm my conclusions are still valid.  Should things change, and should I be so motivated, I’ll add a postback in the comments section to any updates.

Assembling the Data Set

Here you have a number of options.  If you’re using an on-prem deployment, you could use the “traditional” method of creating Office Data Connection (ODC) files with embedded SQL queries.  In this case, we simply create the data model as part of the SQL query.  As far as I can tell, we can create reports from this, promote the parameter, and filter to our heart’s content.

In the interest of artificially creating a challenge, I’ll be using OData for my reports in this example.

First off, I am going to create a new report.  Using my new OData skills, I fire up LINQPad and generate a URL to create a simple project work report.  This URL will provide the assignment data:

http://demo/PWA//_api/ProjectData/Assignments()?$select=ProjectName,ProjectId,ResourceName,AssignmentWork

…to add some difficulty, we can also pull in a field from the Project table.

http://demo/PWA//_api/ProjectData/Projects()?$select=ProjectId,EnterpriseProjectTypeName

As I mentioned in this post, I’m using an on-prem version of Project Server to generate and validate the URLs, then I’ll append everything from the …/_api bit onwards onto my O365 instance to ensure it works.  In the case of the first one, that will look like this:

http://o365instance.sharepoint.com/sites/pwa/_api/ProjectData/Assignments()?$select=ProjectName,ProjectId,ResourceName,AssignmentWork
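If you’re doing this rebasing for more than a couple of URLs, it’s easy to script; a minimal sketch:

```python
def rebase_odata_url(url, new_root):
    # Keep everything from "/_api" onward and graft it onto the new site.
    marker = "/_api"
    tail = url[url.index(marker):]
    return new_root.rstrip("/") + tail

onprem = ("http://demo/PWA/_api/ProjectData/Assignments()"
          "?$select=ProjectName,ProjectId,ResourceName,AssignmentWork")
print(rebase_odata_url(onprem, "http://o365instance.sharepoint.com/sites/pwa"))
```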

Creating the Data Model (Option 1 – No Filters)

I wrote this part of the post before I figured out what worked with the filtering mechanism and what didn’t.  Since I had it written, I figured I’d include it for thoroughness, with the hope that it’s not too confusing.  Note that this method will not work with the filtering mechanism I am demonstrating.  It is an effective way of building reports, however, so if you don’t plan to filter your reports automatically using SharePoint filter Webparts, I’d definitely recommend using PowerPivot to create your data model.

Hopefully this should clarify:

 

image

Per one of the comments I got at SharePoint Conference, feel free to use OData in your on-prem reporting as well.  Personally, I feel that since I’m comfortable in SQL, I’ll just continue using that – but it’s probably important to note that it’s available and useful in on-prem as well.

Anyway, I add both of those feeds into Excel 2013 and flip to PowerPivot to ensure they are joined properly.  (Check some of my prior posts for more information on that.)

image

Now, back in the main Excel interface, I add a PivotTable based on the PowerPivot data model.

image

…which looks like this…

image

…to illustrate a second concept, I’ll go ahead and create a second pivot table that looks like this…

image

Turn them both into a bar chart to get something that looks like this…

image

I’m now going to add a slicer to each chart.  I’ll add a ProjectID slicer to the top chart and a Resource Name slicer to the bottom chart.  Click on the slicers to validate the results.  At this point, I’d also name the charts to ensure we display the correct ones.

image

…and well that’s pretty much it for this option.  Upload to SharePoint to allow your users to click the slicers and filter the Excel reports manually.  In my next post, I’ll talk about how to connect these (or rather similar) reports to SharePoint filter webparts to get that automatically filtered experience that we demo’d in Vegas a couple of weeks ago.


SharePoint Conference Project Server BI Linkfest

Thanks everyone for coming out to #SPC171 just now, where Microsoft’s Mike McLean and I skimmed a rock over what’s new in Project Server 2013 BI.  Looking for more?  Well, you came to the right place.

As we mentioned, we’ve got a whole slew of demos that we couldn’t quite fit into this presentation – about 150 minutes’ worth.  Feel free to catch these recordings from the Project Conference last March.  As you’ll see, these were presented on Project Server 2010 – but they’re equally applicable to Project Server 2013 on-premises – and mostly applicable to Project Server 2013 in the cloud.

  1. Session #1: Intro to Project Server BI
  2. Session #2: Even More of the Same
  3. Special Bonus Track: the same presentation at TechEd in June

Additionally, here’s an inventory of reports, demos, and the step by step instructions to reproduce them all:

  1. Linkfest #1: General Stuff, ODC, Visio Services
  2. Linkfest #2: Excel Services & the REST API
  3. Linkfest #3: PerformancePoint and Fun with VBA

Looking for OData reporting against Project Server?   Stay tuned for more content, but for now, this should get you started…

  1. Reporting on OData with LINQPad
  2. Creating a Major Milestones Report
  3. Creating a Cumulative Milestone Report (Part 1, Part 2, Part 3)
  4. Maps, Maps, Maps

Maybe some general Project Server information is more to your liking?  Yep, we’ve got some of that too.

  1. Becoming a Project Server Administrator
  2. 10 Things SharePoint Admins Should Know About Project Server
  3. 10 Things Project Server Admins Should Know About SharePoint

…and last, but not least, here’s a white paper I wrote a while back talking about the options to report against Project Server 2010.  Stay tuned to this blog and others for more Project Server 2013 fun…


Project Server and the Obligatory Geospatial Report

Well, a great first day at SharePoint Conference already.  Yesterday, I attended Rafal Lukawiecki’s entertaining pre-conference session on the latest in Microsoft BI where he walked through many of the latest and greatest offerings, and included a demo of the new PowerView map report.  His prediction was that every BI demo at the conference would likely include that specific use case, as apparently in this day and age, BI = maps.

image

Well, ever one to be a trend follower, I figured I’d contribute my part and offer you this Project Server example.

Before going much further, I must point out that I’ve never been enthusiastic about creating map reports out of Project Server data.  I’m not against that sort of representation, and neither are my clients.  In fact, most of my clients have large GIS departments in charge of such things.  I just generally have not seen much value from a project management perspective in adding latitude and longitude to project data, and then plotting the project on a map.  Well, with PowerView and the ease of creating a geographic report, I might just be tempted to change my mind on this.

So to create the data, I added a custom field in Project Server called “Office.”  This field represents the geographic office that will own each of the projects in my portfolio.

image

From there, I open Excel 2013 and add a data connection either directly back to SQL or through an OData feed using the following URL: http://demo/PWA/_api/ProjectData/Projects()?$filter=Office ne null&$select=ProjectName,TotalCost,Office
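If you want to assemble that feed URL programmatically (or just check your $filter syntax before pasting it into Excel), the query options need to be percent-encoded.  Here is a minimal Python sketch; the base URL, entity name, and field names are the ones used above, but the helper function itself is just an illustration, not part of the Project OData API.

```python
from urllib.parse import quote

def odata_url(base, entity, filter_expr=None, select=None):
    """Build a Project OData feed URL, percent-encoding the $filter expression."""
    parts = []
    if filter_expr:
        # Spaces in OData filter expressions ("Office ne null") must be encoded.
        parts.append("$filter=" + quote(filter_expr))
    if select:
        # $select takes a comma-separated list of field names.
        parts.append("$select=" + ",".join(select))
    return f"{base}/{entity}()?" + "&".join(parts)

url = odata_url(
    "http://demo/PWA/_api/ProjectData",
    "Projects",
    filter_expr="Office ne null",
    select=["ProjectName", "TotalCost", "Office"],
)
```

Excel accepts the unencoded form when you paste it into the OData feed dialog, but the encoded form is what actually goes over the wire.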

Next, I navigate to the PowerPivot manager.  If it’s not visible, ensure it’s activated as a COM add-in in the Excel options menu.  Click on the Office column, select the Advanced tab, and note that you can classify the data as “City.”  This is part of a host of new features where Excel can actually classify your data and recommend actions based on the specific data type.

image

Go back to the normal Excel window and insert a PowerView sheet.

image

From here, it’s pretty easy.  Select the fields you want to appear in your PowerView report.  Note the globe icon next to our Office field indicating that it is in fact geographic data.

image

Click on the newly created table, and then select the map option in the top left.  This transmits the Office column data to Bing and reconciles it with an online map.

image

…and well, that’s pretty much it.  Add another table below the map to show a filtered list for clicking and drilling-down purposes.

image

…as you click on the dots on the map, the table below will filter.

I sense a new demo added to the repertoire.


The Major Milestone Report Meets OData

I’ve been leveraging this report in my demos for a couple of years now – basically an overview of all of the major milestones in the project portfolio color coded by schedule status.  Figured I’d test my newfound OData skills and see how hard it would be to recreate in the new world of Project Server in the cloud.

image

Turns out not hard at all, with perhaps a minor caveat that we’ll need to modify the data model in PowerPivot.  If you’re looking at recreating this report in an on-premises instance, feel free to check out the instructions in my previous post.  This post covers only how to recreate the report using a cloud-based instance and OData feeds.

To prepare for this report, we’ll need to create a custom field called “Major Milestone.”  This will be a task text field tied to a lookup table.

image

Next, add the field to your project templates and map the project milestones to specific stage gates in the lookup table.

image

The following URL will now return the data required to generate the report.

http://demo/PWA/_api/ProjectData/Tasks()?$filter=MajorMilestone ne null&$select=ProjectId,ProjectName,MajorMilestone,TaskFinishDate,TaskFinishVariance,TaskPercentCompleted

Open the Other Connections option in the Excel Data tab, and paste the link into the OData feed option.
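If you’d rather pull that feed from code than from Excel, the rows come back wrapped in an envelope.  The sketch below assumes the verbose-JSON shape (a `d.results` array) you get when requesting the feed with an `Accept: application/json;odata=verbose` header; the sample payload and its field values are made up for illustration.

```python
import json

def parse_task_rows(payload):
    """Pull the $select-ed fields out of a verbose-JSON OData response.

    Assumes the 'd' / 'results' envelope used by the verbose JSON format;
    each result is a dict keyed by the field names we selected.
    """
    rows = json.loads(payload)["d"]["results"]
    return [
        (r["ProjectName"], r["MajorMilestone"], r["TaskFinishDate"])
        for r in rows
    ]

# A made-up single-row payload in the shape described above.
sample = json.dumps({"d": {"results": [
    {"ProjectName": "Apollo", "MajorMilestone": "Gate 1",
     "TaskFinishDate": "/Date(1352937600000)/"},
]}})
```

Note the legacy `/Date(…)/` serialization for dates – that’s milliseconds since the Unix epoch, and it’s one reason dates need special handling downstream.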

The only minor issue is that when you use the OData feed directly to generate the PivotTable, you can’t get the dates to render properly.  As far as I can tell, when you select the “Max” option in the PivotTable to display the date, you get the following error message: “We can’t summarize this field with Max because it’s not a supported calculation for Date data types.”

image

Two solutions for this as far as I can tell:

1) Insert the data into a table, and then summarize the table into a Pivot Table – at which point, the date field rolls up just fine.

2) Go into the PowerPivot data model, and set the field in question to be a decimal number.

image

Insert the PivotTable using the PowerPivot table as a source, and you’re back in business.
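The decimal-number workaround works because Excel stores dates internally as day-serial numbers anyway.  A quick Python sketch of that conversion, assuming the usual 1899-12-30 epoch that makes the serials line up with Excel for dates after February 1900:

```python
from datetime import datetime

# Excel's effective epoch: using 1899-12-30 makes the computed serials
# match Excel's for modern dates, despite its historical 1900 leap-year quirk.
_EXCEL_EPOCH = datetime(1899, 12, 30)

def to_excel_serial(dt):
    """Convert a datetime to the decimal serial number Excel stores for it."""
    delta = dt - _EXCEL_EPOCH
    # Whole days plus the time of day as a fraction of 24 hours.
    return delta.days + delta.seconds / 86400
```

So when you flip the field to a decimal in the PowerPivot data model, you’re really just exposing the number Excel was going to work with all along – which is why Max suddenly becomes a legal summarization.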

Drop that into Excel and format per these instructions to get the report….

image


Porting the Cumulative Milestone Report into OData (Part 3)

Now we have an Excel workbook with the appropriate data connections to Project Server…

image

Let’s break out PowerPivot to modify the data model.  Click on the Manage button in the PowerPivot tab.

image

You should see something that looks like this:

image

Rename the tabs to something a bit more useful.

image

Now let’s add a custom field to the Forecast and Baseline tables.  We’ll use this field to map the data back to the TimeSet table.

I used this formula for the Forecast table:

=[TaskFinishDate]-Day([TaskFinishDate])+1-TimeValue([TaskFinishDate])

…and named the new field “Total Date.”

image

I repeated the process for the Baseline data set, but used the TaskBaselineFinishDate value to generate my Total Date field.

image
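The formula above subtracts the day-of-month (plus one) and the time portion, which rolls every finish date back to midnight on the first day of its month – that’s what lets the Forecast and Baseline rows join to the monthly TimeSet table.  The same truncation in Python, as a sanity check of the logic:

```python
from datetime import datetime

def first_of_month(dt):
    """Mirror the PowerPivot calculated column: roll a date back to
    midnight on the first day of its month."""
    return dt.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
```

Any two finish dates in the same month now produce the same Total Date value, so they land in the same TimeSet bucket.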

Now let’s tie it all together.  In the Home tab, select the Diagram view.

image

Add the following joins to the data model.

image

If I select the option to Manage Relationships in the Design tab, I should see the following:

image

At this point, we simply need to add a pivot table to Excel and point it at our data model.

image

Our data model appears as a table in the External Connections list.

image

…from there, you need to add the appropriate fields to the pivot table.

image

Note that I added each of the date fields twice.  I’m going to flip the second instance of those fields to display a running total and give them a more user-friendly name…

image
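Under the hood, the running-total display is nothing more than a cumulative sum of the monthly values.  A minimal sketch, using made-up monthly milestone counts:

```python
from itertools import accumulate

# Hypothetical monthly milestone counts, standing in for the pivot data.
monthly = {"Jan": 2, "Feb": 1, "Mar": 4, "Apr": 0, "May": 3}

# A running total is the cumulative sum of the monthly values -
# the same calculation the pivot field's "Running Total In" setting performs.
running = dict(zip(monthly, accumulate(monthly.values())))
```

That cumulative series is what gives the milestone chart its characteristic S-curve shape.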

From here, it’s a matter of following these instructions to create a chart.

image

Not entirely sure the juice was worth the squeeze on this particular chart, but still a worthwhile exercise.  Add the slicer and timeline control for extra cred as an Excel 2013 master.

image
