Architecting a PMIS for the Cloud

Let’s face it.  On-premises system architectures have long been an enabler for both lazy and amateur enterprise architects.  Need to move some data around?  Simply add a couple of fields to store it.  Need to add some integration just so we can see all of the data in a single place?  Build some batch jobs to push data hither and yon – and replicate it in multiple places just to make it easy to get at.

Nowhere is this more evident than in a Project Management Information System (PMIS) that has grown organically around the needs of the enterprise.  Invariably, a PMIS has grown as a collection of disparate systems and silos, each oriented to the needs of a specific functional group that may be involved in the execution of a project.


The integration eventually evolves into a veritable spaghetti diagram of lines moving data from one system to another.  Often, the data schema for the scheduling tool is expanded or repurposed to collect all of the data into a single, convenient data repository.  Generally speaking, this sort of works for many organizations.  They still manage to get their work done, albeit with a fair bit of grumbling from project managers about having to access multiple tools to get their jobs done.

The main challenge with this tool architecture is that it doesn’t support a nimble process framework.  As PM processes mature (pro-tip: they will) or adapt to the changing organization, the organization needs to maintain a strict focus on a capabilities infrastructure.  As the capabilities develop in maturity, the underlying tools must also adapt, and this is where the traditional organic PMIS fails – to a large extent due to the technical debt incurred in the evolution of the overall system.

Traditionally, with on-premises systems, this tendency towards entropy is addressed every couple of years in the form of an upgrade.  The vendor releases new versions of the software, and as part of the inevitable upgrade process, the organization performs a subbotnik to reassess and simplify the overall system.  This cycle naturally prevents the overall system from getting too complex.

Today’s subbotnik is not your father’s subbotnik, however.  Today, most of the discussions we have around upgrades involve a consideration of moving to the cloud.  Oftentimes, this is driven by the desire to take advantage of the cost savings that enterprise cloud offerings now bring to the table.  Twice in a week, I’ve been in conversations with clients about how they could save significant infrastructure overhead costs by moving to the cloud – but have been prevented from doing so by the architecture of their existing PMIS.  This means they’ve been stuck with expensive on-premises environments with feature sets that grow increasingly outdated with each day.

Furthermore, additional care must be taken when moving to the cloud as we will lose that convenient safety valve of reassessing the entire thing from the ground up as part of an upgrade within the next several years.  Whatever we design today, we will have to live with tomorrow – and the day after tomorrow.

Over the course of multiple discussions, the architecture I see evolving around the PMIS looks a lot like this:


Instead of pulling data backwards and forwards, we are consolidating it.  The data is either consolidated in real time with any number of reporting tools such as Microsoft Excel, Power BI, Tableau, or Spotfire…or it’s being pulled into a data warehouse such as SQL or HANA using an ETL tool.  Doing so greatly reduces the complexity of the overall system and allows each part to feed data in such a way that the company can rapidly mature in a specific capability as needed.

Hence, for those organizations with a significant existing investment in an on-premises PMIS and the associated integration that this usually entails, a cloud discussion almost always needs to start from the process discussion.  What is your process?  How do the tools you currently have enable this process?  How do those tools block it?  Does the process truly support the project management needs of the enterprise?  It’s only after having this discussion that a new system architecture may be designed – one which enables the nimble enterprise.

Here, we run into some of the typical challenges of a cloud discussion.  Typically, the cloud agenda is part of the IT agenda – a means of reducing the operational costs associated with maintaining rooms full of big iron.  The processes we are supporting, however, are almost always owned by a PMO or PMOs.  To have the discussion of how to move to the cloud effectively, IT must engage the PMOs to reassess process and ensure that the results can be moved to the cloud efficiently.

Prior postings on architecting a PMIS (here and here)…and a couple of posts on mapping EPM tools to process architecture (here and here).


If It’s Mardi Gras Time, Let’s Talk Construction Scheduling and Project Controls

That’s right, I’m excited to be heading out to New Orleans for the Construction CPM Conference…where I will attempt to strike an appropriate balance between “Three Full Days of Study and Training in the Critical Path Method of Planning & Scheduling Analysis” and enjoying the run-up to the annual festivities (in a restrained and tasteful fashion, as always).

Come look for me in the vicinity of either the Microsoft booth or the bar – depending on what time of day it is.

Looking for some more posts on leveraging Microsoft Project for construction scheduling and/or implementing project controls in an environment that supports the same?

  1. When Deterministic Scheduling Meets Kanban
  2. Defining an Update Methodology (Parts I-V)
  3. The Importance of Cost Abstraction (Parts I, II)
  4. The Major Milestone Report in Project Online
  5. Cumulative Milestone (or Task) Reporting in Project Online
  6. Generating a Baseline Execution Index Report
  7. First Look at Geographic Reporting with Microsoft Excel 2013

Reporting in Project Online…or…How to Make a Technical Presentation Fun and Interesting

Apparently, it takes lots of salty Danish licorice.  Take a look at this presentation from Microsoft Ignite last week where Allan Rocha and Mike McLean do all the heavy lifting, and I sit in the back and make snide comments.

Maybe that’s the path forward for technical presentations.  One person to present, and the other person to make fun of the presenter.  Not sure all of the jokes worked (for instance, the joke about the Danish band fell a bit flat to the audience), but in the spirit of continual improvement, let us know what you think.


Querying Project Online with Natural Language

One of the most useful things to surface in the latest Microsoft BI offerings is the Natural Language Query (NLQ) functionality inherent in the Power BI online offering.  NLQ allows users to simply type questions into the interface to generate dynamic reports.


The beauty of this is that it’s not limited to Project Online data – or even a single source of data.  I can add multiple sources of data, create a data model and then go to town on asking questions of the data, literally.

This post will walk you through how to get this up and running in Project Online.  This assumes that as part of your Project Online tenant, you have at least one license for the Power BI offering.

Create the Data Source

Creating the data source is relatively simple.

  1. Open Excel
  2. Add an OData connection.  In this case, I actually added a couple of tables (Project, Assignment, Resource).  Feel free to check out prior blog posts on the topic here.
  3. Ensure the tables have relationships.
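The connection steps above boil down to pointing Excel at the ProjectData feed URLs for each table.  As a rough sketch of what those URLs look like, here’s how they’re composed – note that the contoso tenant URL and the helper function are made up for illustration; `_api/ProjectData` is the standard Project Online feed root:

```python
# Sketch of how the ProjectData feed URLs behind the Excel OData
# connection are composed.  The tenant URL is a made-up example;
# substitute your own PWA site.
PWA_URL = "https://contoso.sharepoint.com/sites/pwa"

def odata_feed_url(table, select=None, filter_=None):
    """Build a ProjectData OData feed URL like the ones Excel consumes."""
    url = f"{PWA_URL}/_api/ProjectData/{table}"
    options = []
    if select:
        options.append("$select=" + ",".join(select))
    if filter_:
        options.append("$filter=" + filter_)
    return url + ("?" + "&".join(options) if options else "")

# The three tables connected in step 2 above.
for table in ("Projects", "Assignments", "Resources"):
    print(odata_feed_url(table))
```

Trimming the feed with `$select` and `$filter` keeps the workbook small – something you’ll appreciate once the scheduled refresh is in play.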

Navigate to your PWA site and add the report to the Power BI app.


Set Up the Refresh

This took a couple steps and some troubleshooting before I got it all working.  The basic instructions can be found in the following three blog posts.

  1. Get the Site Collection Feature turned on to allow dynamic refreshes
  2. Background Refresh Your Project Online Reports (Peter Charquero Kestenholz)
  3. Setting up the Scheduled Refresh (Peter again)

To check if the refresh is working, try refreshing the data connections in Excel Online.  If that works, the scheduled refresh should work.

Set up NLQ

Click on the ellipses next to the report, and add the report to Q&A.


From there click on the Ask with Power BI Q&A in the top right of the page.


Throw some queries into the search interface to confirm data is being returned.  You may note that it’s showing the wrong fields by default, i.e. it’s showing the Project ID instead of the Project Name.  We’ll take care of this in the next step.

Training The Language Model

Open the data source in Excel again, and navigate to PowerPivot.  You’ll see that you can right click on the column headers to select the option to Hide from Client Tools.


The other thing you can do to train the language model is to change the default field set.  You’ll see this option in the Advanced tab in PowerPivot.


Finally, you can go back in and add aliases for many of the fields.  Note that for this option to even be available in PowerPivot, you need to have installed Excel from the Office Online Click to Run installation option.  I don’t have that available right now, so I’ll provide you with the link to the reference: 

Luckily, there’s also a Web-based option for training your Power BI model.  Buried in the Power BI Site Settings, underneath the Q&A tab, you get the option to Optimize for Q&A.


That yields an interface to add synonyms and other key elements to help your users connect with their data.


Finishing Touches

Add a link to Q&A onto your Project Web Access and feel free to fry all of your report developers’ squid.



Business Intelligence: Project Online vs. Project Server 2013 On-Premises

This question comes up on such a regular basis, that I decided to blog it up this weekend.  What’s the difference between the business intelligence story in Project Online vs. Project Server 2013 On-Premises?  To be honest, answering this question goes a long way to answering the question of what the difference is between Project Online and Project Server, period.  The main difference is business intelligence.

First off, a bit of term definition:

  • Project Online – the offering provided by Microsoft that’s really Project Server 2013 + whatever upgrades have happened in the Online product between when 2013 was released to market and the date of this blog post.  (Watch the comments for updates).
  • Project Server 2013 – well, Project Server 2013, as deployed on premises – again, assuming all of the updates as of this blog posting.

Note that the reason I have to be careful about term definitions is that much of the v.Next functionality for Project Server on premises is being released right now in the online environment.  I would expect many of these features to first appear online, then migrate over to the on-premises world – which will make this post outdated pretty soon.

Before reading any further, feel free to check out some of my other blog posts on reporting options available both online and on-premises.

BI Components

The following components come into play when discussing the difference between online and on-premises reporting:

  1. Data Source – the underlying raw data related to projects, resources, etc.  This data source may be real time, or for various reasons, replicated to a secondary data source using a timed process.
  2. Data Model – the relationships and hierarchy between the various data points.  For example, I may have three resources assigned to a project that is supporting my drilling operations.  That would necessitate pulling data from the resource table, assignment table, project table, and potentially additional tables related to workflow.
  3. Reporting Layer – the tools used to actually generate the reports.
  4. Presentation Layer – the platform used to surface the reports to the general users.

The Basics: Data Access

Project Server stores everything in a set of tables within a SQL database.  Hence, any reporting tool that can access SQL (which is pretty much all of them) can access and report on Project Server data.

Project Online doesn’t allow direct access to SQL data, as the database instance is a shared environment, and thus users could theoretically impact other tenants if they had direct access.  For Online, the preferred reporting interface is OData, an RSS-like data feed.  Open an OData-compliant reporting tool (like Excel 2013), point it at the feed URL, and you have the data (albeit not the data model – which is a different story).
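To make “point it at the feed URL” a little more concrete, here’s a minimal sketch of consuming such a feed.  The payload is a hand-made stand-in in the verbose JSON shape – the live feed defaults to Atom XML and requires authentication, so nothing here hits a real tenant:

```python
import json

# Hand-made stand-in for a ProjectData response in the verbose JSON
# shape ("d"/"results"); field names are illustrative, not a schema
# reference.
sample = """{
  "d": {"results": [
    {"ProjectName": "Rig Upgrade", "ProjectPercentCompleted": 40},
    {"ProjectName": "Pipeline Survey", "ProjectPercentCompleted": 85}
  ]}
}"""

def odata_rows(payload):
    """Pull the entity rows out of a verbose-format OData JSON payload."""
    return json.loads(payload)["d"]["results"]

for row in odata_rows(sample):
    print(row["ProjectName"], row["ProjectPercentCompleted"])
```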

As many clients still prefer to report against SQL, Microsoft has recently released (via Cumulative Update) an SSIS upgrade that will allow organizations to set up timed reporting jobs to import OData feeds into an on-premises SQL database – thus allowing the use/migration of existing reports that look for SQL data.
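Conceptually, each run of that timed job does something like the following sketch.  SQLite stands in for the on-premises SQL database, and the rows are hard-coded where SSIS would actually read the OData feed:

```python
import sqlite3

# SQLite stands in for the on-premises reporting database that the
# timed job loads; the table and rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Projects (ProjectId TEXT PRIMARY KEY, ProjectName TEXT)")

def load_snapshot(rows):
    """One job run: replace the reporting copy with the latest feed pull."""
    with conn:
        conn.execute("DELETE FROM Projects")
        conn.executemany("INSERT INTO Projects VALUES (?, ?)", rows)

# Rows as they might come back from the ProjectData feed.
load_snapshot([("p1", "Rig Upgrade"), ("p2", "Pipeline Survey")])
print(conn.execute("SELECT COUNT(*) FROM Projects").fetchone()[0])  # 2
```

The full-replace approach is the simplest design; an incremental merge would refresh faster but requires tracking change timestamps in the feed.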

The issue, of course, is that the SSIS integration reporting isn’t quite real time – or it’s a mix of real time and not-so-real time.  Reports requiring a limited data set can pull from the live online OData.  Reports requiring a lot of data, i.e. timesheet reporting, etc., can pull from the SQL database.  The main caveat here is that if you’re the kind of organization that likes to generate reports, spot mistakes, republish projects, and regenerate reports, latent data may not work for you.

If, on the other hand, you feel the savings from going online offset the issues with not having real-time data, then the Online offering may be appropriate for you.

Scenario Comparison

Hence, we actually have three scenarios to compare.  Note that these may not be mutually exclusive – as you can mix and match some of the functionality depending on your needs.  You can also have hybrid online/on-prem environments if there’s something available in one that you absolutely need to support your organization.

  1. Project Server (On-Premises)
  2. Project Online (Native OData)
  3. Project Online (SSIS Integration)
| Scenario | On-Premises | Online (OData) | Online (SSIS) |
| --- | --- | --- | --- |
| Data Source | SQL (real time) | OData (real time) | OData (real time – limited data set); SSIS (sync’d via timer) |
| Data Model | SQL | PowerPivot (Excel) | PowerPivot (Excel); SQL |
| Reporting Tools | Excel, SSRS, standard reporting tools | Excel 2013; other tools that support OData (limited availability) | Excel, SSRS, standard reporting tools |
| Presentation Layer | SharePoint | SharePoint (add’l options to support iPad compatibility with HTML5) | SharePoint (add’l options to support iPad compatibility with HTML5) |
| Natural Language Queries (Power BI) | Not available on-prem | Power BI portal | Power BI portal |

And Another Thing….Power BI Sites

The term Power BI is confusing, as it is often used for both the Microsoft Power BI offerings (Power Pivot, Power View, Power Maps) and the Power BI sites.  As of this writing, most of the Power BI suite is available online or on-premises, but the Power BI sites are only available online.  Power BI sites are SharePoint portals, available under an à la carte licensing scheme, that provide the following functionality:

  1. Data Feeds – configurable to integrate both on-premises and online data.
  2. Power View reporting – similar to what’s available in SharePoint.
  3. Natural Language Querying – which is one of the more exciting things to be happening in the BI world, i.e. the ability to create data models and allow users to enter questions into the browser window – which the system answers.  For example, you can type something like “Show me all projects starting in Botswana in 2015,” and the system will automatically spit out a map of Botswana showing the location of all projects starting in 2015.

That’s the main round up of differences.  Did I miss anything?  Feel free to drop me a line in the comments section.


Configuring OData Refresh in Project Online (Revisited)

When Project Online was first released, there was no mechanism to refresh Excel reports consuming OData feeds.  Users were encouraged to open the Excel reports and manually refresh the reports.  Macros were written.  Hair was pulled in frustration.

Somewhere after the release, Microsoft came up with a clever solution – which involved basically routing the Excel report to an Azure server, which then triggered the refresh and returned the data to the Excel report.  The instructions for configuring this were posted on a Microsoft blog under a somewhat less-than-obvious title:

Somewhere after that was posted, the configuration became a site collection feature.  Hence, if you’re using Project Online right now, you need to activate this at the PWA site collection level.

To get there, go into site settings for the PWA main site, and select the option for Site Collection Features.


The one you’re looking for is entitled Project Web App Permission for Excel Web App Refresh.  Activate that and you should be good to go.


For more official architecture type stuff, take a look at this link:


Project Conference 2014 BI Linkfest

Thanks, everyone, for coming out to the presentation Mike McLean and I gave on Business Intelligence in Project Server 2013.  Per some of the comments received, I wanted to provide a couple of links to blog posts supporting the demos we delivered:

  1. LINQPad and Office 365 (make sure to check the comments section)
  2. Major Milestones and OData
  3. Cumulative Milestone Report and OData
  4. Project Reporting in Power View Maps

…with additional blog posts forthcoming on Power View for SharePoint and SSIS against OData.

For those of you using on-premises versions of Project Server 2010 or 2013, I would encourage you to take a look at the following links for additional ideas, resources, and recordings of previous sessions with a different set of demos:

  1. SharePoint Conference 2012 BI Demo Links
  2. Project Conference 2012 BI Demo Links (Part 1)
  3. Project Conference 2012 BI Demo Links (Part 2)
  4. Project Conference 2012 BI Demo Links (Part 3)

Metrics Create a Shared Problem Space

I’ve been on a bit of an Argyris reading jag after putting together materials for an upcoming collaboration presentation.  Last night, I was reading his book about how to assess the quality of management consulting advice, and a couple things clicked for me.

For those of you not familiar with Chris Argyris, he was a pioneer in the area of learning organizations.  As such, a lot of his work was done on internal defensive mechanisms, or obstacles to learning.  The basic premise is that when confronted, individuals tend to fall back on positional arguments, and ignore internal discrepancies in logic between espoused goals and demonstrable behavior.

What does that mean in my world?  One of the classic reasons to implement Project Server is to address the IT resource issue, i.e. the business keeps asking IT to do more and more, IT keeps underperforming and raising costs, and everyone gets sucked into a downward value spiral.

The argument tends to retreat to positions (paraphrased from the book):

IT says “We don’t have the resources.”

The business says, “You’re not delivering fast enough.”

….and IT rejoins with “You guys don’t even know what you want.”

These become positions.  They become divisions between “us” and “them.”  We retreat into a shell that states “They just don’t get it.”

These positions are what Argyris describes as untested assertions, and therefore subject to review and analysis.  This is when the EPM consultant comes into the picture.  One of the IT managers calls up their local consultant and within 3-6 months, they have a tool in place that captures the resource capacity and demand – and rolls it all up in a bunch of shiny, sexy reports.

So far so good.  We are moving from the world of positions and untested assertions to real data.  The question is what we do with this data.  Do we take the data into a confrontational meeting with the business and throw it on the table with an emphatic “See, I told you we don’t have the resources!”  Do that, and the business will naturally fall into their own defensive mechanisms and question the data validity, ignore the data wholesale, or pull the old “Well, if I can’t get what I want from you, I’ll build my own IT department.”

Again, these are positions.  The solution is to avoid positions and work to develop a common problem space.  Each group has its own problems and there are any number of individual solutions to these problems.  The trick is to identify a common set of these problems, and then to prioritize them as a group.  The data should inform this discussion.  The data creates this shared problem space by illuminating to all parties which statements are untested assertions and which statements are driven by actual data.

Does this change the data or the reports that get generated?  Probably not.  Really, this is an attitude adjustment.  The question the manager should be asking is not “how do I walk into that meeting and establish our shared goals?” (hint: supporting the value chain), but how do we use the data to develop a consensus as to what problem we’re actually trying to solve?  The shared goals discussion should have happened before the reports were even developed – as the reports illustrate the obstacles to achieving those goals.

Take that thought process a bit farther, and don’t wait till you have the shiny reports to show the business and frame that conversation.  Get them involved as early as possible.  Have them engage in the discussion as to what reports they would like to see to assess whether or not their behavior is driving the delivery issues.  Do that, and you’ll be much farther along in defining the shared problem space than if you wait till the very end of an engagement.  Do that, and the reports almost become superfluous as we’ve already developed consensus around what the problem actually is.

Once you’ve agreed on the problem, the solution is easy.


BEI Report with OData and PowerQuery

When Project Server 2013 first came out in online and on-premises versions, I began porting some of my demo reports over to the new online environment.  The goal was to see what worked, what didn’t, and what new skills I would have to acquire to achieve reporting parity between the online and on-premises worlds.


In general, I had good success, although I had to come up with a number of workarounds to integrate the data.  There was one report, however, that I could never get to work with the OData feed and Excel: the Baseline Execution Index (BEI) report.  The basic issue was that the underlying query for this report requires a UNION ALL command – essentially taking two different data sets and merging them into a master data set.

PowerPivot vs. PowerQuery

In PowerPivot natively, this is actually quite hard.  What you need to do is assemble a master set of primary keys – then merge the two data sets into the master set.  It sounds simple(ish) in theory, but it always eluded me.  In fact, that seemed to be a flaw in PowerPivot (or in my understanding thereof): UNION ALL procedures were tough, if not impossible, to implement.
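For the curious, here’s what UNION ALL means in miniature – plain Python, nothing PowerPivot-specific, with toy rows standing in for the real task data:

```python
# Two result sets with identical columns, as in the BEI report.
# Rows are (project, finish month) - toy data for illustration.
baseline = [("Proj A", "2014-01"), ("Proj A", "2014-02")]
actual   = [("Proj A", "2014-01")]

# UNION ALL keeps every row from both sides - a plain concatenation:
union_all = baseline + actual

# UNION, by contrast, de-duplicates:
union = sorted(set(baseline) | set(actual))

print(len(union_all), len(union))  # 3 2
```

The distinction matters here because the report needs to count every row, including duplicates, which is exactly what PowerPivot made difficult.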

Hence, I decided that this report would make a good guinea pig for playing around with PowerQuery, a new tool for Excel that, as of this writing, is still in Beta.  I’m still wrapping my head around how PowerQuery plays with PowerPivot, but the basic concept appears to be as a front end, i.e. PowerQuery is what you use to get the data into your report – with some reasonable formatting and massaging.  PowerPivot is what you use to hook the information together into useful data models.  That being said, it’s a bit of a blurry distinction as PowerPivot also can import data directly – just not as effectively or with all of the bells and whistles to make your life easier.  (The UNION ALL issue being a case in point.)

Hacking PowerQuery Authentication

The first thing I had to do was connect PowerQuery to the OData feed – which required a workaround using Fiddler, documented by Peter Holpar here.  I’m hoping that as of RTM, this step will no longer be required.

Defining the Data Sets

The next thing we have to do is define our data sets.  To create this report, we essentially have two data sets:

  1. The actual finish dates for the completed tasks in our schedules.
  2. The baseline finish dates for the tasks in our schedules.

The report works by totaling the number of tasks that are supposed to finish in a given month and comparing that to the total number of tasks that actually finished in the month.
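That totaling logic can be sketched in a few lines of Python.  This is a toy illustration with hard-coded months; the real report derives these lists from the two OData queries shown below:

```python
from collections import Counter

# Toy task lists keyed by finish month - illustrative data only.
baseline_finishes = ["2014-01", "2014-01", "2014-02", "2014-02", "2014-02"]
actual_finishes   = ["2014-01", "2014-02", "2014-02"]

def bei_by_month(baseline, actual):
    """BEI: tasks actually finished divided by tasks planned to finish."""
    planned, done = Counter(baseline), Counter(actual)
    return {month: done[month] / planned[month] for month in sorted(planned)}

print(bei_by_month(baseline_finishes, actual_finishes))
# {'2014-01': 0.5, '2014-02': 0.6666666666666666}
```

A BEI at or above 1.0 means the schedule is keeping pace with its baseline; below 1.0, tasks are finishing more slowly than planned.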

Here are the two URLs I used to create the data sets.  (ProjectNA is my somewhat unimaginatively named online tenant.)  These URLs were generated from LINQPad:

…$filter=TaskActualFinishDate ne null&$select=ProjectName,TaskActualFinishDate

…$filter=BaselineNumber eq 0&$select=ProjectName,TaskBaselineFinishDate

Opening PowerQuery

I now fire up Excel and navigate to PowerQuery.  I select the option to import data from an OData feed.  (Note the list of options to import data from – including Facebook…which is interesting, but not entirely relevant for me at the moment.)


Paste in the URLs we developed in the last section.  (You’ll have to do this twice – once for each data set.)


In this case, I’m using Windows authentication via the Fiddler hack.

That should yield something like this:


Right click on each of the data sources to the right to modify the underlying query.


We’re going to add another field – which will be used to group the data.  To do this, select the option to add a Custom Column in the top ribbon.


…which adds a new column to our query.  I’ll call this “TotalDate.”


Repeat the same for the Actual Finish date query.  Now we will combine the two tables into a single data set using the Append command on the ribbon.


It doesn’t really matter which table gets appended to which table.


And you now have a master data set.  I add another column to this data set to capture our Target data within the chart.


Summarize the resulting data set into a PivotTable.  Within the PivotTable, we’re going to add a new calculated field, BEISource:


This will generate the ratio of tasks actually finished against the tasks that were supposed to finish.  In this case, it’s comparing the date value for the two items instead of the actual item count – which is more or less the same for our purposes.


Throw it into a column chart and format per the specs here, and you should now have something like this.



PowerQuery and OData Workaround

Working on a new set of demos for Mike and my upcoming Project Conference presentation.  Figured I’d mention this little tidbit….

As I was trying to use PowerQuery to surface OData from Project Online, I ran into the same issue that I’ve had with LINQPad, i.e. not being able to authenticate to an Office 365 client.

I tried the Fiddler trick as documented by Peter Holpar here, and everything worked just fine.  You will need to turn on the HTTPS decryption option he mentions in his post.

Note that PowerQuery’s still in beta, so perhaps they’ll have this issue resolved before RTM.


In the meantime, stay tuned for a PowerQuery post in the nearish future.
