Clear Goals and Fuzzy Roles

Fresh out of college, shortly after I started a new job, my old boss sat me down and gave me some advice.  “To succeed in this job,” he said, “you’ll need a high tolerance for ambiguity.”  That’s a theme that keeps recurring of late as I prepare for a couple of presentations at the upcoming Project Conference.  Specifically, it’s a theme I’ve been revisiting while preparing to co-present a talk on how the PMO can both be collaborative and help an organization become more collaborative.

As part of that preparation, I’ve been reading Chris Argyris’ Flawed Advice and the Management Trap: How Managers Can Know When They’re Getting Good Advice and When They’re Not.  I also picked up a used copy of Suresh Srivastva’s Executive Integrity: The Search for High Human Values in Organizational Life.  While reading an essay in the latter by Steven Kerr on integrity in effective leadership, a couple of things clicked with some of the discussions I had at the PMO Symposium in San Diego last November.

One of the cornerstones of having integrity as a leader, Kerr writes, is to remove ambiguity around the organizational goals.  Everyone should understand what the organization wishes to achieve, and how it arrived at those decisions.  According to him, leaders should “clarify organizational values and priorities and individual rights and obligations.”  (He also writes about maintaining integrity between stated goals and performed actions – but that’s a different discussion).

So in this sense, clarity and definitions are good.  Argyris, however, brings up an example of where clarity can actually be detrimental.  His book discusses the difference between internal and external motivation.  Internal motivation is driven by a need to achieve goals.  External motivation is driven by a desire to look good – which, in the corporate world, often means hitting our target metrics.  The problem with role definitions, he writes, is that when they are overly precise, they remove the element of internal motivation from our work.  They reduce our jobs to specific processes that must be followed.  Inevitably, that replaces internal motivation with external motivation.

In reconciling those statements, we can derive some guidance on how to construct a high-performing organization.  We clarify what we are hoping to achieve.  We maintain ambiguity about how to get there and about how people will interact to achieve it.  We clarify the “what” while deferring the “how” to the team.  This is how collaborative cultures are born.

Note, however, that this does not constitute a wholehearted endorsement of vague position descriptions.  That sort of thing would never fly in a team that is still in the “storming” stage of team development, i.e. when each individual is coming from a different organizational or departmental culture.  Vague position descriptions only work in a team where everyone agrees on what they’re trying to accomplish – and where the desire to achieve the goal (internal motivation) supersedes the petty disagreements on how to achieve it.

So what’s the PMO’s role in all of this?  That speaks to the changing nature of the PMO.  Increasingly, I see the PMO discussed as a cultural change agent.  The PMO’s role is one of communication – both vertically and horizontally within the organization.  Its job is to maintain the prioritization structure for new work and to communicate how projects align with that structure.  The PMO, then, is chartered with ensuring consistency and clarity between the stated goals of the organization and its actual work.  This is achieved through project selection mechanisms, through strategic scorecards, and through simple things such as maintaining a top ten list of all projects within the organization.

The PMO often provides both the catalyst to define organizational clarity and the vehicle to communicate that clarity through clear project selection and prioritization mechanisms.  That is one of the building blocks of a high-performing team – and, by definition, a collaborative culture.


Metrics Create a Shared Problem Space

I’ve been on a bit of an Argyris reading jag since putting together materials for an upcoming collaboration presentation.  Last night, I was reading his book on how to assess the quality of management consulting advice, and a couple of things clicked for me.

For those of you not familiar with Chris Argyris, he was a pioneer in the field of learning organizations.  As such, a lot of his work focused on internal defensive mechanisms, or obstacles to learning.  The basic premise is that when confronted, individuals tend to fall back on positional arguments and ignore discrepancies between their espoused goals and their demonstrable behavior.

What does that mean in my world?  One of the classic reasons to implement Project Server is to address the IT resource issue, i.e. the business keeps asking IT to do more and more, IT keeps underperforming and raising costs, and everyone gets sucked into a downward value spiral.

The argument tends to retreat to positions (paraphrased from the book):

IT says “We don’t have the resources.”

The business says, “You’re not delivering fast enough.”

…and IT rejoins with, “You guys don’t even know what you want.”

These become positions.  They become divisions between “us” and “them.”  We retreat into a shell that states “They just don’t get it.”

These positions are what Argyris describes as untested assertions, and they are therefore subject to review and analysis.  This is when the EPM consultant comes into the picture.  One of the IT managers calls up their local consultant, and within 3-6 months they have a tool in place that captures resource capacity and demand – and rolls it all up in a bunch of shiny, sexy reports.

So far so good.  We are moving from the world of positions and untested assertions to real data.  The question is what we do with this data.  Do we take the data into a confrontational meeting with the business and throw it on the table with an emphatic “See, I told you we don’t have the resources!”?  Do that, and the business will naturally fall back on its own defensive mechanisms: question the validity of the data, ignore the data wholesale, or pull the old “Well, if I can’t get what I want from you, I’ll build my own IT department.”

Again, these are positions.  The solution is to avoid positions and work to develop a common problem space.  Each group has its own problems, and there are any number of individual solutions to them.  The trick is to identify a common set of these problems and then to prioritize them as a group.  The data should inform this discussion.  The data creates the shared problem space by illuminating to all parties which statements are untested assertions and which are driven by actual data.

Does this change the data or the reports that get generated?  Probably not.  Really, this is an attitude adjustment.  The question the manager should be asking is not “how do I walk into that meeting and establish our shared goals?” (hint: supporting the value chain), but “how do we use the data to develop a consensus as to what problem we’re actually trying to solve?”  The shared goals discussion should have happened before the reports were even developed – the reports exist to illustrate the obstacles to achieving those goals.

Take that thought process a bit further: don’t wait until you have the shiny reports to show the business and frame that conversation.  Get them involved as early as possible.  Have them engage in the discussion of what reports they would like to see to assess whether or not their behavior is driving the delivery issues.  Do that, and you’ll be much farther along in defining the shared problem space than if you wait until the very end of an engagement.  Do that, and the reports almost become superfluous, as we’ve already developed consensus around what the problem actually is.

Once you’ve agreed on the problem, the solution is easy.


Uncovering Your Latent Resource Capacity

‘Ware the resource hoarders in your organization.  You’ll know them because they’re the ones who refuse to provide detailed task estimates for their projects.  When pressed for details on their resource plans, they’re like as not to produce an Excel chart that shows each resource allocated evenly at 32.75 hours a week across the length of the project.

If you see a spike in a given week, you might ask something like “What are they doing in March?”…only to be told that they’ll be working on “Release stuff.”

“Great,” you say, “Where’s the schedule that shows when the release will occur so we know if that spike in resource demand will move out if the release moves?”

And the hoarder responds with, “Well, we’re not getting that detailed with our planning.”

The issue here is that the hoarder has been given resources by the organization.  “You will have X resources for 6 months,” the organization declares.  The response, logically, is to ensure that, on paper at least, we will use those resources.

The problem here is severalfold:

  1. Resource hoarding becomes a self-fulfilling prophecy.  If I commit to a resource for 40 hours a week and I don’t plan what that resource will work on, I end up filling that resource’s time with make-work and inanities.  Mind you, I’m not saying that committing to an FTE is bad – just that it needs to be justified.  (And yes, I would consider a formal Agile methodology justification enough since, when implemented properly, it puts in place the controls required to ensure the resource is working on real work.)
  2. The hoarders are never pressured to link their resource plans to the actual schedule – meaning that changes to the schedule don’t flow through to the resource plan.  As a result, resources may not be available when the project needs them – or worse, they sit on extra capacity that could be used elsewhere while they wait for a delayed deliverable to appear in the queue.

The thing is, in organizations that tolerate and/or encourage resource hoarding, you’ll have all sorts of latent resource capacity sitting under the radar.  That capacity represents a tremendous opportunity cost in the work that is not getting done – because the resource is dedicated to a project that may not be really using them.  Start prying into resource visibility, and you’ll be shocked at how much excess capacity scurries out from under a rock.
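To put some numbers on that, here’s a minimal sketch of the arithmetic in Python with pandas – all names and hours are hypothetical, and the point is simply that latent capacity is the gap between what’s committed on paper and what’s actually scheduled:

import pandas as pd

# Hypothetical weekly data: hours committed to each resource on paper
# vs. hours actually tied to scheduled, deliverable-driven tasks.
committed = pd.DataFrame({
    "resource": ["Analyst A", "Analyst B", "Developer C"],
    "committed_hours": [40.0, 40.0, 32.75],
})
planned = pd.DataFrame({
    "resource": ["Analyst A", "Analyst B", "Developer C"],
    "scheduled_hours": [38.0, 16.0, 32.75],
})

# Latent capacity: the committed hours with no real work behind them.
capacity = committed.merge(planned, on="resource")
capacity["latent_hours"] = capacity["committed_hours"] - capacity["scheduled_hours"]
print(capacity[capacity["latent_hours"] > 0])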


BEI Report with OData and PowerQuery

When Project Server 2013 first came out in online and on-premises versions, I began porting some of my demo reports over to the new online environment.  The goal was to see what worked, what didn’t, and what new skills I would have to acquire to achieve reporting parity between the online and on-premises worlds.

[screenshot]

In general, I had good success, although I had to come up with a number of workarounds to integrate the data.  One report, however – the Baseline Execution Index (BEI) report – I could never get to work with the OData feed and Excel.  The basic issue is that the underlying query for this report requires a UNION ALL command – essentially taking two different data sets and merging them into a single master data set.
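For anyone who hasn’t bumped into it, UNION ALL simply stacks two result sets of the same shape on top of one another, duplicates and all.  Here’s the idea as a quick pandas sketch (the frames and column names are made up for illustration):

import pandas as pd

actuals = pd.DataFrame({
    "ProjectName": ["Project 1", "Project 2"],
    "FinishDate": pd.to_datetime(["2014-03-05", "2014-03-20"]),
    "Source": "Actual",
})
baselines = pd.DataFrame({
    "ProjectName": ["Project 1", "Project 2"],
    "FinishDate": pd.to_datetime(["2014-03-01", "2014-04-15"]),
    "Source": "Baseline",
})

# UNION ALL: append one set to the other, keeping every row.
master = pd.concat([actuals, baselines], ignore_index=True)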

PowerPivot vs. PowerQuery

In PowerPivot, natively, this is actually quite hard.  What you need to do is assemble a master set of primary keys, then merge the two data sets into that master set.  It sounds simple(ish) in theory, but it always eluded me.  In fact, it seemed to be a flaw in PowerPivot (or in my understanding thereof) that UNION ALL operations were tough, if not impossible, to implement.

Hence, I decided that this report would make a good guinea pig for playing around with PowerQuery, a new tool for Excel that, as of this writing, is still in beta.  I’m still wrapping my head around how PowerQuery plays with PowerPivot, but the basic concept appears to be that PowerQuery acts as a front end, i.e. it’s what you use to get the data into your report – with some reasonable formatting and massaging.  PowerPivot is what you use to hook the information together into useful data models.  That being said, it’s a bit of a blurry distinction, as PowerPivot can also import data directly – just not as effectively or with all of the bells and whistles that make your life easier.  (The UNION ALL issue being a case in point.)

Hacking PowerQuery Authentication

The first thing I had to do was connect PowerQuery to the OData feed – which required a workaround using Fiddler, documented by Peter Holpar here: http://pholpar.wordpress.com/2013/03/08/accessing-office-365-rest-services-using-linqpad.  I’m hoping that as of RTM, this step will no longer be required.

Defining the Data Sets

The next thing to do is define our data sets.  To create this report, we essentially need two data sets:

  1. The actual finish dates for the completed tasks in our schedules.
  2. The baseline finish dates for the tasks in our schedules.

The report works by totaling the number of tasks that were supposed to finish in a given month and comparing that to the number of tasks that actually finished in that month.
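In other words, BEI for a month is (tasks actually finished) divided by (tasks baselined to finish).  For clarity, here’s that logic as a small pandas sketch with invented dates:

import pandas as pd

baseline_finish = pd.Series(pd.to_datetime(
    ["2014-01-10", "2014-01-24", "2014-02-07", "2014-02-21"]))
actual_finish = pd.Series(pd.to_datetime(
    ["2014-01-15", "2014-02-03", "2014-02-25"]))

# Count task finishes per month for each data set.
planned = baseline_finish.dt.to_period("M").value_counts().sort_index()
finished = actual_finish.dt.to_period("M").value_counts().sort_index()

# BEI: actual finishes over baselined finishes, month by month.
bei = finished.reindex(planned.index, fill_value=0) / planned
print(bei)  # 2014-01: 0.5, 2014-02: 1.0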

Here are the two URLs I used to create the data sets.  (ProjectNA is my somewhat unimaginatively named online tenant.)  These URLs were generated from LINQPad.

https://projectna.sharepoint.com/sites/pwa/_api/ProjectData/Tasks()?$filter=TaskActualFinishDate ne null&$select=ProjectName,TaskActualFinishDate

https://projectna.sharepoint.com/sites/pwa/_api/ProjectData/TaskBaselines()?$filter=BaselineNumber eq 0&$select=ProjectName,TaskBaselineFinishDate
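If you’d rather pull those feeds outside of Excel, something like the following sketch (Python with the requests library) shows the shape of the calls.  The big caveat is authentication – getting valid Office 365 credentials onto the session is exactly the hard part the Fiddler hack works around, so treat the session setup here as an assumption:

import requests

BASE = "https://projectna.sharepoint.com/sites/pwa/_api/ProjectData"

# Assumed: a session already carrying valid Office 365 auth cookies/headers.
session = requests.Session()

def get_feed(entity, filter_expr, select_expr):
    resp = session.get(
        f"{BASE}/{entity}",
        params={"$filter": filter_expr, "$select": select_expr},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    body = resp.json()
    # Depending on the OData version, rows live under "value" or "d.results".
    return body.get("value", body.get("d", {}).get("results"))

actuals = get_feed("Tasks",
                   "TaskActualFinishDate ne null",
                   "ProjectName,TaskActualFinishDate")
baselines = get_feed("TaskBaselines",
                     "BaselineNumber eq 0",
                     "ProjectName,TaskBaselineFinishDate")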

Opening PowerQuery

I now fire up Excel and navigate to PowerQuery.  I select the option to import data from an OData feed.  (Note the list of sources you can import from – including Facebook, which is interesting but not entirely relevant for me at the moment.)

[screenshot]

Paste the URLs we developed in the last section.  (You’ll have to do this twice – once for each data set.)

[screenshot]

In this case, I’m using Windows authentication via the Fiddler hack.

That should yield something like this:

[screenshot]

Right-click each of the data sources on the right to modify the underlying query.

[screenshot]

We’re going to add another field – which will be used to group the data.  To do this, select the option to add a Custom Column in the top ribbon.

[screenshot]

…which adds a new column to our query.  I’ll call this “TotalDate.”

[screenshot]

Repeat the same steps for the Actual Finish date query.  Now we will combine the two tables into a single data set using the Append command on the ribbon.

[screenshot]

It doesn’t really matter which table gets appended to which table.

[screenshot]

And you now have a master data set.  I add one more column to this data set to capture our Target data for the chart.

[screenshot]

Summarize the resulting data set into a PivotTable.  Within the PivotTable, we’re going to add a new calculated field, BEISource:

[screenshot]

This generates the ratio of tasks actually finished to tasks that were supposed to finish.  In this case, it’s counting the date values for the two row types instead of the items themselves – which amounts to the same thing for our purposes.

[screenshot]
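If it helps to see that pivot logic spelled out, here’s the equivalent of the BEISource calculated field in pandas, continuing the made-up master table from the UNION ALL sketch earlier (TotalDate is the month bucket; Source flags each row as a baseline or an actual finish):

import pandas as pd

# One row per finish event from the appended master data set.
master = pd.DataFrame({
    "TotalDate": ["2014-01", "2014-01", "2014-01", "2014-02", "2014-02"],
    "Source":    ["Baseline", "Baseline", "Actual", "Baseline", "Actual"],
})

# Count rows per month and source, then take the ratio, i.e. BEISource.
counts = pd.crosstab(master["TotalDate"], master["Source"])
counts["BEISource"] = counts["Actual"] / counts["Baseline"]
print(counts)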

Throw it into a column chart and format per the specs here, and you should now have something like this.

[screenshot]


PowerQuery and OData Workaround

Working on a new set of demos for Mike’s and my upcoming Project Conference presentation.  Figured I’d mention this little tidbit…

As I was trying to use PowerQuery to surface OData from Project Online, I ran into the same issue I’ve had with LINQPad, i.e. not being able to authenticate to an Office 365 tenant.

I tried the Fiddler trick as documented by Peter Holpar here, and everything worked just fine.  You will need to turn on the HTTPS decryption option he mentions in his post.

Note that PowerQuery’s still in beta, so perhaps they’ll have this issue resolved before RTM.

[screenshot]

In the meantime, stay tuned for a PowerQuery post in the nearish future.


Adding Users in a Test Project Server Environment

Ran into this problem a couple of weeks ago and figured it was worth a post.  I’d created a new Project Server instance in a test domain (tAD) to support training.  This domain had a one-way trust relationship with our production domain (pAD).

However, I kept running into an issue.  Whenever I’d go into Project Server (2010) and add a user from pAD, I would get the following error message.

[screenshot]

For the search engines…

The resource could not be saved due to the following reasons:

  • The NT account specified is invalid.  Check the spelling of the user name, verify that a valid domain name was included, and check that a duplicate domain was not used.

I’m sure there’s a more elegant solution, but the workaround we determined was to have each of the training students hit the new environment (which predictably resulted in them getting an Access Denied message).  Once they’d hit it at least once, I could safely add them to the Project Server user groups.


Operating Models and Portfolio Segmentation

Catching up on some of the comments that have been left on a couple of blog posts recently:

Prasanna Advi wrote:

“I think the key is ‘how’ do you decide what those top 10 (projects in the organization) are, and make every Business Unit agree to the same criteria. Especially, in the intersection of Business and IT. I think you should write about the ‘How’ to achieve strategic criterion alignment across a multi-business unit organization!!” on Does Your Project Matter?

And Josh Millsapps wrote:

“I am very interested in your upcoming post that ties in elements from what I assume is EA as Strategy by Ross and company,” on PMO Symposium 2013 Wrap Up: It’s a Translation Thing.

So there are a couple of concepts here that are all interrelated.  The first is portfolio segmentation.  The first step in portfolio management, writes Jeffrey Kaplan in Strategic IT Portfolio Management, is portfolio segmentation.  In this step, we take all of the projects and work authorized by the organization, throw it all onto a clean table, and start to organize it by the logical decision-making architecture required to optimize the work.  We may end up with a single portfolio.  More likely, we’ll end up with multiple portfolios.  The trick, Kaplan writes, is to ignore the current organizational structure and seek to discern the logical structure.

The second concept is the operating model, as depicted in Enterprise Architecture as Strategy (Ross, Weill, Robertson).  This was (and still is) a seminal book in defining organizational decision making.  The basic gist, as Josh Millsapps discusses in his webinar here, is that organizations tend to follow one of four operating models based on their requirements to integrate and standardize business processes across various business units.  (Standardize = follow the same process.  Integrate = different processes, but the resulting data must be married, i.e. different business units from the same company serving the same customer.)

The four models are listed below, and I’d recommend reading the book and/or watching the webinar for more information, as I won’t do them justice.

  • Coordination
  • Unification
  • Diversification
  • Replication

Now, how do we combine these two ideas?  The operating model provides the fundamental skeleton for portfolio segmentation.  Depending on how we define our model, we will need to segment our portfolios to support it.  If we have a Unification model, we would essentially have a single portfolio shared across BUs – or several portfolios split by process but still shared across BUs.  If we adhere to a Diversification model, we would instead have multiple portfolios residing within each of the BUs.
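As a toy illustration of that pairing (my shorthand, not anything from the book), the four models fall out of two yes/no questions – does the organization require high business process standardization, and does it require high integration – and each answer suggests a rough portfolio segmentation:

# Toy lookup: (high standardization?, high integration?) -> operating model,
# per Ross, Weill, and Robertson, plus my rough portfolio implication.
OPERATING_MODELS = {
    (True, True):   ("Unification", "single portfolio (or a few, by process) shared across BUs"),
    (False, True):  ("Coordination", "portfolios organized around shared data and customers"),
    (True, False):  ("Replication", "one portfolio template replicated per BU"),
    (False, False): ("Diversification", "independent portfolios within each BU"),
}

model, segmentation = OPERATING_MODELS[(True, False)]
print(f"{model}: {segmentation}")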

The trick, however – and this goes to Prasanna’s question – is that anytime we split portfolios into multiple segments, we need to retain an uber-portfolio of some sort, of the kind described in Kaplan and Norton’s Execution Premium.  This uber-portfolio, or STRATEX budget, or ePMO, or whatever we call it, exists to fill in the gaps between the portfolios.  This is where the big initiatives reside – the ones that change the organization and must be implemented across each of the portfolios (note: portfolios, not BUs).  For example, implementing or standardizing on a new operating model would live in this uber-portfolio.

Admittedly, that leaves us with a bit of a grey area as to whether smaller initiatives in the sub-portfolios that support the strategic design of the company should be part of the ePMO domain or relegated to the sub-portfolios – and I am sure there are different answers to that question.

Hence, we can always look for the top ten list of impactful projects and programs in the uber-portfolio.  By definition, that’s where they reside.  We can also perhaps find a top ten list of impactful projects and programs in each of the sub-portfolios.  Again, by definition, those won’t be as high a priority as the ePMO projects.
