The History of PPM – 2016 Q1 Edition

You know that video on YouTube?  The one where this guy presents the history of dance in 5 minutes?  This blog post kind of started as an homage to that, i.e. an attempt to take you through trends in the PPM space over the past several decades.  Part of the driver for writing this post is that I’m starting to see discussions turn to “the next big thing” in PPM, which is an early indicator of a shift change in PPM thinking….and a helpful reminder of the last time I listened to the NBT discussion.

Stage 1 – Project Execution

It all started with the push to enhance project execution.  Focus was placed on enforcing a consistent delivery methodology and on resource balancing, accompanied by religious wars around which execution methodology was most appropriate for any given scenario.  I’m not discounting the importance of any of this…..execution is important.  What’s happened, in tier 2 portfolios at least, is the growing recognition that execution is not the be-all and end-all of getting value from the project value chain.

After all…..why execute bad projects well when we can execute good projects poorly?  In the end, we still get more value.  And hey, if we can execute good projects well, then that’s pretty much the sweet spot.

Stage 2 – Project Selection (The “Pull” Model)

Hence we moved into the era of strategic drivers and pairwise comparison.  The goal was to take a passel of projects, throw them into an analytical engine and let them shake out into an optimized portfolio.  Couple this with a feedback mechanism from the project execution process, and we can ensure that our project portfolio is ever optimized.

Again….sounds great.  The model begins to erode, however, once we introduce benefits realization into the mix.  Benefits realization is the key to optimized portfolios.  Hence, every project must be tied to a benefits case.

The other challenge is that this required a regular cadence of project review and approval.  Many organizations don’t have the time to wait for a project to be proposed, go through a review, and then sit in a hopper with a number of other projects waiting to be evaluated and approved.  After all, that’s the only way to account for the opportunity cost associated with each project.

Stage 3 – Program Based Allocations

Continuing in our search for the optimal project portfolio, some organizations have turned to program-based allocation as a remedy to the slow pace of the traditional “collect and prioritize” model.  Funds are allocated to the program and, in turn, to projects that meet the needs of the program, on a regular cadence and with less dependency on evaluating every project against every other project.

Programs, when used in this fashion, i.e. essentially as a benefits realization framework, also provide the critical wrapper for project performance metrics.  We don’t need to track the benefits of each project, but rather those of the program in aggregate.

Arguably, ITIL was ahead of its time here, providing a way to wrap projects around the services offered to the business.

Again, this is a great approach and a significant step forward.  Where it falls short is the link to corporate strategy.  The portfolio management organization at this point has reorganized to work flexibly and bimodally, but is crucially still lacking a way to tie the execution work to the overall strategy.

Stage 4 – Project Identification (The “Push” Model)

Remember back in Stage 2 where our goal was to get a whole list of projects put together and prioritized?  Around that time was when the ideation vendors used to come in and pitch their products.  How do you prioritize a list of projects when you don’t even know whether the list is complete?  How do you methodically identify the gaps in performance and strategically develop the next innovative product offering?

Enter the world of capability-based planning and enterprise architecture.  Identify the goals of the organization.  Identify the capabilities required to support those goals.  Invest in strategic capabilities – by treating each capability as, in effect, a program.  This represents the next step in the evolution of portfolio management thinking, i.e. the critical link between the doing of portfolio management and the content of the portfolio itself.

Enter the next phase of the portfolio management evolution.  The portfolio management organization identifies the projects before they’re even requested. (Both hands leaving the temple in a “mind blown” expression.)

Stage 5 – Whither Next

So where do we go from here?  That’s a good question.  A couple thoughts…

  1. Identification of a flexible framework that allows us to adjust our portfolio management approach based on conditions in our industry or the economy.
  2. Incorporation of big data and the Internet of Things.  With ever cheaper access to analytics and benchmarking, organizations can shine a spotlight on their internal operations like never before, and use that data to create portfolios of prioritized projects designed to address those gaps.
  3. Movement away from the concept of projects entirely, focusing instead on making everything a small collection of process improvement efforts managed as part of a continual flow.  (Think about it: the move to the cloud may be the last big software project most organizations ever undertake.  From there on out, it’s all a gradual evolution to support new processes, punctuated by the occasional interruption caused by an acquisition or divestiture.)

Conclusion

As I was writing this, however, I realized something.  These stages don’t necessarily represent an evolution in maturity.  Rather, they represent a growing push to make work more relevant.  When that push starts at the bottom, i.e. at the level of the people doing the work, then what I’ve outlined above is a natural progression that many organizations move through.

What if we start at the top though?  What if we identify the strategy and then work our way downwards to design from scratch the optimal mechanism to operationalize that strategy?  Wouldn’t we end up pretty quickly with a design very similar to what I’ve outlined in Stage 4?

That then raises the question…..how often do companies take that top-down approach?  If, as I’ve observed, it’s not very frequent, why not?  Why is it always an approach of starting at the bottom and then extending our reach into larger and larger slices of the organization?  That’s a post for a different time.

Finally, yes, I know that image is still in your head.  Here’s a link to the Evolution of Dance video.  Always worth checking out.


The Cloud Means Never Being on the Cusp Again

The cusp of a new platform is a dangerous place to be.  In my world, it usually means that we’re looking to upgrade or replace our EPM platform and therefore are scaling down our investment in it until we can make the great leap forward.  The problem is, of course, that our processes don’t stop evolving.  They keep changing and moving.

Once an organization determines it’s on the cusp, it’s more likely to have a process/tool mismatch, where the two get out of sync.  That then results in a drop in user adoption, a lack of process controls, and a gradual descent into the same chaos the organization started from.  Being on the cusp hurts organizational performance.

Enter the cloud.  Being in the cloud means that the organization will never really have to worry about big bang upgrades again.  Instead, the focus can move from the technical to ensuring that the evolving processes are supported by the tools.  This tends to change the long-term support discussion.  Instead of planning every several years to reset the tools to match the processes, organizations in the cloud move to a constant, steady stream of tool changes that keep the two aligned.

This, I suspect, will be one of the bigger changes of moving to the cloud….the change in thinking about IT that comes from not having that periodic opportunity to remove the old and start with the new, that opportunity to get rid of the old architecture and move to a brand new one.


Setting Default Desktop Scheduling Options

One of the takeaways from the recent Construction CPM Scheduling Conference in New Orleans (other than a couple of hurricanes on Bourbon Street) was an acknowledgement that while Microsoft Project does indeed enable best practice scheduling, by default many of the required features are turned off.  This causes some frustration for the scheduler, who must hunt deep into the Options panel to identify and select the appropriate options for each new project.

To assist in this endeavor, I wrote this little macro.  It basically goes through the options and sets them to optimize the scheduling interface for detailed construction scheduling (and therefore may have to be tweaked to support project scheduling in other domains).  In the past, I’ve triggered this macro whenever I run other scripts on the schedule, i.e. I’ve written scripts to facilitate the update process….which call this routine before going into collecting the update data.

Did I forget a key setting?  Let me know and I’ll update it accordingly.

Sub ApplyDefaultSettings()

    Dim Continue As VbMsgBoxResult 'MsgBox returns a VbMsgBoxResult, not a String
    
    Continue = MsgBox("This macro will now apply the standard PMO settings to the project schedule.", vbOKCancel, "Confirm")
    
    If Continue = vbOK Then
        
        
        '1 - Project Settings
        With Application.ActiveProject
            .AutoTrack = True 'Updating task status updates resource status
            .MoveCompleted = True 'Move completed work prior to the status date.
            .MoveRemaining = True 'Move incomplete work after the status date
            .SpreadPercentCompleteToStatusDate = True 'spread %Complete to the status date
            .NewTasksCreatedAsManual = False 'Turn off manual tasks
            .DisplayProjectSummaryTask = True 'Display project summary task
            .AutoLinkTasks = False 'Do not automatically link tasks when added to the schedule
            .MultipleCriticalPaths = True 'Calculate multiple critical paths
        End With
        
        '2 - Display Settings
        Application.NewTasksStartOn pjProjectDate 'Sets new tasks to default to the Project Start Date
        Application.DisplayEntryBar = True 'Displays the Entry Bar
        Application.Use3DLook = False 'Turns off 3D which used to cause printing issues
        
        '3 - Gantt Chart Settings
        GridlinesEditEx Item:=12, NormalType:=3 'Set the Status Date line
        GridlinesEditEx Item:=12, NormalColor:=192
        GridlinesEditEx Item:=4, NormalType:=0 'Turn off the Current Date line
        GridlinesEditEx Item:=0, Interval:=3 'Set dotted lines on every third Gantt row
        GridlinesEditEx Item:=0, IntervalType:=3
        GridlinesEditEx Item:=0, IntervalColor:=8355711
        GridlinesEditEx Item:=13, NormalType:=3 'Set dotted lines on every vertical top tier column
        GridlinesEditEx Item:=13, NormalColor:=8355711
        
    End If

End Sub


Implementing EPM in Different Levels of Scheduling Maturity

I’ve been reading James O’Brien and Frederic Plotnick’s definitive book on construction scheduling, and I came across this passage that was rather thought-provoking (well, it is for me, but I’m a bit of a scheduling nerd):

“….the primary and secondary purpose of the [schedule] must be to promote the project.  A tertiary purpose to support cost control, facilities management, or home office concerns should be acceptable as long as such do not detract from the primary purpose.

“Enterprise Scheduling software and implementation must be carefully managed to prevent increasing the burden of such on individual projects.”

First off, I’ll emphasize that Enterprise Project Management (EPM) is a rather broad term encompassing many of the capabilities required to deliver projects effectively: portfolio optimization, business case development, business architecture, etc.  In this case, we are focusing on a subset of these capabilities, specifically the field of enterprise scheduling.

Now, when it comes to enterprise scheduling, it seems to me that we are typically confronted with two different kinds of organizations in a “normal” engagement:

  1. Organizations with low scheduling maturity that are both trying to enhance maturity and enhance visibility into their project schedules.
  2. Organizations with high scheduling maturity that are only trying to enhance visibility into their project schedules, i.e. the “dashboard” scenario.

I would say that outside of the simple question of visibility, most of the organizations I’ve worked with that fall into the latter category are also dealing with issues related to the ongoing “Great Shift Change,” where many of the schedulers who have highly mature (albeit unconsciously competent) processes are retiring, and enterprise scheduling is seen as a way of capturing their knowledge in the form of processes and templates so that it may be passed on to the next generation of schedulers.

This also speaks to the role of the dashboard in an EPM engagement.  Specifically, as I always point out, there are two sets of controls being implemented here:

  1. Visibility into the actual dates of the deliverables within the schedule.  When will we hit key milestones?  When will the pipeline be ready to transport gas?  When will various crews be required to roll in and complete their work?  What are the risks that critical activities will get pushed into deer season and thus have to be delayed to ensure none of our personnel get shot accidentally?
  2. Visibility into the quality of the schedule.  How well are our schedulers following the basic precepts of solid CPM scheduling?  Do all tasks have predecessors and successors?  Are resource assignments correctly set?  Are constraints used correctly?

As annoying as they may be, that latter set of criteria is what matters in driving up scheduling maturity.  Those criteria provide the reinforcement metrics required to take a group of engineers with limited scheduling experience and drive them up the maturity curve – which is most often what’s called for when an organization decides to embark on an EPM journey.
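
To make that a bit more concrete, here’s a minimal sketch of what automating a few of those quality checks might look like in Project VBA.  (The specific checks are my own illustrative picks, not a standard; extend the list to match whatever precepts your organization enforces.)

Sub CheckScheduleQuality()

    Dim t As Task
    Dim issues As Long
    
    For Each t In ActiveProject.Tasks
        'Skip blank rows and summary tasks
        If Not t Is Nothing Then
            If Not t.Summary Then
                'Every task should have at least one predecessor...
                If t.PredecessorTasks.Count = 0 Then
                    Debug.Print t.ID & " - " & t.Name & ": no predecessors"
                    issues = issues + 1
                End If
                '...and at least one successor (first and last tasks excepted)
                If t.SuccessorTasks.Count = 0 Then
                    Debug.Print t.ID & " - " & t.Name & ": no successors"
                    issues = issues + 1
                End If
                'Hard constraints should be the exception, not the rule
                If t.ConstraintType <> pjASAP Then
                    Debug.Print t.ID & " - " & t.Name & ": constrained"
                    issues = issues + 1
                End If
            End If
        End If
    Next t
    
    MsgBox issues & " potential quality issues found.  See the Immediate window for details.", vbOKOnly, "Schedule Quality Check"

End Sub

Run against a live schedule, the resulting list makes for a very effective coaching conversation.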

The risk though, and I’ve seen this again and again, is that the organization gets lost in the wilderness of dashboard visibility into key milestones without spending the time and effort to build scheduling maturity.  That’s probably the main place where EPM engagements go off the rails.  In this case, the “schedulers” become obsessed with reporting the desired data – at the cost of developing valid schedule models.  (Your Schedule is Totally Mental)

So how do we avoid the dead end low-value result of having a bunch of simple reporting schedules?  Through education of the actual folks doing the scheduling.  Through emphasis of the right metrics.  Through the use of the correct information in driving the day to day decisions required to manage complex programs and portfolios of projects.  Most importantly, perhaps, through not skimping on the level of effort required to train and support these schedulers (and their bosses) as they make a mental leap from project managers who do some scheduling to actual bona fide schedulers.


Architecting a PMIS for the Cloud

Let’s face it.  On-premises system architectures have long been an enabler for both lazy and amateur enterprise architects.  Need to move some data around?  Simply add a couple of fields to store it.  Need to add some integration just so we can see all of the data in a single place?  Build some batch jobs to push data hither and yon – and replicate it in multiple places just to make it easy to get at.

Nowhere is this more evident than in a Project Management Information System (PMIS) that has grown organically with the needs of the enterprise.  Invariably, such a PMIS has grown as a collection of disparate systems and silos, each oriented to the needs of a specific functional group involved in the execution of a project.

[Image: a PMIS grown organically – disparate, siloed systems joined by point-to-point integrations]

The integration eventually evolves into a veritable spaghetti diagram of lines moving data from one system to another.  Often, the data schema for the scheduling tool is expanded or repurposed to collect all of the data into a single, convenient repository.  Generally speaking, this sort of works for many organizations.  They still manage to get their work done, albeit with a fair bit of grumbling from project managers who have to access multiple tools to get their jobs done.

The main challenge with this tool architecture is that it doesn’t support a nimble process framework.  As PM processes mature (pro-tip: they will) or adapt to the changing organization, the organization needs to maintain a strict focus on its capabilities infrastructure.  As the capabilities develop in maturity, the underlying tools must also adapt, and this is where the traditional organic PMIS fails…..to a large extent due to the technical debt incurred in the evolution of the overall system.

Traditionally, with on-premises systems, this tendency towards entropy is addressed every couple of years in the form of an upgrade.  The vendor releases new versions of the software, and as part of the inevitable upgrade process, the organization performs a subbotnik to reassess and simplify the overall system.  This cycle naturally prevents the overall system from getting too complex.

Today’s subbotnik is not your father’s subbotnik, however.  Today, most of the discussions we have around upgrades involve a consideration of moving to the cloud.  Oftentimes, this is driven by the desire to take advantage of the cost savings that enterprise cloud offerings now bring to the table.  Twice in one week, I’ve been in conversations with clients about how they could save significant infrastructure overhead costs by moving to the cloud – but have been prevented from doing so by the architecture of their existing PMIS.  This means they’ve been stuck with expensive on-premises environments with feature sets that grow more outdated by the day.

Furthermore, additional care must be taken when moving to the cloud, as we lose that convenient safety valve of reassessing the entire system from the ground up as part of an upgrade every several years.  Whatever we design today, we will have to live with tomorrow – and the day after tomorrow.

Over the course of multiple discussions, the architecture I see evolving around the PMIS looks a lot like this:

[Image: the consolidated PMIS architecture – source systems feeding reporting tools and a central data warehouse]

Instead of pulling data backwards and forwards, we consolidate it.  The data is either consolidated in real time with any number of reporting tools such as Microsoft Excel, Power BI, Tableau or Spotfire…..or it’s pulled into a data warehouse such as SQL Server or HANA using an ETL tool.  Doing so greatly reduces the complexity of the overall system and allows each part to feed data in such a way that the company can rapidly mature in a specific capability as needed.
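
As a trivial illustration of what feeding that warehouse can look like from the scheduling tool’s side, here’s a minimal sketch in Project VBA that flattens the schedule into a CSV for an ETL tool to pick up.  (The output path and column set are assumptions for illustration; a real implementation would align with your staging schema.)

Sub ExportTasksForWarehouse()

    'Hypothetical output path - point this wherever your ETL tool picks up files
    Const OUTPUT_PATH As String = "C:\ETL\ProjectExport.csv"
    
    Dim t As Task
    Dim fileNum As Integer
    
    fileNum = FreeFile
    Open OUTPUT_PATH For Output As #fileNum
    
    'Header row for the warehouse staging table
    Print #fileNum, "ProjectName,TaskID,TaskName,Start,Finish,PercentComplete"
    
    For Each t In ActiveProject.Tasks
        If Not t Is Nothing Then
            Print #fileNum, ActiveProject.Name & "," & t.ID & "," & _
                Replace(t.Name, ",", ";") & "," & _
                Format(t.Start, "yyyy-mm-dd") & "," & _
                Format(t.Finish, "yyyy-mm-dd") & "," & _
                t.PercentComplete
        End If
    Next t
    
    Close #fileNum

End Sub

The point isn’t the code…..it’s that each system publishes its data once, in one direction, rather than replicating it hither and yon.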

Hence, for those organizations with a significant existing investment in an on-premises PMIS and the associated integration that usually entails, a cloud discussion almost always needs to start with a process discussion.  What is your process?  How do the tools you currently have enable that process?  How do they block it?  Does the process truly support the project management needs of the enterprise?  It’s only after having this discussion that a new system architecture may be designed…..one which enables the nimble enterprise.

Here, we run into some of the typical challenges of a cloud discussion.  Typically, the cloud agenda is part of the IT agenda – as a means of reducing the operational costs associated with maintaining rooms full of big iron.  The processes we are supporting are almost always owned by a PMO or PMOs.  To have this discussion effectively, i.e. the discussion of how to move to the cloud, the PMOs must be engaged by IT to reassess process and ensure that the results can be moved to the cloud effectively and efficiently.

Prior postings on architecting a PMIS (here and here)…..and a couple of posts on mapping EPM tools to process architecture (here and here)


If It’s Mardi Gras Time, Let’s Talk Construction Scheduling and Project Controls

That’s right, I’m excited to be heading out to New Orleans for the Construction CPM Conference…..where I will attempt to strike an appropriate balance between “Three Full Days of Study and Training in the Critical Path Method of Planning & Scheduling Analysis” and enjoying the run-up to the annual festivities (in a restrained and tasteful fashion, as always).

Come look for me in the vicinity of either the Microsoft booth or the bar – depending on what time of day it is.

http://www.constructioncpm.com/

Looking for some more posts on leveraging Microsoft Project for construction scheduling and/or implementing project controls in an environment that supports the same?

  1. When Deterministic Scheduling Meets Kanban
  2. Defining an Update Methodology (Parts I-V)
  3. The Importance of Cost Abstraction (Parts I, II)
  4. The Major Milestone Report in Project Online
  5. Cumulative Milestone (or Task) Reporting in Project Online
  6. Generating a Baseline Execution Index Report
  7. First Look at Geographic Reporting with Microsoft Excel 2013

When Deterministic Scheduling Meets Kanban

One of the fun things about my job is that I often find myself knitting together multiple disparate systems and scheduling philosophies.  In this case, I’m working with a client on a field scheduling problem, and we’re identifying how the multiple constituencies within the organization can collaborate to deliver a high level of service to the customer base – while at the same time ensuring an appropriate level of cost.

From a very macro level, we are trying to solve an equation for two optima…..how do I deliver world-class service to a customer base that needs responsive, predictable performance….while at the same time juggling crews and equipment such that I ensure maximum capital utilization?

When we are solving for one optimum or the other, life is simple.  I can simply acquire more resources than I need and have them sit idle waiting for the work to pass through – in which case the work doesn’t get delayed.  Or I can schedule the crews according to their optimal schedule – which may not respect commitments to my clients (the classic “cable man” or “appliance delivery” scenario).

We see the same discussion in other domains.  For example, from some of the discussions I’ve participated in within the last couple of weeks:

  1. In IT, I have a limited number of testing resources.  I need to run all of my projects through this testing bottleneck….hence I want to structure the project to reduce the time between design and release….but I also want to optimize the use of my testing resources.
  2. From a more macro IT portfolio perspective, managers struggle with committing resources to projects before the projects can actually demonstrate a need for them.
  3. In drilling, we have a limited number of rigs that must run from well to well performing drilling activities.  Meanwhile, each well requires a series of activities to get ready for the rig arrival.  Bringing the rig in too early simply means that it sits idle.
  4. In oilfield maintenance scheduling, I’m working through a backlog of maintenance tickets and trying to ensure efficient crew utilization.  (Ok – this is not a perfect comparison, as we lose a bit of the deterministic element, but still in the rough ballpark of the discussion.)

The Theory of Constraints provides some guidance here: in TOC, we would identify the bottlenecked resources, i.e. the crews, and then ensure that the deterministic scheduling approach creates a backlog of work in front of them, i.e. that the bottlenecked resources are always utilized.

Schedule Challenges

First, let’s look at how to approach this via scheduling – and this is quite similar to a discussion a while back on release scheduling.  The basic gist is that the deterministic schedule only predicts an early readiness date for the construction crew to start their work….and then identifies a window within which they will perform the work.  The length of that window is defined by the anticipated volatility and workload of the crew schedule.  (The more work the crew has, or the more variability we anticipate, say during a season when weather disruptions are likely, the longer that estimated window should be.)

As the readiness date approaches, the deterministic schedule dates may be adjusted based on real time field conditions.  At a defined milestone, the readiness status is handed off to the construction team….who then slot the work into the schedule for the next planning period, i.e. 2 weeks or 1 month.

This means a couple of things (a sketch of the triggering logic follows the list):

  1. We need to treat the construction date within our deterministic schedule as a target, but not a specific date on which work will be done.
  2. We need to identify a triggering task or condition prior to that date that indicates the work is ready to be scheduled within the construction scheduling queue.
  3. We need to have a separate scheduling queue for the constrained resource, i.e.  a process to define when work is ready to enter the queue….and how that queue interacts with the overall logic of the deterministic scheduling model.
  4. We need a way of assessing readiness for the work to be put into the construction scheduling queue.  If the work is not ready, we don’t want to waste everyone’s time scheduling construction.
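
To make the trigger concrete, here’s a minimal sketch of what scanning the deterministic schedule for queue-ready work might look like in Project VBA.  (The “Ready” naming convention, the use of Text1 for the work package ID, and the 14-day planning period are all invented for illustration.)

Sub FlagReadyForCrewQueue()

    Dim t As Task
    Dim windowEnd As Date
    
    'The scan assumes a status date has been set on the schedule
    If Not IsDate(ActiveProject.StatusDate) Then
        MsgBox "Please set a status date first.", vbExclamation, "Crew Queue Scan"
        Exit Sub
    End If
    
    'Look ahead one planning period (assumed here to be 14 days)
    windowEnd = ActiveProject.StatusDate + 14
    
    For Each t In ActiveProject.Tasks
        If Not t Is Nothing Then
            'Hypothetical convention: readiness trigger milestones carry
            '"Ready" in the name, and Text1 holds the work package ID
            If t.Milestone And InStr(t.Name, "Ready") > 0 Then
                'Complete, or forecast inside the window: a candidate
                'for the construction scheduling queue
                If t.PercentComplete = 100 Or t.Finish <= windowEnd Then
                    Debug.Print t.Text1 & " - " & t.Name & _
                        " (forecast " & Format(t.Finish, "yyyy-mm-dd") & ")"
                End If
            End If
        End If
    Next t

End Sub

From there, the construction team slots the flagged work into their own queue for the next planning period, rather than working directly off the deterministic dates.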

[Image: the scheduling model – the deterministic schedule feeding a separate construction scheduling queue]

Communication Challenges

From a communication perspective, we need to ensure that the systems/groups communicate very specific data to each other.  Using the model above, that would look something like this…and feel free to substitute any relevant constituency for “Crew” in this picture, i.e. drilling rigs, IT testing resources, etc.

[Image: the specific data handed back and forth between the deterministic schedule and the Crew scheduling queue]

The key here is to recognize the respective value that both perspectives bring to the fundamental problem, i.e. that we’re attempting to solve an equation for both speed and efficiency.  Hence, we need to build a system (which includes people, process and technology) that can solve – and reconcile – for both.
