The Cloud Means Never Being on the Cusp Again

The cusp of a new platform is a dangerous place to be.  In my world, it usually means that we’re looking to upgrade or replace our EPM platform and are therefore scaling down our investment in it until we can make the great leap forward.  The problem, of course, is that our processes don’t stop evolving.  They keep changing and moving.

Once an organization determines it’s on the cusp, it’s more likely to have a process/tool mismatch, where the two get out of sync.  That then results in a drop in user adoption, a lack of process controls, and a gradual descent into the same chaos the organization started from.  Being on the cusp hurts organizational performance.

Enter the cloud.  Being in the cloud means that the organization will never really have to worry about big bang upgrades again.  Instead, the focus can move from the technical to ensuring the evolving processes are supported by the tools.  This tends to change the long-term support discussion.  Instead of planning every few years to reset the tools to match the processes, organizations in the cloud move to a more constant, steady stream of tool changes that ensure compliance.

This, I suspect, will be one of the bigger changes of moving to the cloud: the change in thinking toward IT that comes from no longer having that periodic opportunity to remove the old and start with the new, that opportunity to get rid of the old architecture and move to a brand new one.


Setting Default Desktop Scheduling Options

One of the takeaways from the Construction CPM Scheduling Conference in New Orleans last week (other than a couple of hurricanes on Bourbon Street) was an acknowledgement that while Microsoft Project does indeed enable best practice scheduling, by default many of the required features are turned off.  This causes some frustration for the scheduler, who must hunt deep into the Options panel to identify and select the appropriate options for each new project.

To assist in this endeavor, I wrote this little macro.  It goes through the options and sets them to optimize the scheduling interface for detailed construction scheduling (and therefore may have to be tweaked to support project scheduling in other domains).  In the past, I’ve triggered this macro whenever I run any other scripts on the schedule, i.e. I’ve written scripts to facilitate the update process, which call this routine before collecting the update data.

Did I forget a key setting?  Let me know and I’ll update it accordingly.

Sub ApplyDefaultSettings()

    Dim Continue As VbMsgBoxResult 'MsgBox returns a VbMsgBoxResult, not a String
    
    Continue = MsgBox("This macro will now apply the standard PMO settings to the project schedule.", vbOKCancel, "Confirm")
    
    If Continue = vbOK Then
        
        
        '1 - Project Settings
        With Application.ActiveProject
            .AutoTrack = True 'Sets the task status updates resource status setting
            .MoveCompleted = True 'Move completed work prior to the status date.
            .MoveRemaining = True 'Move incomplete work after the status date
            .SpreadPercentCompleteToStatusDate = True 'spread %Complete to the status date
            .NewTasksCreatedAsManual = False 'Turn off manual tasks
            .DisplayProjectSummaryTask = True 'Display project summary task
            .AutoLinkTasks = False 'Do not automatically link tasks when added to the schedule
            .MultipleCriticalPaths = True 'Calculate multiple critical paths
        End With
        
        '2 - Display Settings
        Application.NewTasksStartOn pjProjectDate 'Sets new tasks to default to the Project Start Date
        Application.DisplayEntryBar = True 'Displays the Entry Bar
        Application.Use3DLook = False 'Turns off 3D which used to cause printing issues
        
        '3 - Gantt Chart Settings
        GridlinesEditEx Item:=12, NormalType:=3 'Set the Status Date line
        GridlinesEditEx Item:=12, NormalColor:=192
        GridlinesEditEx Item:=4, NormalType:=0 'Turn off the Current Date line
        GridlinesEditEx Item:=0, Interval:=3 'Set dotted lines on every third Gantt row
        GridlinesEditEx Item:=0, IntervalType:=3
        GridlinesEditEx Item:=0, IntervalColor:=8355711
        GridlinesEditEx Item:=13, NormalType:=3 'Set dotted lines on every vertical top tier column
        GridlinesEditEx Item:=13, NormalColor:=8355711
        
    End If

End Sub


Implementing EPM in Different Levels of Scheduling Maturity

I was reading James O’Brien and Frederic Plotnick’s definitive book on construction scheduling when I came across this passage, which I found rather thought provoking (well, it is for me, but I’m a bit of a scheduling nerd):

“…the primary and secondary purpose of the [schedule] must be to promote the project.  A tertiary purpose to support cost control, facilities management, or home office concerns should be acceptable as long as such do not detract from the primary purpose.

“Enterprise Scheduling software and implementation must be carefully managed to prevent increasing the burden of such on individual projects.”

First off, I’ll emphasize that Enterprise Project Management (EPM) is a rather broad term encompassing many of the capabilities required to deliver projects effectively: portfolio optimization, business case development, business architecture, etc.  In this case, we are focusing on a subset of these capabilities, specifically the field of enterprise scheduling.

Now, when it comes to enterprise scheduling, it seems to me that we are typically confronted with two different kinds of organizations in a “normal” engagement:

  1. Organizations with low scheduling maturity that are both trying to enhance maturity and enhance visibility into their project schedules.
  2. Organizations with high scheduling maturity that are only trying to enhance visibility into their project schedules, i.e. the “dashboard” scenario.

I would say that, outside of the simple question of visibility, most of the organizations I’ve worked with that fall into the latter category are also dealing with issues related to the ongoing “Great Shift Change,” where many of the schedulers who have highly mature (albeit unconsciously competent) processes are retiring, and enterprise scheduling is seen as a way of capturing their knowledge in the form of processes and templates so that it may be passed on to the next generation of schedulers.

This also speaks to the role of the dashboard in an EPM engagement.  Specifically, as I always point out, there are two sets of controls being implemented here:

  1. Visibility into the actual dates of the deliverables within the schedule.  When will we hit key milestones?  When will the pipeline be ready to transport gas?  When will various crews be required to roll in and complete their work?  What are the risks that critical activities will get pushed into deer season and thus have to be delayed to ensure none of our personnel get shot accidentally?
  2. Visibility into the quality of the schedule.  How well are our schedulers following the basic precepts of solid CPM scheduling?  Do all tasks have predecessors and successors?  Are resource assignments correctly set?  Are constraints used correctly?
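The second set of controls lends itself well to automation.  As a minimal sketch, the quality checks above could be expressed as a routine that flags tasks breaking basic CPM precepts.  The task representation (plain dictionaries with `predecessors`, `successors`, and `constraint` fields) is entirely hypothetical, not an actual scheduling tool’s API:

```python
def check_schedule_quality(tasks):
    """Flag tasks that break basic precepts of solid CPM scheduling."""
    issues = []
    for task in tasks:
        # Every task should have a predecessor, unless it starts the network.
        if not task.get("predecessors") and not task.get("is_start_milestone"):
            issues.append((task["name"], "missing predecessor"))
        # Every task should have a successor, unless it ends the network.
        if not task.get("successors") and not task.get("is_finish_milestone"):
            issues.append((task["name"], "missing successor"))
        # Hard constraints override logic-driven dates and should be rare.
        if task.get("constraint") in ("must_start_on", "must_finish_on"):
            issues.append((task["name"], "hard constraint"))
    return issues

tasks = [
    {"name": "Mobilize", "predecessors": [], "successors": ["Excavate"],
     "is_start_milestone": True},
    {"name": "Excavate", "predecessors": ["Mobilize"], "successors": [],
     "constraint": "must_start_on"},
]
for name, issue in check_schedule_quality(tasks):
    print(f"{name}: {issue}")
```

Run against every schedule in the portfolio, counts of such issues become the reinforcement metrics discussed below.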

As annoying as they may be, the latter set of criteria is what’s important in driving up scheduling maturity.  They provide the reinforcement metrics required to take a group of engineers with limited scheduling experience and drive them up the maturity curve – which is most often what is called for when an organization decides to embark on an EPM journey.

The risk though, and I’ve seen this again and again, is that the organization gets lost in the wilderness of dashboard visibility into key milestones without spending the time and effort to build scheduling maturity.  That’s probably the main place where EPM engagements go off the rails.  In this case, the “schedulers” become obsessed with reporting the desired data – at the cost of developing valid schedule models.  (Your Schedule is Totally Mental)

So how do we avoid the dead-end, low-value result of having a bunch of simple reporting schedules?  Through education of the actual folks doing the scheduling.  Through emphasis of the right metrics.  Through the use of the correct information in driving the day-to-day decisions required to manage complex programs and portfolios of projects.  Most importantly, perhaps, through not skimping on the level of effort required to train and support these schedulers (and their bosses) as they make the mental leap from project managers who do some scheduling to actual bona fide schedulers.


Architecting a PMIS for the Cloud

Let’s face it.  On-premises system architectures have long been an enabler for both lazy and amateur enterprise architects.  Need to move some data around?  Simply add a couple of fields to store it.  Need to add some integration just so we can see all of the data in a single place?  Build some batch jobs to push data hither and yon – and replicate it in multiple places just to make it easy to get at.

Nowhere is this more evident than in a Project Management Information System (PMIS) that has grown organically to meet the needs of the enterprise.  Invariably, a PMIS has grown as a collection of disparate systems and silos, each oriented to the needs of a specific functional group that may be involved in the execution of a project.

[Image: the organically grown PMIS as a collection of siloed systems]

The integration eventually evolves into a veritable spaghetti diagram of lines moving data from one system to another.  Often, the data schema for the scheduling tool is expanded or repurposed to collect all of the data into a single, convenient data repository.  Generally speaking, this sort of works for many organizations.  They still manage to get their work done, albeit with a fair bit of grumbling around project managers having to access multiple tools to get their jobs done.

The main challenge with this tool architecture is that it doesn’t support a nimble process framework.  As PM processes mature (pro-tip: they will) or adapt to the changing organization, the organization needs to maintain a strict focus on its capabilities infrastructure.  As the capabilities develop in maturity, the underlying tools must also adapt, and this is where the traditional organic PMIS fails – to a large extent due to the technical debt incurred in the evolution of the overall system.

Traditionally, with on-premises systems, this tendency towards entropy is addressed every couple of years in the form of an upgrade.  The vendor releases new versions of the software, and as part of the inevitable upgrade process, the organization performs a subbotnik to reassess and simplify the overall system.  This cycle naturally prevents the overall system from getting too complex.

Today’s subbotnik is not your father’s subbotnik, however.  Today, most of the discussions we have around upgrades involve a consideration of moving to the cloud.  Often, this is driven by the desire to take advantage of the cost savings that enterprise cloud offerings now bring to the table.  Twice in a week, I’ve been in conversations with clients about how they could save significant infrastructure overhead costs by moving to the cloud – but have been prevented from doing so by the architecture of their existing PMIS.  This means that they’ve been stuck with expensive on-premises environments with feature sets that grow increasingly outdated by the day.

Furthermore, additional care must be taken when moving to the cloud as we will lose that convenient safety valve of reassessing the entire thing from the ground up as part of an upgrade within the next several years.  Whatever we design today, we will have to live with tomorrow – and the day after tomorrow.

Over the course of multiple discussions, the architecture I see evolving around the PMIS looks a lot like this:

[Image: the consolidated PMIS architecture, with each system feeding reporting tools or a data warehouse]

Instead of pulling data backwards and forwards, we are consolidating it.  The data is either consolidated in real time with any number of reporting tools such as Microsoft Excel, Power BI, Tableau, or Spotfire, or it’s pulled into a data warehouse such as SQL Server or HANA using an ETL tool.  Doing so greatly reduces the complexity of the overall system and allows each part to feed data in such a way that the company can rapidly mature in a specific capability as needed.
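The consolidation pattern can be sketched in a few lines: rather than replicating fields between source systems, each system feeds one warehouse record keyed by project ID.  The source payloads and field names here are hypothetical, standing in for whatever the scheduling and cost systems actually expose:

```python
def consolidate(schedule_rows, cost_rows):
    """Merge per-project rows from two source systems into one record each."""
    warehouse = {}
    # The scheduling system contributes dates; it never stores cost data.
    for row in schedule_rows:
        warehouse.setdefault(row["project_id"], {}).update(
            {"finish_date": row["finish_date"]})
    # The cost system contributes actuals; it never stores schedule data.
    for row in cost_rows:
        warehouse.setdefault(row["project_id"], {}).update(
            {"actual_cost": row["actual_cost"]})
    return warehouse

schedule_rows = [{"project_id": "P100", "finish_date": "2016-09-30"}]
cost_rows = [{"project_id": "P100", "actual_cost": 1250000}]
print(consolidate(schedule_rows, cost_rows))
```

The point of the design is that each source system owns its own fields; reporting reads from the consolidated store, so no field ever needs to be replicated back into another tool’s schema.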

Hence, for those organizations with a significant existing investment in an on-premises PMIS and the associated integration that this usually entails, a cloud discussion almost always needs to start from the process discussion.  What is your process?  How do the tools you currently have enable this process?  How do they block it?  Does the process truly support the project management needs of the enterprise?  Only after having this discussion can a new system architecture be designed: one that enables the nimble enterprise.

Here, we run into some of the typical challenges of a cloud discussion.  Typically, the cloud agenda is part of the IT agenda – a means of reducing the operational costs associated with maintaining rooms full of big iron.  The processes we are supporting are almost always owned by a PMO or PMOs.  To have this discussion effectively, i.e. how to move to the cloud, the PMOs must be engaged by IT to reassess process and ensure that the results can be moved to the cloud effectively and efficiently.

Prior postings on architecting a PMIS (here and here), and a couple of posts on mapping EPM tools to process architecture (here and here).
