The Cloud Means Never Being on the Cusp Again

The cusp of a new platform is a dangerous place to be.  In my world, it usually means that we’re looking to upgrade or replace our EPM platform and are therefore scaling down our investment in it until we can make the great leap forward.  The problem, of course, is that our processes don’t stop evolving.  They keep changing and moving.

Once an organization determines it’s on the cusp, it’s more likely to have a process/tool mismatch, where the two fall out of sync.  That, in turn, results in a drop in user adoption, a lack of process controls, and a gradual descent into the same chaos the organization started from.  Being on the cusp hurts organizational performance.

Enter the cloud.  Being in the cloud means that the organization will never really have to worry about big-bang upgrades again.  Instead, the focus can move from the technical to ensuring that the evolving processes are supported by the tools.  This tends to change the long-term support discussion.  Instead of planning every few years to reset the tools to match the processes, organizations in the cloud move to a more constant, steady stream of tool changes that keep the tools aligned with the processes.

This, I suspect, will be one of the bigger changes of moving to the cloud: the shift in thinking about IT that comes from no longer having that periodic opportunity to remove the old and start with the new, that opportunity to get rid of the old architecture and move towards a brand new one.

Setting Default Desktop Scheduling Options

One of the takeaways from last week’s Construction CPM Scheduling Conference in New Orleans (other than a couple of hurricanes on Bourbon Street) was an acknowledgement that while Microsoft Project does indeed enable best-practice scheduling, many of the required features are turned off by default.  This causes some frustration for the scheduler, who must hunt deep into the Options panel to identify and select the appropriate options for each new project.

To assist in this endeavor, I wrote the little macro below.  It goes through the options and sets them to optimize the scheduling interface for detailed construction scheduling (and therefore may have to be tweaked to support project scheduling in other domains).  In the past, I’ve triggered this macro whenever I run any other scripts on the schedule; for example, I’ve written scripts to facilitate the update process that call this routine before collecting the update data.

Did I forget a key setting?  Let me know and I’ll update it accordingly.

Sub ApplyDefaultSettings()

    Dim Continue As VbMsgBoxResult 'MsgBox returns a VbMsgBoxResult, not a String
    
    Continue = MsgBox("This macro will now apply the standard PMO settings to the project schedule.", vbOKCancel, "Confirm")
    
    If Continue = vbOK Then
        
        '1 - Project Settings
        With Application.ActiveProject
            .AutoTrack = True 'Updating task status also updates resource status
            .MoveCompleted = True 'Move completed work prior to the status date
            .MoveRemaining = True 'Move incomplete work after the status date
            .SpreadPercentCompleteToStatusDate = True 'Spread % Complete to the status date
            .NewTasksCreatedAsManual = False 'Turn off manually scheduled tasks
            .DisplayProjectSummaryTask = True 'Display the project summary task
            .AutoLinkTasks = False 'Do not automatically link tasks when added to the schedule
            .MultipleCriticalPaths = True 'Calculate multiple critical paths
        End With
        
        '2 - Display Settings
        Application.NewTasksStartOn pjProjectDate 'Default new tasks to the Project Start Date
        Application.DisplayEntryBar = True 'Display the Entry Bar
        Application.Use3DLook = False 'Turn off 3D, which used to cause printing issues
        
        '3 - Gantt Chart Settings
        GridlinesEditEx Item:=12, NormalType:=3 'Set the Status Date line
        GridlinesEditEx Item:=12, NormalColor:=192
        GridlinesEditEx Item:=4, NormalType:=0 'Turn off the Current Date line
        GridlinesEditEx Item:=0, Interval:=3 'Set dotted lines on every third Gantt row
        GridlinesEditEx Item:=0, IntervalType:=3
        GridlinesEditEx Item:=0, IntervalColor:=8355711
        GridlinesEditEx Item:=13, NormalType:=3 'Set dotted lines on every top-tier vertical column
        GridlinesEditEx Item:=13, NormalColor:=8355711
        
    End If

End Sub
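
For reference, here’s roughly how one of those update scripts might invoke the routine.  This is a minimal sketch; the UpdateSchedule name and the status-gathering loop are placeholders for whatever update-collection logic you actually use:

Sub UpdateSchedule()

    'Apply the standard PMO settings before collecting any update data
    ApplyDefaultSettings
    
    'Placeholder for the actual update-collection logic, e.g. walking
    'the task list and reviewing progress on incomplete detail tasks
    Dim t As Task
    For Each t In ActiveProject.Tasks
        If Not t Is Nothing Then 'Blank rows appear as Nothing in the Tasks collection
            If Not t.Summary And t.PercentComplete < 100 Then
                Debug.Print t.Name & " - " & t.PercentComplete & "% complete"
            End If
        End If
    Next t

End Sub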

Implementing EPM in Different Levels of Scheduling Maturity

I was reading James O’Brien and Frederic Plotnick’s definitive book on construction scheduling when I came across this passage that was rather thought provoking (well, it is for me, but I’m a bit of a scheduling nerd):

“….the primary and secondary purpose of the [schedule] must be to promote the project.  A tertiary purpose to support cost control, facilities management, or home office concerns should be acceptable as long as such do not detract from the primary purpose.

“Enterprise Scheduling software and implementation must be carefully managed to prevent increasing the burden of such on individual projects.”

First off, I’ll emphasize that Enterprise Project Management (EPM) is a rather broad term encompassing many of the capabilities required to deliver projects effectively: portfolio optimization, business case development, business architecture, etc.  In this case, we are focusing on a subset of these capabilities, specifically the field of enterprise scheduling.

Now, when it comes to enterprise scheduling, it seems to me that we are typically confronted with two different kinds of organizations in a “normal” engagement:

  1. Organizations with low scheduling maturity that are both trying to enhance maturity and enhance visibility into their project schedules.
  2. Organizations with high scheduling maturity that are only trying to enhance visibility into their project schedules, i.e. the “dashboard” scenario.

I would say that outside of the simple question of visibility, most of the organizations I’ve worked with that fall into the latter category are also dealing with issues related to the ongoing “Great Shift Change”: many of the schedulers who have highly mature (albeit unconsciously competent) processes are retiring, and enterprise scheduling is seen as a way of capturing their knowledge in the form of processes and templates so that it may be passed on to the next generation of schedulers.

This also speaks to the role of the dashboard in an EPM engagement.  Specifically, as I always point out, there are two sets of controls being implemented here:

  1. Visibility into the actual dates of the deliverables within the schedule.  When will we hit key milestones?  When will the pipeline be ready to transport gas?  When will various crews be required to roll in and complete their work?  What are the risks that critical activities will get pushed into deer season and thus have to be delayed to ensure none of our personnel get shot accidentally?
  2. Visibility into the quality of the schedule.  How well are our schedulers following the basic precepts of solid CPM scheduling?  Do all tasks have predecessors and successors?  Are resource assignments correctly set?  Are constraints used correctly?

As annoying as they may be, the latter set of criteria is what’s important in driving up scheduling maturity.  Those criteria provide the reinforcement metrics required to take a group of engineers with limited scheduling experience and drive them up the maturity curve, which is most often what is called for when an organization decides to embark on an EPM journey.
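
As a minimal sketch of what one of those reinforcement metrics might look like in Microsoft Project VBA (the scope and thresholds here are my own assumptions and would need tailoring to your standards), the following routine counts incomplete detail tasks that are missing schedule logic:

Sub CheckScheduleLogic()

    Dim t As Task
    Dim MissingLinks As Long
    
    'Count incomplete, non-summary tasks lacking a predecessor or a successor
    For Each t In ActiveProject.Tasks
        If Not t Is Nothing Then 'Blank rows appear as Nothing in the Tasks collection
            If Not t.Summary And t.PercentComplete < 100 Then
                If t.Predecessors = "" Or t.Successors = "" Then
                    MissingLinks = MissingLinks + 1
                End If
            End If
        End If
    Next t
    
    MsgBox MissingLinks & " task(s) are missing a predecessor or successor.", _
        vbInformation, "Logic Check"

End Sub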

The risk, though, and I’ve seen this again and again, is that the organization gets lost in the wilderness of dashboard visibility into key milestones without spending the time and effort to build scheduling maturity.  That’s probably the main place where EPM engagements go off the rails.  In this case, the “schedulers” become obsessed with reporting the desired data at the cost of developing valid schedule models.  (Your Schedule is Totally Mental)

So how do we avoid the dead-end, low-value result of having a bunch of simple reporting schedules?  Through education of the actual folks doing the scheduling.  Through emphasis of the right metrics.  Through the use of the correct information in driving the day-to-day decisions required to manage complex programs and portfolios of projects.  Most importantly, perhaps, through not skimping on the level of effort required to train and support these schedulers (and their bosses) as they make the mental leap from project managers who do some scheduling to actual bona fide schedulers.

Architecting a PMIS for the Cloud

Let’s face it.  On-premises system architectures have long been an enabler for both lazy and amateur enterprise architects.  Need to move some data around?  Simply add a couple of fields to store it.  Need to add some integration just so we can see all of the data in a single place?  Build some batch jobs to push data hither and yon, and replicate it in multiple places just to make it easy to get at.

Nowhere is this more evident than in a Project Management Information System (PMIS) that has grown organically to meet the needs of the enterprise.  Invariably, such a PMIS has grown as a collection of disparate systems and silos, each oriented to the needs of a specific functional group involved in the execution of a project.

[image]

The integration eventually evolves into a veritable spaghetti diagram of lines moving data from one system to another.  Often, the data schema for the scheduling tool is expanded or repurposed to collect all of the data into a single, convenient repository.  Generally speaking, this sort of works for many organizations.  They still manage to get their work done, albeit with a fair bit of grumbling from project managers who must access multiple tools to get their jobs done.

The main challenge with this tool architecture is that it doesn’t support a nimble process framework.  As PM processes mature (pro tip: they will) or adapt to the changing organization, the organization needs to maintain a strict focus on its capabilities infrastructure.  As the capabilities mature, the underlying tools must also adapt, and this is where the traditional organic PMIS fails, to a large extent due to the technical debt incurred in the evolution of the overall system.

Traditionally, with on-premises systems, this tendency towards entropy is addressed every couple of years in the form of an upgrade.  The vendor releases new versions of the software, and as part of the inevitable upgrade process, the organization performs a subbotnik, a collective cleanup, to reassess and simplify the overall system.  This cycle naturally prevents the overall system from getting too complex.

Today’s subbotnik is not your father’s subbotnik, however.  Today, most of the discussions we have around upgrades involve a consideration of moving to the cloud.  Oftentimes, this is driven by the desire to take advantage of the cost savings that enterprise cloud offerings now bring to the table.  Twice in one week, I’ve been in conversations with clients about how they could save significant infrastructure overhead costs by moving to the cloud but have been prevented from doing so by the architecture of their existing PMIS.  This means they’ve been stuck with expensive on-premises environments whose feature sets grow increasingly outdated with each day.

Furthermore, additional care must be taken when moving to the cloud, as we lose that convenient safety valve of reassessing the entire system from the ground up as part of an upgrade every few years.  Whatever we design today, we will have to live with tomorrow, and the day after tomorrow.

Over the course of multiple discussions, the architecture I see evolving around the PMIS looks a lot like this:

[image]

Instead of pulling data backwards and forwards, we consolidate it.  The data is either consolidated in real time with any number of reporting tools such as Microsoft Excel, Power BI, Tableau, or Spotfire, or it’s pulled into a data warehouse such as SQL Server or HANA using an ETL tool.  Doing so greatly reduces the complexity of the overall system and allows each part to feed data in such a way that the company can rapidly mature in a specific capability as needed.
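
As an illustration of the “pull” side, here’s a minimal sketch that queries the ProjectData OData feed for project names.  The server URL is a placeholder, and the sketch assumes an on-premises PWA instance where Windows authentication applies; Project Online requires a different authentication flow:

Sub PullProjectData()

    'Minimal sketch: query the ProjectData OData feed for project names.
    'Replace http://server/pwa with your own PWA URL.
    Dim http As Object
    Set http = CreateObject("MSXML2.XMLHTTP")
    
    http.Open "GET", "http://server/pwa/_api/ProjectData/Projects?$select=ProjectName", False
    http.setRequestHeader "Accept", "application/json"
    http.send
    
    'The JSON payload would then be parsed and landed in a staging table
    'by whatever ETL tool owns the data warehouse load
    Debug.Print http.responseText

End Sub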

Hence, for those organizations with a significant existing investment in an on-premises PMIS and the associated integrations that usually entails, a cloud discussion almost always needs to start with a process discussion.  What is your process?  How do the tools you currently have enable this process?  How do they block it?  Does the process truly support the project management needs of the enterprise?  It’s only after having this discussion that a new system architecture may be designed, one which enables the nimble enterprise.

Here we run into some of the typical challenges of a cloud discussion.  Typically, the cloud agenda is part of the IT agenda, as a means of reducing the operational costs associated with maintaining rooms full of big iron.  The processes we are supporting, however, are almost always owned by a PMO or PMOs.  To have this discussion effectively, i.e. to determine how to move to the cloud, IT must engage the PMOs to reassess the processes and ensure that the results can be moved to the cloud effectively and efficiently.

Prior postings on architecting a PMIS (here and here)…and a couple of posts on mapping EPM tools to process architecture (here and here)

If It’s Mardi Gras Time, Let’s Talk Construction Scheduling and Project Controls

That’s right, I’m excited to be heading out to New Orleans for the Construction CPM Conference, where I will attempt to strike an appropriate balance between “Three Full Days of Study and Training in the Critical Path Method of Planning & Scheduling Analysis” and enjoying the run-up to the annual festivities (in a restrained and tasteful fashion, as always).

Come look for me in the vicinity of either the Microsoft booth or the bar – depending on what time of day it is.

http://www.constructioncpm.com/

Looking for some more posts on leveraging Microsoft Project for construction scheduling and/or implementing project controls in an environment that supports the same?

  1. When Deterministic Scheduling Meets Kanban
  2. Defining an Update Methodology (Parts I-V)
  3. The Importance of Cost Abstraction (Parts I, II)
  4. The Major Milestone Report in Project Online
  5. Cumulative Milestone (or Task) Reporting in Project Online
  6. Generating a Baseline Execution Index Report
  7. First Look at Geographic Reporting with Microsoft Excel 2013

When Deterministic Scheduling Meets Kanban

One of the fun things about my job is that I often find myself knitting together multiple disparate systems and scheduling philosophies.  In this case, I’m working with a client on a field scheduling problem, and we’re identifying how the multiple constituencies within the organization can collaborate to deliver a high level of service to the customer base while at the same time keeping costs at an appropriate level.

At a very macro level, we are now trying to solve an equation for two optima: how do I deliver world-class service to a customer base that needs responsive, predictable performance, while at the same time juggling crews and equipment so that I ensure maximum capital utilization?

When we are solving for one optimum or the other, life is simple.  I can simply acquire more resources than I need and have them sit idle waiting for the work to pass through, in which case the work doesn’t get delayed.  Or I can simply schedule the crews according to their optimal schedule, which may not respect commitments to my clients (the classic “cable man” or “appliance delivery” scenario).

We see the same tension in other domains.  For example, from some of the discussions I’ve participated in within the last couple of weeks:

  1. In IT, I have a limited number of testing resources.  I need to run all of my projects through this testing bottleneck….hence I want to structure the project to reduce the time between design and release….but I also want to optimize the use of my testing resources.
  2. From a more macro IT portfolio sense, managers struggle with committing resources to projects before the projects actually can demonstrate a need for resources.
  3. In drilling, we have a limited number of rigs that must run from well to well performing drilling activities.  Meanwhile, each well requires a series of activities to get ready for the rig arrival.  Bringing the rig in too early simply means that it sits idle.
  4. In oilfield maintenance scheduling, I’m working through a backlog of maintenance tickets and trying to ensure efficient crew utilization.  (Ok – this is not a perfect comparison, as we lose a bit of the deterministic element, but still in the rough ballpark of the discussion.)

The Theory of Constraints (TOC) provides some guidance here: in TOC, we would identify the bottlenecked resources, i.e. the crews, and then ensure that the deterministic scheduling approach creates a backlog of work in front of them, so that the bottlenecked resources are always utilized.

Schedule Challenges

First, let’s look at how to approach this via scheduling; this is quite similar to a discussion a while back on release scheduling.  The basic gist is that the deterministic schedule only predicts an early readiness date for the construction crew to start their work, and then identifies a window within which they will perform the work.  The length of that window is defined by the anticipated volatility and workload of the crew schedule.  (The more work the crew has, or the more variables we expect, say during a season when weather disruptions are likely, the longer that estimated window should be.)

As the readiness date approaches, the deterministic schedule dates may be adjusted based on real-time field conditions.  At a defined milestone, the readiness status is handed off to the construction team, who then slot the work into the schedule for the next planning period, i.e. 2 weeks or 1 month.

This means a couple of things:

  1. We need to treat the construction date within our deterministic schedule as a target, but not a specific date on which work will be done.
  2. We need to identify a triggering task or condition prior to that date that indicates the work is ready to be scheduled within the construction scheduling queue (a rough sketch of this follows the list).
  3. We need to have a separate scheduling queue for the constrained resource, i.e. a process to define when work is ready to enter the queue, and how that queue interacts with the overall logic of the deterministic scheduling model.
  4. We need a way of assessing readiness for the work to be put into the construction scheduling queue.  If the work is not ready, we don’t want to waste everyone’s time scheduling construction.
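
As a rough sketch of the triggering condition in item 2, the following Microsoft Project VBA routine scans the deterministic schedule for completed trigger tasks and reports them as candidates for the construction queue.  The use of the Flag1 field to mark trigger tasks is an arbitrary choice for illustration:

Sub FindQueueCandidates()

    Dim t As Task
    
    'Flag1 marks readiness-trigger tasks (an arbitrary field choice);
    'a completed trigger means its construction work can enter the queue
    For Each t In ActiveProject.Tasks
        If Not t Is Nothing Then 'Blank rows appear as Nothing in the Tasks collection
            If t.Flag1 = True And t.PercentComplete = 100 Then
                Debug.Print "Ready for construction queue: " & t.Name & _
                    " (target: " & t.Finish & ")"
            End If
        End If
    Next t

End Sub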

[image]

Communication Challenges

From a communication perspective, we need to ensure that the systems/groups communicate very specific data to each other.  Using the model above, that would look something like this (feel free to substitute any relevant constituency for “Crew” in this picture, e.g. drilling rigs, IT testing resources, etc.):

[image]

The key here is to recognize the respective value that both perspectives bring to the fundamental problem, i.e. that we’re attempting to solve an equation for both speed and efficiency.  Hence, we need to build a system (which includes people, process, and technology) that can solve, and reconcile, for both.

Portfolio Analysis in Project Online: Engagements Edition

I started writing about the mechanics of the portfolio analysis module of Microsoft Project Server back in 2010-2011 with Microsoft Project Server 2010.  That resulted in this somewhat comprehensive (at the time) white paper that’s probably due for a couple of minor updates.

Over the last several years, I’ve made a couple of attempts to correct some of the errata and omissions in the paper as well as bring it into line with Project Server 2013 and the Project Online cloud based release.  That has resulted in the following posts (organized in chronological order):

  1. Original white paper (See errata in the comments)
  2. Generic Resources and Portfolio Analysis
  3. Resource Plans and Portfolio Analysis (Note: As of writing, Resource Plans are effectively retired in the Online version of the product)
  4. Prioritization with Custom Fields

And now this.  In this episode, I want to get you caught up on how Resource Engagements impact the resource analysis functionality in the Portfolio Analysis module.  It turns out this is quite easy.  You’ll note that in tenants with the feature activated, you will now see this option on the setup screen:

[image]

As you can see, you have the option of selecting whether resource engagements decrement available capacity.  One important thing to note: if I have assignments that don’t map to an approved engagement, those assignments may not be part of the portfolio analysis if the second option is not selected.

Hence, if you want portfolio analysis to work the way it “used to” work, you would probably want to select the second option, i.e. don’t require resource managers to approve work before it shows up in the resource analysis calculations.
