Risks don’t add up

The way PMI deals with risk in the PMBOK® Guide is simplistic. Calculating the effect of one risk using the suggested probability x severity calculation provides one value. For example, if there is a 20% probability an estimate is undervalued by $50,000, the Expected Monetary Value (EMV) for this event will be:
  -$50,000 x 0.2 = -$10,000
It is simple, but it is not a lot of use in the real world.
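To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative, reusing the figures from the example above):

```python
# Expected Monetary Value: probability multiplied by impact.
# Illustrative only - the figures come from the example above.
def expected_monetary_value(probability: float, impact: float) -> float:
    """Return the EMV of a single risk event."""
    return probability * impact

emv = expected_monetary_value(0.2, -50_000)
print(emv)  # -10000.0
```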

The first problem is that the under-estimated value is not known and would be better represented by a range statement; but as the values in the range alter, so does the probability of any value occurring.  Thinking of your car for a moment:

  • There is a fairly high probability of an accident causing a minor scratch or dent occurring in any given year (particularly in shopping centre car parks); say a 20% probability of an accident occurring with the damage costing $500 or less to repair. 
  • There is a very low probability of an accident causing the car to be written off; say a less than 1% chance of an accident costing $50,000 or more.

Whilst there is only one car and it may have more than one accident in a year, these parameters do not mean there will ever actually be an accident!  Even a 20% probability of a $500 accident occurring in any given year does not mean there will be at least one accident every 5 years.  The maths is much more complicated.
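To illustrate why, a quick calculation (a sketch only, assuming the 20% chance applies independently in each year):

```python
# Probability of at least one 'minor' accident over 5 years,
# assuming a 20% chance in each year, independent from year to year.
p_per_year = 0.2
years = 5
p_none = (1 - p_per_year) ** years    # ~0.328: no accident at all in 5 years
p_at_least_one = 1 - p_none           # ~0.672
print(round(p_at_least_one, 3))       # 0.672 - not a guaranteed 'one every 5 years'
```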

The next issue is correlation. Returning to the under-estimate: was it a one-off factor (caused by a single unrelated external supplier), or is it a systemic estimating error affecting a number of related estimates (possibly caused by changes in the exchange rate)?  This needs modelling to determine the overall effect.

Then we come to the purpose of this article – do risks add together or discount each other?  It depends on the situation.

 

Situation 1 looks at the probability of starting on time.

Consider three schedule activities each of 10 days duration, all of which need to be complete before their outputs can be integrated.

Activities 1 and 2 both have a 90% probability of achieving the estimated duration of 10 days.

Activity 3 has an 80% probability of achieving the 10 days.

The overall chance of starting the ‘Integration’ activity on schedule needs an understanding of how these three activities affect its start. Based on the percentages above:

  • Activity 1 has a 1 in 10 chance of causing a delay
  • Activity 2 has a 1 in 10 chance of causing a delay
  • Activity 3 has a 1 in 5 chance of causing a delay

 

There are 10 x 10 x 5 = 500 possible outcomes within the model, and within this there are
9 x 9 x 4 = 324 ways of not being late (it does not matter how early any of the activities finish as long as they are not late).

Take the number of ‘not late’ outcomes from the possible range of outcomes;  500 – 324 leaves 176 ways of being late.

176/500 = 0.352 or a 35.2% probability of not making the start date.

Or a 100 – 35.2 = 64.8% probability of being on time.

The quicker way to calculate this is simply to multiply the probabilities together:

0.9 x 0.9 x 0.8 = 64.8%

For a more complete explanation see: http://mosaicprojects.wordpress.com/2013/01/18/whats-the-probability/
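For readers who like to check, a short Python sketch (illustrative only) that reproduces both the brute-force count and the quick multiplication:

```python
from itertools import product

# Model each activity as equally likely discrete outcomes:
# Activities 1 & 2: 9 'on time' outcomes out of 10; Activity 3: 4 out of 5.
outcomes = [range(10), range(10), range(5)]   # 10 x 10 x 5 = 500 combinations
on_time_limits = [9, 9, 4]                    # the first N values count as 'on time'

on_time = sum(
    all(v < limit for v, limit in zip(combo, on_time_limits))
    for combo in product(*outcomes)
)
print(on_time, on_time / 500)                 # 324, 0.648

# The quick way: multiply the individual probabilities together.
print(0.9 * 0.9 * 0.8)                        # 0.648 -> 64.8% chance of starting on time
```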

 

 

Situation 2 looks at the probability of finishing on or under budget.

In this scenario, money saved on one part of the project can be used to offset overspending on another. Assume you have 10 teams working on your project and they all estimate completing their section of the work for between $8,000 and $12,000, with an expected average of $10,000 per team. As the PM, you can aggregate these estimates to arrive at a project budget of $100,000.

However, your team leaders are unlikely to submit an estimate which only has a 50% chance of being achieved; let’s assume they use the 90% probability benchmark common in oil and gas projects…

To achieve a 90% probability of the estimate being achieved, each of the individual team estimates will need to be increased to around $11,300 (assuming a normal distribution), which pushes the overall project budget up to $113,000 if you simply add up the risk-adjusted estimates.

If you accept this approach, how much safety does this give the project manager? The answer is a surprising 99.998% probability of not exceeding the overall project budget!

The effect of combining uncertainties into a ‘portfolio’ is to reduce the overall level of uncertainty in the portfolio; basically, wins on the ‘swings’ can be used to offset losses on the ‘roundabouts’, generating an increase in the overall probability of achieving any given target for the portfolio.

So if your project needs to achieve a 90% certainty overall and there are 10 separate teams, the correct budget is around $104,000, not the $113,000 calculated by summing each of the teams’ ‘90% estimates’ (or the $113,000 that would be required if the whole project behaved as a single holistic entity, with all of the estimates fully correlated).  For more on this see Averaging the Power of Portfolios: http://mosaicprojects.wordpress.com/2012/07/08/averaging-the-power-of-portfolios/
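A sketch of the portfolio arithmetic in Python (it assumes, as above, that each team’s cost is normally distributed with a mean of $10,000 and that the $8,000 to $12,000 range equates to a standard deviation of roughly $1,000):

```python
from math import sqrt
from statistics import NormalDist

teams, mean, sd = 10, 10_000, 1_000

# A single team needs roughly this much for a 90% chance of not overspending:
team_p90 = NormalDist(mean, sd).inv_cdf(0.90)            # ~11,282 -> "around $11,300"

# Summing 10 independent teams: means add, standard deviations add in quadrature.
portfolio = NormalDist(teams * mean, sd * sqrt(teams))   # mean 100,000, sd ~3,162

print(round(team_p90))                                   # ~11282
print(portfolio.cdf(113_000))                            # ~0.99998 - the 'surprising' safety
print(round(portfolio.inv_cdf(0.90)))                    # ~104,000 - the correct 90% budget
```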

 

Confused or worried?

Hopefully this short article has made you think about getting serious help when you start looking beyond developing a simple risk register. This is not my core skill, but I do know enough about risk to understand that the differences between individual project risks, the overall risk of a project, and the risks associated with a portfolio of projects are complicated. 


Breakdown Structures Revisited

Breakdown structures are central to the practice of project management and have their origins in the industrial revolution.  In ‘The Wealth of Nations’, Adam Smith advocated breaking the production of goods into tiny tasks that can be undertaken by people following simple instructions: ‘Why hire a talented pin maker when ten factory workers using machines and working together can produce a thousand times more pins than the artisan working alone?’  Similar ideas underpinned Newtonian physics. Newton saw the world as a harmonious mechanism controlled by a universal law. Applying scientific observation to the parts of the whole would allow understanding and insights to occur and, eventually, a complete understanding of the ‘clockwork universe’.

These ideas fed into scientific management, which focuses on worker and machine relationships and assumes productivity can be increased by increasing the efficiency of production processes. In 1911, Frederick Taylor, known as the Father of Scientific Management, published The Principles of Scientific Management, in which he proposed work methods designed to increase worker productivity.

This ‘reductionist’ approach to complex endeavours, supported by the division of labour, is central to scientific management as well as to many modern project management processes built around ‘breakdown structures’[1].

Some of the types of Breakdown Structure in use today include:

  • WBS (Work Breakdown Structure)
  • OBS (Organizational Breakdown Structure)
  • CBS (Cost Breakdown Structure)
  • RBS (Resource Breakdown Structure)
  • PBS (Product Breakdown Structure)
  • BoM (Bill of Materials)
  • RBS (Risk Breakdown Structure)
  • CBS (Contract Breakdown Structure)

Their functions can be briefly defined as follows:

 

Work Breakdown Structure[2] (WBS)

A work breakdown structure (WBS) is a tool used to define and group a project’s discrete work elements (or tasks) in a way that helps organise and define the total work scope of the project. It provides the framework for detailed cost estimating and control, along with providing guidance for schedule development and control.

Organisation Breakdown Structure (OBS)

The organisation(al) breakdown structure (OBS) defines the organisational relationships and is used as the framework for assigning work responsibilities. The intersection of the OBS and WBS defines points of management accountability for the work called Control (or Cost) Accounts.

Cost Breakdown Structure (CBS)

The cost breakdown structure (CBS) classifies the costs within the project into cost units/cost centres and cost elements/cost types. The establishment of a cost structure aids efficient cost planning and controlling, and the introduction of measures to reduce costs. The CBS and Control Accounts are frequently aligned (see the section below).

Resource Breakdown Structure

The resource breakdown structure (RBS) is a standardised list of personnel resources related by function and arranged in a hierarchical structure to facilitate planning and controlling of project work.       

Product Breakdown Structure  (PBS)

A product breakdown structure (PBS) is an exhaustive, hierarchical tree structure of components that make up an item, arranged in whole-part relationship. The PRINCE2 project management method mandates the use of product based planning, part of which is developing a product breakdown structure.  In practice there is very little difference between a PBS and a WBS, both systems define the full extent of the work required to complete the project.

Bill of Materials (BoM)

A bill of materials (BoM) decomposes each tangible element of the project deliverables into its component parts and is often used for purchasing components.

Risk Breakdown Structure (RBS)

The risk breakdown structure (RBS) is a hierarchically organised depiction of the identified project risks arranged by risk category. The risks are placed into the hierarchical structure as they are identified, and the structure is organized by source so that the total risk exposure of the project can be more easily understood, and planning for the risk more easily accomplished.

Contract Breakdown Structure (CBS)

A hierarchical arrangement of head contractors, subcontractors, suppliers, etc., showing the overall supply chain feeding goods and services into the project. The efficient functioning of the overall supply chain is critical for project success.

 

Aligning Cost Breakdown Structures and Control Accounts 

As projects get larger it helps to have the overall budget broken down into smaller allocations. Cost accounts can be used to allocate the budget at a lower level and provide integration between the WBS and the cost control system. The budget is allocated to each cost account and the actual project expenses are reported at that same level.

Cost accounts can be established in different ways (not all of which tie into the WBS).

  • By WBS work package. Theoretically you could set up a separate cost account for each WBS element, but that does not make practical sense. Usually a number of work packages are assigned to a Control Account and cost management is undertaken at this level.
  • By resource type. In this approach, you may have cost accounts for internal labour, external labour, equipment, training, travel, etc.
  • By WBS by resource type. If you set up cost accounts for work packages on the WBS, you can also track the resource types within each work package. Each resource type can be tracked with sub-account numbers within the overall cost account (and consolidated separately if the code structure is consistent).

The more detailed your cost accounts are, the more work you will have setting up, allocating and tracking the cost account budgets, but the greater the potential for insight and control. Without sufficient detail, for example, one area of the project could be over budget but masked by another area that is under budget.
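As an illustration of the ‘by WBS by resource type’ idea, a small Python sketch (the account codes and figures are invented for the example):

```python
from collections import defaultdict

# Hypothetical cost records coded as <control account>.<resource type>
actuals = [
    ("CA-110.LAB", 12_500),   # internal labour in control account CA-110
    ("CA-110.EQP",  3_200),   # equipment in the same control account
    ("CA-120.LAB",  8_900),
    ("CA-120.TRV",  1_100),   # travel
]

by_control_account = defaultdict(float)
by_resource_type = defaultdict(float)
for code, amount in actuals:
    account, resource = code.split(".")
    by_control_account[account] += amount   # roll up to the control account
    by_resource_type[resource] += amount    # consolidate by resource type across accounts

print(dict(by_control_account))   # {'CA-110': 15700.0, 'CA-120': 10000.0}
print(dict(by_resource_type))     # {'LAB': 21400.0, 'EQP': 3200.0, 'TRV': 1100.0}
```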

Probably the most significant element in applying Earned Value Management (EVM)[3] to a project is deciding the number and location of control accounts. How many? How large (budget)? Who will be the Control Account Managers (CAMs)?

There is no clear cut process or algorithm. It depends on the work, the organization, the culture, the finance system, subcontract relationships, the scheduling system, the degree of risk in any one part of the project, the design for the WBS and OBS, and the project manager’s style and preference.

More Control Accounts mean more EVM cost, more time collecting data, more detail, and maybe more accuracy; they can also mean more time spent authoring, reviewing, approving, recording, and filing forms.  Fewer Control Accounts mean less EVM cost, less time collecting data, less detail, maybe more accuracy, fewer forms and less time processing those that remain.

So what is the right number of Control Accounts? It is a complicated and multidimensional problem with no ‘right answer’.  The only certainty is one size does not ‘fit all’ – pragmatic common sense is preferable to arbitrary rules.

 

Do all of these breakdowns really help? 

Traditional project management is based on these concepts.  However, emerging disciplines, particularly complexity theory, suggest that self-organising systems such as a project team cannot be understood by studying the individual parts of the team[4].

As the late Douglas Adams once said: “I can imagine Newton sitting down and working out his laws of motion and figuring out the way the Universe works and with him, a cat wandering around. The reason we had no idea how cats worked was because, since Newton, we had proceeded by the very simple principle that essentially, to see how things work, we took them apart. If you try and take a cat apart to see how it works, the first thing you have in your hands is a non-working cat.” 

The way complex entities work cannot be understood by breaking them down into parts. Even at the simplest level, studying a fish cannot explain how a shoal of fish works; at a complex level, understanding a project task in isolation will not explain the dynamics of a major project and its team of resources.

My personal view is the ‘breakdowns’ are still helpful ways to develop insights – but they no longer offer viable answers (if they ever did).  The path to increasing project success lies in the way the insights are interpreted and used within the complexity of a dynamic project delivery system.


[1] For a more detailed discussion see, The Origins of Modern Project Management:
   http://www.mosaicprojects.com.au/Resources_Papers_050.html#Top

[3] For more on Earned Value Management see:

[4] For a brief overview of complexity see:
   http://www.mosaicprojects.com.au/Resources_Papers_070.html  


Are numbers real?

As project managers we use numbers every day of the week but how real are they?

In the western world, numbers in the form we know and use today appeared in the 13th century when Leonardo Pisano Bigollo (c. 1170 – c. 1250), the Italian mathematician known as Fibonacci, published the Liber Abaci (1202). In the book, Fibonacci advocated numeration with the digits 0–9 and place value, and showed the practical importance of the new numeral system by applying it to commercial bookkeeping and other applications.

This book also introduced the Fibonacci sequence, which has many applications (the sequence is created by adding the previous two numbers: 1, 2, 3, 5, 8, etc.). 

The book was well received throughout educated Europe and had a profound impact on European thought.

Fibonacci was born around 1170 to Guglielmo Bonacci, a wealthy Italian merchant. Guglielmo directed a trading post in Bugia, a port east of Algiers in the Almohad dynasty’s sultanate in North Africa (now Béjaïa, Algeria). As a young boy, Fibonacci travelled with him to help; it was there he learned about the Hindu-Arabic numeral system described in his book.

Our modern numbers are descended from the Hindu-Arabic numeral system developed by ancient Indian mathematicians, in which a sequence of digits such as ‘975’ is read as a single number. These Indian numerals are traditionally thought to have been adopted by the Muslim Persian and Arab mathematicians in India, and passed on to the Arabs further west with the current form of the numerals developing in North Africa and studied by Fibonacci.

This numbering system is easy to use and widespread, but it was not the first or the last.  The Romans and earlier Mediterranean civilisations had their own systems, most of the modern computing world relies on binary mathematics, and duodecimals (base 12) were used in the UK prior to metrication to deal with measurements in feet and inches, etc.

Some numbers are ‘irrational’, such as the square root of 2 and π (Pi) – they cannot be written down exactly as a ratio or a finite decimal.  Others are imaginary, such as the square root of minus 1.

And then there are strange sequences that build fascinating patterns:

 
1 x 8 + 1 = 9
12 x 8 + 2 = 98
123 x 8 + 3 = 987
1234 x 8 + 4 = 9876
12345 x 8 + 5 = 98765
123456 x 8 + 6 = 987654
1234567 x 8 + 7 = 9876543
12345678 x 8 + 8 = 98765432
123456789 x 8 + 9 = 987654321

1 x 9 + 2 = 11
12 x 9 + 3 = 111
123 x 9 + 4 = 1111
1234 x 9 + 5 = 11111
12345 x 9 + 6 = 111111
123456 x 9 + 7 = 1111111
1234567 x 9 + 8 = 11111111
12345678 x 9 + 9 = 111111111
123456789 x 9 + 10 = 1111111111
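A couple of lines of Python will confirm both patterns:

```python
# Rebuild the two patterns shown above.
for n in range(1, 10):
    left = int("".join(str(d) for d in range(1, n + 1)))   # 1, 12, 123, ... 123456789
    print(f"{left} x 8 + {n} = {left * 8 + n}")            # 9, 98, 987, ...

for n in range(1, 10):
    left = int("".join(str(d) for d in range(1, n + 1)))
    print(f"{left} x 9 + {n + 1} = {left * 9 + n + 1}")    # 11, 111, 1111, ...
```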

Given our reliance on mathematics for virtually everything, how ‘real’ is a system that cannot completely express the ratio between the diameter and circumference of a circle, yet can generate fascinating sequences like those above?

There’s no answer to this post other than to suggest there are 10 types of people in the world – those who understand binary mathematics and those who don’t.


Predicting the Future

Only fools and the bankers who created the GFC think the future is absolutely predictable. The rest of us know there is always a degree of uncertainty in any prediction about what may happen at some point in the future. The key question is either what degree of uncertainty, or, in the project management space, what is the probability of achieving a predetermined time or cost commitment.

There are essentially three ways to deal with this uncertainty:

Option one is to hope the project will turn out OK. Unfortunately hope is not an effective strategy.

Option two is to plan effectively, measure actual progress, and predict future outcomes using techniques such as Earned Value and Earned Schedule, then proactively manage future performance to correct any deficiencies. Simply updating a CPM schedule is not enough; based on trend analysis, defined changes in performance need to be determined and instigated to bring the project back on track.
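By way of illustration, the core Earned Value ratios behind option two are simple to calculate (a sketch; the status-date figures are invented):

```python
# Hypothetical status-date figures for a project with a $200k budget at completion.
BAC = 200_000   # Budget At Completion
PV  =  80_000   # Planned Value of the work scheduled to date
EV  =  70_000   # Earned Value of the work actually performed
AC  =  75_000   # Actual Cost of that work

CPI = EV / AC     # Cost Performance Index (~0.93: overspending)
SPI = EV / PV     # Schedule Performance Index (~0.88: behind schedule)
EAC = BAC / CPI   # a simple Estimate At Completion (~$214k)

print(round(CPI, 2), round(SPI, 2), round(EAC))
```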

Option three is to use probabilistic calculations to determine the degree of certainty around any forecast completion date, calculate appropriate contingencies, and develop the baseline schedule to ensure the contingencies are preserved. From this baseline, applying the predictive techniques discussed in ‘option two’ plus effective risk management creates the best chance of success. The balance of this post looks at the options for calculating a probabilistic outcome.

 

The original system developed to assess probability in a schedule was PERT.  PERT was developed in 1957 and was based on a number of simplifications that were known to be inaccurate (but were seen as ‘good enough’ for the objectives of the Polaris program). The major problem with PERT is that it only calculates the probability distribution associated with the PERT critical path, which inevitably underestimates the uncertainty in the overall schedule.  For more on the problems with PERT see Understanding PERT [http://www.mosaicprojects.com.au/WhitePapers/WP1087_PERT.pdf].  Fortunately both computing power and the understanding of uncertainty calculations have advanced since the 1950s.
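For reference, the classic PERT approximation reduces each activity to a mean and standard deviation and then sums them along a single path (a Python sketch; the three-point estimates are invented):

```python
from math import sqrt

def pert(optimistic: float, most_likely: float, pessimistic: float):
    """Classic PERT (beta) approximation: mean and standard deviation of one activity."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return mean, sd

# Three activities in series along the assumed critical path (durations in days).
path = [pert(8, 10, 16), pert(4, 5, 9), pert(9, 12, 20)]

path_mean = sum(m for m, _ in path)
path_sd = sqrt(sum(sd ** 2 for _, sd in path))   # variances add along the path
print(round(path_mean, 1), round(path_sd, 2))    # ~29.0 days, sd ~2.42 days

# Note: only one path is considered, which is why PERT tends to
# underestimate the uncertainty of the whole schedule (merge bias is ignored).
```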

Modern computing allows more effective calculations of uncertainty in schedules; the two primary options are Monte Carlo and Latin Hypercube sampling. When you run a Monte Carlo simulation or a Latin Hypercube simulation, what you’re trying to achieve is convergence. Convergence is achieved when you reach the point where you could run another ten thousand, or another hundred thousand, simulations and your answer isn’t really going to change.  Because of the way the algorithms are implemented, Latin Hypercube reaches convergence more quickly than Monte Carlo. It’s a more advanced, more efficient algorithm for distribution calculations.

Both options are going to come to the same answer eventually, so the choice comes down to familiarity. Old-school risk assessment people are going to have more experience with Monte Carlo, so they might default to that, whereas people new to the discipline are likely to favour the more efficient algorithm. It’s really just a question of which method you are more comfortable with.  However, before making a decision, it helps to know a bit about both of these options:

 

Monte Carlo

Stanislaw Ulam first started playing around with the underpinning concepts while convalescing from an illness, playing solitaire to pass the time. He wanted some way of figuring out the probability that he would finish his solitaire game successfully and tried many different mathematical techniques, but he couldn’t do it. Then he came up with the idea of using probability distributions [http://www.mosaicprojects.com.au/WhitePapers/WP1037_Probability.pdf] as a method of figuring out the answer.

 

Later, Ulam and the other scientists working on the Manhattan Project were trying to figure out the likely distribution of neutrons within a nuclear reaction. He remembered this method and used it to calculate something that they couldn’t figure out any other way. Then they needed a name for it! One of the team had an uncle who used to gamble a lot in Monte Carlo, so they decided to call it the Monte Carlo method in honour of the odds and probabilities found in casinos.

The Monte Carlo method (or Monte Carlo experiments) is a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; i.e., by running simulations many times over in order to calculate probabilities heuristically, just like actually playing and recording your results in a real casino.

In summary, Monte Carlo sampling uses random or pseudo-random numbers to sample from the probability distribution associated with each activity in a schedule (or each cost item in the cost plan). The sampling is entirely random – that is, any given sample value may fall anywhere within the range of the input distribution – and with enough iterations it recreates the input distributions for the whole model. However, a problem of clustering may arise when only a small number of iterations is performed.
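A minimal Monte Carlo sketch in Python (the three parallel activities and their triangular distributions are invented for illustration):

```python
import random

def simulate_once() -> float:
    """One iteration: sample each activity and take the longest (they run in parallel)."""
    a = random.triangular(8, 16, 10)    # low, high, mode - durations in days
    b = random.triangular(9, 14, 10)
    c = random.triangular(8, 20, 10)
    return max(a, b, c)                 # integration cannot start until all three finish

iterations = 100_000
finishes = [simulate_once() for _ in range(iterations)]
p_on_time = sum(f <= 12 for f in finishes) / iterations
print(round(p_on_time, 3))              # estimated probability of starting integration by day 12
```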

 

Latin hypercube[i] sampling (LHS)

LHS is a statistical method for generating a sample of plausible collections of parameter values from a multidimensional distribution. It was described by McKay in 1979; an independently equivalent technique was proposed by Eglājs in 1977, and it was further elaborated by Ronald L. Iman and others in 1981.

Latin Hypercube sampling stratifies the input probability distributions and takes a random value from each interval of the input distribution. The effect is that each sample (the data used for each simulation) is constrained to match the input distribution very closely. Therefore, even for modest sample sizes, the Latin Hypercube method makes all, or nearly all, of the sample means fall within a small fraction of the standard error. This is usually desirable.
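A sketch of the stratification idea for a single input distribution (Python; the normal distribution parameters are just for illustration):

```python
import random
from statistics import NormalDist

def latin_hypercube_1d(n: int, dist: NormalDist) -> list[float]:
    """Draw n samples, one from each of n equally probable strata of the distribution."""
    samples = []
    for i in range(n):
        u = (i + random.random()) / n        # a random point inside stratum i of [0, 1)
        samples.append(dist.inv_cdf(u))      # map back through the inverse CDF
    random.shuffle(samples)                  # shuffle so strata are not used in order
    return samples

sample = latin_hypercube_1d(20, NormalDist(10_000, 1_000))
print(round(sum(sample) / len(sample)))      # the sample mean sits very close to 10,000
```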

 

Different type of sampling

The difference between random sampling, Latin Hypercube sampling and orthogonal sampling can be explained as follows:

  1. The Monte Carlo approach uses random sampling; new sample points are generated without taking into account the previously generated sample points. Thus one does not necessarily need to know beforehand how many sample points are needed.
  2. In Latin Hypercube sampling one must first decide how many sample points to use and for each sample point remember that it has been used. Fewer iterations are needed to achieve convergence.
  3. In Orthogonal sampling, the sample space is divided into equally probable subspaces. All sample points are then chosen simultaneously making sure that the total ensemble of sample points is a Latin Hypercube sample and that each subspace is sampled with the same density.

 

In summary, orthogonal sampling ensures that the ensemble of random numbers is a very good representation of the real variability (but is rarely used in project management); LHS ensures that the ensemble of random numbers is representative of the real variability; whereas traditional random sampling is just an ensemble of random numbers without any guarantees.

 

The Result

Once you have a reliable probability distribution and a management team prepared to recognise, and deal with, uncertainty, you are in the best position to effectively manage a project through to a successful conclusion.  Conversely, pretending uncertainty does not exist is almost certainly a recipe for failure!

In conclusion, it would also be really nice to see clients start to recognise the simple fact that there are no absolute guarantees about future outcomes.  I am really looking forward to seeing the first intelligently prepared tender that asks the organisations submitting a tender to define the probability of achieving the contract date and the contingency included in their project program to achieve this level of certainty.  Any tenderer that says they are 100% certain of achieving the contract date would of course be rejected on the basis that they are either dishonest or incompetent…


[i] In the context of statistical sampling, a square grid containing sample positions is a Latin Square if (and only if) there is only one sample in each row and each column. A Latin Hypercube is the generalisation of this concept to an arbitrary number of dimensions, whereby each sample is the only one in each axis-aligned hyperplane containing it.
