Archive for the ‘Improvementology’ Category

I am a big fan of pictures that tell a story … and this week I discovered someone who is creating great pictures … Hayley Lewis.

This is one of Hayley’s excellent sketch notes … the one that captures the essence of the Bruce Tuckman model of team development.

The reason that I share this particular sketch-note is because my experience of developing improvement-by-design teams is that it works just like this!

The tricky phase is the STORMING one because not all teams survive it!

About half sink in the storm – and that seems like an awful waste – and I believe it is avoidable.

This means that before starting the team development cycle, the leader needs to be aware of how to navigate themselves and the team through the storm phase … and that requires training, support and practice.

Which is the reason why coaching from an independent, experienced, capable practitioner is a critical element of the improvement process.

Have you heard the phrase “Pride comes before a fall”?

What does this mean? That the feeling of pride is the reason for the subsequent fall?

So by following that causal logic, if we do not allow ourselves to feel proud then we can avoid the fall?

And none of us like the feeling of falling and failing. We are fearful of that negative feeling, so with this simple trick we can avoid feeling bad. Yes?

But we all know the positive feeling of achievement – we feel pride when we have done good work, when our impact matches our intent.  Pride in our work.

Is that bad too?

Should we accept under-achievement and unexceptional mediocrity as the inevitable cost of avoiding the pain of possible failure?  Is that what we are being told to do here?

The phrase comes from the Bible, from the Book of Proverbs 16:18 to be precise.


And the problem here is that the phrase “pride comes before a fall” is not the whole proverb.

It has been simplified. Some bits have been omitted. And those omissions lead to ambiguity and the opportunity for obfuscation and re-interpretation.

In the fuller New International Version we see a missing bit … the “haughty spirit” bit.  That is another way of saying “over-confident” or “arrogant”.

But even this “authorised” version is still ambiguous and more questions spring to mind:

Q1. What sort of pride are we referring to? Just the confidence version? What about the pride that follows achievement?

Q2. How would we know if our feeling of confidence is actually justified?

Q3. Does a feeling of confidence always precede a fall? Is that how we diagnose over-confidence? Retrospectively? Are there instances when we feel confident but we do not fail? Are there instances when we do not feel confident and then fail?

Q4. Does confidence cause the fall or is it just a temporal association? Is there something more fundamental that causes both high-confidence and low-competence?

There is a well known model called the Conscious-Competence model of learning which generates a sequence of four stages to achieving a new skill; such as the ones we need to achieve our intended outcomes.

We all start in the “blissful ignorance” zone of unconscious incompetence.  Our unknowns are unknown to us.  They are blind spots.  So we feel unjustifiably confident.


In this model the first barrier to progress is “wrong intuition” which means that we actually have unconscious assumptions that are distorting our perception of reality.

What we perceive makes sense to us. It is clear and obvious. We feel confident. We believe our own rhetoric.

But our unconscious assumptions can trick us into interpreting information incorrectly.  And if we derive decisions from unverified assumptions and invalid analysis then we may do the wrong thing and not achieve our intended outcome.  We may unintentionally cause ourselves to fail and not be aware of it.  But we are proud and confident.

Then the gap between our intent and our impact becomes visible to all and painful to us. So we are tempted to avoid the social pain of public failure by retreating behind the “Yes, But” smokescreen of defensive reasoning. The “doom loop” as it is sometimes called. The Victim Vortex. “Don’t name, shame and blame me, I was doing my best. I did not intend that to happen. To err is human”.

The good news is that this learning model also signposts a possible way out; a door in the black curtain of ignorance.  It suggests that we can learn how to correct our analysis by using feedback from reality to verify our rhetorical assumptions.  Those assumptions which pass the “reality check” we keep, those which fail the “reality check” we redesign and retest until they pass.  Bit by bit our inner rhetoric comes to more closely match reality and the wisdom of our decisions will improve.

And what we then see is improvement.  Our impact moves closer towards our intent. And we can justifiably feel proud of that achievement. We do not need to be best-compared-with-the-rest; just being better-than-we-were-before is OK. That is learning.


And this is how it feels … this is the Learning Curve … or the Nerve Curve as we call it.

What it says is that to be able to assess confidence we must also measure competence. Outcomes. Impact.

And to achieve excellence we have to be prepared to actively look for any gap between intent and impact.  And we have to be prepared to see it as an opportunity rather than as a threat. And we will need to be able to seek feedback and other people’s perspectives. And we need to be open to asking for examples and explanations from those who have demonstrated competence.

It says that confidence is not a trustworthy surrogate for competence.

It says that we want the confidence that flows from competence because that is the foundation of trust.

Improvement flows at the speed of trust and seeing competence, confidence and trust growing is a joyous thing.

Pride and Joy are OK.

“Arrogance and incompetence come before a fall” would be a better proverb.

Many of the challenges that we face in delivering effective and affordable health care do not have well understood and generally accepted solutions.

If they did there would be no discussion or debate about what to do and the results would speak for themselves.

This lack of understanding is leading us to try to solve a complicated system design challenge in our heads.  Intuitively.

And trying to do it this way is fraught with frustration and risk because our intuition tricks us. It was this sort of challenge that led Professor Rubik to invent his famous 3D Magic Cube puzzle.

It is difficult enough to learn how to solve the Magic Cube puzzle by trial and error; it is even more difficult to attempt to do it inside our heads! Intuitively.

And we know the Rubik Cube puzzle is solvable, so all we need are some techniques, tools and training to improve our Rubik Cube solving capability.  We can all learn how to do it.

Let us return to the challenge of safe and affordable health care, and to the specific problems of unscheduled care, A&E targets, delayed transfers of care (DTOC), finance, fragmentation and chronic frustration.

This is a systems engineering challenge so we need some systems engineering techniques, tools and training before attempting it.  Not after failing repeatedly.


One technique that a systems engineer will use is called a Vee Diagram such as the one shown above.  It shows the sequence of steps in the generic problem solving process and it has the same sequence that we use in medicine for solving problems that patients present to us …

Diagnose, Design and Deliver

which is also known as …

Study, Plan, Do.

Notice that there are three words in the diagram that start with the letter V … value, verify and validate.  These are probably the three most important words in the vocabulary of a systems engineer.

One tool that a systems engineer always uses is a model of the system under consideration.

Models come in many forms from conceptual to physical and are used in two main ways:

  1. To assist the understanding of the past (diagnosis)
  2. To predict the behaviour in the future (prognosis)

And the process of creating a system model, the sequence of steps, is shown in the Vee Diagram.  The systems engineer’s objective is a validated model that can be trusted to make good-enough predictions; ones that support making wiser decisions of which design options to implement, and which not to.

So if a systems engineer presented us with a conceptual model that is intended to assist our understanding, then we will require some evidence that all stages of the Vee Diagram process have been completed.  Evidence that provides assurance that the model predictions can be trusted.  And the scope over which they can be trusted.

Last month a report was published by the Nuffield Trust that is entitled “Understanding patient flow in hospitals”  and it asserts that traffic flow on a motorway is a valid conceptual model of patient flow through a hospital.  Here is a direct quote from the second paragraph in the Executive Summary:

Unfortunately, no evidence is provided in the report to support the validity of the statement and that omission should ring an alarm bell.

The observation that “the hospitals with the least free space struggle the most” is not a validation of the conceptual model.  Validation requires a concrete experiment.

To illustrate why observation is not validation let us consider a scenario where I have a headache and I take a paracetamol and my headache goes away.  I now have some evidence that shows a temporal association between what I did (take paracetamol) and what I got (a reduction in head pain).

But this is not a valid experiment because I have not considered the other seven possible combinations of headache before (Y/N), paracetamol (Y/N) and headache after (Y/N).
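For completeness, the eight possible combinations in this example are easy to enumerate; a trivial sketch:

```python
from itertools import product

# The three yes/no observations in the headache example.
factors = ("headache before", "took paracetamol", "headache after")

combinations = list(product((True, False), repeat=3))
print(len(combinations))  # → 8 ... and we observed only one of them
for combo in combinations:
    print(dict(zip(factors, combo)))
```

A valid experiment has to say something about all eight cells, not just the one we happened to observe.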

An association cannot be used to prove causation; not even a temporal association.

When I do not understand the cause, and I am without evidence from a well-designed experiment, then I might be tempted to intuitively jump to the (invalid) conclusion that “headaches are caused by lack of paracetamol!” and if untested this invalid judgement may persist and even become a belief.

Understanding causality requires an approach called counterfactual analysis; otherwise known as “What if?”  We can start that process with a thought experiment using our rhetorical model, but we must remember to always validate the outcome with a real experiment. That is how good science works.

A famous thought experiment was conducted by Albert Einstein when he asked the question “If I were sitting on a light beam and moving at the speed of light what would I see?” This question led him to the Theory of Relativity which completely changed the way we now think about space and time.  Einstein’s model has been repeatedly validated by careful experiment, and has allowed engineers to design and deliver valuable tools such as the Global Positioning System which uses relativity theory to achieve high positional precision and accuracy.

So let us conduct a thought experiment to explore the ‘faster movement requires more space’ statement in the case of patient flow in a hospital.

First, we need to define what we mean by the words we are using.

The phrase ‘faster movement’ is ambiguous.  Does it mean higher flow (more patients per day being admitted and discharged) or does it mean shorter length of stay (the interval between the admission and discharge events for individual patients)?

The phrase ‘more space’ is also ambiguous. In a hospital that implies physical space i.e. floor-space that may be occupied by corridors, chairs, cubicles, trolleys, and beds.  So are we actually referring to flow-space or storage-space?

What we have in this over-simplified statement is the conflation of two concepts: flow-capacity and space-capacity. They are different things. They have different units. And the result of conflating them is meaningless and confusing.

However, our stated goal is to improve understanding, so let us consider one combination, and let us be careful to be more precise with our terminology: “higher flow always requires more beds”. Does it? Can we disprove this assertion with an example where higher flow requires fewer beds (i.e. less space-capacity)?

The relationship between flow and space-capacity is well understood.

The starting point is Little’s Law which was proven mathematically in 1961 by J.D.C. Little and it states:

Average work in progress = Average lead time × Average flow.

In the hospital context, work in progress is the number of occupied beds, lead time is the length of stay and flow is admissions or discharges per time interval (which must be the same on average over a long period of time).

(NB. Engineers are rather pedantic about units so let us check that this makes sense: the unit of WIP is ‘patients’, the unit of lead time is ‘days’, and the unit of flow is ‘patients per day’ so ‘patients’ = ‘days’ * ‘patients / day’. Correct. Verified. Tick.)
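The same units check can be run as a toy calculation. The numbers here are hypothetical, chosen only to make the arithmetic visible:

```python
# Little's Law with hypothetical round numbers.
flow = 20.0       # average admissions (= discharges) per day: patients/day
lead_time = 8.0   # average length of stay: days

wip = lead_time * flow   # average occupied beds

# Units: days x patients/day = patients. Verified. Tick.
print(wip)  # → 160.0
```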

So, is there a situation where flow can increase and WIP can decrease? Yes. When lead time decreases. Little’s Law says that is possible. We have disproved the assertion.

Let us take the other interpretation of ‘faster movement’, i.e. shorter length of stay: does a shorter length of stay always require more beds? No. If flow remains the same then Little’s Law states that we will require fewer beds. This assertion is disproved as well.

And we need to remember that Little’s Law is proven to be valid for averages. Does that shed any light on the source of our confusion? Could the assertion about flow and beds actually be about the variation in flow over time, and not about the average flow?

And this is also well understood. The original work on it was done almost exactly 100 years ago by Agner Arup Erlang and the problem he looked at was the quality of customer service of the early telephone exchanges. Specifically, how likely was the caller to get the “all lines are busy, please try later” response.

What Erlang showed was that there is a mathematical relationship between the number of calls being made (the demand), the probability of a call being connected first time (the service quality) and the number of telephone circuits and switchboard operators available (the service cost).

So it appears that we already have a validated mathematical model that links flow, quality and cost that we might use if we substitute ‘patients’ for ‘calls’, ‘beds’ for ‘telephone circuits’, and ‘being connected’ for ‘being admitted’.
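That substitution can be sketched in code.  This is the standard Erlang B recurrence, not anything taken from the Nuffield report, and the admission figures are invented purely for illustration:

```python
def erlang_b(offered_load, servers):
    """Blocking probability from the Erlang B formula, computed with
    the standard numerically stable recurrence.
    offered_load: arrival rate x mean holding time, in Erlangs.
    servers: number of telephone circuits (or, by analogy, beds)."""
    b = 1.0  # blocking probability with zero servers
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Hypothetical hospital analogy: 20 admissions/day x 8-day average stay
# = 160 Erlangs of offered load ("demand").
load = 20.0 * 8.0
for beds in (160, 170, 180, 190):
    print(beds, round(erlang_b(load, beds), 3))
```

Running it shows the trade-off Erlang described: each extra block of beds (cost) buys a lower probability of “no bed available at the moment of admission” (quality), for the same demand.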

And this topic of patient flow, A&E performance and Erlang queues has been explored already … here.

So a telephone exchange is a more valid model of a hospital than a motorway.

We are now making progress in deepening our understanding.

The use of an invalid, untested, conceptual model is sloppy systems engineering.

So if the engineering is sloppy we would be unwise to fully trust the conclusions.

And I share this feedback in the spirit of black box thinking because I believe that there are some valuable lessons to be learned here – by us all.

To vote for this topic please click here.
To subscribe to the blog newsletter please click here.
To email the author please click here.

This week I witnessed an act of courage by someone prepared to take the health care bull by the horns.

On 25th October 2016 a landmark review was published about the integrated health and social care system in Northern Ireland.

It is not a comfortable read.

And the act of courage was the simultaneous publication of the document “Health and Well-being 2026” by Michelle O’Neill, the new Minister of Health.

The full document can be downloaded here.

It is courageous because it says, bluntly, that there is a burning platform, the level of service is not acceptable, doing nothing is not an option, and nothing short of a system-wide redesign will be required.

It is courageous because it sets a clear vision, a burning ambition, and is very clear that this will not be a quick fix. It is a ten year plan.

That implies a constancy of purpose will need to be maintained for at least a decade.


And it is courageous because it says that:

we will have to learn how to do this

Here is one paragraph that says that:

“Developing the science of improvement can be done at the same time as making improvements


We need an infrastructure that makes this possible.”

The good news is that this science of improvement in health care is already well advanced, and it will advance further, because a whole health and social care system transformation-by-design is a challenge of some magnitude.

A health and social care system engineering (HSCSE) challenge.

One component of the ten year plan is to develop this capability through a process called co-production.

Notice that the focus is on pro-actively preventing illness, not just re-actively managing it.

Notice that the design is centered on both the customer and the supplier, not just on the supplier.

And notice that the population served are also expected to be equal partners in the transformation-by-design process.

Courage, constancy of purpose and capability development  … a very welcome breath of fresh air!

For more posts like this please vote here.
For more information please subscribe here.

It has been a busy week.

And a common theme has cropped up which I have attempted to capture in the diagram below.

It relates to how the NHS measures itself and how it “drives” improvement.

The measures are called “failure metrics” – mortality, infections, pressure sores, waiting time breaches, falls, complaints, budget overspends.  The list is long.

The data for a specific trust are compared with an arbitrary minimum acceptable standard to decide where the organisation is on the Red-Amber-Green scale.

If we are in the red zone on the RAG chart … we get a kick.  If not we don’t.

The fear of being bullied and beaten raises the emotional temperature and the internal pressure … which drives movement to get away from the pain.  A nematode worm will behave this way. They are not stupid either.

As we approach the target line our RAG indicator turns “amber” … this is the “not statistically significant zone” … and now the stick is being waggled, ready in case the light goes red again.

So we muster our reserves of emotional energy and we PUSH until our RAG chart light goes green … but then we have to hold it there … which is exhausting.  One pain is replaced by another.

The next step is for the population of NHS nematodes to be compared with each other … they must be “bench-marked”, and some are doing better than others … as we might expect. We have done our “sadistics” training courses.

The bottom 5% or 10% line is used to set the “arbitrary minimum standard target” … and the top 10% are feted at national award ceremonies … and feast on the envy of the other 90 or 95% of “losers”.

The Cream of the Crop now have a big tick in their mission statement objectives box: “To be in the Top 10% of Trusts in the UK”. Hip hip huzzah.

And what has this system design actually achieved? The Cream of the Crap.


It is said that every system is perfectly designed to deliver what it delivers.

And a system that has been designed to only use failure and fear to push improvement can only ever achieve chronic mediocrity – either chaotic mediocrity or complacent mediocrity.

So, if we actually do want to tap into the vast zone of unfulfilled potential, and if we do actually want to escape the perpetual pain of the Cream of the Crap Trap forever … we need a better system design.

So we need some system engineers to help us do that.

And this week I met some … at the Royal Academy of Engineering in London … and it felt like finding a candle of hope in the darkness of despair.

I said it had been a busy week!


A capstan is a simple machine for combining the effort of many people and enabling them to achieve more than any of them could do alone.

The word appears to have come into English from the Portuguese and Spanish sailors at around the time of the Crusades.

Each sailor works independently of the others. There is no requirement for them to be equally strong because the capstan will combine their efforts.  And the capstan also serves as a feedback loop because everyone can sense when someone else pushes harder or slackens off.  It is an example of simple, efficient, effective, elegant design.

In the world of improvement we also need simple, efficient, effective and elegant ways to combine the efforts of many in achieving a common purpose.  Such as raising the standards of excellence and weighing the anchors of resistance.

In health care improvement we have many simultaneous constraints and we have many stakeholders with specific perspectives and special expertise.

And if we are not careful they will tend to pull only in their preferred direction … like a multi-way tug-o-war.  The result?  No progress and exhausted protagonists.

There are those focused on improving productivity – Team Finance.

There are those focused on improving delivery – Team Operations.

There are those focused on improving safety – Team Governance.

And we are all tasked with improving quality – Team Everyone.

So we need a synergy machine that works like a capstan-of-old, and here is one design.

It has four poles and it always turns in a clockwise direction, so the direction of push is clear.

And when all the protagonists push in the same direction, they will get their own ‘win’ and also assist the others to make progress.

This is how the sails of success are hoisted to catch the wind of change; and how the anchors of anxiety are heaved free of the rocks of fear; and how the bureaucratic bilge is pumped overboard to lighten our load and improve our speed and agility.

And the more hands on the capstan the quicker we will achieve our common goal.

Collective excellence.

The Harvard Business Review is worth reading because many of its articles challenge deeply held assumptions, and then back up the challenge with the pragmatic experience of those who have succeeded in overcoming the limiting beliefs.

So the heading on the April 2016 copy that awaited me on my return from an Easter break caught my eye: YOU CAN’T FIX CULTURE.



The successful leaders of major corporate transformations are agreed … the cultural change follows the technical change … and then the emergent culture sustains the improvement.

The examples presented include the Ford Motor Company, Delta Airlines, Novartis – so these are not corporate small fry!

The evidence suggests that the belief of “we cannot improve until the culture changes” is the mantra of failure of both leadership and management.

A health care system is characterised by a culture of risk avoidance. And for good reason. It is all too easy to harm while trying to heal!  Primum non nocere is a core tenet – first do no harm.

But, change and improvement implies taking risks – and those leaders of successful transformation know that the bigger risk by far is to become paralysed by fear and to do nothing.  Continual learning from many small successes and many small failures is preferable to crisis learning after a catastrophic failure!

The UK healthcare system is in a state of chronic chaos.  The evidence is there for anyone willing to look.  And waiting for the NHS culture to change, or pushing for culture change first appears to be a guaranteed recipe for further failure.

The HBR article suggests that it is better to stay focussed; to work within our circles of control and influence; to learn from others where knowledge is known, and where it is not – to use small, controlled experiments to explore new ground.

And I know this works because I have done it and I have seen it work.  Just by focussing on what is important to every member on the team; focussing on fixing what we could fix; not expecting or waiting for outside help; gathering and sharing the feedback from patients on a continuous basis; and maintaining patient and team safety while learning and experimenting … we have created a micro-culture of high safety, high efficiency, high trust and high productivity.  And we have shared the evidence via JOIS.

The micro-culture required to maintain the safety, flow, quality and productivity improvements emerged and evolved along with the improvements.

It was part of the effect, not the cause.

So the concept of ‘fix the system design flaws and the continual improvement culture will emerge’ seems to work at macro-system and at micro-system levels.

We just need to learn how to diagnose and treat healthcare system design flaws. And that is known knowledge.

So what is the next excuse?  Too busy?

It was the time for Bob and Leslie’s regular Improvement Science coaching session.

<Leslie> Hi Bob, how are you today?

<Bob> I am getting over a winter cold but otherwise I am good.  And you?

<Leslie> I am OK and I need to talk something through with you because I suspect you will be able to help.

<Bob> OK. What is the context?

<Leslie> Well, one of the projects that I am involved with is looking at the elderly unplanned admission stream which accounts for less than half of our unplanned admissions but more than half of our bed days.

<Bob> OK. So what were you looking to improve?

<Leslie> We want to reduce the average length of stay so that we free up beds to provide resilient space-capacity to ease the 4-hour A&E admission delay niggle.

<Bob> That sounds like a very reasonable strategy.  So have you made any changes and measured any improvements?

<Leslie> We worked through the 6M Design® sequence. We studied the current system, diagnosed some time traps and bottlenecks, redesigned the ones we could influence, modified the system, and continued to measure to monitor the effect.

<Bob> And?

<Leslie> It feels better but the system behaviour charts do not show an improvement.

<Bob> Which charts, specifically?

<Leslie> The BaseLine XmR charts of average length of stay for each week of activity.

<Bob> And you locked the limits when you made the changes?

<Leslie> Yes. And there still were no red flags. So that means our changes have not had a significant effect. But it definitely feels better. Am I deluding myself?

<Bob> I do not believe so. Your subjective assessment is very likely to be accurate. Our Chimp OS 1.0 is very good at some things! I think the issue is with the tool you are using to measure the change.

<Leslie> The XmR chart?  But I thought that was THE tool to use?

<Bob> Like all tools it is designed for a specific purpose.  Are you familiar with the term Type II Error?

<Leslie> Doesn’t that come from research? I seem to remember that is the error we make when we have an under-powered study.  When our sample size is too small to confidently detect the change in the mean that we are looking for.

<Bob> A perfect definition!  The same error can happen when we are doing before and after studies too.  And when it does, we see the pattern you have just described: the process feels better but we do not see any red flags on our BaseLine© chart.

<Leslie> But if our changes only have a small effect how can it feel better?

<Bob> Because some changes have cumulative effects and we omit to measure them.

<Leslie> OMG!  That makes complete sense!  For example, if my bank balance is stable my average income and average expenses are balanced over time. So if I make a small-but-sustained improvement to my expenses, like using lower cost generic label products, then I will see a cumulative benefit over time to the balance, but not the monthly expenses; because the noise swamps the signal on that chart!

<Bob> An excellent analogy!

<Leslie> So the XmR chart is not the tool for this job. And if this is the only tool we have then we risk making a Type II error. Is that correct?

<Bob> Yes. We do still use an XmR chart first though, because if there is a big enough and fast enough shift then the XmR chart will reveal it.  If there is not then we do not give up just yet; we reach for our more sensitive shift detector tool.

<Leslie> Which is?

<Bob> I will leave you to ponder on that question.  You are a trained designer now so it is time to put your designer hat on and first consider the purpose of this new tool, and then create the outline of a fit-for-purpose design.

<Leslie> OK, I am on the case!
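For readers unfamiliar with the mechanics, the XmR chart limits that Bob and Leslie refer to can be sketched in a few lines.  This is a simplified illustration with invented weekly data, not the BaseLine© implementation:

```python
def xmr_limits(values):
    """Centre line and natural process limits for an XmR (individuals)
    chart, computed from a baseline sample via the average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(a - b) for a, b in zip(values[1:], values)]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for n = 2)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Invented weekly average length-of-stay figures (days) from before the change.
baseline = [9.8, 10.4, 9.9, 10.1, 10.6, 9.7, 10.2, 10.0]
lo, cl, hi = xmr_limits(baseline)

# With the limits locked at these values, a later point outside [lo, hi]
# raises a red flag; a small-but-real cumulative shift can hide inside
# the limits, which is exactly the Type II error Bob describes.
```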

This week I conducted an experiment – on myself.

I set myself the challenge of measuring the cost of chaos, and it was tougher than I anticipated it would be.

It is easy enough to grasp the concept that fire-fighting to maintain patient safety amidst the chaos of healthcare would cost more in terms of tears and time …

… but it is tricky to translate that concept into hard numbers; i.e. cash.

Chaos is an emergent property of a system.  Safety, delivery, quality and cost are also emergent properties of a system. We can measure cost, our finance departments are very good at that. We can measure quality – we just ask “How did your experience match your expectation?”  We can measure delivery – we have created a whole industry of access target monitoring.  And we can measure safety by checking for things we do not want – near misses and never events.

But while we can feel the chaos we do not have an easy way to measure it. And it is hard to improve something that we cannot measure.

So the experiment was to see if I could create some chaos, then if I could calm it, and then if I could measure the cost of the two designs – the chaotic one and the calm one.  The difference, I reasoned, would be the cost of the chaos.

And to do that I needed a typical chunk of a healthcare system: like an A&E department where the relationship between safety, flow, quality and productivity is rather important (and has been a hot topic for a long time).

But I could not experiment on a real A&E department … so I experimented on a simplified but realistic model of one. A simulation.

What I discovered came as a BIG surprise, or more accurately a sequence of big surprises!

  1. First I discovered that it is rather easy to create a design that generates chaos and danger.  All I needed to do was to assume I understood how the system worked and then use some averaged historical data to configure my model.  I could do this on paper or I could use a spreadsheet to do the sums for me.
  2. Then I discovered that I could calm the chaos by reactively adding lots of extra capacity in terms of time (i.e. more staff) and space (i.e. more cubicles).  The downside of this approach was that my costs sky-rocketed; but at least I had restored safety and calm and I had eliminated the fire-fighting.  Everyone was happy … except the people expected to foot the bill. The finance director, the commissioners, the government and the tax-payer.
  3. Then I got a really big surprise!  My safe-but-expensive design was horribly inefficient.  All my expensive resources were now running at rather low utilisation.  Was that the cost of the chaos I was seeing? But when I trimmed the capacity and costs the chaos and danger reappeared.  So was I stuck between a rock and a hard place?
  4. Then I got a really, really big surprise!!  I hypothesised that the root cause might be the fact that the parts of my system were designed to work independently, and I was curious to see what happened when they worked interdependently. In synergy. And when I changed my design to work that way the chaos and danger did not reappear and the efficiency improved. A lot.
  5. And the biggest surprise of all was how difficult this was to do in my head; and how easy it was to do when I used the theory, techniques and tools of Improvement-by-Design.

So if you are curious to learn more … I have written up the full account of the experiment with rationale, methods, results, conclusions and references and I have published it here.

Evolution is an amazing process.

Using the same building blocks that have been around for a long time, it cooks up innovative permutations and combinations that reveal new and ever more useful properties.

Very often a breakthrough in understanding comes from a simplification, not from making it more complicated.

Knowledge evolves in just the same way.

Sometimes a well understood simplification in one branch of science is used to solve an ‘impossible’ problem in another.

Cross-fertilisation of learning is a healthy part of the evolution process.

Improvement implies evolution of knowledge and understanding, and then application of that insight in the process of designing innovative ways of doing things better.

And so it is in healthcare.  For many years the emphasis in healthcare improvement has been on the Safety-and-Quality dimension, and for very good reasons.  We need to avoid harm and we want to achieve happiness; for everyone.

But many of the issues that plague healthcare systems are not primarily SQ issues … they are flow and productivity issues. FP. The safety and quality problems are secondary – so only focussing on them is treating the symptoms and not the cause.  We need to balance the wheel … we need flow science.

Fortunately the science of flow is well understood … outside healthcare … but apparently not so well understood inside healthcare … given the queues, delays and chaos that seem to have become the expected norm.  So there is a big opportunity for cross fertilisation here.  If we choose to make it happen.

For example, from computer science we can borrow the knowledge of how to schedule tasks to make best use of our finite resources and at the same time avoid excessive waiting.

It is a very well understood science. There is comprehensive theory, a host of techniques, and fit-for-purpose tools that we can pick off the shelf and use. Today, if we choose to.
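To make this concrete, one of the oldest results in scheduling theory says that serving jobs shortest-first minimises the average wait. Here is a minimal sketch with hypothetical task durations (the numbers are illustrative, not from any real department):

```python
# One of the simplest results from scheduling theory: serving jobs
# shortest-first (SPT) minimises the average waiting time on a single
# resource.  The job durations below are hypothetical.

def average_wait(durations):
    """Average time each job waits before starting, served in list order."""
    waits, clock = [], 0
    for d in durations:
        waits.append(clock)   # this job waits until all earlier jobs finish
        clock += d
    return sum(waits) / len(waits)

jobs = [30, 5, 10, 60, 15]            # task durations in minutes

fifo = average_wait(jobs)             # first-come-first-served order
spt = average_wait(sorted(jobs))      # shortest-first order

print(fifo, spt)                      # 43.0 22.0 - SPT halves the wait here
```

Same jobs, same resource, same total work done; only the order changed, and the average wait halved. That is the kind of off-the-shelf knowledge on offer.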

So what are the reasons we do not?

Is it because healthcare is quite introspective?

Is it because we believe that there is something ‘special’ about healthcare?

Is it because there is no evidence … no hard proof … no controlled trials?

Is it because we assume that queues are always caused by lack of resources?

Is it because we do not like change?

Is it because we do not like to admit that we do not know stuff?

Is it because we fear loss of face?

Whatever the reasons, the evidence and experience show that most (if not all) of the queues, delays and chaos in healthcare systems are iatrogenic.

This means that they are self-generated. And that implies we can un-self-generate them … at little or no cost … if only we knew how.

The only cost is to our egos of having to accept that there is knowledge out there that we could use to move us in the direction of excellence.

New meat for our old bones?

The theme this week has been emergent learning.

By that I mean the ‘ah ha’ moment that happens when lots of bits of a conceptual jigsaw go ‘click’ and fall into place.

When, what initially appears to be smoky confusion suddenly snaps into sharp clarity.  Eureka!  And now new learning can emerge.

This did not happen by accident.  It was engineered.

The picture above is part of a bigger schematic map of a system – in this case a system related to the global health challenge of escalating obesity.

It is a complicated arrangement of boxes and arrows. There are dotted lines that outline parts of the system that have leaky boundaries, like the borders on a political map.

But it is a static picture of the structure … it tells us almost nothing about the function, the system behaviour.

And our intuition tells us that, because it is a complicated structure, it will exhibit complex and difficult to understand behaviour.  So, guided by our inner voice, we toss it into the pile labelled Wicked Problems and look for something easier to work on.

Our natural assumption that a complicated structure always leads to complex behaviour is an invalid simplification, and one that we can disprove in a matter of moments.

Exhibit 1. A system can be complicated and yet still exhibit simple, stable and predictable behaviour.

The picture is of a clock designed and built by John Harrison (1693-1776).  It is called H1 and it is a sea clock.

Masters of sailing ships required very accurate clocks to calculate their longitude, the East-West coordinate on the Earth’s surface. And in the 18th Century this was a BIG problem. Too many ships were getting lost at sea.

Harrison’s sea clock is complicated.  It has many moving parts, but it was the most stable and accurate clock of its time.  And his later ones were smaller, more accurate and even more complicated.

Exhibit 2.  A system can be simple yet still exhibit complex, unstable and unpredictable behaviour.

The image is of a pendulum made of only two rods joined by a hinge.  The structure is simple yet the behaviour is complex, and this can only be appreciated with a dynamic visualisation.

The behaviour is clearly not random. It has structure. It is called chaotic.

So, with these two real examples we have disproved our assumption that a complicated structure always leads to complex behaviour; and we have also disproved its inverse … that complex behaviour always comes from a complicated structure.

This deeper insight gives us hope.

We can design complicated systems to exhibit stable and predictable behaviour if, like John Harrison, we know how to.

But John Harrison was a rare, naturally-gifted, mechanical genius, and even with that advantage it took him decades to learn how to design and to build his sea clocks.  He was the first to do so and he was self-educated so his learning was emergent.

And to make it easier, he was working on a purely mechanical system made of non-living parts that obey only the Laws of Newtonian physics.

Our healthcare system is not quite like that.  The parts are living people whose actions are limited by physical Laws but whose decisions are steered by other policies … learned ones … and ones that can change.  They are called heuristics and they can vary from person-to-person and minute-to-minute.  Heuristics can be learned, unlearned, updated, and evolved.

This is called emergent learning.

And to generate it we only need to ‘engineer’ the context for it … the rest happens as if by magic … but only if we do the engineering well.

This week I personally observed over a dozen healthcare staff simultaneously re-invent a complicated process scheduling technique, at the same time as using it to eliminate the queues, waiting and chaos in the system they wanted to improve.

Their queues just evaporated … without requiring any extra capacity or money. Eureka!

We did not show them how to do it so they could not have just copied what we did.

We designed and built the context for their learning to emerge … and it did.  On its own.

The ISP One Day Intensive Workshop delivered emergent learning … just as it was designed to do.

This engineering is called complex adaptive system design and this one example proves that CASD is possible, learnable and therefore teachable.

[Bzzzzzz] Bob’s phone vibrated to remind him it was time for the regular ISP remote coaching session with Leslie. He flipped the lid of his laptop just as Leslie joined the virtual meeting.

<Leslie> Hi Bob, and Happy New Year!

<Bob> Hello Leslie and I wish you well in 2016 too.  So, what shall we talk about today?

<Leslie> Well, given the time of year I suppose it should be the Winter Crisis.  The regularly repeating annual winter crisis. The one that feels more like the perpetual winter crisis.

<Bob> OK. What specifically would you like to explore?

<Leslie> Specifically? The habit of comparing this year with last year to answer the burning question “Are we doing better, the same or worse?”  Especially given the enormous effort and political attention that has been focused on the hot potato of A&E 4-hour performance.

<Bob> Aaaaah! That old chestnut! Two-Points-In-Time comparison.

<Leslie> Yes. I seem to recall you usually add the word ‘meaningless’ to that phrase.

<Bob> H’mm.  Yes.  It can certainly become that, but there is a perfectly good reason why we do this.

<Leslie> Indeed, it is because we see seasonal cycles in the data so we only want to compare the same parts of the seasonal cycle with each other. The apples and oranges thing.

<Bob> Yes, that is part of it. So what do you feel is the problem?

<Leslie> It feels like a lottery!  It feels like whether we appear to be better or worse is just the outcome of a random toss.

<Bob> Ah!  So we are back to the question “Is the variation I am looking at signal or noise?” 

<Leslie> Yes, exactly.

<Bob> And we need a scientifically robust way to answer it. One that we can all trust.

<Leslie> Yes.

<Bob> So how do you decide that now in your improvement work?  How do you do it when you have data that does not show a seasonal cycle?

<Leslie> I plot-the-dots and use an XmR chart to alert me to the presence of the signals I am interested in – especially a change of the mean.
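As an aside, the natural process limits Leslie plots are computed with a standard formula: the mean of the individual values plus or minus 2.66 times the average moving range (the 2.66 constant is the conventional XmR scaling factor; the data below are hypothetical):

```python
# Natural process limits for an XmR (individuals) chart:
# mean +/- 2.66 * average moving range.  The 2.66 constant is the
# standard XmR scaling factor; the data below are hypothetical.

def xmr_limits(values):
    """Return (mean, lower natural process limit, upper limit)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

data = [210, 198, 225, 205, 214, 199, 220, 208]   # e.g. weekly counts
mean, lo, hi = xmr_limits(data)

# Any point outside the limits is flagged as a signal worth investigating.
signals = [x for x in data if x < lo or x > hi]
```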

<Bob> Good.  So why can we not use that approach here?

<Leslie> Because the seasonal cycle is usually a big signal and it can swamp the smaller change I am looking for.

<Bob> Exactly so. Which is why we have to abandon the XmR chart and fall back on the two-points-in-time comparison?

<Leslie> That is what I see. That is the argument I am presented with and I have no answer.

<Bob> OK. It is important to appreciate that the XmR chart was not designed for doing this.  It was designed for monitoring the output quality of a stable and capable process. It was designed to look for early warning signs; small but significant signals that suggest future problems. The purpose is to alert us so that we can identify the root causes, correct them and avoid a future problem.

<Leslie> So we are using the wrong tool for the job. I sort of knew that. But surely there must be a better way than a two-points-in-time comparison!

<Bob> There is, but first we need to understand why a TPIT is a poor design.

<Leslie> Excellent. I’m all ears.

<Bob> A two point comparison is looking at the difference between two values, and that difference can be positive, zero or negative.  In fact, it is very unlikely to be zero because noise is always present.

<Leslie> OK.

<Bob> Now, both of the values we are comparing are single samples from two bigger pools of data.  It is the difference between the pools that we are interested in but we only have single samples of each one … so they are not measurements … they are estimates.

<Leslie> So, when we do a TPIT comparison we are looking at the difference between two samples that come from two pools that have inherent variation and may or may not actually be different.

<Bob> Well put.  We give that inherent variation a name … we call it variance … and we can quantify it.

<Leslie> So if we do many TPIT comparisons then they will show variation as well … for two reasons; first because the pools we are sampling have inherent variation; and second just from the process of sampling itself.  It was the first lesson in the ISP-1 course.

<Bob> Well done!  So the question is: “How does the variance of the TPIT sample compare with the variance of the pools that the samples are taken from?”

<Leslie> My intuition tells me that it will be less because we are subtracting.

<Bob> Your intuition is half-right.  The effect of the variation caused by the signal will be less … that is the rationale for the TPIT after all … but the same does not hold for the noise.

<Leslie> So the noise variation in the TPIT is the same?

<Bob> No. It is increased.

<Leslie> What! But that would imply that when we do this we are less likely to be able to detect a change because a small shift in signal will be swamped by the increase in the noise!

<Bob> Precisely.  And the degree that the variance increases by is mathematically predictable … it is increased by a factor of two.

<Leslie> So as we usually present variation as the square root of the variance, to get it into the same units as the metric, then that will be increased by the square root of two … 1.414

<Bob> Yes.
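The factor of two they are discussing follows from basic variance algebra, assuming the two years’ values X and Y are independent with the same variance σ² (as in Leslie’s simulation):

```latex
\operatorname{Var}(X - Y) \;=\; \operatorname{Var}(X) + \operatorname{Var}(Y) \;=\; 2\sigma^2
\qquad\Longrightarrow\qquad
\operatorname{SD}(X - Y) \;=\; \sqrt{2}\,\sigma \;\approx\; 1.414\,\sigma
```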

<Leslie> I need to put this counter-intuitive theory to the test!

<Bob> Excellent. Accept nothing on faith. Always test assumptions. And how will you do that?

<Leslie> I will use Excel to generate a big series of normally distributed random numbers; then I will calculate a series of TPIT differences using a fixed time interval; then I will calculate the means and variations of the two sets of data; and then I will compare them.

<Bob> Excellent.  Let us reconvene in ten minutes when you have done that.

10 minutes later …

<Leslie> Hi Bob, OK I am ready and I would like to present the results as charts. Is that OK?

<Bob> Perfect!

<Leslie> Here is the first one.  I used our A&E performance data to give me some context. We know that on Mondays we have an average of 210 arrivals with an approximately normal distribution and a standard deviation of 44; so I used these values to generate the random numbers. Here is the simulated Monday Arrivals chart for two years.


<Bob> OK. It looks stable as we would expect and I see that you have plotted the sigma levels which look to be just under 50 wide.

<Leslie> Yes, it shows that my simulation is working. So next is the chart of the comparison of arrivals for each Monday in Year 2 compared with the corresponding week in Year 1.

<Bob> Oooookaaaaay. What have we here?  Another stable chart with a mean of about zero. That is what we would expect given that there has not been a change in the average from Year 1 to Year 2. And the variation has increased … sigma looks to be just over 60.

<Leslie> Yes!  Just as the theory predicted.  And this is not a spurious answer. I ran the simulation dozens of times and the effect is consistent!  So, I am forced by reality to accept the conclusion that when we do two-point-in-time comparisons to eliminate a cyclical signal we will reduce the sensitivity of our test and make it harder to detect other signals.

<Bob> Good work Leslie!  Now that you have demonstrated this to yourself using a carefully designed and conducted simulation experiment, you will be better able to explain it to others.

<Leslie> So how do we avoid this problem?

<Bob> An excellent question and one that I will ask you to ponder on until our next chat.  You know the answer to this … you just need to bring it to conscious awareness.
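For readers who want to repeat Leslie’s experiment, here is a hypothetical re-creation in Python rather than Excel (the 210 and 44 are the values quoted in the dialogue; everything else is an assumption):

```python
import random
import statistics

random.seed(1)                         # make the run reproducible

MEAN, SD, WEEKS = 210, 44, 52          # Monday arrivals: mean 210, sd 44

year1 = [random.gauss(MEAN, SD) for _ in range(WEEKS)]
year2 = [random.gauss(MEAN, SD) for _ in range(WEEKS)]

# Two-points-in-time comparison: each Monday in Year 2 minus the
# corresponding Monday in Year 1.
tpit = [b - a for a, b in zip(year1, year2)]

sd_raw = statistics.stdev(year1 + year2)
sd_tpit = statistics.stdev(tpit)

# Theory predicts sd_tpit is about sqrt(2) = 1.414 times sd_raw:
# differencing two noisy samples amplifies the noise.
print(round(sd_tpit / sd_raw, 2))      # close to 1.41 on most runs
```

Run it a few times with different seeds, as Leslie did, and the ratio hovers around 1.41 rather than 1.0.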


One of the barriers to improvement is jumping to judgment too quickly.

Improvement implies innovation and action …

doing something different …

and getting a better outcome.

Before an action is a decision.  Before a decision is a judgment.

And we make most judgments quickly, intuitively and unconsciously.  Our judgments are a reflection of our individual, inner view of the world. Our mental model.

So when we judge intuitively and quickly then we will actually just reinforce our current worldview … and in so doing we create a very effective barrier to learning and improvement.

We guarantee the status quo.

So how do we get around this barrier?

In essence we must train ourselves to become more consciously aware of the judgment step in our thinking process.  And one way to flush it up to the surface is to ask the deceptively powerful question … And?

When someone is thinking through a problem then an effective contribution that we can offer is to listen, reflect, summarize, clarify and to encourage by asking “And?”

This process has a name.  It is called a coaching conversation.

And anyone can learn how to do it. Anyone.

There is a widely held belief that competition is the only way to achieve improvement.

This is a limiting belief.

But our experience tells us that competition is an essential part of improvement!

So which is correct?

When two athletes compete they both have to train hard to improve their individual performance. The winner of the race is the one who improves the most.  So by competing with each other they are forced to improve.

The goal of improvement is excellence and the test-of-excellence is performed in the present and is done by competing with others. The most excellent is labelled the “best” or “winner”. Everyone else is branded “second best” or “loser”.

This is where we start to see the limiting belief of competition.

It has a crippling effect.  Many competitive people will not even attempt the race if they do not feel they can win.  Their limiting belief makes them too fearful. They fear loss of self-esteem. Their ego is too fragile. They value hubris more than humility. And by not taking part they abdicate any opportunity to improve. They remain arrogantly mediocre and blissfully ignorant of it. They are the real losers.

So how can we keep the positive effect of competition and at the same time escape the limiting belief?

There are two ways:

First we drop the assumption that the only valid test of excellence is a comparison of us with others in the present.  And instead we adopt the assumption that it is equally valid to compare us with ourselves in the past.

We can all improve compared with what we used to be. We can all be winners of that race.

And as improvement happens our perspective shifts.  What becomes normal in the present would have been assumed to be impossible in the past.

This week I sat at my desk in a state of wonder.

I held in my hand a small plastic widget about the size of the end of my thumb.  It was a new USB data stick that had just arrived, courtesy of Amazon, and on one side in small white letters it proudly announced that it could hold 64 Gigabytes of data (that is 64 x 1024 x 1024 x 1024). And it cost less than a take-away curry.

About 30 years ago, when I first started to learn how to design, build and program computer systems, a memory chip that was about the same size and same cost could hold 4 kilobytes (4 x 1024).

So in just 30 years we have seen a 16-million-fold increase in data storage capacity. That is astounding! Our collective knowledge of how to design and build memory chips has improved so much. And yet we take it for granted.

The second way to side-step the limiting belief is even more powerful.

It is to drop the belief that individual improvement is enough.

Collective improvement is much, much, much more effective.


The human body is made up of about 50 trillion (50 x 1000 x 1000 x 1000 x 1000) cells – about the same as the number of bytes we could store on 1000 of my wonderful new 64 Gigabyte data sticks!

And each cell is a microscopic living individual. A nano-engineered adaptive system of wondrous complexity and elegance.

Each cell breathes, eats, grows, moves, reproduces, senses, learns and remembers. These cells are really smart too! And they talk to each other, and they learn from each other.

And what makes the human possible is that its community of 50 trillion smart cells are a collaborative community … not a competitive community.

If all our cells started to compete with each other we would be very quickly reduced to soup (which is what the Earth was bathed in for about 2.7 billion years).

The first multi-celled organisms gained a massive survival advantage when they learned how to collaborate.

The rest is the Story of Evolution.  Even Charles Darwin missed the point – evolution is more about collaboration than competition – and we are only now beginning to learn that lesson. The hard way.  

So survival is about learning and improving.

And survival of the fittest does not mean the fittest individual … it means the fittest group.

Collaborative improvement is the process through which we can all achieve win-win-win excellence.

And the understanding of how to do this collaborative improvement has a name … it is called Improvement Science.

The NHS appears to be suffering from some form of obsessive-compulsive disorder.

OCD sufferers feel extreme anxiety in certain situations. Their feelings drive their behaviour which is to reduce the perceived cause of their feelings. It is a self-sustaining system because their perception is distorted and their actions are largely ineffective. So their anxiety is chronic.

Perfectionists demonstrate a degree of obsessive-compulsive behaviour too.

In the NHS the triggers are called ‘targets’ and usually take the form of failure metrics linked to arbitrary performance specifications.

The anxiety is the fear of failure and its unpleasant consequences: the name-shame-blame-game.

So a veritable industry has grown around ways to mitigate the fear. A very expensive and only partially effective industry.

Data is collected, cleaned, manipulated and uploaded to the Mothership (aka NHS England). There it is further manipulated, massaged and aggregated. Then the accumulated numbers are posted on-line, every month for anyone with a web-browser to scrutinise and anyone with an Excel spreadsheet to analyse.

An ocean of measurements is boiled and distilled into a few drops of highly concentrated and sanitized data and, in the process, most of the useful information is filtered out, deleted or distorted.

For example …

One of the failure metrics that sends a shiver of angst through a Chief Operating Officer (COO) is the failure to deliver the first definitive treatment for any patient within 18 weeks of referral from a generalist to a specialist.

The infamous and feared 18-week target.

Service providers, such as hospitals, are actually fined by their Clinical Commissioning Groups (CCGs) for failing to deliver on time. Yes, you heard that right … one NHS organisation financially penalises another NHS organisation for failing to deliver a result over which they have only partial control.

Service providers do not control how many patients are referred, nor the myriad of other reasons that delay referred patients from attending appointments, tests and treatments. But the service providers are still accountable for the outcome of the whole process.

This ‘Perform-or-Pay-The-Price Policy’ creates the perfect recipe for a lot of unhappiness for everyone … which is exactly what we hear and what we see.

So what distilled wisdom does the Mothership share? Here is a snapshot …


Q1: How useful is this table of numbers in helping us to diagnose the root causes of long waits, and how does it help us to decide what to change in our design to deliver a shorter waiting time and more productive system?

A1: It is almost completely useless (in this format).

So what actually happens is that the focus of management attention is drawn to the part just before the speed camera takes the snapshot … the bit between 14 and 18 weeks.

Inside that narrow time-window we see a veritable frenzy of target-failure-avoiding behaviour.

Clinical priority is side-lined and management priority takes over.  This is a management emergency! After all, fines-for-failure are only going to make the already bad financial situation even worse!

The outcome of this fire-fighting is that the bigger picture is ignored. The focus is on the ‘whip’ … and avoiding it … because it hurts!

Message from the Mothership:    “Until morale improves the beatings will continue”.

The good news is that the indigestible data liquor does harbour some very useful insights.  All we need to do is to present it in a more palatable format … as pictures of system behaviour over time.

We need to use the data to calculate the work-in-progress (=WIP).

And then we need to plot the WIP in time-order so we can see how the whole system is behaving over time … how it is changing and evolving. It is a dynamic living thing, it has vitality.
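The calculation itself is simple bookkeeping. Here is a minimal sketch with hypothetical monthly counts (the published NHS data are of course bigger and messier):

```python
# Work-in-progress over time: the running difference between what has
# joined the waiting list and what has left it.  All figures here are
# hypothetical monthly totals, not the published NHS data.

referrals = [1000, 1100, 1050, 1200, 1150, 1000]   # joined the list
treated   = [ 950, 1000, 1100, 1050, 1100, 1150]   # left the list

wip = 5000                     # assume 5000 patients already waiting
history = []
for arrived, left in zip(referrals, treated):
    wip += arrived - left      # in minus out
    history.append(wip)

# Plotting `history` in time order gives the WIP run chart.
print(history)                 # [5050, 5150, 5100, 5250, 5300, 5150]
```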

So here is the WIP chart using the distilled wisdom from the Mothership.


And this picture does not require a highly trained data analyst or statistician to interpret it for us … a Mark I eyeball linked to 1.3 kg of wetware running ChimpOS 1.0 is enough … and if you are reading this then you must already have that hardware and software.

Two patterns are obvious:

1) A cyclical pattern that appears to have an annual frequency, a seasonal pattern. The WIP is higher in the summer than in the winter. Eh? What is causing that?

2) After an initial rapid fall in 2008 the average level was steady for 4 years … and then after March 2012 it started to rise. Eh? What is causing that?

The purpose of a WIP chart is to stimulate questions such as:

Q1: What happened in March 2012 that might have triggered this change in system behaviour?

Q2: What other effects could this trigger have caused and is there evidence for them?

A1: In March 2012 the Health and Social Care Act 2012 became Law. In the summer of 2012 the shiny new and untested Clinical Commissioning Groups (CCGs) were authorised to take over the reins from the outgoing Primary Care Trusts (PCTs) and Strategic Health Authorities (SHAs). The vast £80bn annual pot of tax-payer cash was now in the hands of well-intended GPs who believed that they could do a better commissioning job than non-clinicians. The accountability for outcomes had been deftly delegated to the doctors. And many of the new CCG managers were the same ones who had collected their redundancy cheques when the old system was shut down. Now that sounds like a plausible system-wide change! A massive political experiment was underway and the NHS was the guinea-pig.

A2: Another NHS failure metric is the A&E 4-hour wait target which, worryingly, also shows a deterioration that appears to have started just after July 2010, i.e. just after the new Government was elected into power.  Maybe that had something to do with it? Maybe it would have happened whichever party won at the polls.


A plausible temporal association does not constitute proof – and we cannot conclude a political move to a CCG-led NHS has caused the observed behaviour. Retrospective analysis alone is not able to establish the cause.

It could just as easily be that something else caused these behaviours. And it is important to remember that there are usually many causal factors combining together to create the observed effect.

And unraveling that Gordian Knot is the work of analysts, statisticians, economists, historians, academics, politicians and anyone else with an opinion.

We have a more pressing problem. We have a deteriorating NHS that needs urgent resuscitation!

So what can we do?

One thing we can do immediately is to make better use of our data by presenting it in ways that are easier to interpret … such as a work in progress chart.

Doing that will trigger different conversations; ones spiced with more curiosity and laced with less cynicism.

We can add more context to our data to give it life and meaning. We can season it with patient and staff stories to give it emotional impact.

And we can deepen our understanding of what causes lead to what effects.

And with that deeper understanding we can begin to make wiser decisions that will lead to more effective actions and better outcomes.

This is all possible. It is called Improvement Science.

And as we speak there is an experiment running … a free offer to doctors-in-training to learn the foundations of improvement science in healthcare (FISH).

In just two weeks 186 have taken up that offer and 13 have completed the course!

And this vanguard of curious and courageous innovators have discovered a whole new world of opportunity that they were completely unaware of before. But not anymore!

So let us ease off applying the whip and ease in the application of WIP.


Here is a short video describing how to create, animate and interpret a form of diagnostic Vitals Chart® using the raw data published by NHS England.  This is a training exercise from the Improvement Science Practitioner (level 2) course.

How to create an 18 weeks animated Bucket Brigade Chart (BBC)


RIA_graphicA question that is often asked by doctors in particular is “What is the difference between Research, Audit and Improvement Science?“.

It is a very good question and the diagram captures the essence of the answer.

Improvement science is like a bridge between research and audit.

To understand why that is we first need to ask a different question “What are the purposes of research, improvement science and audit? What do they do?”

In a nutshell:

Research provides us with new knowledge and tells us what the right stuff is.
Improvement Science provides us with a way to design our system to do the right stuff.
Audit provides us with feedback and tells us if we are doing the right stuff right.

Research requires a suggestion and an experiment to test it.   A suggestion might be “Drug X is better than drug Y at treating disease Z”, and the experiment might be a randomised controlled trial (RCT).  The way this is done is that subjects with disease Z are randomly allocated to two groups, the control group and the study group.  A measure of ‘better’ is devised and used in both groups. Then the study group is given drug X and the control group is given drug Y and the outcomes are compared.  The randomisation is needed because there are always many sources of variation that we cannot control, and it also almost guarantees that there will be some difference between our two groups. So then we have to use sophisticated statistical data analysis to answer the question “Is there a statistically significant difference between the two groups? Is drug X actually better than drug Y?”

And research is often a complicated and expensive process because to do it well requires careful study design, a lot of discipline, and usually large study and control groups. It is an effective way to help us to know what the right stuff is but only in a generic sense.

Audit requires a standard to compare against so that we know whether what we are doing is acceptable or not. There is no randomisation between groups but we still need a metric and we still need to measure what is happening in our local reality.  We then compare our local experience with the global standard and, because variation is inevitable, we have to use statistical tools to help us perform that comparison.

And very often audit focuses on avoiding failure; in other words the standard is a ‘minimum acceptable standard’ and as long as we are not failing it then that is regarded as OK. If we are shown to be failing then we are in trouble!

And very often the most sophisticated statistical tool used for audit is called an average.  We measure our performance, we average it over a period of time (to remove the troublesome variation), and we compare our measured average with the minimum standard. And if it is below then we are in trouble and if it is above then we are not.  We have no idea how reliable that conclusion is though because we discounted any variation.

A perfect example of this target-driven audit approach is the A&E 95% 4-hour performance target.

The 4 hours defines the metric we are using: the time interval between a patient arriving in A&E and leaving. It is called a lead time metric. And it is easy to measure.

The 95% defines the minimum acceptable proportion of people who are in A&E for less than 4 hours, and it is usually aggregated over three months. And it is easy to measure.

So, if about 200 people arrive in a hospital A&E each day and we aggregate for 90 days, that is about 18,000 people in total; so the 95% 4-hour A&E target implies that we accept as OK that about 900 of them are there for more than 4 hours.
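That arithmetic is easy to check:

```python
arrivals_per_day = 200
days = 90
total = arrivals_per_day * days        # patients per quarter
allowed_breaches = int(total * 0.05)   # the 5% the 95% target tolerates
print(total, allowed_breaches)         # 18000 900
```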

Do the 900 agree? Do the other 17,100?  Has anyone actually asked the patients what they would like?

The problem with this “avoiding failure” mindset is that it can never lead to excellence. It can only deliver just above the minimum acceptable. That is called mediocrity.  It is perfectly possible for a hospital to deliver 100% on its A&E 4 hour target by designing its process to ensure every one of the 18,000 patients is there for exactly 3 hours and 59 minutes. It is called a time-trap design.

We can hit the target and miss the point.

And what is more the “4-hours” and the “95%” are completely arbitrary numbers … there is not a shred of research evidence to support them.

So just this one example illustrates the many problems created by having a gap between research and audit.

And that is why we need Improvement Science to help us to link them together.

We need improvement science to translate the global knowledge and apply it to deliver local improvement in whatever metrics we feel are most important. Safety metrics, flow metrics, quality metrics and productivity metrics. Simultaneously. To achieve system-wide excellence. For everyone, everywhere.

When we learn Improvement Science we learn to measure how well we are doing … we learn the power of measurement of success … and we learn to avoid averaging because we want to see the variation. And we still need a minimum acceptable standard because we want to exceed it 100% of the time. And we want continuous feedback on just how far above the minimum acceptable standard we are. We want to see how excellent we are, and we want to share that evidence and our confidence with our patients.

We want to agree a realistic expectation rather than paint a picture of the worst case scenario.

And when we learn Improvement Science we will see very clearly where to focus our improvement efforts.

Improvement Science is the bit in the middle.

Stop Press:  There is currently an offer of free on-line foundation training in improvement science for up to 1000 doctors-in-training … here  … and do not dally because places are being snapped up fast!

[Image: Rogers Curve]

The early phases of a transformation are where most fall by the wayside.

And the failure rate is horrifying – an estimated 80% of improvement initiatives fail to achieve their goals.

The recent history of the NHS is littered with the rusting wreckage of a series of improvement bandwagons.  Many who survived the crashes are too scarred and too scared to try again.

Transformation and improvement imply change which implies innovation … new ways of thinking, new ways of behaving, new techniques, new tools, and new ways of working.

And it has been known for over 50 years that innovation spreads in a very characteristic way. This process was described by Everett Rogers in a book called ‘Diffusion of Innovations‘ and is described visually in the diagram above.

The horizontal axis is a measure of individual receptiveness to the specific innovation … and the labels are behaviours: ‘I exhibit early adopter behaviour‘ (i.e. not ‘I am an early adopter’).

What Rogers discovered through empirical observation was that in all cases the innovation diffuses from left-to-right; from innovation through early adoption to the ‘silent’ majority.

Complete diffusion is not guaranteed though … there are barriers between the phases.

One barrier is between innovation and early adoption.

There are many innovations that we never hear about and very often the same innovation appears in many places and often around the same time.

This innovation-adoption barrier is caused by two things:
1) most are not even aware of the problem … they are blissfully ignorant;
2) news of the innovation is not shared widely enough.

Innovators are sensitive people.  They sense there is a problem long before others do. They feel the fear and the excitement of need for innovation. They challenge their own assumptions and they actively seek solutions. They swim against the tide of ignorance, disinterest, skepticism and often toxic cynicism.  So when they do discover a way forward they often feel nervous about sharing it. They have learned (the hard way) that the usual reaction is to be dismissed and discounted.  Most people do not like to learn about unknown problems and hazards; and they like it even less to learn that there are solutions that they neither recognise nor understand.

But not everyone.

There is a group called the early adopters who, like the innovators, are aware of the problem. They just do not share the innovator’s passion to find a solution … irrespective of the risks … so they wait … their antennae tuned for news that a solution has been found.

Then they act.

And they act in one of two ways:

1) Talkers … re-transmit the news of the problem and the discovery of a generic solution … which is essential in building awareness.

2) Walkers … try the innovative approach themselves and in so doing learn a lot about their specific problem and the new ways of solving it.

And it is the early adopters that do both of these actions that are the most effective and the most valuable to everyone else.  Those that talk-the-new-walk and walk-the-new-talk.

And we can identify who they are because they will be able to tell stories of how they have applied the innovation in their world; and the results that they have achieved; and how they achieved them; and what worked well; and what did not; and what they learned; and how they evolved and applied the innovation to meet their specific needs.

They are the leaders, the coaches and the teachers of improvement and transformation.

They See One, Do Some, and Teach Many.

The early adopters are the bridge across the Innovation and Transformation Chasm.

There is a big bun-fight kicking off on the topic of 7-day working in the NHS.

The evidence is that there is a statistical association between the in-hospital mortality of emergency admissions and the day of the week: weekends are more dangerous.

There are fewer staff working at weekends in hospitals than during the week … and delays and avoidable errors increase … so risk of harm increases.

The evidence also shows that significantly fewer patients are discharged at weekends.

So the ‘obvious’ solution is to have more staff on duty at weekends … which will cost more money.

Simple, obvious, linear and wrong.  Our intuition has tricked us … again!

Let us unravel this Gordian Knot with a bit of flow science and a thought experiment.

1. The evidence shows that there are fewer discharges at weekends … and so demonstrates lack of discharge flow-capacity. A discharge process is not a single step, there are many things that must flow in sync for a discharge to happen … and if any one of them is missing or delayed then the discharge does not happen or is delayed.  The weakest link effect.

2. The evidence shows that the number of unplanned admissions varies rather less across the week; which makes sense because they are unplanned.

3. So add those two together and at weekends we see hospitals filling up with unplanned admissions – not because the sick ones are arriving faster – but because the well ones are leaving slower.

4. The effect of this is that at weekends the queue of people in beds gets bigger … and they need looking after … which requires people and time and money.

5. So the number of staffed beds in a hospital must be enough to hold the biggest queue – not the average or some fudged version of the average like a 95th percentile.

6. So a hospital running a 5-day model needs more beds because there will be more variation in bed use and we do not want to run out of beds and delay the admission of the newest and sickest patients. The ones at most risk.

7. People do not get sicker because there is better availability of healthcare services – but saying we need to add more unplanned care flow capacity at weekends implies that it does.  What is actually required is that the same amount of flow-resource that is currently available Mon-Fri is spread out Mon-Sun. The flow-capacity is designed to match the customer demand – not the convenience of the supplier.  And that means for all parts of the system required for unplanned patients to flow.  What, where and when. It costs the same.

8. Then what happens is that the variation in the maximum size of the queue of patients in the hospital will fall and empty beds will appear – as if by magic.  Empty beds that ensure there is always one for a new, sick, unplanned admission on any day of the week.

9. And empty beds that are never used … do not need to be staffed … so there is a quick way to reduce expensive agency staff costs.

So with a comprehensive 7-day flow-capacity model the system actually gets safer, less chaotic, higher quality and less expensive. All at the same time. Safety-Flow-Quality-Productivity.
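The thought experiment above can be sketched as a toy queue model (all the numbers are illustrative, not real hospital data): same admissions every day, same total weekly discharge capacity, and the only difference is whether that capacity is spread over 5 days or 7.

```python
ADMISSIONS_PER_DAY = 27   # unplanned admissions arrive every day of the week

def peak_queue(discharges_by_day, weeks=4, start_queue=100):
    """Track the in-hospital queue (work-in-progress) day by day
    and return the largest queue seen."""
    queue, peak = start_queue, start_queue
    for _ in range(weeks):
        for discharges in discharges_by_day:
            queue += ADMISSIONS_PER_DAY - discharges   # arrivals minus departures
            peak = max(peak, queue)
    return peak

# Same total weekly discharge capacity (189) in both designs.
five_day  = peak_queue([0, 0] + [37.8] * 5)   # Sat, Sun: no discharges
seven_day = peak_queue([27.0] * 7)            # capacity spread across all 7 days

# The 5-day design shows a bigger weekend peak, so it needs more
# staffed beds to hold the biggest queue - at the same total capacity.
print(five_day, seven_day)
```

The point is not the specific numbers; it is that the peak queue, not the average, determines how many staffed beds are needed.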

[Image: knee-jerk reflex]

A commonly used technique for continuous improvement is the Plan-Do-Study-Act or PDSA cycle.

This is a derivative of the PDCA cycle first described by Walter Shewhart in the 1930s … where C is Check.

The problem with PDSA is that improvement does not start with a plan, it starts with some form of study … so SAPD would be a better order.

[Image: IHI Model for Improvement]

To illustrate this point, if we look at the IHI Model for Improvement … the first step is a pair of questions related to purpose: “What are we trying to accomplish?” and “How will we know a change is an improvement?”

With these questions we are stepping back and studying our shared perspective of our desired future.

It is a conscious and deliberate act.

We are examining our mental models … studying them … and comparing them.  We have not reached a diagnosis or a decision yet, so we cannot plan or do yet.

The third question is a combination of diagnosis and design … we need to understand our current state in order to design changes that will take us to our improved future state.

We cannot plan what to do or how to do it until we have decided and agreed what the future design will look like, and tested that our proposed future design is fit-4-purpose.

So improvement by discovery or by design does not start with plan, it starts with study.

And another word for study is ‘sense’ which may be a better one … because study implies a deliberate, conscious, often slow process … while sense is not so restrictive.

Very often our actions are not the result of a deliberative process … they are automatic and reflex. We do not think about them. They just sort of happen.

The image of the knee-jerk reflex illustrates the point.

In fact we have little conscious control over these automatic motor reflexes which respond much more quickly than our conscious thinking process can.  We are aware of the knee jerk after it has happened, not before, so we may be fooled into thinking that we ‘Do’ without a ‘Plan’.  But when we look in more detail we can see the sensory input and the hard-wired ‘plan’ that links it to the motor output.  Study-Plan-Do.

The same is true for many other actions – our unconscious mind senses, processes, decides, plans and acts long before we are consciously aware … and often the only clue we have is a brief flash of emotion … and usually not even that.  Our behaviour is largely habitual.

And even in situations when we need to make choices the sense-recognise-act process is fast … such as when a patient suddenly becomes very ill … we switch into Resuscitate mode, which is a pre-planned sequence of steps guided by what we are sensing … but it is not made up on the spot. There is no committee. No meetings. We just do what we have learned and practised … because that is what the plan was designed for.  It still starts with Study … it is just that the Study phase is very short … we just need enough information to trigger the pre-prepared plan. ABC – Airway … Breathing … Circulation. No discussion. No debate.

So, improvement starts with Study … and depending on what we sense what happens next will vary … and it will involve some form of decision and plan.

1. Unconscious, hard-wired, knee jerk reflex.
2. Unconscious, learned, habitual behaviour.
3. Conscious, pre-planned, steered response.
4. Conscious, deliberation-diagnosis-design then delivery.

The difference is just the context and the timing.   They are all Study-Plan-Do.

And the Plan may be to Do Nothing … the Deliberate Act of Omission.

And when we go-and-see and study the external reality we sometimes get a surprise … what we see is not what we expect. We feel a sense of confusion. And before we can plan we need to adjust our mental model so that it better matches reality. We need to establish clarity.  And in this situation we are doing Study-Adjust-Plan-Do …. S(A)PD.

There comes a point in every improvement journey when it is time to celebrate and share. This is the most rewarding part of the Improvement Science Practitioner (ISP) coaching role so I am going to share a real celebration that happened this week.

The picture shows Chris Jones holding his well-earned ISP-1 Certificate of Competence.  The “Maintaining the Momentum of Medicines”  redesign project is shown on the poster on the left and it is the tangible Proof of Competence. The hard evidence that the science of improvement delivers.


Behind us are the A3s for one of the Welsh Health Boards;  ABMU in fact.

An A3 is a way of summarising an improvement project very succinctly – the name comes from the size of paper used.  A3 is the biggest size that will go through an A4 fax machine (i.e. folded over) and the A3 discipline is to be concise and clear at the same time.

The three core questions that the A3 answers are:
Q1: What is the issue?
Q2: What would improvement need to look like?
Q3: How would we know that a change is an improvement?

This display board is one of many in the room, each sharing a succinct story of a different improvement journey and collectively a veritable treasure trove of creativity and discovery.

The A3s were of variable quality … and that is OK and is expected … because like all skills it takes practice. Lots of practice. Perfection is not the goal because it is unachievable. Best is not the goal because only one can be best. Progress is the goal because everyone can progress … and so progress is what we share and what we celebrate.

The event was the Fifth Sharing Event in the Welsh Flow Programme that has been running for just over a year and Chris is the first to earn an ISP-1 Certificate … so we all celebrated with him and shared the story.  It is a team achievement – everyone in the room played a part in some way – as did many more who were not in the room on the day.

Improvement is like mountain walking.

After a tough uphill section we reach a level spot where we can rest; catch our breath; take in the view; reflect on our progress and the slips, trips and breakthroughs along the way; perhaps celebrate with drink and nibble of our chocolate ration; and then get up, look up, and square up for the next uphill bit.

New territory for us.  New challenges and new opportunities to learn and to progress and to celebrate and share our improvement stories.

Resistance-to-change is an oft quoted excuse for improvement torpor. The implied sub-message is more like “We would love to change but They are resisting“.

Notice the Us-and-Them language.  This is the observable evidence of a “We’re OK and They’re Not OK” belief.  And in reality it is this unstated belief and the resulting self-justifying behaviour that is an effective barrier to systemic improvement.

This Us-and-Them language generates cultural friction, erodes trust and erects silos that are effective barriers to the flow of information, of innovation and of learning.  And the inevitable reactive solutions to this Us-versus-Them friction create self-amplifying positive feedback loops that ensure the counter-productive behaviour is sustained.

One tangible manifestation is DRATs: Delusional Ratios and Arbitrary Targets.

So when a plausible, rational and well-evidenced candidate for an alternative approach is discovered then it is a reasonable reaction to grab it and to desperately spray the ‘magic pixie dust’ at everything.

This is a recipe for disappointment: because there is no such thing as ‘improvement magic pixie dust’.

The more uncomfortable reality is that the ‘magic’ is the result of a long period of investment in learning and the associated hard work in practising and polishing the techniques and tools.

It may look like magic but it isn’t. That is an illusion.

And some self-styled ‘magicians’ choose to keep their hard-won skills secret … because they know that by sharing them they will lose their ‘magic powers’ in a flash of ‘blindingly obvious in hindsight’.

And so the chronic cycle of despair-hope-anger-and-disappointment continues.

System-wide improvement in safety, flow, quality and productivity requires that the benefits of synergism overcome the benefits of antagonism.  This requires two changes to the current hope-and-despair paradigm.  Both are necessary and neither is sufficient alone.

1) The ‘wizards’ (i.e. magic folk) share their secrets.
2) The ‘muggles’ (i.e. non-magic folk) invest the time and effort in learning ‘how-to-do-it’.

The transition to this awareness is uncomfortable so it needs to be managed pro-actively … by being open about the risk … and how to mitigate it.

That is what experienced Practitioners of Improvement Science (ISPs) will do. Be open about the challenges ahead.

And those who desperately want the significant and sustained SFQP improvements; and an end to the chronic chaos; and an end to the gaming; and an end to the hope-and-despair cycle …. just need to choose. Choose to invest and learn the ‘how to’ and be part of the future … or choose to be part of the past.

Improvement science is simple … but it is not intuitively obvious … and so it is not easy to learn.

If it were we would be all doing it.

And it is the behaviour of a wise leader of change to set realistic and mature expectations of the challenges that come with a transition to system-wide improvement.

That is demonstrating the OK-OK behaviour needed for synergy to grow.

For a system to be both effective and efficient the parts need to work in synergy. This requires both alignment and collaboration.

Systems that involve people and processes can exhibit complex behaviour. The rules of engagement also change as individuals learn and evolve their beliefs and their behaviours.

The values and the vision should be more fixed. If the goalposts are obscure or oscillate then confusion and chaos are inevitable.

So why is collaborative alignment so difficult to achieve?

One factor has been mentioned. Lack of a common vision and a constant purpose.

Another factor is distrust of others. Our fear of exploitation, bullying, blame, and ridicule.

Distrust is a learned behaviour. Our natural inclination is trust. We have to learn distrust. We do this by copying trust-eroding behaviours that are displayed by our role models. So when leaders display these behaviours then we assume it is OK to behave that way too.  And we dutifully emulate.

The most common trust eroding behaviour is called discounting.  It is a passive-aggressive habit characterised by repeated acts of omission:  Such as not replying to emails, not sharing information, not offering constructive feedback, not asking for other perspectives, and not challenging disrespectful behaviour.

There are many causal factors that lead to distrust … so there is no one-size-fits-all solution to dissolving it.

One factor is ineptitude.

This is the unwillingness to learn and to use available knowledge for improvement.

It is one of the many manifestations of incompetence.  And it is an error of omission.

Whenever we are unable to solve a problem then we must always consider the possibility that we are inept.  We do not tend to do that.  Instead we prefer to jump to the conclusion that there is no solution or that the solution requires someone else doing something different. Not us.

The impossibility hypothesis is easy to disprove.  If anyone has solved the problem, or a very similar one, and if they can provide evidence of what and how then the problem cannot be impossible to solve.

The someone-else’s-fault hypothesis is trickier because proving it requires us to influence others effectively.  And that is not easy.  So we tend to resort to easier but less effective methods … manipulation, blame, bullying and so on.

A useful way to view this dynamic is as a set of four concentric circles – with us at the centre.

The outermost circle is called the ‘Circle of Ignorance‘. The collection of all the things that we do not know we do not know.

Just inside that is the ‘Circle of Concern‘.  These are things we know about but feel completely powerless to change. Such as the fact that the world turns and the sun rises and sets with predictable regularity.

Inside that is the ‘Circle of Influence‘ and it is a broad and continuous band – the further away the less influence we have; the nearer in the more we can do. This is the zone where most of the conflict and chaos arises.

The innermost is the ‘Circle of Control‘.  This is where we can make changes if we so choose to. And this is where change starts and from where it spreads.

So if we want system-level improvements in safety, flow, quality and productivity (or cost) then we need to align these four circles. Or rather the gaps in them.

We start with the gaps in our circle of control. The things that we believe we cannot do … but when we try … we discover that we can (and always could).

With this new foundation of conscious competence we can start to build new relationships, develop trust and to better influence others in a win-win-win conversation.

And then we can collaborate to address our common concerns – the ones that require coherent effort. We can agree and achieve our common purpose, vision and goals.

And from there we will be able to explore the unknown opportunities that lie beyond. The ones we cannot see yet.

[Bing] Bob logged in for the weekly Webex coaching session. Leslie was not yet online, but joined a few minutes later.

<Leslie> Hi Bob, sorry I am a bit late, I have been grappling with a data analysis problem and did not notice the time.

<Bob> Hi Leslie. Sounds interesting. Would you like to talk about that?

<Leslie> Yes please! It has been driving me nuts!

<Bob> OK. Some context first please.

<Leslie> Right, yes. The context is an improvement-by-design assignment with a primary care team who are looking at ways to reduce the unplanned admissions for elderly patients by 10%.

<Bob> OK. Why 10%?

<Leslie> Because they said that would be an operationally very significant reduction.  Most of their unplanned admissions, and therefore costs for admissions, are in that age group.  They feel that some admissions are avoidable with better primary care support and a 10% reduction would make their investment of time and effort worthwhile.

<Bob> OK. That makes complete sense. Setting a new design specification is OK.  I assume they have some baseline flow data.

<Leslie> Yes. We have historical weekly unplanned admissions data for two years. It looks stable, though rather variable on a week-by-week basis.

<Bob> So has the design change been made?

<Leslie> Yes, over three months ago – so I expected to be able to see something by now but there are no red flags on the XmR chart of weekly admissions. No change.  They are adamant that they are making a difference, particularly in reducing re-admissions.  I do not want to disappoint them by saying that all their hard work has made no difference!

<Bob> OK Leslie. Let us approach this rationally.  What are the possible causes that the weekly admissions chart is not signalling a change?

<Leslie> If there has not been a change in admissions, this could be because they have indeed reduced re-admissions but new admissions have gone up and are masking the effect.

<Bob> Yes. That is possible. Any other ideas?

<Leslie> That their intervention has made no difference to re-admissions and their data is erroneous … or worse still … fabricated!

<Bob> Yes. That is possible too. Any other ideas?

<Leslie> Um. No. I cannot think of any.

<Bob> What about the idea that the XmR chart is not showing a change that is actually there?

<Leslie> You mean a false negative? That the sensitivity of the XmR chart is limited? How can that be? I thought these charts would always signal a significant shift.

<Bob> It depends on the degree of shift and the amount of variation. The more variation there is the harder it is to detect a small shift.  In a conventional statistical test we would just use bigger samples, but that does not work with an XmR chart because the run tests are all fixed length. Pre-defined sample sizes.

<Leslie> So that means we can miss small but significant changes and come to the wrong conclusion that our change has had no effect! Isn’t that called a Type 2 error?

<Bob> Yes, it is. And we need to be aware of the limitations of the analysis tool we are using. So, now you know that, how might you get around the problem?

<Leslie> One way would be to aggregate the data over a longer time period before plotting on the chart … we know that will reduce the sample variation.

<Bob> Yes. That would work … but what is the downside?

<Leslie> That we have to wait a lot longer to show a change, or not. We do not want that.

<Bob> I agree. So what we do is we use a chart that is much more sensitive to small shifts of the mean.  And that is called a cusum chart. These were not invented until 30 years after Shewhart first described his time-series chart.  To give you an example, do you recall that the work-in-progress chart is much more sensitive to changes in flow than either demand or activity charts?

<Leslie> Yes, and the WIP chart also reacts immediately if either demand or activity change. It is the one I always look at first.

<Bob> That is because a WIP chart is actually a cusum chart. It is the cumulative sum of the difference between demand and activity.

<Leslie> OK! That makes sense. So how do I create and use a cusum chart?

<Bob> I have just emailed you some instructions and a few examples. You can try with your unplanned admissions data. It should only take a few minutes. I will get a cup of tea and a chocolate Hobnob while I wait.

[Five minutes later]

<Leslie> Wow! That is just brilliant!  I can see clearly on the cusum chart when the shifts happened and when I split the XmR chart at those points the underlying changes become clear and measurable. The team did indeed achieve a 10% reduction in admissions just as they claimed they had.  And I checked with a statistical test which confirmed that it is statistically significant.

<Bob> Good work.  Cusum charts take a bit of getting used to and we have to be careful about the metric we are plotting and a few other things, but it is a useful trick to have up our sleeves for situations like this.

<Leslie> Thanks Bob. I will bear that in mind.  Now I just need to work out how to explain cusum charts to others! I do not want to be accused of using statistical smoke-and-mirrors! I think a golf metaphor may work with the GPs.
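The cusum chart that Bob describes is straightforward to construct: plot the running total of (value − reference), where the reference is typically the baseline mean. A minimal sketch with simulated weekly admissions data (the baseline mean of 100 and the 10% shift at week 52 are made up for illustration):

```python
import random

random.seed(1)

# Simulated weekly unplanned admissions: baseline mean 100,
# then a 10% reduction (mean 90) from week 52 onwards.
baseline = [random.gauss(100, 8) for _ in range(52)]
after    = [random.gauss(90, 8) for _ in range(52)]
weekly   = baseline + after

reference = sum(baseline) / len(baseline)   # the baseline mean

# Cusum: the cumulative sum of deviations from the reference.
cusum, running = [], 0.0
for value in weekly:
    running += value - reference
    cusum.append(running)

# On an XmR chart a small shift can hide inside the control limits,
# but the cusum turns a small sustained shift into a steady slope:
# roughly flat for the first 52 weeks, then falling ~10 per week.
```

This is also why a work-in-progress chart reacts so quickly: it is the cumulative sum of (demand − activity), i.e. a cusum in disguise.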

There is no doubt about it …

… change is not easy.

If it were we would all be doing it …

… all of the time.

So one skill that an effective agent of change demonstrates is persistence.

And also patience. And also reflective learning.

A recent change project demonstrated objective, measurable outcomes which showed that the original design goal was achieved. In budget. It took two years from first contact to final report.

Why two years? Could it have been done quicker?

In principle – ‘Emphatically, yes’.  In practice – ‘Evidently, no’.

With the benefit of hindsight it is always clearer what might have caused the delay.  Maybe the experience-based advice of those guiding the process was discounted.  Maybe the repeated recommendation that an initial investment in learning the basic science of improvement would deliver a quicker return was ignored.  Maybe.

So the reflective learning from the first wave was re-invested in the second wave.

And the second wave delivered a significant and objectively measurable improvement in one year.

And the reflective learning from the second wave was re-invested in the third wave.

And the third wave delivered a significant and objectively measurable improvement in six months.

And the three improvement projects were of comparable complexity.

So what is happening here?

The process of improvement is itself being improved.  Experience and learning are being re-invested.

And two repeating themes emerge ….

Patience is needed to await outcomes and to learn from them.

Persistence is needed to re-examine old paradigms with this new knowledge and new understanding.

Patience and Persistence. And these principles apply as much to the teacher as to the taught.

System-wide, significant, and sustained improvement implies system-wide change.

And system-wide change implies more than 20% of the people commit to action. This is the cultural tipping point.

These critical 20% have a badge … they call themselves rebels … and they are perceived as troublemakers by those who profit most from the status quo.

But troublemakers and rebels are radically different … as shown in the summary by Lois Kelly.

Rebels share a common, future-focussed purpose.  A mission.  They are passionate, optimistic and creative.  They understand synergy and how to release and align the stored emotional energy of both themselves and others.  And most importantly they are value-led and that makes them attractive.  Values such as honesty, integrity and industry are what make leaders together-effective.

And as we speak there is a school for rebels in healthcare gaining momentum … and their programme is current, open to all and free to access. And the change agent development materials are excellent!

Click here to download their study guide.

Converting possibilities into realities is the essence of design … so our merry band of rebels will also need to learn how to convert their positive rhetoric into practical reality. And that is more physics than psychology.

Streams flow because of physics not because of passion.

[Image: SFQP Compass]

And this is why the science of improvement is important because it is the synthesis of the people dimension and the process dimension – into a system that delivers significant and sustained improvement.

On all dimensions. Safety, Flow, Quality and Productivity.

The lighthouse is our purpose; the whale represents the magnitude of our challenge; the blue sky is the creative thinking we need … to avoid trying to boil the ocean.

And the noisy, greedy, s****y seagulls are the troublemakers who always will plague us.

[Image by Malaika Art].

There comes a point in every improvement-by-design journey when it is time for the improvement guide to leave.

An experienced coach knows when that time has arrived and the expected departure is in the contract.

The Nanny McPhee Coaching Contract:

“When you need me but do not want me then I have to stay. And when you want me but do not need me then I have to leave.”

The science of improvement can appear like ‘magic’ at first because seemingly impossible simultaneous win-win-win benefits are seen to happen with minimal effort.

It is not magic.  It requires years of training and practice to become a ‘magician’.  So those who have invested in learning the know-how are just catalysts.  When their catalyst-of-change work is done then they must leave to do it elsewhere.

The key to managing this transition is to set this expectation clearly and right at the start; so it does not come as a surprise. And to offer reminders along the way.

And it is important to follow through … when the time is right.

It is not always easy though.

There are three commonly encountered situations that will tempt the guide.

1) When things are going very badly because the coaching contract is being breached; usually by old, habitual, trust-eroding, error-of-omission behaviours such as: not communicating, not sharing learning, and not delivering on commitments. The coach, fearing loss of reputation and face, is tempted to stay longer and to try harder, often getting angry and frustrated in the process.  This is an error of judgement. If the coaching contract is being persistently breached then the Exit Clause should be activated clearly and cleanly.

2) When things are going OK, it is easy to become complacent and the temptation then is to depart too soon, only to hear later that the solo-flyers “crashed and burned”, because they were not quite ready and could not (or would not) see it.  This is the “need but do not want” part of the Nanny McPhee Coaching Contract.  One role of the ISP coach is to respectfully challenge the assertion that ‘We can do it ourselves‘ … by saying ‘OK, please demonstrate‘.

3) When things are going very well it is tempting to blow the Trumpet of Success too early, attracting the attention of others who will want to take short cuts, to bypass the effort of learning for themselves, and to jump onto someone else’s improvement bus.  The danger here is that they bring their counter-productive, behavioural baggage with them. This can cause the improvement bus to veer off course on the twists and turns of the nerve curve; or grind to a halt on the steeper parts of the learning curve.

An experienced ISP coach will respectfully challenge the individuals and the teams to help them develop their experience, competence and confidence. And just as they start to become too comfortable with having someone to defer to for all decisions, the ISP coach will announce their departure and depart as announced.

This is the “want but do not need” part of the Nanny McPhee Coaching Contract.

And experience teaches us that this mutually respectful behaviour works better.