Archive for the ‘Quality’ Category

This week about thirty managers and clinicians in South Wales conducted two experiments to test the design of the Flow Design Practical Skills One Day Workshop.

Their collective challenge was to diagnose and treat a “chronically sick” clinic and the majority had no prior exposure to health care systems engineering (HCSE) theory, techniques, tools or training.

Two of the group, Chris and Jat, had been delegates at a previous ODWS, and had then completed their Level-1 HCSE training and real-world projects.

They had seen it and done it, so this experiment was to test if they could now teach it.

Could they replicate the “OMG effect” that they had experienced and that fired up their passion for learning and using the science of improvement?


Sometimes change is dramatic. A big improvement appears very quickly. And when that happens we are caught by surprise (and delight).

Our emotional reaction is much faster than our logical response. “Wow! That’s a miracle!”


Our logical Tortoise eventually catches up with our emotional Hare and says “Hare, we both know that there are no such things as miracles and magic. There must be a rational explanation. What is it?”

And Hare replies “I have no idea, Tortoise. If I did then it would not have been such a delightful surprise. You are such a kill-joy! Can’t you just relish the relief without analysing the life out of it?”

Tortoise feels hurt. “But I just want to understand so that I can explain to others. So that they can do it and get the same improvement. Not everyone has a ‘nothing-ventured-nothing-gained’ attitude like you! Most of us are too fearful of failing to risk trusting the wild claims of improvement evangelists. We have had our fingers burned too often.”


The apparent miracle is real and recent … here is a snippet of the feedback:

Notice carefully the last sentence. It took a year of discussion to get an “OK” and a month of planning to prepare the “GO”.

That is not a miracle or magic … it took a lot of hard work!

The evangelist is the customer. The supplier is an engineer.


The context is the chronic niggle of patients trying to get an appointment with their GP, and the chronic niggle of GPs feeling overwhelmed with work.

Here is the back story …

In the opening weeks of the 21st Century, the National Primary Care Development Team (NPDT) was formed.  Primary care was a high priority and the government had allocated £168m of investment in the NHS Plan, £48m of which was earmarked to improve GP access.

The approach the NPDT chose was:

harvest best practice +
use a panel of experts +
disseminate best practice.

Dr (later Sir) John Oldham was the innovator and figurehead.  The best practice was copied from Dr Mark Murray of Kaiser Permanente in the USA – the Advanced Access model.  The dissemination method was copied from Dr Don Berwick’s Institute for Healthcare Improvement (IHI) in Boston – the Collaborative Model.

The principle of Advanced Access is “today’s-work-today” which means that all the requests for a GP appointment are handled the same day.  And the proponents of the model outlined the key elements to achieving this:

1. Measure daily demand.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Simple booking rule: “phone today for a decision today”.

But that is not what was rolled out. The design was modified somewhere between aspiration and implementation, in two important ways.

First, by adding a policy of “Phone at 08:00 for an appointment”, and second by adding a policy of “carving out” appointment slots into labelled pots such as ‘Dr X’ or ‘see in 2 weeks’ or ‘annual reviews’.

Subsequent studies suggest that the tweaking happened at the GP practice level and was driven by the fear that, by reducing the waiting time, they would attract more work.

In other words: an assumption that demand for health care is supply-led, and without some form of access barrier, the system would be overwhelmed and never be able to cope.


The result of this well-intended tampering with the Advanced Access design was to invalidate it. Oops!

To a systems engineer, this meddling was predictably counter-productive.

The “today’s work today” specification is called a demand-led design and, if implemented competently, will lead to shorter waits for everyone, no need for urgent/routine prioritization and slot carve-out, and a simpler, safer, calmer, more efficient, higher quality, more productive system.

In this context it does not mean “see every patient today” it means “assess and decide a plan for every patient today”.

In reality, the actual demand for GP appointments is not known at the start, which is why the first step is to implement continuous measurement of the daily number and category of requests for appointments.

The second step is to feed back this daily demand information in a visual format called a time-series chart.

The third step is to use this visual tool for planning future flow-capacity, and for monitoring for ‘signals’, such as spikes, shifts, cycles and slopes.
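
As a concrete illustration of these three steps, here is a minimal sketch in Python. The daily counts are made up, and the limits use the standard XmR (individuals) chart convention of the mean plus or minus 2.66 times the average moving range:

```python
import matplotlib.pyplot as plt

# Step 1: measure daily demand (made-up counts of appointment requests)
demand = [38, 42, 35, 40, 44, 37, 41, 39, 36, 43, 40, 38, 60, 41, 39]

# Step 2: feed it back as a time-series (XmR) chart with natural limits
mean = sum(demand) / len(demand)
moving_ranges = [abs(b - a) for a, b in zip(demand, demand[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)
upper, lower = mean + 2.66 * avg_mr, mean - 2.66 * avg_mr

# Step 3: look for 'signals' - here, any point outside the natural limits
signals = [day for day, d in enumerate(demand) if d > upper or d < lower]
print(f"mean={mean:.1f}, limits=({lower:.1f}, {upper:.1f}), signal days={signals}")

plt.plot(demand, marker="o")
for level in (lower, mean, upper):
    plt.axhline(level, linestyle="--")
plt.xlabel("Day")
plt.ylabel("Appointment requests")
plt.title("Daily demand time-series chart")
plt.show()
```

The spike on day 12 falls outside the upper limit, so it is flagged for investigation rather than being lost in a monthly average. (Detecting shifts, cycles and slopes needs additional run rules, but the principle is the same.)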

That was not part of the modified design, so the reasonable fear expressed by GPs was (and still is) that by attempting to do today’s-work-today they would unleash a deluge of unmet need … and be swamped/drowned.

So a flood defence barrier was bolted on: the policy of “phone at 08:00 for an appointment today”, and then the policy of channelling the overspill into pots of “embargoed slots”.

The combined effect of this error of omission (omitting the measured demand visual feedback loop) and these errors of commission (the 08:00 policy and appointment slot carve-out policy) effectively prevented the benefits of the Advanced Access design being achieved.  It was a predictable failure.

But no one seemed to realize that at the time.  Perhaps because of the political haste that was driving the process, and perhaps because there were no systems engineers on the panel-of-experts to point out the risks of diluting the design.

It is also interesting to note that the strategic aim of the NPDT was to develop a self-sustaining culture of quality improvement (QI) in primary care. That does not seem to have happened either.


The roll-out of Advanced Access was not the success that had been hoped for. That is the conclusion of the 300+ page research report published in 2007.


The “Miracle on Tavanagh Avenue” that was experienced this week by both patients and staff was the expected effect of this tampering finally being corrected; and the true potential of the original demand-led design being released – for all to experience.

Remember the essential ingredients?

1. Measure daily demand and feed it back as a visual time-series chart.
2. Set capacity so that it is sufficient to meet the daily demand.
3. Use a simple booking rule: “phone anytime for a decision today”.

But there is also an extra design ingredient that has been added in this case, one that was not part of the original Advanced Access specification, one that frees up GP time to provide the required “resilience” to sustain a same-day service.

And that “secret” ingredient is why the new design worked so quickly and feels like a miracle – safe, calm, enjoyable and productive.

This is health care systems engineering (HCSE) in action.


So congratulations to Harry Longman, the whole team at GP Access, and to Dr Philip Lusty and the team at Riverside Practice, Tavanagh Avenue, Portadown, NI.

You have demonstrated what was always possible.

The fear of failure prevented it before, just as it prevented you doing this until you were so desperate that you had no other choice.

To read the fuller story click here.

PS. Keep a close eye on the demand time-series chart and if it starts to rise then investigate the root cause … immediately.


Phil and Pete are having a coffee and a chat.  They both work in the NHS and have been friends for years.

They have different jobs. Phil is a commissioner and an accountant by training, Pete is a consultant and a doctor by training.

They are discussing a challenge that affects them both on a daily basis: unscheduled care.

Both Phil and Pete want to see significant and sustained improvements and how to achieve them is often the focus of their coffee chats.


<Phil> We are agreed that we both want improvement, both from my perspective as a commissioner and from your perspective as a clinician. And we agree that we want to see improvements in patient safety, waiting times, outcomes, experience for both patients and staff, and use of our limited NHS resources.

<Pete> Yes. Our common purpose, the “what” and “why”, has never been an issue.  Where we seem to get stuck is the “how”.  We have both tried many things but, despite our good intentions, it feels like things are getting worse!

<Phil> I agree. It may be that what we have implemented has had a positive impact and we would have been even worse off if we had done nothing. But I do not know. We clearly have much to learn and, while I believe we are making progress, we do not appear to be learning fast enough.  And I think this knowledge gap exposes another “how” issue: After we have intervened, how do we know that we have (a) improved, (b) not changed or (c) worsened?

<Pete> That is a very good question.  And all that I have to offer as an answer is to share what we do in medicine when we ask a similar question: “How do I know that treatment A is better than treatment B?”  It is the essence of medical research; the quest to find better treatments that deliver better outcomes and at lower cost.  The similarities are strong.

<Phil> OK. How do you do that? How do you know that “Treatment A is better than Treatment B” in a way that anyone will trust the answer?

<Pete> We use a science that is actually very recent on the scientific timeline; it was only firmly established in the first half of the 20th century. One reason for that is that it is a rather counter-intuitive science, and so it requires tools that have been designed and demonstrated to work, but whose inner workings most of us do not really understand. They are a bit like magic black boxes.

<Phil> H’mm. Please forgive me for sounding skeptical but that sounds like a big opportunity for making mistakes! If there are lots of these “magic black box” tools then how do you decide which one to use and how do you know you have used it correctly?

<Pete> Those are good questions! Very often we don’t know and in our collective confusion we generate a lot of unproductive discussion.  This is why we are often forced to accept the advice of experts but, I confess, very often we don’t understand what they are saying either! They seem like the medieval Magi.

<Phil> H’mm. So these experts are like ‘magicians’ – they claim to understand the inner workings of the black magic boxes but are unable, or unwilling, to explain in a language that a ‘muggle’ would understand?

<Pete> Very well put. That is just how it feels.

<Phil> So can you explain what you do understand about this magical process? That would be a start.


<Pete> OK, I will do my best.  The first thing we learn in medical research is that we need to be clear about what it is we are looking to improve, and we need to be able to measure it objectively and accurately.

<Phil> That makes sense. Let us say we want to improve the patient’s subjective quality of the A&E experience and objectively we want to reduce the time they spend in A&E. We measure how long they wait.

<Pete> The next thing is that we need to decide how much improvement we need. What would be worthwhile? So in the example you have offered we know that reducing the average time patients spend in A&E by just 30 minutes would have a significant effect on the quality of the patient and staff experience, and as a by-product it would also dramatically improve the 4-hour target performance.

<Phil> OK.  From the commissioning perspective there are lots of things we can do, such as commissioning alternative paths for specific groups of patients; in effect diverting some of the unscheduled demand away from A&E to a more appropriate service provider.  But these are the sorts of thing we have been experimenting with for years, and it brings us back to the question: How do we know that any change we implement has had the impact we intended? The system seems, well, complicated.

<Pete> In medical research we are very aware that the system we are changing is very complicated and that we do not have the power of omniscience.  We cannot know everything.  Realistically, all we can do is focus on objective outcomes, collect small samples from the data ocean, and use those in an attempt to draw conclusions we can trust. We have to design our experiment with care!

<Phil> That makes sense. Surely we just need to measure the stuff that will tell us if our impact matches our intent. That sounds easy enough. What’s the problem?

<Pete> The problem we encounter is that when we measure “stuff” we observe patient-to-patient variation, and that is before we have made any changes.  Any impact that we may have is obscured by this “noise”.

<Phil> Ah, I see.  So if our intervention generates a small impact then it will be more difficult to see amidst this background noise. Like trying to see fine detail in a fuzzy picture.

<Pete> Yes, exactly like that.  And it raises the issue of “errors”.  In medical research we talk about two different types of error; we make the first type of error when our actual impact is zero but we conclude from our data that we have made a difference; and we make the second type of error when we have made an impact but we conclude from our data that we have not.

<Phil> OK. So does that imply that the more “noise” we observe in our measure-for-improvement before we make the change, the more likely we are to make one or other error?

<Pete> Precisely! So before we do the experiment we need to design it so that we reduce the probability of making both of these errors to an acceptably low level.  So that we can be assured that any conclusion we draw can be trusted.

<Phil> OK. So how exactly do you do that?

<Pete> We know that whenever there is “noise” and whenever we use samples then there will always be some risk of making one or other of the two types of error.  So we need to set a threshold for both. We have to state clearly how much confidence we need in our conclusion. For example, we often use the convention that we are willing to accept a 1 in 20 chance of making the Type I error.

<Phil> Let me check if I have heard you correctly. Suppose that, in reality, our change has no impact and we have set the risk threshold for a Type I error at 1 in 20, and suppose we repeat the same experiment 100 times – are you saying that we should expect about five of our experiments to show data that says our change has had the intended impact when in reality it has not?

<Pete> Yes. That is exactly it.

<Phil> OK.  But in practice we cannot repeat the experiment 100 times, so we just have to accept the 1 in 20 chance that we will make a Type I error, and we won’t know we have made it if we do. That feels a bit chancy. So why don’t we just set the threshold to 1 in 100 or 1 in 1000?

<Pete> We could, but doing that has a consequence.  If we reduce the risk of making a Type I error by setting our threshold lower, then we will increase the risk of making a Type II error.

<Phil> Ah! I see. The old swings-and-roundabouts problem. By the way, do these two errors have different names that would make it  easier to remember and to explain?

<Pete> Yes. The Type I error is called a False Positive. It is like concluding that a patient has a specific diagnosis when in reality they do not.

<Phil> And the Type II error is called a False Negative?

<Pete> Yes.  And we want to avoid both of them, and to do that we have to specify a separate risk threshold for each error.  The convention is to call the threshold for the false positive the alpha level, and the threshold for the false negative the beta level.

<Phil> OK. So now we have three things we need to be clear on before we can do our experiment: the size of the change that we need, the risk of the false positive that we are willing to accept, and the risk of a false negative that we are willing to accept.  Is that all we need?

<Pete> In medical research we learn that we need six pieces of the experimental design jigsaw before we can proceed. We only have three pieces so far.

<Phil> What are the other three pieces then?

<Pete> We need to know the average value of the metric we are intending to improve, because that is our baseline from which improvement is measured.  Improvements are often framed as a percentage improvement over the baseline.  And we need to know the spread of the data around that average, the “noise” that we referred to earlier.

<Phil> Ah, yes!  I forgot about the noise.  But that is only five pieces of the jigsaw. What is the last piece?

<Pete> The size of the sample.

<Phil> Eh?  Can’t we just go with whatever data we can realistically get?

<Pete> Sadly, no.  The size of the sample is how we control the risk of a false negative error.  The more data we have the lower the risk. This is referred to as the power of the experimental design.

<Phil> OK. That feels familiar. I know that the more experience I have of something the better my judgement gets. Is this the same thing?

<Pete> Yes. Exactly the same thing.

<Phil> OK. So let me see if I have got this. To know if the impact of the intervention matches our intention we need to design our experiment carefully. We need all six pieces of the experimental design jigsaw and they must all fall inside our circle of control. We can measure the baseline average and spread; we can specify the impact we will accept as useful; we can specify the risks we are prepared to accept of making the false positive and false negative errors; and we can collect the required amount of data after we have made the intervention so that we can trust our conclusion.

<Pete> Perfect! That is how we are taught to design research studies so that we can trust our results, and so that others can trust them too.

<Phil> So how do we decide how big the post-implementation data sample needs to be? I can see we need to collect enough data to avoid a false negative but we have to be pragmatic too. There would appear to be little value in collecting more data than we need. It would cost more and could delay knowing the answer to our question.

<Pete> That is precisely the trap that many inexperienced medical researchers fall into. They set their sample size according to what is achievable and affordable, and then they hope for the best!

<Phil> Well, we do the same. We analyse the data we have and we hope for the best.  In the magical metaphor we are asking our data analysts to pull a white rabbit out of the hat.  It sounds rather irrational and unpredictable when described like that! Have medical researchers learned a way to avoid this trap?

<Pete> Yes, it is a tool called a power calculator.

<Phil> Ooooo … a power tool … I like the sound of that … that would be a cool tool to have in our commissioning bag of tricks. It would be like a magic wand. Do you have such a thing?

<Pete> Yes.

<Phil> And do you understand how the power tool magic works well enough to explain to a “muggle”?

<Pete> Not really. To do that means learning some rather unfamiliar language and some rather counter-intuitive concepts.

<Phil> Is that the magical stuff I hear lurks between the covers of a medical statistics textbook?

<Pete> Yes. Scary looking mathematical symbols and unfathomable spells!

<Phil> Oh dear!  Is there another way to gain a working understanding of this magic? Something a bit more pragmatic? A path that a ‘statistical muggle’ might be able to follow?

<Pete> Yes. It is called a simulator.

<Phil> You mean like a flight simulator that pilots use to learn how to control a jumbo jet before ever taking a real one out for a trip?

<Pete> Exactly like that.

<Phil> Do you have one?

<Pete> Yes. It was how I learned about this “stuff” … pragmatically.

<Phil> Can you show me?

<Pete> Of course.  But to do that we will need a bit more time, another coffee, and maybe a couple of those tasty looking Danish pastries.

<Phil> A wise investment I’d say.  I’ll get the coffee and pastries, if you fire up the engines of the simulator.
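
For readers who want to peek inside Pete’s magic black box, here is a minimal sketch of both the power calculator and the simulator. All the numbers are invented for illustration (a 210-minute average A&E stay with a 90-minute standard deviation, the 30-minute worthwhile improvement from the dialogue, an alpha of 0.05 and a power of 0.80), and it assumes the scipy and statsmodels libraries are available:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# The six jigsaw pieces (all figures are illustrative assumptions)
baseline_mean = 210   # minutes: average time in A&E before the change
baseline_sd = 90      # minutes: the "noise" (patient-to-patient variation)
improvement = 30      # minutes: the smallest worthwhile impact
alpha = 0.05          # acceptable false positive (Type I) risk: 1 in 20
power = 0.80          # 1 - beta: acceptable false negative risk is 1 in 5

# The power calculator gives the sixth piece: sample size per group
effect_size = improvement / baseline_sd   # standardised effect (Cohen's d)
n = int(np.ceil(TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power)))
print(f"Required sample size per group: {n}")

# The simulator: repeat the experiment many times and count both errors
rng = np.random.default_rng(42)
trials = 2000
false_pos = false_neg = 0
for _ in range(trials):
    before = rng.normal(baseline_mean, baseline_sd, n)
    no_change = rng.normal(baseline_mean, baseline_sd, n)
    improved = rng.normal(baseline_mean - improvement, baseline_sd, n)
    # No real impact but the test says there is one: a false positive
    if stats.ttest_ind(before, no_change).pvalue < alpha:
        false_pos += 1
    # A real impact but the test says there is none: a false negative
    if stats.ttest_ind(before, improved).pvalue >= alpha:
        false_neg += 1

print(f"False positive rate: {false_pos / trials:.3f} (expect ~{alpha})")
print(f"False negative rate: {false_neg / trials:.3f} (expect ~{1 - power:.2f})")
```

The simulated false positive rate comes out close to the 1-in-20 threshold that Phil and Pete agreed, and the false negative rate close to the 1-in-5 implied by 80% power – which is exactly the kind of reassurance a flight simulator gives before you fly the real thing.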

The immortal words from Apollo 13 that alerted us to an evolving catastrophe …

… and that is what we are seeing in the UK health and social care system … using the thermometer of A&E 4-hour performance. England is the red line.

[Chart: A&E 4-hour performance over time, UK countries]

The chart shows that this is not a sudden change, it has been developing over quite a long period of time … so why does it feel like an unpleasant surprise?


One reason may be that NHS England is using performance management techniques that were out of date in the 1980s and are obsolete in the 2010s!

Let me show you what I mean. This is a snapshot from the NHS England Board Minutes for November 2016.

[Image: RAG risk assessment, NHS England Board Minutes, November 2016]
RAG stands for Red-Amber-Green and what we want to see on a Risk Assessment is Green for the most important stuff like safety, flow, quality and affordability.

We are not seeing that.  We are seeing Red/Amber for all of them. It is an evolving catastrophe.

A risk RAG chart is an obsolete performance management tool.

Here is another snippet …

[Image: A&E performance summary, NHS England Board Minutes, November 2016]

This demonstrates the usual mix of single point aggregates for the most recent month (October 2016); an arbitrary target (4 hours) used as a threshold to decide failure/not failure; two-point comparisons (October 2016 versus October 2015); and a sprinkling of ratios. Not a single time-series chart in sight. No pictures that tell a story.

Click here for the full document (which does also include some very sensible plans to maintain hospital flow through the bank holiday period).

The risk of this way of presenting system performance data is that it is a minefield of intuitive traps for the unwary.  Invisible pitfalls that can lead to invalid conclusions, unwise decisions, potentially ineffective and/or counter-productive actions, and failure to improve. These methods are risky and that is why they should be obsolete.

And if NHSE is using obsolete tools then what hope do CCGs and Trusts have?


Much better tools have been designed.  Tools that are used by organisations that are innovative, resilient, commercially successful and that deliver safety, on-time delivery, quality and value for money. At the same time.

And the older tools are obsolete outside the NHS because, in the competitive context of the dog-eat-dog real world, organisations do not survive if they do not innovate, improve and learn as fast as their competitors.  They do not have the luxury of being shielded from reality by a central tax-funded monopoly!

And please do not misinterpret my message here; I am a 100% raving fan of the NHS ethos of “available to all and free at the point of delivery” and an NHS that is funded centrally and fairly. That is not my issue.

My issue is the continued use of obsolete performance management tools in the NHS.


Q: So what are the alternatives? What do the successful commercial organisations use instead?

A: System behaviour charts.

SBCs are pictures of how the system is behaving over time – pictures that tell a story – pictures that have meaning – pictures that we can use to diagnose, design and deliver a better outcome than the one we are heading towards.

Pictures like the A&E performance-over-time chart above.

Click here for more on how and why.


Therefore, if the DoH, NHSE, NHSI, STPs, CCGs and Trust Boards want to achieve their stated visions and missions then the writing-on-the-wall says that they will need to muster some humility and learn how successful organisations do this.

This is not a comfortable message to hear and it is easier to be defensive than receptive.

The NHS has to change if it wants to survive and continue to serve the people who pay the salaries. And time is running out. Continuing as we are is not an option. Complaining and blaming are not options. Doing nothing is not an option.

Learning is the only option.

Anyone can learn to use system behaviour charts.  No one needs to rely on averages, two-point comparisons, ratios, targets, and the combination of failure-metrics and us-versus-them-benchmarking that leads to the chronic mediocrity trap.

And there is hope for those with enough hunger and humility, who are prepared to do the hard work of developing their personal, team, department and organisational capability to use better management methods.


Apollo 13 is a true story.  The catastrophe was averted.  The astronauts were brought home safely.  The film retells the story of how that miracle was achieved. Perhaps watching the whole film would be somewhere to start, because it holds many valuable lessons for us all – lessons on how effective teams behave.

On 5th July 2018, the NHS will be 70 years old, and like many of those it was created to serve, it has become elderly and frail.

We live much longer, on average, than we used to, and the growing population of frail elderly people presents an unprecedented health and social care challenge that the NHS was never designed to manage.

The creases and cracks are showing, and each year feels more pressured than the last.


This week a story that illustrates this challenge was shared with me along with permission to broadcast …

“My mother-in-law is 91, in general she is amazingly self-sufficient, able to arrange most of her life with reasonable care at home via a council tendered care provider.

She has had Parkinson’s for years, needing regular medication to enable her to walk and eat (it affects her jaw and swallowing capability). So the care provision is time critical, to get up, have lunch, have tea and get to bed.

She’s also going deaf, profoundly in one ear, pretty bad in the other. She wears a single ‘in-ear’ aid, which has a micro-switch on/off toggle, far too small for her to see or operate. Most of the carers can’t put it in, and fail to switch it off.

Her care package is well drafted, but rarely adhered to. It should be 45 minutes in the morning, 30, 15, 30 through the day. Each time administering the medications from the dossette box. Despite the register in/out process from the carers, many visits are far less time than designed (and paid for by the council), with some lasting 8 minutes instead of 30!

Most carers don’t ensure she takes her meds, which sometimes leads to dropped pills on the floor, with no hope of picking them up!

While the care is supposedly ‘time critical’, the provider doesn’t manage it via allocated time slots; they simply provide lists that imply the order of work but don’t make it clear. My mother-in-law (Mum) cannot be certain when the visit will occur, which makes going out very difficult.

The carers won’t cook food, but will microwave it; thus if a cooked meal is to happen, my Mum will start it, with the view of the carers serving it. If they arrive early, the food is under-cooked (“Just put vinegar on it, it will taste better”) and if they arrive late, either she’ll try to get it out herself, or it will be dried out / cremated.

Her medication pattern should be every 4 to 5 hours in the day, with an 11:40 lunch visit and a 17:45 tea visit, followed by a 19:30 bed prep visit; she ends up with too long between some doses, and others far too close together. Her GP has stated that this is making her health and Parkinson’s worse.

Mum also rarely drinks enough through the day; in hot weather she tends to dehydrate, which we try to persuade her must be avoided. Part of the problem is Parkinson’s related, part the hassle of getting to the toilet more often. Parkinson’s affects swallowing, so she tends to sip rather than gulp. By sipping often, she deludes herself that she is drinking enough.

She also is stubbornly not adjusting methods to align to issues. She drinks tea and water from her lovely bone china cups. Because her grip is not good and her hand shakes, we can’t fill those cups very high, so her ‘cup of tea’ is only a fraction of what it could be.

As she can walk around most days, there’s no way of telling whether she drinks enough, and she frequently has several different carers in a day.

When Mum gets dehydrated, it affects her memory and her reasoning, similar to the onset of dementia. It also seems to increase her probability of falling, perhaps due to forgetting to be defensive.

When she falls, she cannot get up, thus usually presses her alarm dongle, resulting in me going round to get her up, check for concussion, and check for other injuries, prior to settling her down again. These can be ten weeks apart, through to a few in a week.

When she starts to hallucinate, we do our very best to increase drinking, seeking to re-hydrate.

On Sunday, something exceptional happened, Mum fell out of bed and didn’t press her alarm. The carer found her and immediately called the paramedics and her GP, who later called us in. For the first time ever she was not sufficiently mentally alert to press her alarm switch.

After initial assessment, she was taken to A&E, luckily being early on Sunday morning it was initially quite quiet.

Hospital

The Hospital is on the boundary between two counties, within a large town, a mixture of new build elements, between aging structures. There has been considerable investment within A&E, X-ray etc. due partly to that growth industry and partly due to the closures of cottage hospitals and reducing GP services out of hours.

It took some persuasion to have Mum put on a drip, as she hadn’t had breakfast or any fluids, and dehydration was a probable primary cause of her visit. They took bloods, an X-ray of her chest (to check for fall related damage) and a CT scan of her head, to see if there were issues.

I called the carers to tell them to suspend visits, but the phone simply rang without being answered (not for the first time).

After about six hours, during which time she was awake, but not very lucid, she was transferred to the day ward, where after assessment she was given some meds, a sandwich and another drip.

Later that evening we were informed she was to be kept on a drip for 24 hours.

The next day (Bank Holiday Monday) she was transferred to another ward. When we arrived she was not on a drip, so their decisions had been reversed.

I spoke at length with her assigned staff nurse, and was told the following: Mum could come out soon if she had a 24/7 care package, and that as well as the known issues Mum now has COPD. When I asked her what COPD was, she clearly didn’t know, but blustered ‘it is a form of heart failure that affects breathing’. (I looked it up on my phone a few minutes later.)

So, to get mum out, I had to arrange a 24/7 care package, and nowhere was open until the next day.

Trying to escalate care isn’t going to be easy, even in the short term. My emails to ‘usually very good’ social care people achieved nothing to start with on Tuesday, and their phone was on the ‘out of hours’ setting for evenings and weekends, despite being during the day of a normal working week.

Eventually I was told that there would be nothing to achieve until the hospital processed the correct exit papers to Social Care.

When we went in to the hospital (on Tuesday) a more senior nurse was on duty. She explained that mum was now medically fit to leave hospital if care can be re-established. I told her that I was trying to set up 24/7 care as advised. She looked through the notes and said 24/7 care was not needed, the normal 4 x a day was enough. (She was clearly angry).

I then explained that the newly diagnosed COPD may be part of the problem; she said that she’s worked with COPD patients for 16 years, and Mum definitely doesn’t have COPD. While she was amending the notes, I noticed that Mum’s allergy to aspirin wasn’t there, despite us advising that on entry. The nurse also explained that as the hospital is in one county, but almost half their patients are from another, they are always stymied on ‘joined up working’.

While we were talking with mum, her meds came round and she was only given paracetamol for her pain, but NOT her meds for Parkinson’s. I asked that nurse why that was the case, and she said that was not on her meds sheet. So I went back to the more senior nurse, she checked the meds as ordered and Parkinson’s was required 4 x a day, but it was NOT transferred onto the administration sheet. The doctor next to us said she would do it straight away, and I was told, “Thank God you are here to get this right!”

Mum was given her food, it consisted of some soup, which she couldn’t spoon due to lack of meds and a dry tough lump of gammon and some mashed sweet potato, which she couldn’t chew.

When I asked why meds were given at five, after the delivery of food, they said ‘That’s our system!’, when I suggested that administering Parkinson’s meds an hour before food would increase the ability to eat the food they said “that’s a really good idea, we should do that!”

On Wednesday I spoke with Social Care to try to re-start care to enable mum to get out. At that time the social worker could neither get through to the hospital nor the carers. We spoke again after I had arrived in hospital, but before I could do anything.

On arrival at the hospital I was amazed to see the white-board declaring that Mum would be discharged at noon on Monday (in five days’ time!). I spoke with the assigned staff nurse who said, “That’s the earliest that her carers can re-start, and anyway it’s nearly the weekend”.

I said that “mum was medically OK for discharge on Tuesday, after only two days in the hospital, and you are complacent to block the bed for another six days, have you spoken with the discharge team?”

She replied, “No, they’ll have gone home by now, and I’ve not seen them all day.” I told her that they work shifts and that they would be there, and made it quite clear that if she didn’t contact SHEDs I’d go walkabout to find them. A few minutes later she told me a SHED member would be with me in 20 minutes.

While the hospital had resolved her medical issues, she was stuck in a ward with no help to walk; the only TV was via a complex pay-for system she had no hope of understanding; and with no day room there was no entertainment and no exercise, just boredom: encouraged to lie in bed and wear a pad because she won’t be taken to the loo in time.

When the SHED worker arrived I explained the staff nurse attitude, she said she would try to improve those thinking processes. She took lots of details, then said that so long as mum can walk with assistance, she could be released after noon, to have NHS carer support, 4 times a day, from the afternoon. She walked around the ward for the first time since being admitted, and while shaky was fine.

Hopefully all will be better now?”


This story is not exceptional … I have heard it many times from many people in many different parts of the UK.  It is the norm rather than the exception.

It is the story of a fragmented and fractured system of health and social care.

It is the story of frustration for everyone – patients, family, carers, NHS staff, commissioners, and tax-payers.  A fractured care system is unsafe, chaotic, frustrating and expensive.

There are no winners here.  It is not a trade off, compromise or best possible.

It is just poor system design.


What we want has a name … it is called a Frail Safe design … and this is not a new idea.  It is achievable. It has been achieved.

http://www.frailsafe.org.uk

So why is this still happening?

The reason is simple – the NHS does not know any other way.  It does not know how to design itself to be safe, calm, efficient, high quality and affordable.

It does not know how to do this because it has never learned that this is possible.

But it is possible to do, and it is possible to learn, and that learning does not take very long or cost very much.

And the return vastly outweighs the investment.


The title of this blog is Righteous Indignation

… if your frail elderly parents, relatives or friends were forced to endure a system that is far from frail safe; and you learned that this situation was avoidable and that a safer design would be less expensive; and all you hear is “can’t do” and “too busy” and “not enough money” and “not my job” …  wouldn’t you feel a sense of righteous indignation?

I do.



There are two complementary approaches to safety and quality improvement: desire and design.

In the improvement-by-desire world we use a suck-it-and-see approach to fix a problem.  It is called PDSA.

Sometimes this works and we pat ourselves on the back, and remember the learning for future use.

Sometimes it works for us but has a side effect: it creates a problem for someone else.  And we may not be aware of the unintended consequence unless someone shouts “Oi!” It may be too late by then of course.


The more parts in a system, and the more interconnected they are, the more likely it is that a well-intended suck-it-and-see change will create an unintended negative impact.
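
A quick way to see why: the number of possible pairwise interactions grows much faster than the number of parts. A minimal sketch of the arithmetic:

```python
# Pairwise interconnections between n parts: n * (n - 1) / 2
for n_parts in (5, 10, 20, 50):
    links = n_parts * (n_parts - 1) // 2
    print(f"{n_parts:2d} parts -> up to {links:4d} pairwise interactions")
```

So a tweak tested in one corner of a fifty-part system has over a thousand possible routes to an unintended consequence somewhere else.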

And in that situation our temptation is to … do nothing … and put up with the problems. It seems the safest option.


In the improvement-by-design world we choose to study first, and to find the causal roots of the system behaviour we are seeing.  Our first objective is a diagnosis.

With that we can propose rational design changes that we anticipate will deliver the improvement we seek without creating adverse effects.

But we have learned the hard way that our intuition can trick us … so we need a way to test our designs … a safe and controlled way.  We need a crash test dummy!


What they do is to deliberately experience our design in a controlled experiment, and what they generate for us is constructive feedback. What did work, and what did not.

A crash test dummy is tough and sensitive at the same time.  They do not break easily and yet they feel the pain and gain too.  They are resilient.


And with their feedback we can re-visit our design and improve it further, or we can use it to offer evidence-based assurance that our design is fit-for-purpose.

Safety and Quality Assurance is improvement-by-design. Diagnosis-and-treatment.

Safety and Quality Control is improvement-by-desire. Suck-and-see.

If you were a passenger or a patient … which option would you prefer?

Imagine this scenario:

You develop some non-specific symptoms.

You see your GP who refers you urgently to a 2 week clinic.

You are seen, assessed, investigated and informed that … you have cancer!


The shock, denial, anger, blame, bargaining, depression, acceptance sequence kicks off … it is sometimes called the Kübler-Ross grief reaction … and it is a normal part of the human psyche.

But there is better news. You also learn that your condition is probably treatable, but that it will require chemotherapy, and that there are no guarantees of success.

You know that time is of the essence … the cancer is growing.

And time has a new relevance for you … it is called life time … and you know that you may not have as much left as you had hoped.  Every hour is precious.


So now imagine your reaction when you attend your local chemotherapy day unit (CDU) for your first dose of chemotherapy and have to wait four hours for the toxic but potentially life-saving drugs.

They are very expensive and they have a short shelf-life so the NHS cannot afford to waste any.   The Aseptic Unit team wait until all the safety checks are OK before they proceed to prepare your chemotherapy.  That all takes time, about four hours.

Once the team get to know you it will go quicker. Hopefully.

It doesn’t.

The delays are not the result of unfamiliarity … they are the result of the design of the process.

All your fellow patients seem to suffer repeated waiting too, and you learn that they have been doing so for a long time.  That seems to be the way it is.  The waiting room is well used.

Everyone seems resigned to the belief that this is the best it can be.

They are not happy about it but they feel powerless to do anything.


Then one day someone demonstrates that it is not the best it can be.

It can be better.  A lot better!

And they demonstrate that this better way can be designed.

And they demonstrate that they can learn how to design this better way.

And they demonstrate what happens when they apply their new learning …

… by doing it and by sharing their story of “what-we-did-and-how-we-did-it“.

[Photo: the CDU waiting room]

If life time is so precious, why waste it?

And perhaps the most surprising outcome was that their safer, quicker, calmer design was also 20% more productive.

This week the King’s Fund published their Quality Monitoring Report for the NHS, and it makes depressing reading.

These highlights are a snapshot.

The website has some excellent interactive time-series charts that transform the deluge of data the NHS pumps out into pictures that tell a shameful story.

On almost all reported dimensions, things are getting worse and getting worse faster.

Which I do not believe is the intention.

But it is clearly the impact of the last 20 years of health and social care policy.


What is more worrying is the data that is notably absent from the King’s Fund QMR.

The first omission is outcome: How well did the NHS deliver on its intended purpose?  It is stated at the top of the NHS England web site …

[Image: NHS England’s stated purpose]

And let us be very clear here: dying, waiting, complaining, and over-spending are not measures of what we want; they are not health and quality success metrics.  They are measures of what we do not want; they are failure metrics.

The fanatical focus on failure is part of the hyper-competitive, risk-averse medical mindset:

primum non nocere (first do no harm),

and as a patient I am reassured to hear that, but is ‘no harm’ all I can expect?

What about:

tunc mederi (then do some healing)


And where is the data on dying in the King’s Fund QMR?

It seems to be notably absent.

And I would say that is a quality issue because it is something that patients are anxious about.  And that may be because they are given so much ‘open information’ about what might go wrong, not what should go right.


And you might think that sharp, objective data on dying would be easy to collect and to share.  After all, it is not conveniently fuzzy and subjective like satisfaction.

It is indeed mandatory to collect hospital mortality data, but sharing it seems to be a bit more of a problem.

The fear-of-failure fanaticism extends there too.  In the wake of humiliating, historical, catastrophic failures like Mid Staffs, all hospitals are monitored, measured and compared. And the negative deviants are named, shamed and blamed … in the hope that improvement might follow.

And to do the bench-marking we need to compare apples with apples; not peaches with lemons.  So we need to process the raw data to make it fair to compare; to ensure that factors known to be associated with higher risk of death are taken into account. Factors like age, urgency, co-morbidity and primary diagnosis.  Factors that are outside the circle-of-control of the hospitals themselves.

And there is an army of academics, statisticians, data processors, and analysts out there to help. The fruit of their hard work and dedication is called SHMI … the Summary Hospital-level Mortality Indicator.

[Image: extract from the SHMI specification]

Now, the most interesting paragraph is the third one, which outlines what raw data is fed into the risk-adjustment model. The first four items are objective; the last two are more subjective, especially the diagnosis grouping.

The importance of this distinction comes down to human nature: if a hospital is failing on its SHMI then it has two options:
(a) to improve its policies and processes to improve outcomes, or
(b) to manipulate the diagnosis group data to reduce the SHMI score.

And the latter is much easier to do, it is called up-coding, and basically it involves camping at the pessimistic end of the diagnostic spectrum. And we are very comfortable with doing that in health care. We favour the Black Hat.

And when our patients do better than our pessimistically-biased prediction, then our SHMI score improves and we look better on the NHS funnel plot.

We do not have to do anything at all about actually improving the outcomes of the service we provide, which is handy because we cannot do that. We do not measure it!
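
To make the up-coding arithmetic concrete, here is a minimal sketch. It assumes only that the index is computed as observed deaths divided by the deaths “expected” by the risk model; the patient numbers and risk percentages are invented for illustration:

```python
# Hypothetical ward: 1000 patients, of whom 30 actually die
observed_deaths = 30

def shmi(expected_risks):
    """SHMI-style index: observed deaths / risk-model expected deaths."""
    return observed_deaths / sum(expected_risks)

# Honest coding: the risk model assigns each patient a 3% predicted risk
honest = [0.03] * 1000
# Up-coded: the same patients are placed in a gloomier diagnosis group
# carrying a 4% predicted risk - nothing about the actual care has changed
up_coded = [0.04] * 1000

print(f"SHMI, honest coding: {shmi(honest):.2f}")    # 30/30 = 1.00
print(f"SHMI, up-coded:      {shmi(up_coded):.2f}")  # 30/40 = 0.75
```

Same patients, same outcomes, and a 25% “improvement” in the score – which is exactly why the subjectivity of the diagnosis grouping matters.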


And what might be notably absent from the data fed in to the SHMI risk-model?  Data that is objective and easy to measure.  Data such as length of stay (LOS) for example?

Is there a statistical reason that LOS is omitted? Not really. Any relevant metric is a contender for pumping into a risk-adjustment model.  And we all know that the sicker we are, the longer we stay in hospital, and the less likely we are to come out unharmed (or at all).  And avoidable errors create delays and complications that imply more risk, more work and longer length of stay. Irrespective of the illness we arrived with.

So why has LOS been omitted from SHMI?

The reason may be more political than statistical.

We know that the risk of death increases with infirmity and age.

We know that if we put frail elderly patients into a hospital bed for a few days then they will decondition and become more frail, require more time in hospital, are more likely to need a transfer of care to somewhere other than home, are more susceptible to harm, and more likely to die.

So why is LOS not in the risk-of-death SHMI model?

And it is not in the King’s Fund QMR report either.

Nor is the amount of cash being pumped in to keep the HMS NHS afloat each month.

All notably absent!

Safe means avoiding harm, and safety is an emergent property of a well-designed system.

Frail means infirm, poorly, wobbly and at higher risk of harm.

So we want our health care system to be a FrailSafe Design.

But is it? How would we know? And what could we do to improve it?


About ten years ago I was involved in a project to improve the safety design of a specific clinical stream flowing through the hospital that I work in.

The ‘at risk’ group was frail elderly patients admitted as an emergency after a fall, having suffered a fracture of the neck of the femur – the hip end of the thigh bone.

Historically, the outcome for these patients was poor.  Many did not survive, and many of the survivors never returned to independent living. They became even more frail.


The project was undertaken during an organisational transition, the hospital was being ‘taken over’ by a bigger one.  This created a window of opportunity for some disruptive innovation, and the project was labelled as a ‘Lean’ one because we had been inspired by similar work done at Bolton some years before and Lean was the flavour of the month.

The actual change was small: it was a flow design tweak that cost nothing to implement.

First we asked two flow questions:
Q1: How many of these high-risk frail patients do we admit a year?
A1: About one per day on average.
Q2: What is the safety critical time for these patients?
A2: The first four days.  The sooner they have hip surgery and can be actively mobilised, the better their outcome.

Second we applied Little’s Law which showed the average number of patients in this critical phase is four. This was the ‘work in progress’ or WIP.
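
Little’s Law is one line of arithmetic: average work-in-progress equals average arrival rate multiplied by average time in the process. A minimal sketch with the numbers above:

```python
# Little's Law: average WIP = average arrival rate x average time in process
arrival_rate = 1.0     # frail #NOF patients admitted per day (answer to Q1)
critical_days = 4.0    # safety-critical time in days (answer to Q2)

average_wip = arrival_rate * critical_days
print(f"Average patients in the critical phase: {average_wip:.0f}")  # -> 4
```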

And we knew that variation is always present, and we knew that having all these patients in one place would make it much easier for the multi-disciplinary teams to provide timely care and to avoid potentially harmful delays.

So we suggested that one six-bedded bay on one of the trauma wards be designated the Fractured Neck Of Femur bay.

That was the flow diagnosis and design done.

The safety design was created by the multi-disciplinary teams who looked after these patients: the geriatricians, the anaesthetists, the perioperative emergency care team (PECT), the trauma and orthopaedic team, the physiotherapists, and so on.

They designed checklists to ensure that all #NOF patients got what they needed when they needed it and so that nothing important was left to chance.

And that was basically it.

And the impact was remarkable. The stream flowed. And one measured outcome was a dramatic and highly statistically significant reduction in mortality.

[Figure: before-and-after mortality results]
The full paper was published in Injury 2011; 42: 1234-1237.

We had created a FrailSafe Design … which implied that what was happening before was clearly not safe for these frail patients!


And there was an improved outcome for the patients who survived: A far larger proportion rehabilitated and returned to independent living, and a far smaller proportion required long-term institutional care.

By learning how to create and implement a FrailSafe Design we had added both years-to-life and life-to-years.

It cost nothing to achieve and the message was clear, as this quote from the 2011 paper illustrates …

[Quote: key message from the 2011 paper]

What was a bit disappointing was the gap of four years between delivering this dramatic and highly significant patient safety and quality improvement and the sharing of the story.


What is more exciting is that the concept of FrailSafe is growing, evolving and spreading.

[Image: summary graphic from the Monitor report]


This week an interesting report was published by Monitor – about some possible reasons for the A&E debacle that England experienced in the winter of 2014.

Summary At A Glance

“91% of trusts did not  meet the A&E 4-hour maximum waiting time standard last winter – this was the worst performance in 10 years”.


So it seems a bit odd that the very detailed econometric analysis and the testing of “Ten Hypotheses” did not look at the pattern of change over the previous 10 years … it just compared Oct-Dec 2014 with the same period for 2013! And the conclusion: “Hospitals were fuller in 2014”.  H’mm.


The data needed to look back 10 years is readily available on the various NHS England websites … so here it is plotted as simple time-series charts.  These are called system behaviour charts or SBCs. Our trusted analysis tools will be a Mark I Eyeball connected to the 1.3 kg of wetware between our ears that runs ChimpOS 1.0 …  and we will look back 11 years to 2004.

[Chart: A&E arrivals per quarter, 2004-15]
First we have the A&E Arrivals chart … about 3.4 million arrivals per quarter. The annual cycle is obvious … higher in the summer and falling in the winter. And when we compare the first five years with the last six years there has been a small increase of about 5% and that seems to associate with a change of political direction in 2010.

So over 11 years the average A&E demand has gone up … a bit … but only by about 5%.


[Chart: emergency admissions via A&E per quarter, 2004-15]
In stark contrast, the A&E arrivals that are admitted to hospital have risen relentlessly over the same 11-year period by about 50% … that is about 5% per annum … ten times the increase in arrivals … and with no obvious step in 2010. We can see the annual cycle too.  It is like a ratchet. Click click click.


But that does not make sense. Where are these extra admissions going to? We can only conclude that over 11 years we have progressively added more places to admit A&E patients into.  More space-capacity to store admitted patients … so we can stop the 4-hour clock perhaps? More emergency assessment units perhaps? Places to wait with the clock turned off perhaps? The charts imply that our threshold for emergency admission has been falling: Admission has become increasingly the ‘easier option’ for whatever reason.  So why is this happening? Do more patients need to be admitted?


In a recent empirical study we asked elderly patients about their experience of the emergency process … and we asked them just after they had been discharged, when it was still fresh in their memories. A worrying pattern emerged. Many said that they had been admitted despite saying that they did not want to be.  In other words, they did not willingly consent to admission … they were coerced.

This is anecdotal data so, by implication, it is wholly worthless … yes?  Perhaps from a statistical perspective but not from an emotional one.  It is a red petticoat being waved that should not be ignored.  Blissful ignorance comes from ignoring anecdotal stuff like this. Emotionally uncomfortable anecdotal stories. Ignore the early warning signs and suffer the potentially catastrophic consequences.


[Chart: A&E 4-hour target failures per quarter, 2004-15]
And here is the corresponding A&E 4-hour Target Failure chart.  Up to 2010 the imposed target was 98% success (i.e. 2% acceptable failure) and, after a bit of “encouragement” in 2004-5, this was actually achieved in some of the summer months (when the A&E demand was highest, remember).

But with a change of political direction in 2010 the “hated” 4-hour target was diluted down to 95% … so a 5% failure rate was now ‘acceptable’ politically, operationally … and clinically.

So it is no huge surprise that this is what was achieved … for a while at least.

In the period 2010-13 the primary care trusts (PCTs) were dissolved and replaced by clinical commissioning groups (CCGs) … the doctors were handed the ignition keys to the juggernaut that was already heading towards the cliff.

The charts suggest that the seeds were already well sown by 2010 for an evolving catastrophe that peaked last year; and the changes in 2010 and 2013 may have just pressed the accelerator pedal a bit harder. And if the trend continues it will be even worse this coming winter. Worse for patients and worse for staff and worse for commissioners and  worse for politicians. Lose lose lose lose.


So to summarise the data from the NHS England’s own website:

1. A&E arrivals have gone up 5% over 11 years.
2. Admissions from A&E have gone up 50% over 11 years.
3. Since lowering the threshold for acceptable A&E performance from 98% to 95% the system has become unstable and “fallen off the cliff” … but remember, a temporal association does not prove causation.

So what has triggered the developing catastrophe?

Well, it is important to appreciate that when a patient is admitted to hospital it represents an increase in workload for every part of the system that supports the flow through the hospital … not just the beds.  Beds represent space-capacity. They are just where patients are stored.  We are talking about flow-capacity; and that means people, consumables, equipment, data and cash.

So if we increase emergency admissions by 50% then, if nothing else changes, we will need to increase the flow-capacity by 50% and the space-capacity to store the work-in-progress by 50% too. This is called Little’s Law. It is a mathematically proven Law of Flow Physics. It is not negotiable.

So have we increased our flow-capacity and our space-capacity (and our costs) by 50%? I don’t know. That data is not so easy to trawl from the websites. It will be there though … somewhere.

What we have seen is an increase in bed occupancy (the red box on Monitor’s graphic above) … but not a 50% increase … that is impossible if the occupancy is already over 85%.  A hospital is like a rigid metal box … it cannot easily expand to accommodate a growing queue … so the inevitable result is an increase in the ‘pressure’ inside.  We have created an emergency care pressure cooker. Well, lots of them actually.

And that is exactly what the staff who work inside hospitals say it feels like.

And eventually the relentless pressure and daily hammering causes the system to start to weaken and fail, gradually at first then catastrophically … which is exactly what the NHS England data charts are showing.
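
The non-linear link between occupancy and ‘pressure’ is textbook queue science. Here is a minimal illustration using the M/M/1 queue formula for the average number waiting – a gross over-simplification of a hospital, but the shape of the curve is the point:

```python
# M/M/1 queue: average number waiting, Lq = rho^2 / (1 - rho),
# where rho is utilisation (think occupancy). A hospital is far more
# complex, but the explosive growth near 100% has the same shape.
for rho in (0.70, 0.85, 0.90, 0.95, 0.99):
    lq = rho ** 2 / (1 - rho)
    print(f"utilisation {rho:.0%}: average queue = {lq:5.1f}")
```

Nudging utilisation from 85% to 95% roughly quadruples the average queue, and at 99% it explodes. That is the pressure cooker.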


So what is the solution?  More beds?

Nope.  More beds will create more space and that will relieve the pressure … for a while … but it will not address the root cause of why we are admitting 50% more patients than we used to; and why we seem to need to increase the pressure inside our hospitals to squeeze the patients through the process and extrude them out of the various exit nozzles.

Those are the questions we need to have understandable and actionable answers to.

Q1: Why are we admitting 5% more of the same A&E arrivals each year rather than delivering what they need in 4 hours or less and returning them home? That is what the patients are asking for.

Q2: Why do we have to push patients through the in-hospital process rather than pulling them through? The staff are willing to work but not inside a pressure cooker.


A more sensible improvement strategy is to look at the flow processes within the hospital and ensure that all the steps and stages are pulling together to the agreed goals and plan for each patient. The clinical management plan that was decided when the patient was first seen in A&E. The intended outcome for each patient and the shortest and quickest path to achieving it.


Our target is not just a departure within 4 hours of arriving in A&E … it is a competent diagnosis (study) and an actionable clinical management plan (plan) within 4 hours of arriving; and then a process that is designed to deliver (do) it … for every patient. Right, first time, on time, in full and at a cost we can afford.

Q: Do we have that?
A: Nope.

Q: Is that within our gift to deliver?
A: Yup.

Q: So what is the reason we are not already doing it?
A: Good question.  Who in the NHS is trained how to do system-wide flow design like this?

There is a big bun-fight kicking off on the topic of 7-day working in the NHS.

The evidence is that there is a statistical association between in-hospital mortality of emergency admissions and the day of the week: weekends are more dangerous.

There are fewer staff working at weekends in hospitals than during the week … and delays and avoidable errors increase … so risk of harm increases.

The evidence also shows that significantly fewer patients are discharged at weekends.


So the ‘obvious’ solution is to have more staff on duty at weekends … which will cost more money.


Simple, obvious, linear and wrong.  Our intuition has tricked us … again!


Let us unravel this Gordian Knot with a bit of flow science and a thought experiment.

1. The evidence shows that there are fewer discharges at weekends … and so demonstrates lack of discharge flow-capacity. A discharge process is not a single step, there are many things that must flow in sync for a discharge to happen … and if any one of them is missing or delayed then the discharge does not happen or is delayed.  The weakest link effect.

2. The evidence shows that the number of unplanned admissions varies rather less across the week; which makes sense because they are unplanned.

3. So add those two together and at weekends we see hospitals filling up with unplanned admissions – not because the sick ones are arriving faster – but because the well ones are leaving slower.

4. The effect of this is that at weekends the queue of people in beds gets bigger … and they need looking after … which requires people and time and money.

5. So the number of staffed beds in a hospital must be enough to hold the biggest queue – not the average or some fudged version of the average like a 95th percentile.

6. So a hospital running a 5-day model needs more beds because there will be more variation in bed use and we do not want to run out of beds and delay the admission of the newest and sickest patients. The ones at most risk.

7. People do not get sicker because there is better availability of healthcare services – but saying that we need to add more unplanned-care flow-capacity at weekends implies that they do.  What is actually required is that the same amount of flow-resource that is currently available Mon-Fri is spread out Mon-Sun. The flow-capacity is designed to match the customer demand – not the convenience of the supplier.  And that means for all parts of the system required for unplanned patients to flow.  What, where and when. It costs the same.

8. Then what happens is that the variation in the maximum size of the queue of patients in the hospital will fall and empty beds will appear – as if by magic.  Empty beds that ensure there is always one for a new, sick, unplanned admission on any day of the week.

9. And empty beds that are never used … do not need to be staffed … so there is a quick way to reduce expensive agency staff costs.

So with a comprehensive 7-day flow-capacity model the system actually gets safer, less chaotic, higher quality and less expensive. All at the same time. Safety-Flow-Quality-Productivity.
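
For the curious, here is a minimal simulation of the thought experiment above (illustrative numbers, not real hospital data). Unplanned admissions arrive every day; patients become ready for discharge after a 5-day stay; the total weekly discharge flow-capacity is identical in both models and only its distribution across the week differs:

```python
def census_profile(weekend_discharges, weeks=12, per_day=20, los=5):
    census, carried = 0, 0   # occupied beds; dischargeable patients left waiting
    ready = {}               # day -> patients becoming ready for discharge that day
    occupancy = []
    for day in range(weeks * 7):
        census += per_day                       # today's unplanned admissions
        ready[day + los] = per_day              # they will be ready to leave later
        due = ready.pop(day, 0) + carried
        if weekend_discharges or day % 7 < 5:   # days 0-4 represent Mon-Fri
            census -= due                       # discharge everyone who is ready
            carried = 0
        else:
            carried = due                       # ready, but must wait for Monday
        occupancy.append(census)
    return occupancy[weeks * 7 // 2:]           # ignore the warm-up period

for label, weekend in (("7-day", True), ("5-day", False)):
    occ = census_profile(weekend)
    print(f"{label} model: average beds {sum(occ) / len(occ):.0f}, peak beds {max(occ)}")
```

With these numbers the 7-day model holds steady at 100 beds, while the 5-day model averages about 109 and peaks at 140 on Sunday night – it needs 40% more staffed beds to hold its weekend queue, even though its weekly flow-capacity is exactly the same. That is point 6 above in action.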

A recurring theme this week has been the concept of ‘quality’.

And it became quickly apparent that a clear definition of quality is often elusive.

Which seems to have led to a belief that quality is difficult to measure because it is subjective and has no precise definition.

The science of quality improvement is nearly 100 years old … and it was shown a long time ago, in 1924 in fact, that it is rather easy to measure quality – objectively and scientifically.

The objective measure of quality is called “yield”.

To measure yield we simply ask all our customers this question:

“Did your experience meet your expectation?”

If the answer is ‘Yes’ then we count this as OK; if it is ‘No’ then we count it as Not OK.

Yield is the number of OKs divided by the number of customers who answered.
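
As a concrete (and hypothetical) illustration, here is the yield calculation in a few lines of Python:

```python
# A minimal sketch of the yield calculation (hypothetical survey answers).

answers = ["Yes", "No", "Yes", "Yes", "No", "Yes", "Yes", "Yes"]

ok = answers.count("Yes")            # customers whose experience met expectation
yield_fraction = ok / len(answers)   # OKs divided by customers who answered

print(f"Yield = {ok}/{len(answers)} = {yield_fraction:.0%}")   # Yield = 6/8 = 75%
```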


But this tried-and-tested way of measuring quality has a design flaw:

Where does a customer get their expectation from?

Because if a customer has an unrealistically high expectation then whatever we do will be perceived by them as Not OK.

So to consistently deliver a high quality service (i.e. high yield) we need to be able to influence both the customer experience and the customer expectation.


If we set our sights on a worthwhile and realistic expectation and we broadcast that to our customers, then we also need a way of avoiding their disappointment … that our objective quality outcome audit may reveal.

One way to defuse disappointment is to set a low enough expectation … which is, sadly, the approach adopted by naysayers, complainers, cynics and doom-mongers. The inept.

That is not the path to either improvement or to excellence. It is the path to apathy.

A better approach is to set ourselves some internal standards of expectation and to check at each step if our work meets our own standard … and if it fails then we know we have some more work to do.

This commonly used approach to maintaining quality is called a check-and-correct design.

So let us explore the ramifications of this check-and-correct approach to quality.


Suppose the quality of the product or service that we deliver is influenced by many apparently random factors. And when we actually measure our yield we discover that the chance of getting a right-first-time outcome is about 50%.  This amounts to little more than a quality lottery and we could simulate that ‘random’ process by tossing a coin.

So to set a realistic expectation for future customers there are two further questions we need to answer:
1. How long can a typical customer expect to wait for our product or service?
2. How much can a typical customer expect to pay for our product or service?

It is not immediately and intuitively obvious what the answers to these questions are … so we need to perform an experiment to find out.

Suppose we have five customers who require our product or service … we could represent them as Post It Notes; and suppose we have a clock … we could measure how long the process is taking; and suppose we have our coin … we can simulate the yield of the step; … and suppose we do not start the lead time clock until we start the work for each customer.

We now have the necessary and sufficient components to assemble a simple simulation model of our system … a model that will give us realistic answers to our questions.

So let us see what happens … just click the ‘Start Game’ button.


It is worth running this exercise about a dozen times and recording the data for each run … then plotting the results on a time-series chart.

The data to plot is the make-time (which is the time displayed on the top left) and the cost (which is displayed top middle).

The make-time is the time from starting the first game to completing the last task.

The cost is the number of coin tosses we needed to do to deliver all work to the required standard.
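
If you do not have the interactive game to hand, here is a minimal sketch of the same experiment in Python. One assumption is mine: each work attempt takes 4 time units, so that five right-first-time tasks give the best-case make-time of 20 described below:

```python
import random

CYCLE_TIME = 4   # assumed time units per work attempt, so that five
                 # right-first-time tasks give a best-case make-time of 20

def run_game(customers=5, p_right_first_time=0.5):
    """Serve each customer in turn. After each attempt we 'toss the coin':
    tails means the work failed the check and must be done again.
    Returns (make_time, cost)."""
    attempts = 0
    for _ in range(customers):
        while True:
            attempts += 1                       # every attempt costs one coin toss
            if random.random() < p_right_first_time:
                break                           # passed the check - next customer
    return attempts * CYCLE_TIME, attempts      # (make-time, cost)

for run in range(1, 13):                        # a dozen runs, as suggested above
    make_time, cost = run_game()
    print(f"Run {run:2}: make-time = {make_time:3}, cost = {cost:2}")
```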

And here are the charts from my dozen runs (yours will be different).

[Chart: make-time for each of the dozen runs]

[Chart: cost for each of the dozen runs]

The variation from run to run is obvious; as is the correlation between a long make-time and a high cost.

The charts also answer our two questions … a make-time of up to 90 would not be exceptional, and an average cost of 10 implies that 10 is the minimum price we need to charge in order to stay in business.

Our customers are waiting while we check-and-correct our own errors and we are expecting them to pay for the extra work!

In the NHS we have a name for this low-quality high-cost design: Payment By Results.


The charts also show us what is possible … a make-time of 20 and a cost of 5.

That happened when, purely by chance, we tossed five heads in a row in the Quality Lottery.

So with this insight we could consider how we might increase the probability of ‘throwing a head’ i.e. doing the work right-first-time … because we can see from our charts what would happen.

The improved quality and reduced cost that come from changing ourselves and our system to remove the root causes of our errors.

Quality Improvement-by-Design.

That is something worth learning how to do.

And can we honestly justify not doing it?

It was time for Bob and Leslie’s regular coaching session. Bob was already online when Leslie dialled in to the teleconference.

<Leslie> Hi Bob, sorry I am a bit late.

<Bob> No problem Leslie. What aspect of improvement science shall we explore today?

<Leslie> Well, I’ve been working through the Safety-Flow-Quality-Productivity cycle in my project and everything is going really well.  The team are really starting to put the bits of the jigsaw together and can see how the synergy works.

<Bob> Excellent. And I assume they can see the sources of antagonism too.

<Leslie> Yes, indeed! I am now up to the point of considering productivity and I know it was introduced at the end of the Foundation course but only very briefly.

<Bob> Yes, productivity was described as a system metric. A ratio of a stream metric and a stage metric … what we get out of the streams divided by what we put into the stages.  That is a very generic definition.

<Leslie> Yes, and that I think is my problem. It is too generic and I get it confused with concepts like efficiency.  Are they the same thing?

<Bob> A very good question and the short answer is “No”, but we need to explore that in more depth.  Many people confuse efficiency and productivity and I believe that is because we learn the meaning of words from the context that we see them used in. If others use the words imprecisely then it generates discussion, antagonism and confusion and we are left with the impression that it is a ‘difficult’ subject.  The reality is that it is not difficult when we use the words in a valid way.

<Leslie> OK. That reassures me a bit … so what is the definition of efficiency?

<Bob> Efficiency is a stream metric – it is the minimum cost of the resources required to complete one task divided by the actual cost of the resources used to complete one task.

<Leslie> Um.  OK … so how does time come into that?

<Bob> Cost is a generic concept … it can refer to time, money and lots of other things.  If we stick to time and money then we know that if we have to employ ‘people’ then time will cost money, because people need money to buy the essential stuff that they need for survival. Water, food, clothes, shelter and so on.

<Leslie> So we could use efficiency in terms of resource-time required to complete a task?

<Bob> Yes. That is a very useful way of looking at it.

<Leslie> So how is productivity different? Completed tasks out divided by cash in to pay for resource time would be a productivity metric. It looks the same.

<Bob> Does it?  The definition of efficiency is possible cost divided by actual cost. It is not the same as our definition of system productivity.

<Leslie> Ah yes, I see. So do others define productivity the same way?

<Bob> Try looking it up on Wikipedia …

<Leslie> OK … here we go …

“Productivity is an average measure of the efficiency of production. It can be expressed as the ratio of output to inputs used in the production process, i.e. output per unit of input”.

Now that is really confusing!  It looks like efficiency and productivity are the same. Let me see what the Wikipedia definition of efficiency is …

“Efficiency is the (often measurable) ability to avoid wasting materials, energy, efforts, money, and time in doing something or in producing a desired result”.

But that is closer to your definition of efficiency – the actual cost is the minimum cost plus the cost of waste.

<Bob> Yes.  I think you are starting to see where the confusion arises.  And this is because there is a critical piece of the jigsaw missing.

<Leslie> Oh …. and what is that?

<Bob> Worth.

<Leslie> Eh?

<Bob> Efficiency has nothing to do with whether the output of the stream has any worth.  I can produce a worthless product with low waste … in other words very efficiently.  And what if we have the situation where the output of my process is actually harmful?  The more efficiently I use my resources the more harm I will cause from a fixed amount of resource … and in that situation it is actually safer to have a very inefficient process!

<Leslie> Wow!  That really hits the nail on the head … and the implications are … profound.  Efficiency is objective and relates only to flow … and between flow and productivity we have to cross the Safety-Quality line. Productivity also includes the subjective concept of worth or value. That all makes complete sense now. A productive system is a subjectively and objectively win-win-win design.

<Bob> Yup.  Get the safety, flow and quality perspectives of the design in synergy and productivity will sky-rocket. It is called a Fit-4-Purpose design.
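
Here is a toy numerical sketch (the figures are hypothetical) of the distinction Bob and Leslie have arrived at – efficiency compares actual cost with the minimum possible cost, while productivity compares the worth of the output with the cost of producing it:

```python
# Hypothetical figures to illustrate efficiency versus productivity.

minimum_cost = 10.0    # ideal resource cost to complete one task
actual_cost  = 16.0    # what we actually spend (minimum + waste)
worth        = 12.0    # value of the output to the customer

efficiency   = minimum_cost / actual_cost   # a stream metric: 0.63
productivity = worth / actual_cost          # a system metric: 0.75

print(f"Efficiency   = {efficiency:.2f}")
print(f"Productivity = {productivity:.2f}")

# Doubling the worth of the output doubles productivity but leaves
# efficiency untouched; and a perfectly efficient process making a
# worthless (worth = 0) product has a productivity of zero.
```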

Many organisations proclaim that their mission is to achieve excellence but then proceed to deliver mediocre performance.

Why is this?

It is certainly not from lack of purpose, passion or people.

So the flaw must lie somewhere in the process.


The clue lies in how we measure performance … and to see the collective mindset behind the design of the performance measurement system we just need to examine the key performance indicators or KPIs.

Do they measure failure or success?


Let us look at some from the NHS … hospital mortality, hospital acquired infections, never events, 4-hour A&E breaches, cancer wait breaches, 18-week breaches, and so on.

In every case the metric reported is a failure metric. Not a success metric.

And the focus of action is getting away from failure.

Damage mitigation, damage limitation and damage compensation.


So we have the answer to our question: we know we are doing a good job when we are not failing.

But are we?

When we are not failing we are not doing a bad job … is that the same as doing a good job?

Q: Does excellence = not excrement?

A: No. There is something between these extremes.

The succeed-or-fail dichotomy is a distorting simplification created by applying an arbitrary threshold to a continuous measure of performance.


And how, specifically, have we designed our current system to avoid failure?

Usually by imposing an arbitrary target connected to a punitive reaction to failure. Management by fear.

This generates punishment-avoidance and back-covering behaviour which is manifest as a lot of repeated checking and correcting of the inevitable errors that we find.  A lot of extra work that requires extra time and that requires extra money.

So while an arbitrary-target-driven-check-and-correct design may avoid failing on safety, the additional cost may cause us to then fail on financial viability.

Out of the frying pan and into the fire.

No wonder Governance and Finance come into conflict!

And if we do manage to pull off an uneasy compromise … then what level of quality are we achieving?


Studies show that if we take a random sample of 100 people from the pool of ‘disappointed by their experience’ and we ask if they are prepared to complain, then only 5% will do so.

So if we use complaints as our improvement feedback loop and we react to that and make changes that eliminate these complaints then what do we get? Excellence?

Nope.

We get what we designed … just good enough to avoid the 5% of complaints but not the 95% of disappointment.

We get mediocrity.


And what do we do then?

We start measuring ‘customer satisfaction’ … which is actually asking the question ‘did your experience meet your expectation?’

And if we find that satisfaction scores are disappointingly low then how do we improve them?

We have two choices: improve the experience or reduce the expectation.

But as we are very busy doing the necessary checking-and-correcting then our path of least resistance to greater satisfaction is … to lower expectations.

And we do that by donning the black hat of the pessimist and laying out the risks and dangers.

And by doing that we generate anxiety and fear.  Which was not the intended outcome.


Our mission statement proclaims ‘trusted to achieve excellence’ not ‘designed to deliver mediocrity’.

But mediocrity is what the evidence says we are delivering. Just good enough to avoid a smack from the Regulators.

And if we are honest with ourselves then we are forced to conclude that:

A design that uses failure metrics as the primary feedback loop can achieve no better than mediocrity.


So if we choose to achieve excellence then we need a better feedback design.

We need a design that uses success metrics as the primary feedback loop and we use failure metrics only in safety critical contexts.

And the ideal people to specify the success metrics are those who feel the benefit directly and immediately … the patients who receive care and the staff who give it.

Ask a patient what they want and they do not say “To be treated in less than 18 weeks”.  In fact I have yet to meet a patient who has even heard of the 18-week target!

A patient will say ‘I want to know what is wrong, what can be done, when it can be done, who will do it, what do I need to do, and what can I expect to be the outcome’.

Do we measure any of that?

Do we measure accuracy of diagnosis? Do we measure use of best evidenced practice? Do we know the possible delivery time (not the actual)? Do we inform patients of what they can expect to happen? Do we know what they can expect to happen? Do we measure outcome for every patient? Do we feed that back continuously and learn from it?

Nope.


So …. if we choose and commit to delivering excellence then we will need to start measuring-4-success and feeding what we see back to those who deliver the care.

Warts and all.

So that we know when we are doing a good job, and we know where to focus further improvement effort.

And if we abdicate that commitment and choose to deliver mediocrity-by-default then we are the engineers of our own chaos and despair.

We have the choice.

We just need to make it.

This headline in the newspaper today caught my eye.

Reading the rest of the story triggered a strong emotional response: anger.

My inner chimp was not happy. Not happy at all.

So I took my chimp for a walk and we had a long chat and this is the story that emerged.

The first trigger was the eye-watering fact that the NHS is facing something like a £26 billion litigation cost.  That is about a quarter of the total NHS annual budget!

The second was the fact that the litigation bill has increased by over £3 billion in the last year alone.

The third was that the extra money will just fall into a bottomless pit – the pockets of legal experts – not to where it is intended, to support overworked and demoralised front-line NHS staff. GPs, nurses, AHPs, consultants … the ones that deliver care.

That is why my chimp was so upset.  And it sounded like righteous indignation rather than irrational fear.


So what is the root cause of this massive bill? A more litigious society? Ambulance chasing lawyers trying to make a living? Dishonest people trying to make a quick buck out of a tax-funded system that cannot defend itself?

And what is the plan to reduce this cost?

Well in the article there are three parts to this:
“apologise and learn when you’re wrong,  explain and vigorously defend when we’re right, view court as a last resort.”

This sounds very plausible but to achieve it requires knowing when we are wrong or right.

How do we know?


Generally we all think we are right until we are proved wrong.

It is the way our brains are wired. We are more sure about our ‘rightness’ than the evidence suggests is justified. We are naturally optimistic about our view of ourselves.

So to be proved wrong is emotionally painful and to do it we need:
1) To make a mistake.
2) For that mistake to lead to psychological or physical harm.
3) For the harm to be identified.
4) For the cause of the harm to be traced back to the mistake we made.
5) For the evidence to be used to hold us to account (to apologise and learn).

And that is all hunky-dory when we are individually inept and we make avoidable mistakes.

But what happens when the harm is the outcome of a combination of actions that individually are harmless but which together are not?  What if the contributory actions are sensible and are enforced as policies that we dutifully follow to the letter?

Who is held to account?  Who needs to apologise? Who needs to learn?  Someone? Anyone? Everyone? No one?

The person who wrote the policy?  The person who commissioned the policy to be written? The person who administers the policy? The person who follows the policy?

How can that happen if the policies are individually harmless but collectively lethal?


The error here is one of a different sort.

It is called an ‘error of omission’.  The harm is caused by what we did not do.  And notice the ‘we’.

What we did not do is to check the impact on others of the policies that we write for ourselves.

Example:

The governance department of a large hospital designs safety policies that if not followed lead to disciplinary action and possible dismissal.  That sounds like a reasonable way to weed out the ‘bad apples’ and the policies are adhered to.

At the same time the operations department designs flow policies (such as maximum waiting time targets and minimum resource utilisation) that if not followed lead to disciplinary action and possible dismissal.  That also sounds like a reasonable way to weed out the layabouts whose idleness causes queues and delays, and the policies are adhered to.

And at the same time the finance department designs fiscal policies (such as fixed budgets and cost improvement targets) that if not followed lead to disciplinary action and possible dismissal. Again, that sounds like a reasonable way to weed out money wasters and the policies are adhered to.

What is the combined effect? The multiple safety checks take more time to complete, which puts extra workload on resources and forces up utilisation. As the budget ceiling is lowered the financial and operational pressures build, the system heats up, stress increases, corners are cut, errors slip through the safety checks. More safety checks are added and the already over-worked staff are forced into an impossible position.  Chaos ensues … more mistakes are made … patients are harmed and justifiably seek compensation by litigation.  Everyone loses (except perhaps the lawyers).


So why was my inner chimp really so unhappy?

Because none of this is necessary. This scenario is avoidable.

Reducing the pain of complaints and the cost of litigation requires setting realistic expectations to avoid disappointment and it requires not creating harm in the first place.

That implies creating healthcare systems that are inherently safe, not made not-unsafe by inspection-and-correction.

And it implies measuring and sharing intended and actual outcomes, not just compliance with policies and rates of failure to meet arbitrary and conflicting targets.

So if that is all possible and all that is required then why are we not doing it?

Simple. We never learned how. We never knew it is possible.

[Bing bong]. The sound heralded Leslie logging on to the weekly Webex coaching session with Bob, an experienced Improvement Science Practitioner.

<Bob> Good afternoon Leslie.  How has your week been and what topic shall we explore today?

<Leslie> Hi Bob. Well in a nutshell, the bit of the system that I have control over feels like a fragile oasis of calm in a perpetual desert of chaos.  It is hard work keeping the oasis clear of the toxic sand that blows in!

<Bob> A compelling metaphor. I can just picture it.  Maintaining order amidst chaos requires energy. So what would you like to talk about?

<Leslie> Well, I have a small shoal of FISHees who I am guiding through the foundation shallows and they are getting stuck on Little’s Law.  I confess I am not very good at explaining it and that suggests to me that I do not really understand it well enough either.

<Bob> OK. So shall we link those two themes – chaos and Little’s Law?

<Leslie> That sounds like an excellent plan!

<Bob> OK. So let us refresh the foundation knowledge. What is Little’s Law?

<Leslie> It is a fundamental Law of process physics that relates flow, lead time and work in progress.

<Bob> Good. And specifically?

<Leslie> Average lead time is equal to the average flow multiplied by the average work in progress.

<Bob> Yes. And what are the units of flow in your equation?

<Leslie> Ah yes! That is a trap for the unwary. We need to be clear how we express flow. The usual way is to state it as a number of tasks in a defined period of time, such as patients admitted per day.  In Little’s Law the convention is to use the inverse of that, which is the average interval between consecutive flow events. This is an unfamiliar way to present flow to most people.

<Bob> Good. And what is the reason that we use the ‘interval between events’ form?

<Leslie> Because it is easier to compare it with two critically important flow metrics … the takt time and the cycle time.

<Bob> And what is the takt time?

<Leslie> It is the average interval between new tasks arriving … the average demand interval.

<Bob> And the cycle time?

<Leslie> It is the shortest average interval between tasks departing … and is determined by the design of the flow constraint step.

<Bob> Excellent. And what is the essence of a stable flow design?

<Leslie> That the cycle time is less than the takt time.

<Bob> Why less than? Why not equal to?

<Leslie> Because all realistic systems need some flow resilience to exhibit stable and predictable-within-limits behaviour.
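
A minimal discrete-event sketch (with hypothetical numbers, not taken from the dialogue) shows why: when arrivals vary, a cycle time merely equal to the takt time is not enough – the queue and the waiting keep growing.

```python
import random

def average_wait(takt, cycle, tasks=10_000, seed=1):
    """Single-step queue: tasks arrive on average every `takt` minutes
    (random intervals); the constraint step needs `cycle` minutes each."""
    random.seed(seed)
    arrival = free_at = 0.0
    total_wait = 0.0
    for _ in range(tasks):
        arrival += random.expovariate(1 / takt)  # next random arrival
        start = max(arrival, free_at)            # wait if the step is busy
        total_wait += start - arrival
        free_at = start + cycle
    return total_wait / tasks

for cycle in (8, 10, 11):                        # takt is fixed at 10 minutes
    print(f"cycle = {cycle:2} vs takt = 10: average wait {average_wait(10, cycle):8.1f} min")
```

With cycle < takt the average wait settles to a modest, stable value; at cycle = takt it drifts ever upwards; above it, the queue grows without limit.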

<Bob> Excellent. Now describe the design requirements for creating chronically chaotic system behaviour.

<Leslie> This is a bit trickier to explain. The essence is that for chronically chaotic behaviour to happen then there must be two feedback loops – a destabilising loop and a stabilising loop.  The destabilising loop creates the chaos, the stabilising loop ensures it is chronic.

<Bob> Good … so can you give me an example of a destabilising feedback loop?

<Leslie> A common one that I see is when there is a long delay between detecting a safety risk and the diagnosis, decision and corrective action.  The risks are often transitory so if the corrective action arrives long after the root cause has gone away then it can actually destabilise the process and paradoxically increase the risk of harm.

<Bob> Can you give me an example?

<Leslie> Yes. Suppose a safety risk is exposed by a near miss.  A delay in communicating the niggle and performing a root cause analysis means that the specific combination of factors that led to the near miss has gone. The holes in the Swiss cheese are not static … they move about in the chaos.  So the action that follows the accumulation of many undiagnosed near misses is usually the non-specific mantra of adding yet another safety-check to the already burgeoning check-list. The longer check-list takes more time to do, and is often repeated many times, so the whole flow slows down, queues grow bigger, waiting times get longer and, as pressure comes from the delivery targets, corners start being cut and new near misses start to occur; on top of the other ones. So more checks are added, and so on.

<Bob> An excellent example! And what is the outcome?

<Leslie> Chronic chaos which is more dangerous, more disordered and more expensive. Lose lose lose.

<Bob> And how do the people feel who work in the system?

<Leslie> Chronically naffed off! Angry. Demotivated. Cynical.

<Bob> And those feelings are the key symptoms.  Niggles are not only symptoms of poor process design, they are also symptoms of a much deeper problem: a violation of values.

<Leslie> I get the first bit about poor design; but what is that second bit about values?

<Bob> We all have a set of values that we learned when we were very young and that have been shaped by life experience.  They are our source of emotional energy, and our guiding lights in an uncertain world. Our internal unconscious check-list.  So when one of our values is violated we know, because we feel angry. How that anger is directed varies from person to person … some internalise it and some externalise it.

<Leslie> OK. That explains the commonest emotion that people report when they feel a niggle … frustration which is the same as anger.

<Bob> Yes.  And we reveal our values by uncovering the specific root causes of our niggles.  For example if I value ‘Hard Work’ then I will be niggled by laziness. If you value ‘Experimentation’ then you may be niggled by ‘Rigid Rules’.  If someone else values ‘Safety’ then they may value ‘Rigid Rules’ and be niggled by ‘Innovation’, which they interpret as risky.

<Leslie> Ahhhh! Yes, I see.  This explains why there is so much impassioned discussion when we do a 4N Chart! But if this behaviour is so innate then it must be impossible to resolve!

<Bob> Understanding how our values motivate us actually helps a lot, because we are naturally attracted to others who share the same values – we have learned that it reduces conflict and stress and improves our chance of survival. We are tribal, and tribes share the same values.

<Leslie> Is that why different departments appear to have different cultures and behaviours, and why they fight each other?

<Bob> It is one factor in the Silo Wars that are a characteristic of some large organisations.  But Silo Wars are not inevitable.

<Leslie> So how are they avoided?

<Bob> By everyone knowing what the common purpose of the organisation is, and by being clear about which values are aligned with that purpose.

<Leslie> So in the healthcare context one purpose is avoidance of harm … primum non nocere … so ‘safety’ is a core value.  Which implies anything that is felt to be unsafe generates niggles and well-intended but potentially self-destructive negative behaviour.

<Bob> Indeed so, as you described very well.

<Leslie> So how does all this link to Little’s Law?

<Bob> Let us go back to the foundation knowledge. What are the four interdependent dimensions of system improvement?

<Leslie> Safety, Flow, Quality and Productivity.

<Bob> And one measure of productivity is profit.  So organisations that have only short-term profit as their primary goal are at risk of making poor long-term safety, flow and quality decisions.

<Leslie> And flow is the key dimension – because profit is just the difference between two cash flows: income and expenses.

<Bob> Exactly. One way or another it all comes down to flow … and Little’s Law is a fundamental Law of flow physics. So if you want all the other outcomes … without the emotionally painful disorder and chaos … then you cannot avoid learning to use Little’s Law.

<Leslie> Wow!  That is a profound insight.  I will need to lie down in a darkened room and meditate on that!

<Bob> An oasis of calm is the perfect place to pause, rest and reflect.

“Primum non nocere” is Latin for “First do no harm”.

It is a warning mantra that has been repeated by doctors for thousands of years, and for good reason.

Doctors can be bad for your health.

I am not referring to the rare case where the doctor deliberately causes harm.  Such people are criminals and deserve to be in prison.

I am referring to the much more frequent situation where the doctor has no intention to cause harm – but harm is the outcome anyway.

Very often the risk of harm is unavoidable. Healthcare is a high risk business. Seriously unwell patients can be very unstable and very unpredictable.  Heroic efforts to do whatever can be done can result in unintended harm and we have to accept those risks. It is the nature of the work.  Much of the judgement in healthcare is balancing benefit with risk on a patient by patient basis. It is not an exact science. It requires wisdom, judgement, training and experience. It feels more like an art than a science.

The focus of this essay is not the above. It is on unintentionally causing avoidable harm.

Or rather unintentionally not preventing avoidable harm which is not quite the same thing.

Safety means prevention of avoidable harm. A safe system is one that does that. There is no evidence of harm to collect. A safe system does not cause harm. Never events never happen.

Safe systems are designed to be safe.  The root causes of harm are deliberately designed out one way or another.  But it is not always easy because to do that we need to understand the cause-and-effect relationships that lead to unintended harm.  Very often we do not.


In 1847 a doctor called Ignaz Semmelweis made a very important discovery. He discovered that if the doctors and medical students washed their hands in disinfectant when they entered the labour ward, then the number of mothers and babies who died from infection was reduced.

And the number dropped a lot.

It fell from an annual average of 10% to less than 2%!  In really bad months the rate had been 30%.

The chart below shows the actual data plotted as a time-series chart. The yellow flag in 1848 is just after Semmelweis enforced a standard practice of hand-washing.

[Chart: Vienna maternal mortality, 1785–1848]

Semmelweis did not know the mechanism though. This was not a carefully designed randomised controlled trial (RCT). He was desperate. And he was desperate because this horrendous waste of young lives was only happening on the doctors’ ward.  On the nurses’ ward, which was just across the corridor, the maternal mortality was less than 2%.

The hospital authorities explained it away as ‘bad air’ from outside. That was the prevailing belief at the time. Unavoidable. A risk that had to be just accepted.

Semmelweis could not do a randomised controlled trial because they were not invented until a century later.

And Semmelweis suspected that the difference between the mortality on the nurses and the doctors wards was something to do with the Mortuary. Only the doctors performed the post-mortems and the practice of teaching anatomy to medical students using post-mortem dissection was an innovation pioneered in Vienna in 1823 (the first yellow flag on the chart above). But Semmelweis did not have this data in 1847.  He collated it later and did not publish it until 1861.

What Semmelweis demonstrated was that the unintended and avoidable deaths were caused by ignorance of the mechanism of how microorganisms cause disease. We know that now. He did not.

It would be another 20 years before Louis Pasteur demonstrated the mechanism using the famous experiment with the swan neck flask. Pasteur did not discover microorganisms;  he proved that they did not appear spontaneously in decaying matter as was believed. He proved that by killing the bugs by boiling, the broth in the flask stayed fresh even though it was exposed to the air. That was a big shock but it was a simple and repeatable experiment. He had a mechanism. He was believed. Germ theory was born. A Scottish surgeon called Joseph Lister read of this discovery and surgical antisepsis was born.

Semmelweis suspected that some ‘agent’ may have been unwittingly transported from the dead bodies to the live mothers and babies on the hands of the doctors.  It was a deeply shocking suggestion that the doctors were unwittingly killing their patients.

The other doctors did not take this suggestion well. Not well at all. They went into denial. They discounted the message and they discharged the messenger. Semmelweis never worked in Vienna again. He went back to Hungary and repeated the experiment. It worked.


Even today the message that healthcare practitioners can unwittingly bring avoidable harm to their patients is disturbing. We still seek solace in denial.

Hospital acquired infections (HAI) are a common cause of harm and many are avoidable using simple, cheap and effective measures such as hand-washing.

The harm does not come from what we do. It comes from what we do not do. It happens when we omit to follow the simple safety measures that have been proven to work. Scientifically. Statistically significantly. Understood and avoidable errors of omission.


So how is this “statistically significant scientific proof” acquired?

By doing experiments. Just like the one Ignaz Semmelweis conducted. But the improvement he showed was so large that it did not need statistical analysis to validate it.  And anyway such analysis tools were not available in 1847. If they had been he might have had more success influencing his peers. And if he had achieved that goal then thousands, if not millions, of deaths from hospital acquired infections may have been prevented.  With the clarity of hindsight we now know this harm was avoidable.

No. The problem we have now is that the improvement that follows a single intervention is not very large. And when the causal mechanisms are multi-factorial we need more than one intervention to achieve the improvement we want. The big reduction in avoidable harm. How do we do that scientifically and safely?


About 20% of hospital acquired infections occur after surgical operations.

We have learned much since 1847 and we have designed much safer surgical systems and processes. Joseph Lister ushered in the era of safe surgery, and much has happened since.

We routinely use carefully designed, ultra-clean operating theatres, sterilized surgical instruments, gloves and gowns, and aseptic techniques – all to reduce bacterial contamination from outside.

But surgical site infections (SSIs) are still commonplace. Studies show that 5% of patients on average will suffer this complication. Some procedures carry a much higher risk than others, despite the precautions we take.  And many surgeons assume that this risk must just be accepted.

Others have tried to understand the mechanism of SSI and their research shows that the source of the infections is the patients themselves. We all carry a ‘bacterial flora’ and normally that is no problem. Our natural defence – our skin – is enough.  But when that biological barrier is deliberately breached during a surgical operation then we have a problem. The bugs get in and cause mischief. They cause surgical site infections.

So we have done more research to test interventions to prevent this harm. Each intervention has been subject to well-designed, carefully-conducted, statistically-valid and very expensive randomized controlled trials.  And the results are often equivocal. So we repeat the trials – bigger, better controlled trials. But the effects of the individual interventions are small and they easily get lost in the noise. So we pool the results of many RCTs in what is called a ‘meta-analysis’ and the answer from that is very often ‘not proven’ – either way.  So individual surgeons are left to make the judgement call and not surprisingly there is wide variation in practice.  So is this the best that medical science can do?

No. There is another way. What we can do is pool all the learning from all the trials and design a multi-faceted intervention. A bundle of care. And the idea of a bundle is that the separate small effects will add or even synergise to create one big effect.  We are not so much interested in the mechanism as the outcome. Just like Ignaz Semmelweis.

And we can now do something else. We can test our bundle of care using statistically robust tools that do not require an RCT.  They are just as statistically valid as an RCT but of a different design.

And the appropriate tool for this is to measure the time interval between the adverse events – and then to plot this continuous metric as a time-series chart.

But we must be disciplined. First we must establish the baseline average interval and then we introduce our bundle and then we just keep measuring the intervals.

If our bundle works then the interval between the adverse events gets longer – and we can easily prove that using our time-series chart. The longer the interval the more ‘proof’ we have.  In fact we can even predict how long we need to observe to prove that ‘no events’ is a statistically significant improvement. That is an elegant and efficient design.
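
One way to put a number on ‘surprisingly long’ is to assume the intervals between independent rare events are roughly exponentially distributed. That assumption, and the baseline data below, are mine – a sketch of the idea, not necessarily the exact method used in the example that follows:

```python
import math

# Hypothetical baseline intervals (days) between adverse events,
# chosen to give a 14-day average like the chart described below.
baseline = [9, 21, 5, 17, 12, 20]
mean_interval = sum(baseline) / len(baseline)            # 14 days

# For independent rare events, P(interval > t) = exp(-t / mean).
# Choose how surprised we want to be before claiming improvement:
p = 0.01
threshold = -mean_interval * math.log(p)                 # about 64 days

print(f"Baseline average interval: {mean_interval:.0f} days")
print(f"An interval beyond {threshold:.0f} days is evidence of improvement")
```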


Here is a real and recent example.

The time-series chart below shows the interval in days between surgical site infections following routine hernia surgery. These are not life-threatening complications. They rarely require re-admission or re-operation. But they are disruptive for patients. They cause pain, require treatment with antibiotics, and they delay recovery and return to normal activities. So we would like to avoid them if possible.

[Chart: interval in days between surgical site infections following routine hernia surgery, before and after the care bundle]

The green and red lines show the baseline period. The green line says that the average interval between SSIs is 14 days.  The red line says that an interval of more than about 60 days would be surprisingly long: valid statistical evidence of an improvement.  The end of the green and red lines indicates when the intervention was made: when the evidence-based designer care bundle was adopted, together with the discipline of applying it to every patient. No judgement. No variation.

The chart tells the story. No complicated statistical analysis is required. It shows a statistically significant improvement.  And the SSI rate fell by over 80%. That is a big improvement.

We still do not know how the care bundle works. We do not know which of the seven simultaneous simple and low-cost interventions we chose are the most important or even if they work independently or in synergy.  Knowledge of the mechanism was not our goal.

Our goal was to improve outcomes for our patients – to reduce avoidable harm – and that has been achieved. The evidence is clear.

That is Improvement Science in action.

And to read the full account of this example of the Science of Improvement please go to:

http://www.journalofimprovementscience.net

It is essay number 18.

And avoid another error of omission. Do not omit to share this message – it is important.

[Dring] Bob’s laptop signaled the arrival of Leslie for their regular ISP remote coaching session.

<Bob> Hi Leslie. Thanks for emailing me with a long list of things to choose from. It looks like you have been having some challenging conversations.

<Leslie> Hi Bob. Yes indeed! The deepening gloom and the last few blog topics seem to be polarising opinion. Some are claiming it is all hopeless and others, perhaps out of desperation, are trying the FISH stuff for themselves and discovering that it works.  The ‘What Ifs’ are engaged in a war of words with the ‘Yes Buts’.

<Bob> I like your metaphor! Where would you like to start on the long list of topics?

<Leslie> That is my problem. I do not know where to start. They all look equally important.

<Bob> So, first we need a way to prioritise the topics to get the horse-before-the-cart.

<Leslie> Sounds like a good plan to me!

<Bob> One of the problems with the traditional improvement approaches is that they seem to start at the most difficult point. They focus on ‘quality’ first – and to be fair that has been the mantra from gurus like W. E. Deming. ‘Quality Improvement’ is the Holy Grail.

<Leslie> But quality IS important … are you saying they are wrong?

<Bob> Not at all. I am saying that it is not the place to start … it is actually the third step.

<Leslie> So what is the first step?

<Bob> Safety. Eliminating avoidable harm. Primum Non Nocere. The NoNos. The Never Events. The stuff that generates the most fear for everyone. The fear of failure.

<Leslie> You mean having a service that we can trust not to harm us unnecessarily?

<Bob> Yes. It is not a good idea to make an unsafe design more efficient – it will deliver even more cumulative harm!

<Leslie> OK. That makes perfect sense to me. So how do we do that?

<Bob> The specific method matters less than you might think.  Well-designed and thoroughly field-tested checklists have been proven to be very effective in the ‘ultra-safe’ industries like aerospace and nuclear.

<Leslie> OK. Something like the WHO Safe Surgery Checklist?

<Bob> Yes, that is a good example – and it is well worth reading Atul Gawande’s book about how that happened – “The Checklist Manifesto“.  Gawande is a surgeon who had published a lot on improvement and even so was quite skeptical that something as simple as a checklist could possibly work in the complex world of surgery. In his book he describes a number of personal ‘Ah Ha!’ moments that illustrate a phenomenon that I call Jiggling.

<Leslie> OK. I have made a note to read Checklist Manifesto and I am curious to learn more about Jiggling – but can we stick to the point? Does quality come after safety?

<Bob> Yes, but not immediately after. As I said, Quality is the third step.

<Leslie> So what is the second one?

<Bob> Flow.

There was a long pause – and just as Bob was about to check that the connection had not been lost – Leslie spoke.

<Leslie> But none of the Improvement Schools teach basic flow science.  They all focus on quality, waste and variation!

<Bob> I know. And attempting to improve quality before improving flow is like papering the walls before doing the plastering.  Quality cannot grow in a chaotic context. The flow must be smooth before that. And the fear of harm must be removed first.

<Leslie> So the ‘Improving Quality through Leadership‘ bandwagon that everyone is jumping on will not work?

<Bob> Well that depends on what the ‘Leaders’ are doing. If they are leading the way to learning how to design-for-safety and then design-for-flow then the bandwagon might be a wise choice. If they are only facilitating collaborative agreement and group-think then they may be making an unsafe and ineffective system more efficient which will steer it over the edge into faster decline.

<Leslie> So, if we can stabilise safety using checklists, do we focus on flow next?

<Bob> Yup.

<Leslie> OK. That makes a lot of sense to me. So what is Jiggling?

<Bob> This is Jiggling. This conversation.

<Leslie> Ah, I see. I am jiggling my understanding through a series of ‘nudges’ from you.

<Bob> Yes. And when the learning cogs are a bit rusty, some Improvement Science Oil and a bit of Jiggling is more effective and much safer than whacking the caveman wetware with a big emotional hammer.

<Leslie> Well the conversation has certainly jiggled Safety-Flow-Quality-and-Productivity into a sensible order for me. That has helped a lot. I will sort my to-do list into that order and start at the beginning. Let me see. I have a plan for safety, now I can focus on flow. Here is my top flow niggle. How do I design the resource capacity I need to ensure the flow is smooth and the waiting times are short enough to avoid ‘persecution’ by the Target Time Police?

<Bob> An excellent question! I will send you the first ISP Brainteaser that will nudge us towards an answer to that question.

<Leslie> I am ready and waiting to have my brain-teased and my niggles-nudged!

Systems are built from intersecting streams of work called processes.

This iconic image of the London Underground shows a system map – a set of intersecting transport streams.

Each stream links a sequence of independent steps – in this case the individual stations.  Each step is a system in itself – it has a set of inner streams.

For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose.  In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.

In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service.  This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.

This creates the requirement for a critical design ingredient: trust.

Each step needs to be able to trust the others to do their part:  right-first-time and on-time.  All the steps are directly or indirectly interdependent.  If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate distrust then the system may fail and can literally fall apart. Trust is like social glue.

So a critical part of people-system design is the development and the maintenance of trust-bonds.

And it does not happen by accident. It takes active effort. It requires design.

We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.

The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of a system will increase over time‘. The critical word is ‘average’.

So, if we are not aware of this and we omit to pay attention to the hand-offs between the steps we will observe increasing disorder which leads to repeated disappointments and erosion of trust. Our natural reaction then is ‘self-protect’ which implies ‘check-and-reject’ and ‘check and correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.

Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford.  This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.


To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’.  The strategy is to develop a skill called  ‘smart trust’.

To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.

At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’.  Neither of these is what we need.

In the middle is the zone of smart trust that spans healthy scepticism through to healthy optimism.  What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have a value.

The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence.  Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.

The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.

So what evidence is needed?

Safety comes first. If a step cannot be trusted to be safe then that is the first priority. Safe systems need to be designed to be safe.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos, which increases stress and the risk of harm, and creates disappointment for everyone. Smooth flow is the result of careful flow design.

Third is Quality which means ‘setting and meeting realistic expectations‘.  This cannot happen in an unsafe, chaotic system.  Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.

Fourth is Productivity (or profitability) and that does not automatically follow from the other three as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable.  Productivity needs to be designed too.  An unsafe, chaotic, low quality design is always more expensive.  Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.

So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”

And when that happens improvement will take off like a rocket. That is the Speed of Trust.  That is Improvement Science in Action.

Tangible improvement takes time. Sometimes it takes a long time.

The more fundamental the improvement the more people are affected. The more people involved the greater the psychological inertia. The greater the resistance the longer it takes to show tangible effects.

The advantage of deep-level improvement is that the cumulative benefit is greater – the risk is that the impatient Improvementologist may give up too early – sometimes just before the benefit becomes obvious to all.

The seeds of change need time to germinate and to grow – and not all good ideas will germinate. The green shoots of innovation do not emerge immediately – there is often a long lag and little tangible evidence for a long time.

This inevitable delay is a source of frustration, and the impatient innovator can unwittingly undo their good work.  By pushing too hard they can drag a failure from the jaws of success.

Q: So how do we avoid this trap?

The trick is to understand the effect of the change on the system.  This means knowing where it falls on our Influence Map that is marked with the Circles of Control, Influence and Concern.

Our Circle of Concern includes all those things that we are aware of that present a threat to our future survival – such as a chunk of high-velocity space rock smashing into the Earth and wiping us all out in a matter of milliseconds. Gulp! Very unlikely but not impossible.

Some concerns are less dramatic – such as global warming – and collectively we may have more influence over changing that. But not individually.

Our Circle of Influence lies between the limit of our individual control and the limit of our collective control. This is a broad scope because “collective” can mean two, twenty, two hundred, two thousand, two million, two billion and so on.

Making significant improvements is usually a Circle of Influence challenge and only collectively can we make a difference.  But to deliver improvement at this level we have to influence others to change their knowledge, understanding, attitudes, beliefs and behaviour. That is not easy and that is not quick. It is possible though – with passion, plausibility, persistence, patience – and an effective process.

It is here that we can become impatient and frustrated and are at risk of giving up too soon – and our temperaments influence the risk. Idealists are impatient for fundamental change. Rationals, Guardians and Artisans do not feel the same pain – and it is a rich source of conflict.

So if we need to see tangible results quickly then we have to focus closer to home. We have to work inside our Circle of Individual Influence and inside our Circle of Control.  The scope of individual influence varies from person-to-person but our Circle of Control is the same for all of us: the outer limit is our skin.  We all choose our behaviour and it is that which influences others: for better or for worse.  It is not what we think it is what we do. We cannot read or control each others minds. We can all choose our attitudes and our actions.

So if we want to see tangible improvement quickly then we must limit the scope of our action to our Circle of Individual Influence and get started.  We do what we can and as soon as we can.

Choosing what to do and what not to do requires wisdom. That takes time to develop too.


Making an impact outside the limit of our Circle of Individual Influence is more difficult because it requires influencing many other people.

So it is especially rewarding to see examples of how individual passion, persistence and patience have led to profound collective improvement.  It proves that it is still possible. It provides inspiration and encouragement for others.

One example is the recently published Health Foundation Quality, Cost and Flow Report.

This was a three-year experiment to test if the theory, techniques and tools of Improvement Science work in healthcare: specifically in two large UK acute hospitals – Sheffield and Warwick.

The results showed that Improvement Science does indeed work in healthcare and it worked for tough problems that were believed to be very difficult if not impossible to solve. That is very good news for everyone – patients and practitioners.

But the results have taken some time to appear in published form – so it is really good news to report that the green shoots of improvement are now there for all to see.

The case studies provide hard evidence that win-win-win outcomes are possible and achievable in the NHS.

The Impossibility Hypothesis has been disproved. The cynics can step off the bus. The skeptics have their evidence and can now become adopters.

And the report offers a lot of detail on how to do it including two references that are available here:

  1. A Recipe for Improvement PIE
  2. A Study of Productivity Improvement Tactics using a Two-Stream Production System Model

These references both describe the fundamentals of how to align financial improvement with quality and delivery improvement to achieve the elusive win-win-win outcome.

A previously invisible door has opened to reveal a new Land of Opportunity. A land inhabited by Improvementologists who mark the path to learning and applying this new knowledge and understanding.

There are many who do not know what to do to solve the current crisis in healthcare – they now have a new vista to explore.

Do not give up too soon –  there is a light at the end of the dark tunnel.

And to get there safely and quickly we just need to learn and apply the Foundations of Improvement Science in Healthcare – and we learn to FISH in our own ponds first.


The sound heralded the arrival of a tweet so Bob looked up from his book and scanned the message. It was from Leslie, one of the Improvement Science apprentices.

It said “If your organisation is feeling poorly then do not forget to measure the Temperamenture. You may have Cultural Change Fever.”

Bob was intrigued. This was a novel word and he suspected it was not a spelling error. He knew he was being teased. He tapped a reply on his iPad “Interesting word ‘Temperamenture’ – can you expand?”

Ring Ring
<Bob> Hello, Bob here.

There was laughing on the other end of the line – it was Leslie.

<Leslie> Ho Ho. Hi Bob – I thought that might prick your curiosity if you were on line. I know you like novel words.

<Bob> Ah! You know my weakness – I am at your mercy now! So, I am consumed with curiosity – as you knew I would be.

<Leslie> OK. No more games. You know that you are always saying that there are three parts to Improvement Science – Processes, People and Systems – and that the three are synergistic so they need to be kept in balance …

<Bob> Yes.

<Leslie> Well, I have discovered a source of antagonism that creates a lot of cultural imbalance and emotional heat in my organisation.

<Bob> OK. So I take from that you mean an imbalance in the People part that then upsets the Process and System parts.

<Leslie> Yes, exactly. In your Improvement Science course you mentioned the theory behind this but did not share any real examples.

<Bob> That is very possible. Hard evidence and explainable examples are easier for the Process component – the People stuff is more difficult to do that way. Can you be more specific? I think I know where you may be going with this.

<Leslie> OK. Where do you feel I am going with it?

<Bob> Ha! The student becomes the teacher. Excellent response! I was thinking something to do with the Four Temperaments.

<Leslie> Yes. And specifically the conflict that can happen between them. I am thinking of the tension between the Idealists and the Guardians.

<Bob> Ah! Yes. The Bile Wars – Yellow and Black. The Cholerics versus the Melancholics. So do you have hard evidence of this happening in reality rather than just my theoretical rhetoric?

<Leslie> Yes! But the facts do not seem to fit the theory. You know that I work in a hospital. Well one of the most important “engines” of a hospital is the surgical operating suite. Conveniently called the SOS.

<Bob> Yes. It seems to be a frequent source of both Nuggets and Niggles.

<Leslie> Well, I am working with the SOS team at my hospital and I have to say that they are a pretty sceptical bunch. Everyone seems to have strong opinions. Strong but different opinions of what should happen and who should do it.  The words “someone” and “should” get mentioned a lot.  I have not managed to find this elusive “someone” yet.  The some-one, no-one, every-one, any-one problem.

<Bob> OK. I have heard this before. I hear that surgeons in particular have strong opinions – and they disagree with each other! I remember watching episodes of “Doctor in the House” many years ago. What was the name of the irascible chief surgeon played by James Robertson Justice? Sir Lancelot Spratt! The archetype surgeon. Are they actually like that?

<Leslie> I have not met any as extreme as Sir Lancelot though some do seem to emulate that role model. In reality the surgeons, anaesthetists, nurses, ODPs, and managers all seem to believe there is one way that a theatre should be run, their way, and their separate “one ways” do not line up. Hence the high emotional temperature. 

<Bob> OK, so how does the Temperament dimension relate to this? Is there a temperament mismatch between the different tribes in the operating suite as the MBTI theory would suggest?

<Leslie> That was my hypothesis and I decided that the only way I could test it was by mapping the temperaments using the Temperament Sorter from the FISH toolbox.

<Bob> Excellent, but you would need quite a big sample to draw any statistically valid conclusions. How did you achieve that with a group of disparate sceptics? 

<Leslie> I know. So I posed this challenge as a research question – and they were curious enough to give it a try. Well, the Surgeons and Anaesthetists were anyway. The Nurses, ODPs and Managers chose to sit on the fence and watch the game.

<Bob> Wow! Now I am really interested. What did you find?

<Leslie> Woah there! I need to explain how we did it first. They have a monthly audit meeting where they all get together as separate groups and after I posed the question they decided to use the Temperament Sorter at one of those meetings. It was done in a light-hearted way and it was really good fun too. I brought some cartoons and descriptions of the sixteen MBTI types and they tried to guess who was which type.

<Bob> Excellent. So what did you find?

<Leslie> We disproved the hypothesis that there was a Temperament mismatch.

<Bob> Really! What did the data show?

<Leslie> It showed that the Temperament profile for both surgeons and anaesthetists was different from the population average …

<Bob> OK, and …?

<Leslie> … and that there was no statistical difference between surgeons and anaesthetists.

<Bob> Really! So what are they both?

<Leslie> Guardians. The majority of both tribes are SJs.

There was a long pause. Bob was digesting this juicy new fact. Leslie knew that if there was one thing that Bob really liked it was having a theory disproved by reality. Eventually he replied.

<Bob> Clarity of hindsight is a wonderful thing. It makes complete sense that they are Guardians. Speaking as a patient, what I want most is Safety and Predictability which is the ideal context for Guardians to deliver their best.  I am sure that neither surgeons nor anaesthetists like “surprises” and I suspect that they both prefer doing things “by the book”. They are sceptical of new ideas by temperament.

<Leslie> And there is more.

<Bob> Excellent! What?

<Leslie> They are tough-minded Guardians. They are STJs.

<Bob> Of course! Having the responsibility of “your life in my hands” requires a degree of tough-mindedness and an ability to not get too emotionally hooked.  Sir Lancelot is a classic extrovert tough-minded Guardian! The Rolls-Royce and the ritual humiliation of ignorant underlings all fits. Wow! Well done Leslie. So what have you done with this new knowledge and deeper understanding?

<Leslie> Ouch! You got me! That is why I sent the Tweet. Now what do I do?

<Bob> Ah! I am not sure. We are both in uncharted water now so I suggest we explore and learn together. Let me ponder and do some exploring of the implications of your findings and I will get back to you. Can you do the same?

<Leslie> Good plan. Shall we share notes in a couple of days?

<Bob> Excellent. I look forward to it.


This is not a completely fictional narrative.

In a recent experiment the Temperament of a group of 66 surgeons and 65 anaesthetists was mapped using a standard Myers-Briggs Type Indicator® tool.  The data showed that the proportion reporting a Guardian (xSxJ) preference was 62% for the surgeons and 59% for the anaesthetists. The difference was not statistically significant [for the statistically knowledgeable, the Chi-squared test gave a p-value of 0.84].  The reported proportion of the normal population who have a Guardian temperament is 34%, which is very different from the combined group of operating theatre doctors [Chi-squared test, p<0.0001].  Digging deeper into the data, the proportion showing the tough-minded Guardian preference, the xSTJ, was 55% for the Surgeons and 46% for the Anaesthetists, which was also not significantly different [p=0.34]; but compared with a normal population proportion of 24% there are significantly more tough-minded Guardians in the operating theatre [p<0.0001].
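
For the even-more-curious, comparisons like these are easy to reproduce. Here is a minimal Python sketch, assuming the counts are reconstructed from the stated percentages (41 of 66 surgeons and 38 of 65 anaesthetists reporting xSxJ – my reconstruction, not the original data set) and using the standard scipy statistics routines:

    # Minimal sketch: counts reconstructed from the stated percentages.
    from scipy.stats import chi2_contingency, binomtest

    surgeons_sj, surgeons_n = 41, 66            # ~62% Guardians (xSxJ)
    anaesthetists_sj, anaesthetists_n = 38, 65  # ~59% Guardians (xSxJ)

    # 2x2 table: [Guardian, not-Guardian] for each tribe
    table = [[surgeons_sj, surgeons_n - surgeons_sj],
             [anaesthetists_sj, anaesthetists_n - anaesthetists_sj]]
    chi2, p_between, dof, expected = chi2_contingency(table)
    print(f"Surgeons vs Anaesthetists: p = {p_between:.2f}")  # ~0.8: no significant difference

    # Combined group vs the reported population proportion of 34%
    combined_sj = surgeons_sj + anaesthetists_sj   # 79
    combined_n = surgeons_n + anaesthetists_n      # 131
    result = binomtest(combined_sj, combined_n, p=0.34)
    print(f"Both tribes vs population: p = {result.pvalue:.2g}")  # p << 0.0001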

So what then is the difference between Surgeons and Anaesthetists in their preferred modes of thinking?

The data shows that Surgeons are more likely to prefer Extraversion – the ESTJ profile – compared with Anaesthetists – who lean more towards Introversion – the ISTJ profile (p=0.12). This p-value means that with the data available there is a one in eight chance that this difference is due to chance. We would need a bigger set of data to get greater certainty.

The temperament gradient is enough to create a certain degree of tension because although the Guardian temperament is the same, and the tough-mindedness is the same, the dominant function differs between the ESTJ and the ISTJ types. As the Surgeons tend to the ESTJ mode, their dominant function is Thinking Judgement. The Anaesthetists tend to prefer ISTJ so their dominant function is Sensed Perceiving. This makes a difference.

And it fits with their chosen roles in the operating theatre. The archetype ESTJ Surgeon is the Supervisor and decides what to do and who does it. The archetype ISTJ Anaesthetist is the Inspector and monitors and maintains safety and stability. This is a sweeping generalisation of course – but a useful one.

The roles are complementary, the minor conflict is inevitable, and the tension is not a “bad” thing – it is healthy – for the patient. But when external forces threaten the safety, predictability and stability the conflict is amplified.

Rather like the weather.

Hot wet air looks clear. Cold dry air looks clear too.  When hot-humid air from the tropics meets cold-crisp air from the poles then a band of fog will be created. We call it a weather front and it generates variation. And if the temperature and humidity difference is excessive then storm clouds will form. The lightning will flash and the thunder will growl as the energy is released.

Clouds obscure clarity of forward vision but clouds also create shade from the sun above; clouds trap warmth beneath; and clouds create rain which is necessary to sustain growth. Clouds are not all bad. 

An Improvement Scientist knows that 100% harmony is not the healthiest ratio. Unchallenged group-think is potentially dangerous. Zero harmony is also unhealthy. Open warfare is destructive.  Everyone loses. A mixture of temperaments, a bit of fog, and a bit of respectful challenge is healthier than All or None.

It is at the chaotic interface between different temperaments that learning and innovation happens so a slight temperamenture gradient is ideal.  The emotometer should not read too cold or too hot.

Understanding this is a big step towards being able to manage the creative tension.  

To explore the Temperamenture Map of your team, department and organisation try the Temperament Sorter tool – one of the Improvement Science cultural diagnostic tests.

The current crisis of confidence in the NHS has all the hallmarks of a classic system behaviour called creep-crack-crunch.

The first obvious crunch may feel like a sudden shock but it is usually not a complete surprise and it is actually one of a series of cracks that are leading up to a BIG CRUNCH. These cracks are an early warning sign of pressure building up in parts of the system and causing localised failures. These cracks weaken the whole system. The underlying cause is called creep.

[Panorama: San Francisco after the 1906 earthquake]

Earthquakes are a perfect example of this phenomenon. Geological time scales are measured in thousands of years and we now know that the surface of the earth is a dynamic structure with vast continent-sized plates of solid rock floating on molten magma. Over millions of years the continents have moved huge distances and the world we see today on our satellite images is just a single frame in a multi-billion year geological video.  That is the geological creep bit. The cracks first appear at the edges of these tectonic plates where they smash into each other, grind past each other or are pulled apart from each other.  The geological hot-spots are marked out on our global map by lofty mountain ranges, fissured earthquake zones, and deep mid-ocean trenches. And we know that when a geological crunch arrives it happens in a blink of the geological eye.

The panorama above shows the devastation of San Francisco caused by the 1906 earthquake. San Francisco is built on the San Andreas Fault – the junction between the Pacific plate and the North American plate. The dramatic volcanic eruption in Iceland in 2010 came and went in a matter of weeks but the irreversible disruption it caused for global air traffic will be felt for years. The undersea earthquakes that caused the devastating tsunamis in 2004 and 2011 lasted only a few minutes; the deadly shock waves crossed an ocean in a matter of hours; and when they arrived the silent killer wiped out whole shoreside communities in seconds. Tens of thousands of lives were lost and the social after-shocks of that geological-crunch will be felt for decades.

These are natural disasters. We have little or no influence over them. Human-engineered disasters are a different matter – and they are just as deadly.

The NHS is an example. We are all painfully aware of the recent crisis of confidence triggered by the Francis Report. Many could see the cracks appearing and tried to blow their warning whistles but with little effect – they were silenced with legal gagging clauses and the opening cracks were papered over. It was only after the crunch that we finally acknowledged what we already knew and we started to search for the creep. Remorse and revenge do not bring back those who have been lost.  We need to focus on the future and not just point at the past.

[Chart: UK population pyramid, 2013]

Socio-economic systems evolve at a pace that is measured in years. So when a social crunch happens it is necessary to look back several decades for the tell-tale symptoms of creep and the early signs of cracks appearing.

Two objective measures of a socio-economic system are population and expenditure.

Population is people-in-progress; and national expenditure is the flow of the cash required to keep the people-in-progress watered, fed, clothed, housed, healthy and occupied.

The diagram above is called a population pyramid and it shows the distribution by gender and age of the UK population in 2013. The wobbles tell a story. It does rather look like the profile of a bushy-eyebrowed, big-nosed, pointy-chinned old couple standing back-to-back and maybe there is a hidden message for us there?

The “eyebrow” between ages 67 and 62 is the increase in births that happened 62 to 67 years ago: between 1946 and 1951. The post WWII baby boom.  The “nose” of 42-52 year olds are the “children of the 60’s” which was a period of rapid economic growth and new optimism. The “upper lip” at 32-42 correlates with the 1970’s that was a period of stagnant growth, high inflation, strikes, civil unrest and the dark threat of global thermonuclear war. This “stagflation” is now believed to have been triggered by political meddling in the Middle-East that led to the 1974 OPEC oil crisis and culminated in the “winter of discontent” in 1979.  The “chin” signals there was another population expansion in the 1980s when optimism returned (SALT-II was signed in 1979) and the economy was growing again. Then the “neck” contraction in the 1990’s after the 1987 Black Monday global stock market crash.  Perhaps the new optimism of the Third Millennium led to the “chest” expansion but the financial crisis that followed when the sub-prime bubble burst in 2008 has yet to show its impact on the population chart. This static chart only tells part of the story – the animated chart reveals a significant secondary expansion of the 20-30 year old age group over the last decade. This cannot have been caused by births and is evidence of immigration of a large number of young couples – probably from the expanding European Union.

If this “yo-yo” population pattern is repeated then the current economic downturn will be followed by a contraction at the birth end of the spectrum and possibly also net emigration. And that is a big worry because each population wave takes about 100 years to propagate through the system. The most economically productive population – the 20-60 year olds – are the ones who pay the care bills for the rest. So having a population curve with lots of wobbles in it causes long term socio-economic instability.

Using this big-picture long-timescale perspective; evidence of an NHS safety and quality crunch; silenced voices of cracks being papered-over; let us look for the historical evidence of the creep.

Nowadays the data we need is literally at our fingertips – and there is a vast ocean of it to swim around in – and to drown in if we are not careful.  The Office of National Statistics (ONS) is a rich mine of UK socioeconomic data – it is the source of the histogram above.  The trick is to find the nuggets of knowledge in the haystack of facts and then to convert the tables of numbers into something that is a bit more digestible and meaningful. This is what Russ Ackoff describes as the difference between Data and Information. The data-to-information conversion needs context.

Rule #1: Data without context is meaningless – and is at best worthless and at worst is dangerous.

With respect to the NHS there is a Minotaur’s Labyrinth of data warehouses – it is fragmented but it is out there – in cyberspace. The Department of Health publishes some on public sites but it is a bit thin on context so it can be difficult to extract the meaning.

Relying on our memories to provide the necessary context is fraught with problems. Memories are subject to a whole range of distortions, deletions, denials and delusions.  The NHS has been in existence since 1948 and there are not many people who can personally remember the whole story with objective clarity.  Fortunately cyberspace again provides some of what we need and with a few minutes of surfing we can discover something like a website that chronicles the history of the NHS in decades from its creation in 1948 – http://www.nhshistory.net/ – created and maintained by one person and a goldmine of valuable context. The decade that is of particular interest is 1998-2007 – Chapter 6

With just some data and some context it is possible to pull together the outline of the bigger picture of the decade that led up to the Mid Staffordshire healthcare quality crunch.

We will look at this as a NHS system evolving over time within its broader UK context. Here is the time-series chart of the population of England – the source of the demand on the NHS.

[Chart: Population of England, 1984-2010]

This shows a significant and steady increase in population – 12% overall between 1984 and 2012.

This aggregate hides a 9% increase in the under 65 population and 29% growth in the over 65 age group.

This is hard evidence of demographic creep – a ticking health and social care time bomb. And the curve is getting steeper. The pressure is building.

The next bit of the map we need is a measure of the flow through hospitals – the activity – and this data is available as the annual HES (Hospital Episodes Statistics) reports.  The full reports are hundreds of pages of fine detail but the headline summaries contain enough for our present purpose.

[Chart: NHS HES admissions, 1997-2011]

The time-series chart shows a steady increase in hospital admissions. Drilling into the summaries revealed that just over a third are emergency admissions and the rest are planned or maternity.

In the decade from 1998 to 2008 there was a 25% increase in hospital activity. This means more work for someone – but how much more and who for?

But does it imply more NHS beds?

Beds require wards, buildings and infrastructure – but it is the staff that deliver the health care. The bed is just a means of storage.  One measure of capacity and cost is the number of staffed beds available to be filled.  But this is like measuring the number of spaces in a car park – it does not say much about flow – it is just a measure of maximum possible work in progress – the available space to hold the queue of patients who are somewhere between admission and discharge.

Here is the time-series chart of the number of NHS beds from 1984 to 2006. There was a big fall in the number of beds in the decade after 1984 [Why was that?]

[Chart: NHS beds, 1984-2006]

Between 1997 and 2007 there was about a 10% fall in the number of beds. The NHS patient warehouse was getting smaller.

But the activity – the flow – grew by 25% over the same time period: so the Laws Of Physics say that the flow must have been faster.

The average length of stay must have been falling.

This insight has another implication – fewer beds must mean smaller hospitals and lower costs – yes?  After all everyone seems to equate beds-to-cost; more-beds-cost-more, less-beds-cost-less. It sounds reasonable. But higher flow means more demand and more workload so that would require more staff – and that means higher costs. So which is it? Less, the same or more cost?

[Chart: NHS employees, 1996-2007]

The published data says that staff headcount went up by 25% – which correlates with the increase in activity. That makes sense.

And it looks like it “jumped” up in 2003 so something must have triggered that. More cash pumped into the system perhaps? Was that the effect of the Wanless Report?

But what type of staff? Doctors? Nurses? Admin and Clerical? Managers?  The European Working Time Directive (EWTD) forced junior doctors hours down and prompted an expansion of consultants to take on the displaced service work. There was also a gradual move towards specialisation and multi-disciplinary teams. What impact would that have on cost? Higher most likely. The system is getting more complex.

Of course not all costs have the same impact on the system. About 4% of staff are classified as “management” and it is this group that is responsible for strategic and tactical planning. Managers plan the work – workers work the plan.  The cost and efficiency of the management component of the system is not as useful a metric as the effectiveness of its collective decision making. Unfortunately there does not appear to be any published data on management decision making quality and effectiveness. So we cannot estimate cost-effectiveness. Perhaps that is because it is not as easy to measure effectiveness as it is to count admissions, discharges, head counts, costs and deaths. Some things that count cannot easily be counted. The 4% number is also meaningless. The human head represents about 4% of the bodyweight of an adult person – and we all know that it is not the size of our heads that is important, it is the effectiveness of the decisions that it makes which really counts!  Effectiveness, efficiency and costs are not the same thing.

Back to the story. The number of beds went down by 10% and number of staff went up by 25% which means that the staff-per-bed ratio went up by nearly 40%.  Does this mean that each bed has become 25% more productive or 40% more productive or less productive? [What exactly do we mean by “productivity”?]

To answer that we need to know what the beds produced – the discharges from hospital and not just the total number, we need the “last discharges” that signal the end of an episode of hospital care.

[Chart: NHS last discharges, 1998-2011]

The time-series chart of last-discharges shows the same pattern as the admissions: as we would expect.

This output has two components – patients who leave alive and those who do not.

So what happened to the number of deaths per year over this period of time?

That data is also published annually in the Hospital Episode Statistics (HES) summaries.

This is what it shows ….

[Chart: NHS absolute hospital deaths, 1998-2011]

The absolute hospital mortality is reducing over time – but not steadily. It went up and down between 2000 and 2005 – and has continued on a downward trend since then.

And to put this into context – the UK annual mortality is about 600,000 per year. That means that only about 40% of deaths happen in hospitals. UK annual mortality is falling and births are rising so the population is growing bigger and older.  [My head is now starting to ache trying to juggle all these numbers and pictures in it].

This is not the whole story though – if the absolute hospital activity is going up and the absolute hospital mortality is going down then this raw mortality number may not be telling the whole picture. To correct for those effects we need the ratio – the Hospital Mortality Ratio (HMR).

[Chart: NHS hospital mortality ratio, 1998-2011]

This is the result of combining these two metrics – a 40% reduction in the hospital mortality ratio.

Does this mean that NHS hospitals are getting safer over time?

This observed behaviour can be caused by hospitals getting safer – it can also be caused by hospitals doing more low-risk work that creates a dilution effect. We would need to dig deeper to find out which. But that will distract us from telling the story.

Back to productivity.

The other part of the productivity equation is cost.

So what about NHS costs?  A bigger, older population, more activity, more staff, and better outcomes will all cost more taxpayer cash, surely! But how much more?  The activity and head count has gone up by 25% so has cost gone up by the same amount?

[Chart: NHS annual spend, adjusted to 2009 prices]

This is the time-series chart of the cost per year of the NHS and, because buying power changes over time, it has been adjusted using the Consumer Price Index with 2009 as the reference year – so the historical cost is roughly comparable with current prices.

The cost has gone up by 100% in one decade!  That is a lot more than 25%.

The published financial data for 2006-2010 shows that the proportion of NHS spending that goes to hospitals is about 50% and this has been relatively stable over that period – so it is reasonable to say that the increase in cash flowing to hospitals has been about 100% too.

So if the cost of hospitals is going up faster than the output then productivity is falling – and in this case it works out as a 37% drop in productivity (25% increase in activity for 100% increase in cost = 37% fall in productivity).
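
The arithmetic is easy to check. Here is a quick Python sketch using only the rounded headline figures quoted above:

    # Quick check of the derived ratios using the rounded headline figures.
    activity = 1.25   # 25% more hospital activity
    beds     = 0.90   # 10% fewer beds
    staff    = 1.25   # 25% more staff
    cost     = 2.00   # 100% more (real-terms) cost

    staff_per_bed = staff / beds      # ~1.39 -> "nearly 40%" increase
    productivity  = activity / cost   # 0.625 -> ~37.5% fall
    print(f"staff-per-bed: {staff_per_bed - 1:+.1%}")   # +38.9%
    print(f"productivity:  {productivity - 1:+.1%}")    # -37.5%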

So the available data which anyone with a computer, an internet connection, and some curiosity can get; and with bit of spreadsheet noggin can turn into pictures shows that over the decade of growth that led up to the the Mid Staffs crunch we had:

1. A slightly bigger population; and a
2. significantly older population; and a
3. 25% increase in NHS hospital activity; and a
4. 10% fall in NHS beds; and a
5. 25% increase in NHS staff; which gives a
6. 40% increase in staff-per-bed ratio; and an
7. 8% reduction in absolute hospital mortality; which gives a
8. 40% reduction in relative hospital mortality; and a
9. 100% increase in NHS  hospital cost; which gives a
10. 37% drop in “hospital productivity”.

An experienced Improvement Scientist knows that a system that has been left to evolve by creep-crack-and-crunch can be re-designed to deliver higher quality and higher flow at lower total cost.

The safety creep at Mid-Staffs is now there for all to see. A crack has appeared in our confidence in the NHS – and raises a couple of crunch questions:

Where Has All The Extra Money Gone?

 How Will We Avoid The BIG CRUNCH?

The huge increase in NHS funding over the last decade was the recommendation of the Wanless Report but the impact of implementing the recommendations has never been fully explored. Healthcare is a service system that is designed to deliver two intangible products – health and care. So the major cost is staff-time – particularly the clinical staff.  A 25% increase in head count and a 100% increase in cost implies that the heads are getting more expensive.  Either a higher proportion of more expensive clinically trained and registered staff, or more pay for the existing staff, or both.  The evidence shows that about 50% of NHS Staff are doctors and nurses and over the last decade there has been a bigger increase in the number of doctors than nurses. Added to that, the Agenda for Change programme effectively increased the total wage bill and the new contracts for GPs and Consultants added more upward wage pressure.  This is cost creep and it adds up over time. The Kings Fund looked at the impact in 2006 and suggested that, in that year alone, 72% of the additional money was sucked up by bigger wage bills and other cost-pressures! The previous year they estimated 87% of the “new money” had disappeared the same way. The extra cash is gushing through the cracks in the bottom of the fiscal bucket that had been clumsily papered-over. And these are recurring revenue costs so they add up over time into a future financial crunch.  The biggest one may be yet to come – the generous final-salary pensions that public-sector employees enjoy!

So it is even more important that the increasingly expensive clinical staff are not being forced to spend their time doing work that has no direct or indirect benefit to patients.

Trying to do a good job in a poorly designed system is both frustrating and demotivating – and the outcome can be a cynical attitude of “I only work here to pay the bills“. But as public sector wages go up and private sector pensions evaporate the cynics are stuck in a miserable job that they cannot afford to give up. And their negative behaviour poisons the whole pool. That is the long term cumulative cultural and financial cost of poor NHS process design. That is the outcome of not investing earlier in developing an Improvement Science capability.

The good news is that the time-series charts illustrate that the NHS is behaving like any other complex, adaptive, human-engineered value system. This means that the theory, techniques and tools of Improvement Science and value system design can be applied to answer these questions. It means that the root causes of the excessive costs can be diagnosed and selectively removed without compromising safety and quality. It means that the savings can be wisely re-invested to improve the resilience of some parts and to provide capacity in other parts to absorb the expected increases in demand that are coming down the population pipe.

This is Improvement Science. It is a learnable skill.

18/03/2013: Update

The question “Where Has The Money Gone?” has now been asked at the Public Accounts Committee.

 

The retrospectoscope is the favourite instrument of the forensic cynic – the expert in the after-the-event-and-I-told-you-so rhetoric. The rabble-rouser for the lynch-mob.

It feels better to retrospectively nail-to-a-cross the person who committed the Cardinal Error of Omission, and leave them there in emotional and financial pain as a visible lesson to everyone else.

This form of public feedback has been used for centuries.

It is called barbarism, and it has no place in a modern civilised society.


A more constructive question to ask is:

“Could the evolving Mid-Staffordshire crisis have been detected earlier … and avoided?”

And this question exposes a tricky problem: it is much more difficult to predict the future than to explain the past.  And if it could have been detected and avoided earlier, then how is that done?  And if the how-is-known then is everyone else in the NHS using this know-how to detect and avoid their own evolving Mid-Staffs crisis?

To illustrate how it is currently done let us use the actual Mid-Staffs data. It is conveniently available in Figure 1 embedded in Figure 5 on Page 360 in Appendix G of Volume 1 of the first Francis Report.  If you do not have it at your fingertips I have put a copy of it below.

[Table: Mid Staffordshire mortality and activity data, reproduced from the Francis Report]

The message does not exactly leap off the page and smack us between the eyes does it? Even with the benefit of hindsight.  So what is the problem here?

The problem is one of ergonomics. Tables of numbers like this are very difficult for most people to interpret, so they create a risk that we ignore the data or that we just jump to the bottom line and miss the real message. And it is very easy to miss the message when we compare the results for the current period with the previous one – a very bad habit that is spread by accountants.

This was a slowly emerging crisis so we need a way of seeing it evolving and the better way to present this data is as a time-series chart.

As we are most interested in safety and outcomes, then we would reasonably look at the outcome we do not want – i.e. mortality.  I think we will all agree that it is an easy enough one to measure.

[Chart: Mid Staffs raw mortality, plotted as a time-series]

This is the raw mortality data from the table above, plotted as a time-series chart.  The green line is the average and the red lines are a measure of variation-over-time. We can all see that the raw mortality is increasing and the red flags say that this is a statistically significant increase. Oh dear!
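
(An aside for the technically curious: the green average line and the red limit lines on a chart like this are conventionally computed as an XmR chart – the mean plus-or-minus 2.66 times the average moving range. Here is a minimal Python sketch; the annual counts in it are illustrative, not the actual Mid Staffs figures.)

    # Minimal XmR sketch with illustrative (not actual) annual death counts.
    import numpy as np

    deaths = np.array([420, 435, 440, 455, 470, 480, 495, 510])

    centre = deaths.mean()                    # the green line
    mr_bar = np.abs(np.diff(deaths)).mean()   # average moving range
    upper = centre + 2.66 * mr_bar            # the red lines
    lower = centre - 2.66 * mr_bar
    print(f"centre {centre:.0f}, limits [{lower:.0f}, {upper:.0f}]")

    # Points outside the limits earn a "red flag" - a significant signal
    flags = deaths[(deaths > upper) | (deaths < lower)]
    print("flagged points:", flags)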

But hang on just a minute – using raw mortality data like this is invalid because we all know that the people are getting older, demand on our hospitals is rising, A&Es are busier, older people have more illnesses, and more of them will not survive their visit to our hospital. This rise in mortality may actually just be because we are doing more work.

Good point! Let us plot the activity data and see if there has been an increase.

[Chart: Mid Staffs hospital activity, time-series]

Yes – indeed the activity has increased significantly too.

Told you so! And it looks like the activity has gone up more than the mortality. Does that mean we are actually doing a better job at keeping people alive? That sounds like a more positive message for the Board and the Annual Report. But how do we present that message? What about as a ratio of mortality to activity? That will make it easier to compare ourselves with other hospitals.

Good idea! Here is the Raw Mortality Ratio chart.

[Chart: Mid Staffs raw mortality ratio]

Ah ha. See! The % mortality is falling significantly over time. Told you so.

Careful. There is an unstated assumption here. The assumption that the case mix is staying the same over time. This pattern could also be the impact of us doing a greater proportion of lower complexity and lower risk work.  So we need to correct this raw mortality data for case mix complexity – and we can do that by using data from all NHS hospitals to give us a frame of reference. Dr Foster can help us with that because it is quite a complicated statistical modelling process. What comes out of Dr Foster’s black magic box is the Global Hospital Raw Mortality (GHRM) which is the expected number of deaths for our case mix if we were an ‘average’ NHS hospital.

[Chart: NHS-wide expected mortality ratio (GHRM)]

What this says is that the NHS-wide raw mortality risk appears to be falling over time (which may be for a wide variety of reasons but that is outside the scope of this conversation). So what we now need to do is compare this global raw mortality risk with our local raw mortality risk  … to give the Hospital Standardised Mortality Ratio.

[Chart: Mid Staffordshire Hospital HSMR, time-series]

This gives us the Mid Staffordshire Hospital HSMR chart.  The blue line at 100 is the reference average – and what this chart says is that Mid Staffordshire hospital had a consistently higher risk than the average case-mix adjusted mortality risk for the whole NHS. And it says that it got even worse after 2001 and that it stayed consistently 20% higher after 2003.
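
(Another aside: in arithmetic terms the HSMR is just the observed deaths divided by the case-mix-adjusted expected deaths, scaled so that 100 is the NHS average. A minimal sketch with hypothetical numbers – the expected count comes out of the Dr Foster case-mix model, which is not reproduced here.)

    # Hypothetical illustration of the HSMR arithmetic.
    observed_deaths = 480   # actual deaths at our hospital in the period
    expected_deaths = 400   # case-mix-adjusted expectation for an 'average' hospital

    hsmr = 100 * observed_deaths / expected_deaths
    print(f"HSMR = {hsmr:.0f}")   # 120 -> a 20% higher-than-average adjusted risk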

Ah! Oh dear! That is not such a positive message for the Board and the Annual Report. But how did we miss this evolving safety catastrophe?  We had the Dr Foster data from 2001!

This is not a new problem – a similar thing happened in Vienna between 1820 and 1850 with maternal deaths caused by Childbed Fever. The problem was detected by Dr Ignaz Semmelweis who also discovered a simple, pragmatic solution to the problem: hand washing.  He blew the whistle but unfortunately those in power did not like the implication that they had been the cause of thousands of avoidable mother and baby deaths.  Semmelweis was vilified and ignored, and he did not publish his data until 1861. And even then the story was buried in tables of numbers.  Semmelweis went mad trying to convince the World that there was a problem.  Here is the full story.

Also, time-series charts were not invented until 1924 – and it was not in healthcare – it was in manufacturing. These tried-and-tested safety and quality improvement tools are only slowly diffusing into healthcare because the barriers to innovation appear somewhat impervious.

And the pores have been clogged even more by the social poison called “cynicide” – the emotional and political toxin exuded by cynics.

So how could we detect a developing crisis earlier – in time to avoid a catastrophe?

The first step is to estimate the excess-death-equivalent. Dr Foster does this for you.

[Chart: Mid Staffs estimated excess deaths per year]

Here is the data from the table plotted as a time-series chart, showing the estimated excess-death-equivalent per year. It has an average of 100 (that is two per week) and the average should be close to zero. More worryingly, the number was increasing steadily over time, up to 200 per year in 2006 – that is about four excess deaths per week, on average.  It is important to remember that HSMR is a risk ratio and mortality is a multi-factorial outcome. So the excess-death-equivalent estimate does not imply that a clear causal chain will be evident in specific deaths. That is a complete misunderstanding of the method.

I am sorry – you are losing me with the statistical jargon here. Can you explain in plain English what you mean?

OK. Let us use an example.

Suppose we set up a tombola at the village fete and we sell 50 tickets with the expectation that the winner bags all the money. Each ticket holder has the same 1 in 50 risk of winning the wad-of-wonga and a 49 in 50 risk of losing their small stake. At the appointed time we spin the barrel to mix up the ticket stubs then we blindly draw one ticket out. At that instant the 50 people with an equal risk changes to one winner and 49 losers. It is as if the grey fog of risk instantly condenses into a precise, black-and-white, yes-or-no, winner-or-loser, reality.

Translating this concept back into HSMR and Mid Staffs – the estimated 1200 deaths are just the “condensed risk of harm equivalent”.  So, to then conduct a retrospective case note analysis of specific deaths looking for the specific cause would be equivalent to trying to retrospectively work out the reason the particular winning ticket in the tombola was picked out. It is a search that is doomed to fail. To then conclude from this fruitless search that HSMR is invalid is only to compound the delusion further.  The actual problem here is ignorance and misunderstanding of the basic Laws of Physics and Probability, because our brains are not good at solving this sort of problem.

But Mid Staffs is a particularly severe example and it only shows up after years of data has accumulated. How would a hospital that was not as bad as this know they had a risk problem, and know sooner? Waiting for years to accumulate enough data to prove there was an avoidable problem in the past is not much help.

That is an excellent question. This type of time-series chart is not very sensitive to small changes when the data is noisy and sparse – such as when you plot the data on a month-by-month timescale and avoidable deaths are actually an uncommon outcome. Plotting the annual sum smooths out this variation and makes the trend easier to see, but it delays the diagnosis further. One way to increase the sensitivity is to plot the data as a cusum (cumulative sum) chart – which is conspicuous by its absence from the data table. It is the running total of the estimated excess deaths. Rather like the running total of swings in a game of golf.

[Chart: cusum of estimated excess deaths]

This is the cusum chart of excess deaths and you will notice that it is not plotted with control limits. That is because it is invalid to use standard control limits for cumulative data.  The important feature of the cusum chart is the slope and the deviation from zero. What is usually done is that an alert threshold is plotted on the cusum chart and if the measured cusum crosses this alert-line then the alarm bell should go off – and the search then focuses on the precursor events: the Near Misses, the Not Agains and the Niggles.
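
(A final aside: the cusum itself is trivial to compute – a running total checked against a pre-set alert-line. Here is a minimal Python sketch; the monthly excess-death estimates and the threshold value are hypothetical.)

    # Minimal cusum sketch with hypothetical monthly excess-death estimates.
    import numpy as np

    monthly_excess = np.array([2, -1, 3, 0, 4, 1, 5, 2, 6, 3])
    cusum = np.cumsum(monthly_excess)

    ALERT = 15   # the alert-line: set in advance, not derived from the data
    for month, total in enumerate(cusum, start=1):
        if total > ALERT:
            print(f"Alarm at month {month}: cumulative excess = {total}")
            # ...now search the precursors: Near Misses, Not Agains, Niggles
            break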

I see. You make it look easy when the data is presented as pictures. But aren’t we still missing the point? Isn’t this still after-the-avoidable-event analysis?

Yes! An avoidable death should be a Never-Event in a designed-to-be-safe healthcare system. It should never happen. There should be no coffins to count. To get to that stage we need to apply exactly the same approach to the Near-Misses, and then the Not-Agains, and eventually the Niggles.

You mean we have to use the SUI data and the IR1 data and the complaint data to do this – and also ask our staff and patients about their Niggles?

Yes. And it is not the number of complaints that is the most useful metric – it is the appearance of the cumulative sum of the complaint severity score. And we need a method for diagnosing and treating the cause of the Niggles too. We need to convert the feedback information into effective action.

Ah ha! Now I understand what the role of the Governance Department is: to apply the tools and techniques of Improvement Science proactively.  But our Governance Department have not been trained to do this!

Then that is one place to start – and their role needs to evolve from Inspectors and Supervisors to Demonstrators and Educators – ultimately everyone in the organisation needs to be a competent Healthcare Improvementologist.

OK – I now know what to do next. But wait a minute. This is going to cost a fortune!

This is just one small first step.  The next step is to redesign the processes so the errors do not happen in the first place. The cumulative cost saving from eliminating the repeated checking, correcting, box-ticking, documenting, investigating, compensating and insuring is much much more than the one-off investment in learning safe system design.

So the Finance Director should be a champion for safety and quality too.

Yup!

Brill. Thanks. And can I ask one more question? I do not want to appear too skeptical but how do we know we can trust that this risk-estimation system has been designed and implemented correctly? How do we know we are not being bamboozled by statisticians? It has happened before!

That is the best question yet.  It is important to remember that HSMR is counting deaths in hospital, which means that it is not actually the risk of harm to the patient that is measured – it is the risk to the reputation of the hospital! So the answer to your question is that you demonstrate your deep understanding of the rationale and method of risk-of-harm estimation by listing all the ways that such a system could be deliberately “gamed” to make the figures look better for the hospital. And then go out and look for hard evidence of all the “games” that you can invent. It is a sort of creative poacher-becomes-gamekeeper detective exercise.

OK – I sort of get what you mean. Can you give me some examples?

Yes. The HSMR method is based on deaths-in-hospital so discharging a patient from hospital before they die will make the figures look better. Suppose one hospital has more access to end-of-life care in the community than another: their HSMR figures would look better even though exactly the same number of people died. Another is that the HSMR method is weighted towards admissions classified as “emergencies” – so if a hospital admits more patients as “emergencies” who are not actually very sick and discharges them quickly then this will inflate their expected deaths and make their mortality ratio look better – even though the risk-of-harm to patients has not changed.

OMG – so if we have pressure to meet 4 hour A&E targets and we get paid more for an emergency admission than an A&E attendance then admitting to an Assessment Area and discharging within one day will actually reward the hospital financially, operationally and by apparently reducing their HSMR – even though there has been no difference at all to the care that patients actually receive?

Yes. It is an inevitable outcome of the current system design.

But that means that if I am gaming the system and my HSMR is not getting better then the risk-of-harm to patients is actually increasing and my HSMR system is giving me false reassurance that everything is OK.   Wow! I can see why some people might not want that realisation to be public knowledge. So what do we do?

Design the system so that the rewards are aligned with lower risk of harm to patients and improved outcomes.

Is that possible?

Yes. It is called a Win-Win-Win design.

How do we learn how to do that?

Improvement Science.

Footnote I:

The graphs tell a story but they may not create a useful sense of perspective. It has been said that there is a 1 in 300 chance that if you go to hospital you will not leave alive, due to avoidable causes. What! It cannot be as high as 1 in 300 surely?

OK – let us use the published Mid-Staffs data to test this hypothesis. Over 12 years there were about 150,000 admissions and an estimated 1,200 excess deaths (if all the risk were concentrated into the excess deaths, which is not what actually happens). That means odds of about 1 in 125 of an avoidable death for every admission! That is more than twice as bad as the estimated average.

The Mid Staffordshire statistics are bad enough; but the NHS-as-a-whole statistics are cumulatively worse because there are 100’s of other hospitals that are each generating not-as-obvious avoidable mortality. The data is very ‘noisy’ so it is difficult even for a statistical expert to separate the message from the morass.

And remember – the “expected” mortality is estimated from the average for the whole NHS – which means that if this average is higher than it could be then there is a statistical bias and we are being falsely reassured by being ‘not statistically significantly different’ from the pack.

And remember too – for every patient and family that suffers an avoidable death there are many more that have to live with the consequences of avoidable but non-fatal harm.  That is called avoidable morbidity.  This is what the risk really means – everyone has a higher risk of some degree of avoidable harm. Psychological and physical harm.

This challenge is not just about preventing another Mid Staffs – it is about preventing 1000’s of avoidable deaths and 100,000s of patients avoidably harmed every year in ‘average’ NHS trusts.

It is not a mass conspiracy of bad nurses, bad doctors, bad managers or bad politicians that is the root cause.

It is poorly designed processes – and they are poorly designed because the nurses, doctors and managers have not learned how to design better ones.  And we do not know how because we were not trained to.  And that education gap was an accident – an unintended error of omission.  

Our urgently-improve-NHS-safety-challenge requires a system-wide safety-by-design educational and cultural transformation.

And that is possible because the knowledge of how to design, test and implement inherently safe processes exists. But it exists outside healthcare.

And that safety-by-design training is a worthwhile investment because safer-by-design processes cost less to run because they require less checking, less documenting, less correcting – and all the valuable nurse, doctor and manager time freed up by that can be reinvested in more care, better care and designing even better processes and systems.

Everyone Wins – except the cynics who have a choice: to eat humble pie or leave.

Footnote II:

In the debate that has followed the publication of the Francis Report a lot of scrutiny has been applied to the method by which an estimated excess mortality number is created and it is necessary to explore this in a bit more detail.

The HSMR is an estimate of relative risk – it does not say that a set of specific patients were the ones who came to harm and the rest were OK. So looking at the actual deaths individually and searching for identifiable cause-and-effect paths is to completely misunderstand the method and to misuse the message. And when few if any such paths are found, to conclude that HSMR is flawed is an error of logic that exposes the ignorance of the analyst further.

HSMR is not perfect though – it has weaknesses.  It is a benchmarking process: the “standard” of 100 is always moving because the collective goal posts are moving – the reference is always changing. HSMR is estimated using data submitted by hospitals themselves – the clinical coding data.  So the main weakness is that it is dependent on the quality of the clinical coding – the errors of commission (wrong codes) and the errors of omission (missing codes). Garbage In Garbage Out.

Hospitals use clinically coded data for other reasons – payment. The way hospitals are now paid is based on the volume and complexity of that activity – Payment By Results (PbR) – using what are called Health Resource Groups (HRGs). This is a better and fairer design because hospitals with more complex (i.e. costly to manage) case loads get paid more per patient on average.  The HRG for each patient is determined by their clinical codes – including what are called the comorbidities – the other things that the patient has wrong with them. More comorbidities means more complex and more risky, so more money and more risk of death – roughly speaking.  So when PbR came in it became very important to code fully in order to get paid “properly”.  The problem was that before PbR the coding errors went largely unnoticed – especially the comorbidity coding. And the errors were biased – it is more likely to omit a code than to have an incorrect code. Errors of omission are harder to detect. This meant that by more complete coding (to attract more money) the estimated casemix complexity would have gone up compared with the historical reference. So as actual (not estimated) NHS mortality has gone down slightly, the HSMR yardstick becomes even more distorted.  Hospitals that did not keep up with the Coding Game would look worse even though their actual risk and mortality may be unchanged.  This is the fundamental design flaw in all types of benchmarking based on self-reported data.

The actual problem here is even more serious. PbR is actually a payment for activity – not a payment for outcomes. It is calculated from what it cost to run the average NHS hospital using a technique called Reference Costing, which is the same method that manufacturing companies used to decide what price to charge for their products. It has another name – Absorption Costing.  The highest performers in the manufacturing world no longer use this out-of-date method. The implications of using Reference Costing and PbR in the NHS are profound and dangerous:

If NHS hospitals in general have poorly designed processes that create internal queues and require more bed days than actually necessary then the cost of that “waste” becomes built into the future PbR tariff. This means average length of stay (LOS) is financially rewarded. Above average LOS is financially penalised and below average LOS makes a profit.  There is no financial pressure to improve beyond average. This is called the Regression to the Mean effect.  Also, LOS is not a measure of quality – so there is a pressure to shorten length of stay for purely financial reasons – to generate a surplus to use to fund growth and capital investment.  That pressure is non-specific and indiscriminate.  PbR is necessary but it is not sufficient – it requires a quality-of-outcome metric to complete it.

So the PbR system is based on an out-of-date cost-allocation model and therefore leads to the very problems that are contributing to the MidStaffs crisis – financial pressure causing quality failures and increased risk of mortality.  MidStaffs may be a chance victim of a combination of factors coming together like a perfect storm – but those same factors are present throughout the NHS because they are built into the current design.

One solution is to move towards a more up-to-date financial model called stream costing. This uses similar data to reference costing but it estimates the “ideal” cost of the “necessary” work to achieve the intended outcome. This stream cost becomes the focus for improvement – the streams where there is the biggest gap between the stream cost and the reference cost are the focus of the redesign activity. Very often the root cause is just poor operational policy design; sometimes it is quality and safety design problems. Both are solvable without investment in extra capacity. The result is a higher quality, quicker, lower-cost stream. Win-win-win. And in the short term that is rewarded by a tariff income that exceeds cost and a lower HSMR.

Radically redesigning the financial model for healthcare is not a quick fix – and it requires a lot of other changes to happen first. So the sooner we start the sooner we will arrive. 

Today is an important day.

The Robert Francis QC Report and recommendations from the Mid-Staffordshire Hospital Crisis has been published – and it is a sobering read.  The emotions that just the executive summary evoked in me were sadness, shame and anger.  Sadness for the patients, relatives, and staff who have been irreversibly damaged; shame that the clinical professionals turned a blind-eye; and anger that the root cause has still not been exposed to public scrutiny.

Click here to get a copy of the RFQC Report Executive Summary.

Click here to see the video of RFQC describing his findings. 

The root cause is ignorance at all levels of the NHS.  Not stupidity. Not malevolence. Just ignorance.

Ignorance of what is possible and ignorance of how to achieve it.

RFQC rightly focusses his recommendations on putting patients at the centre of healthcare and on making those paid to deliver care accountable for the outcomes.  Disappointingly, the report is notably thin on the financial dimension other than saying that financial targets took priority over safety and quality.  He is correct. They did. But the report does not say that this is unnecessary – it just says “in future put safety before finance” and in so doing he does not challenge the belief that we are playing a zero-sum-game. The assumption that higher-quality-always-costs-more.

This assumption is wrong and can easily be disproved.

A system that has been designed to deliver safety-and-quality-on-time-first-time-and-every-time costs less. And it costs less because the cost of errors, checking, rework, queues, investigation, compensation, inspectors, correctors, fixers, chasers, and all the other expensive-high-level-hot-air-generation-machinery that overburdens the NHS and that RFQC has pointed squarely at is unnecessary.  He says “simplify” which is a step in the right direction. The goal is to render it irrelevant.

The ignorance is ignorance of how to design a healthcare system that works right-first-time. The fact that the Francis Report even exists and is pointing its uncomfortable fingers-of-evidence at every level of the NHS from ward to government is tangible proof of this collective ignorance of system design.

And the good news is that this collective ignorance is also unnecessary … because the knowledge of how to design safe-and-affordable systems already exists. We just have to learn how. I call it 6M Design® – but the label is irrelevant – the knowledge exists and the evidence that it works exists.

So here are some of the RFQC recommendations viewed through a 6M Design® lens:

1.131 Compliance with the fundamental standards should be policed by reference to developing the CQC’s outcomes into a specification of indicators and metrics by which it intends to monitor compliance. These indicators should, where possible, be produced by the National Institute for Health and Clinical Excellence (NICE) in the form of evidence-based procedures and practice which provide a practical means of compliance and of measuring compliance with fundamental standards.

This is the safety-and-quality outcome specification for a healthcare system design – the required outcome presented as a relevant metric in time-series format and qualified by context.  Only a stable outcome can be compared with a reference standard to assess the system capability. An unstable outcome metric requires inquiry to understand the root cause and an appropriate action to restore stability. A stable but incapable outcome performance requires redesign to achieve both stability and capability. And if the terms used above are unfamiliar then that is further evidence of system-design-ignorance.
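For readers meeting these terms for the first time, here is a minimal sketch of the stability-then-capability logic, assuming an XmR-style chart and treating the reference standard as an upper limit (the function names are hypothetical):

```python
import statistics

def xmr_limits(xs):
    """Natural process limits for an XmR chart:
    mean +/- 2.66 x the mean moving range."""
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    centre = statistics.mean(xs)
    spread = 2.66 * statistics.mean(moving_ranges)
    return centre - spread, centre + spread

def assess(xs, reference_standard):
    """Stability first, capability second - in that order."""
    lo, hi = xmr_limits(xs)
    if any(not lo <= x <= hi for x in xs):
        return "unstable: investigate and address the root cause first"
    if hi <= reference_standard:
        return "stable and capable"
    return "stable but incapable: the system requires redesign"
```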
 
1.132 The procedures and metrics produced by NICE should include evidence-based tools for establishing the staffing needs of each service. These measures need to be readily understood and accepted by the public and healthcare professionals.

This is the capacity-and-cost specification of any healthcare system design – the financial envelope within which the system must operate. The system capacity design works backwards from this constraint in the manner of “We have this much resource – what design of our system is capable of delivering the required safety and quality outcome with this capacity?”  The essence of this challenge is to identify the components of poor (i.e. wasteful) design in the existing systems and remove or replace them with less wasteful designs that achieve the same or better quality outcomes. This is not impossible but it does require system diagnostic and design capability. If the NHS had enough of those skills then the Francis Report would not exist.

1.133 Adoption of these practices, or at least their equivalent, is likely to help ensure patients’ safety. Where NICE is unable to produce relevant procedures, metrics or guidance, assistance could be sought and commissioned from the Royal Colleges or other third-party organisations, as felt appropriate by the CQC, in establishing these procedures and practices to assist compliance with the fundamental standards.

How to implement evidence-based research in the messy real world is the Elephant in the Room. It is possible but it requires techniques and tools that fall outside the traditional research and audit framework – or rather that sit between research and audit. This is where Improvement Science sits. The fact that the Report only mentions evidence-based practice and audit implies that the NHS is still ignorant of this gap and what fills it – and so it appears is RFQC.   

1.136 Information needs to be used effectively by regulators and other stakeholders in the system wherever possible by use of shared databases. Regulators should ensure that they use the valuable information contained in complaints and many other sources. The CQC’s quality risk profile is a valuable tool, but it is not a substitute for active regulatory oversight by inspectors, and is not intended to be.

Databases store data. Sharing databases will share data. Data is not information. Information requires data and the context for that data.  Furthermore having been informed does not imply either knowledge or understanding. So in addition to sharing information, the capability to convert information-into-decision is also required. And the decisions we want are called “wise decisions” which are those that result in actions and inactions that lead inevitably to the intended outcome.  The knowledge of how to do this exists but the NHS seems ignorant of it. So the challenge is one of education not of yet more investigation.

1.137 Inspection should remain the central method for monitoring compliance with fundamental standards. A specialist cadre of hospital inspectors should be established, and consideration needs to be given to collaborative inspections with other agencies and a greater exploitation of peer review techniques.

This is audit. This is the sixth stage of a 6M Design® – the Maintain step.  Inspectors need to know what they are looking for, the errors of commission and the errors of omission; and to know what those errors imply and what to do to identify and correct the root cause of these errors when discovered. The first cadre of inspectors will need to be fully trained in healthcare systems design and healthcare systems improvement – in short, they need to be Healthcare Improvementologists. And they too will need to be subject to the same framework of accreditation and accountability as those who work in the systems they are inspecting.  This will be one of the greatest of the challenges. The fact that the Francis report exists implies that we do not have such a cadre. Who will train, accredit and inspect the inspectors? Who has proven themselves competent in reality (not just rhetorically)?

1.163 Responsibility for driving improvement in the quality of service should therefore rest with the commissioners through their commissioning arrangements. Commissioners should promote improvement by requiring compliance with enhanced standards that demand more of the provider than the fundamental standards.

This means that commissioners will need to understand what improvement requires and to include that expectation in their commissioning contracts. This challenge is even greater than the creation of a “cadre of inspectors”. What is required is a “generation of competent commissioners” who are also experienced and who have demonstrated competence in healthcare system design. The Commissioners-of-the-Future will need to be experienced healthcare improvementologists.

The NHS is sick – very sick. The medicine it needs to restore its health and vitality does exist – and it will not taste very nice – but to withhold an effective treatment for a serious illness on that basis is clinical negligence.

It is time for the NHS to look in the mirror and take the strong medicine. The effect is quick – it will start to feel better almost immediately. 

To deliver safety and quality and quickly and affordably is possible – and if you do not believe that then you will need to muster the humility to ask to have the how demonstrated.

6MDesign

 

It is not easy to kick a habit. We all know that. And for some reason the ‘bad’ habits are harder to kick than the ‘good’ ones. So what is bad about a ‘bad habit’ and why is it harder to give up? Surely if it was really bad it would be easier to give up?

Improvement is all about giving up old ‘bad’ habits and replacing them with new ‘good’ habits – ones that will sustain the improvement. But there is an invisible barrier that resists us changing any habit – good or bad. And it is that barrier to habit-breaking that we need to understand to succeed. Luck is not a reliable ally.

What does that habit-breaking barrier look like?

The problem is that it is invisible – or rather it is emotional – or to be precise it is chemical.

Our emotions are the output of a fantastically complex chemical system – our brains. And influencing the chemical balance of our brains can have a profound effect on our emotions.  That is how anti-depressants work – they very slightly adjust the chemical balance of every part of our brains. The cumulative effect is that we feel happier.  Nicotine has a similar effect.

And we can achieve the same effect without resorting to drugs or fags – and we can do that by consciously practising some new mental habits until they become ingrained and unconscious. We literally overwrite the old mental habit.

So how do we do this?

First we need to make the mental barrier visible – and then we can focus our attention on eroding it. To do that we need to remove the psychological filter that we all use to exclude our emotions. It is rather like taking off our psychological sunglasses.

When we do that the invisible barrier jumps into view, illuminated by the glare of three negative emotions: sadness, fear, and anxiety.  So whenever we feel any of these we know there is a barrier to improvement hiding in the emotional smoke. This is the first stage: tune in to our emotions.

The next step is counter-intuitive. Instead of running away from the negative feeling we consciously flip into a different way of thinking.  We actively engage with our negative feelings – and in a very specific way. We engage in a detached, unemotional, logical, rational, analytical  ‘What caused that negative feeling?’ way.

We then focus on the causes of the negative emotions. And when we have the root causes of our Niggles we design around them, under them, and over them.  We literally design them out of our heads.

The effect is like magic.

And this week I witnessed a real example of this principle in action.

One team I am working with experienced the Power of Improvementology. They saw the effect with their own eyes.  There were no computers in the way, no delays, no distortion and no deletion of data to cloud the issue. They saw the performance of their process jump dramatically – from a success rate of 60% to 96%!  And not just the first day, the second day too.  “Surprised and delighted” sums up their reaction.

So how did we achieve this miracle?

We just looked at the process through a different lens – one not clouded and misshapen by old assumptions and blackened by ignorance of what is possible.  We used the 6M Design® lens – and with the clarity of insight it brings the barriers to improvement became obvious. And they were dissolved. In seconds.

Success then flowed as the Dam of Disbelief crumbled and was washed away.

The chaos has gone. The interruptions have gone. The expediting has gone. The firefighting has gone. The complaining has gone.  These chronic Niggles have been replaced by the Nuggets of calm efficiency, new hope and visible excitement.

And we know that others have noticed the knock-on effect because we got an email from our senior executive that said simply “No one has moaned about TTOs for two days … something has changed.”    

That is Improvementology-in-Action.

 

Last week the Ray Of Hope briefly illuminated a very common system design disease called carveoutosis.  This week the RoH will tarry a little longer to illuminate an example that reveals the value of diagnosing and treating this endemic process ailment.

Do you remember the days when we used to have to visit the Central Post Office in our lunch hour to access a quality-of-life-critical service that only a Central Post Office could provide – like getting a new road tax disc for our car?  On walking through the impressive Victorian entrances of these stalwart high street institutions our primary challenge was to decide which queue to join.

In front of each gleaming mahogany, brass and glass counter was a queue of waiting customers. Behind was the Post Office operative. We knew from experience that to be in-and-out before our lunch hour expired required deep understanding of the ways of people and processes – and a savvy selection.  Some queues were longer than others. Was that because there was a particularly slow operative behind that counter? Or was it because there was a particularly complex postal problem being processed? Or was it because the customers who had been waiting longer had identified that queue was fast flowing and had defected to it from their more torpid streams? We know that size is not a reliable indicator of speed or quality.

The social pressure is now mounting … we must choose … dithering is a sign of weakness … and swapping queues later is another abhorrent behaviour. So we employ our most trusted heuristic – we join the end of the shortest queue. Sometimes it is a good choice, sometimes not so good!  But intuitively it feels like the best option.

Of course if we choose wisely and we succeed in leap-frogging our fellow customers then we can swagger (just a bit) on the way out. And if not we can scowl and mutter oaths at others who (by sheer luck) leap-frog us. The Post Office Game is fertile soil for the Ain’t It Awful game which we play when we arrive back at work.

But those days are past and now we are more likely to encounter a single-queue when we are forced by necessity to embark on a midday shopping sortie. As we enter we see the path of the snake thoughtfully marked out with rope barriers or with shelves hopefully stacked with just-what-we-need bargains to stock up on as we drift past.  We are processed FIFO (first-in-first-out) which is fairer-for-all and avoids the challenge of the dreaded choice-of-queue. But the single-queue snake brings a new challenge: when we reach the head of the snake we must identify which operative has become available first – and quickly!

Because if we falter then we will incur the shame of the finger-wagging or the flashing red neon arrow that is easily visible to the whole snake; and a painful jab in the ribs from the impatient snaker behind us; and a chorus of tuts from the tail of the snake. So as we frantically scan left and right along the line of bullet-proof glass cells looking for clues of imminent availability we run the risk of developing acute vertigo or a painful repetitive-strain neck injury!

So is the single-queue design better?  Do we actually wait less time, the same time or more time? Do we pay a fair price for the fair-for-all queue design? The answer is not intuitively obvious because when we are forced to join a lone and long queue it goes against our gut instinct. We feel the urge to push.

The short answer is “Yes”.  A single-queue feeding tasks to parallel-servers is actually a better design. And if we ask the Queue Theorists then they will dazzle us with complex equations that prove it is a better design – in theory.  But the scary-maths does not help us to understand how it is a better design. Most of us are not able to convert equations into experience; academic rhetoric into pragmatic reality. We need to see it with our own eyes to know it and understand it. Because we know that reality is messier than theory.    
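For the curious, this is the kind of result the Queue Theorists mean. Assuming Poisson arrivals and exponential service times (which a real Post Office only approximates), the average queueing time for c separate M/M/1 queues versus one pooled M/M/c queue is:

```latex
\rho = \frac{\lambda}{c\mu}, \qquad
W_q^{\,c \times M/M/1} = \frac{\rho}{\mu(1-\rho)}, \qquad
W_q^{\,M/M/c} = \frac{C(c,\lambda/\mu)}{c\mu(1-\rho)}
```

Because the Erlang-C probability of waiting, C(c, λ/μ), never exceeds 1, the pooled wait is roughly c times shorter at high utilisation – about five-fold with our five counters. But the equations prove without explaining, so let us see it with our own eyes.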

And if it is a better design then just how much better is it?

To illustrate the potential advantage of a single-queue design we need to push the competing candidates to their performance limits and then measure the difference. We need a real example and some real data. We are Improvementologists!

First we need to map our Post Office process – and that reveals that we have a single step process – just the counter. That is about as simple as a process gets. Our map also shows that we have a row of counters of which five are manned by fully trained Post Office service operatives.

Now we can measure our process and when we do that we find that we get an average of 30 customers per hour walking in the entrance and an average of 30 customers an hour walking out. Flow-out equals flow-in. Activity equals demand. And the average flow is one every 2 minutes. So far so good. We then observe our five operatives and we find that the average time from starting to serve one customer to starting to serve the next is 10 minutes. We know from our IS training that this is the cycle time. Good.

So we do a quick napkin calculation to check that the numbers make sense: our system of five operatives working in parallel, each with an average cycle time of 10 minutes, can collectively process a customer on average every 2 minutes – that is 30 per hour on average. So it appears we have just enough capacity to keep up with the flow of work – we are at the limit of efficiency.  Good.
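The same napkin sums in a few lines of Python (a sketch of the arithmetic, nothing more):

```python
n_counters = 5
cycle_time_min = 10                  # average start-to-start time per operative
demand_per_hour = 30                 # measured arrivals: one every 2 minutes

capacity_per_hour = n_counters * 60 / cycle_time_min   # 5 x 6 = 30 per hour
utilisation = demand_per_hour / capacity_per_hour      # 1.0 = at the limit

print(f"capacity = {capacity_per_hour:.0f}/hr, utilisation = {utilisation:.0%}")
```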

We also notice that there is variation in the cycle time from customer to customer – so we plot our individual measurements as a time-series chart. There does not seem to be an obvious pattern – it looks random – and BaseLine says that it is statistically stable. Our chart tells us that a range of 5 to 15 minutes is a reasonable expectation to set.

We also observe that there is always a queue of waiting customers somewhere – and although the queues fluctuate in size and location they are always there.

 So there is always a wait for some customers. A variable wait; an unpredictable wait. And that is a concern for us because when the queues are too numerous and too long then we see customers get agitated, look at their watches, shrug their shoulders and leave – taking their custom and our income with them and no doubt telling all their friends of their poor experience. Long queues and long waits are bad for business.

And we do not want zero queues either because if there is no queue and our operatives run out of work then they become under-utilised and our system efficiency and productivity falls.  That means we are incurring a cost but not generating an income. No queues and idle resources are bad for business too.

And we do not want a mixture of quick queues and slow queues because that causes complaints and conflict.  A high-conflict customer complaint experience is bad for business too! 

What we want is a design that creates small and stable queues; ones that are just big enough to keep our operatives busy and our customers not waiting too long.

So which is the better design and how much better is it? Five-queues or a single-queue? Carve-out or no-carve-out?

To find the answer we decide to conduct a week-long series of experiments on our system and use real data to reveal the answer. We choose the time from a customer arriving to the same customer leaving as our measure of quality and performance – and we know that the best we can expect is somewhere between 5 and 15 minutes.  We know from our IS training that this is called the Lead Time.

On day #1 we arrange our Post Office with five queues – clearly roped out – one for each manned counter.  We know from our mapping and measuring that customers do not arrive in a steady stream and we fear that may confound our experiment, so we arrange to admit only one of our loyal and willing customers every 2 minutes. We also advise our loyal and willing customers which queue they must join before they enter, to avoid the customer choice challenges.  We decide which queue using a random number generator – we toss a dice until we get a number between 1 and 5.  We record the time the customer enters on a slip of paper and we ask the customer to give it to the operative; we instruct our service operatives to record the time they completed their work on the same slip and keep it for us to analyse later. We run the experiment for only 1 hour so that we have a sample of 30 slips, and then we collect the slips, calculate the difference between the arrival and departure times and plot them on a time-series chart in the order of arrival.

This is what we found.  Given that the time at the counter is an average of 10 minutes then some of these lead times seem quite long. Some customers spend more time waiting than being served. And we sense that the performance is getting worse over time.

So for the next experiment we decide to open a sixth counter and to rope off a sixth queue. We expect that increasing capacity will reduce waiting time and we confidently expect the performance to improve.

On day #2 we run our experiment again, letting customers in one every 2 minutes as before and this time we use all the numbers on the dice to decide which queue to direct each customer to.  At the end of the hour we collect the slips, calculate the lead times and plot the data – on the same chart.

This is what we see.

It does not look much better and that is a big surprise!

The wide variation from customer to customer looks about the same but with the Eye of Optimism we get a sense that the overall performance looks a bit more stable.

So we conclude that adding capacity (and cost) may make a small difference.

But then we remember that we still only served 30 customers – which means that our income stayed the same while our cost increased by 20%. That is definitely NOT good for business: it is not going to look good in a business case – “possibly marginally better quality for a 20% increase in cost and therefore price!”

So on day #3 we change the layout. This time we go back to five counters but we re-arrange the ropes to create a single-queue so the customer at the front can be ‘pulled’ to the first available counter. Everything else stays the same – one customer arriving every 2 minutes, the dice, the slips of paper, everything.  At the end of the hour we collect the slips, do our sums and plot our chart.

And this is what we get! The improvement is dramatic. Both the average and the variation have fallen – especially the variation. But surely this cannot be right. The improvement is too good to be true. We check our data again. Yes, our customers arrived and departed on average one every 2 minutes as before; and all our operatives did the work in an average of 10 minutes just as before. And we had exactly the same capacity as we had on day #1. And we finished on time. It is correct. We are gobsmacked. It is like a magic wand has been waved over our process. We never would have predicted that just moving the ropes around could have such a big impact.  The Queue Theorists were correct after all!

But wait a minute! We are delivering a much better customer experience in terms of waiting time and at the same cost. So could we do even better with six counters open? What will happen if we keep the single-queue design and open the sixth desk?  Before it made little difference but now we doubt our ability to guess what will happen. Our intuition seems to keep tricking us. We are losing our confidence in predicting what the impact will be. We are in counter-intuitive land! We need to run the experiment for real.

So on day #4 we keep the single-queue and we open six desks. We await the data eagerly.

And this is what happened. Increasing the capacity by 20% has made virtually no difference – again. So we now have two pieces of evidence that say adding extra capacity did not make a difference to waiting times. The variation looks a bit less, but it is marginal.

It was changing the Queue Design that made the difference! And that change cost nothing. Rien. Nada. Zippo!

That will look much better in our report but now we have to face the emotional discomfort of having to re-evaluate one of our deepest held assumptions.

Reality is telling us that we are delivering a better quality experience using exactly the same resources and it cost nothing to achieve. Higher quality did NOT cost more. In fact we can see that with a carve-out design when we added capacity we just increased the cost we did NOT improve quality. Wow!  That is a shock. Everything we have been led to believe seems to be flawed.

Our senior managers are not going to like this message at all! We will be challenging their dogma directly. And they do not like that. Oh dear!

Now we can see how much better a no-carve-out single-queue pull-design can work; and now we can explain why single-queue designs are used; and now we can show others our experiment and our data, and if they do not believe us they can repeat the experiment themselves.  And we can see that it does not need a real Post Office – a pad of Post It® Notes, a few stopwatches and some willing helpers is all we need.
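And for anyone without Post It® Notes to hand, here is a minimal sketch of the same experiment in Python – a simplified model, assuming clockwork arrivals every 2 minutes and uniform 5-to-15-minute service times, not the original paper-slip data:

```python
import random
import statistics

def post_office(pooled, minutes=480, n_counters=5, seed=42):
    """Replay the experiment. pooled=False is the day #1 carve-out design
    (a dice roll fixes your queue on arrival, no switching allowed);
    pooled=True is the day #3 single-queue design (the front of the snake
    is pulled to the first free counter). Returns each customer's lead time."""
    rng = random.Random(seed)
    free_at = [0.0] * n_counters          # when each counter next becomes free
    lead_times = []
    for t in range(0, minutes, 2):        # one arrival every 2 minutes
        if pooled:
            counter = min(range(n_counters), key=free_at.__getitem__)
        else:
            counter = rng.randrange(n_counters)
        start = max(t, free_at[counter])  # wait behind everyone in 'our' queue
        free_at[counter] = start + rng.uniform(5, 15)   # mean 10 min service
        lead_times.append(free_at[counter] - t)
    return lead_times

for label, pooled in [("five queues (carve-out)", False),
                      ("single queue (pooled)  ", True)]:
    lt = post_office(pooled)
    print(f"{label}: mean lead time {statistics.mean(lt):5.1f} min, "
          f"worst {max(lt):5.1f} min")
```

Run it for a simulated hour or a whole day: the carve-out lead times drift steadily upwards while the single-queue ones stay low and stable – the same pattern as the paper slips.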

And even though we have seen it with our own eyes we still struggle to explain how the single-queue design works better. What actually happens? And we still have that niggling feeling that the performance on day #1 was unstable.  We need to do some more exploring.

So we run the day #1 experiment again – the five queues – but this time we run it for a whole day, not just an hour.


Ah ha! Our hunch was right.  It is an unstable design. Over time the variation gets bigger and bigger.

But how can that happen?

Then we remember. We told the customers that they could not choose the shortest queue or change queue after they had joined it.  In effect we said “do not look at the other queues“.

And that happens all the time in our systems when we jealously hide performance data from each other! If we are seen to have a smaller queue we get given extra work by the management or told to slow down by the union rep!

So what do we do now?  All we are doing is trying to improve the service and all we seem to be achieving is annoying more and more people.

What if we apply a maximum waiting time target, say of 1 hour, and allow customers to jump to the front of their queue if they are at risk of breaching the target? That will smooth out the spikes and give everyone a fair chance. Customers will understand. It is intuitively obvious and common sense. But our intuition has tricked us before …

So we run the experiment again and this time we tell our customers that if they wait 50 minutes then they can jump to the front of their queue. They appreciate this because they now have an upper limit on the time they will wait.

And this is what we observe. It looks better than before, at least initially, and then it goes pear-shaped.

All we have done with our ‘carve-out-and-expedite-the-long-waiters’ design is to defer the inevitable – the crunch. We cannot keep our promise. By the end everyone is pushing to the front of the queue. It is a riot!

And there is more. Look at the lead time for the last few customers – two hours. Not only have they waited a long time, but we have had to stay open for two hours longer. That is a BIG cost pressure in overtime payments.

So, whatever way we look at it: a single-queue design is better.  And no one loses out! The customers have a short and predictable waiting time; the operatives are kept occupied and go home on time; and the executives bask in the reflected glory of the excellent customer feedback.  It is a Three Wins® design.

Seeing is believing – and we now know that it is worth diagnosing and treating carveoutosis.

And the only thing left to do is to explain how a single-queue design works better. It is not obvious, is it?

And the best way to do that is to play the Post Office Game and see what actually happens.

A big light-bulb moment awaits!

 

 

Update: My little Sylvanian friends have tried the Post Office Game and kindly sent me videos of the before (Sylvanian Post Office Before) and the after (Sylvanian Post Office After). They say they now know how the single-queue design works better.