
Air safety: human fallibility and its consequences

  • Jan 26



I wrote this in 2013, as a follow-up to an earlier piece on the Sea King accident on the Indonesian island of Nias in 2005. But its themes apply more generally to air safety, human error, and the consequences of error in this particular field of endeavour.


And it asks the question: “Why don’t we all drive Volvos?”


Whenever something hugely, catastrophically bad happens, particularly when resulting in multiple deaths and injuries, it is normal for people to assume that someone somewhere has done something hugely, monumentally bad which has caused the accident.  It then follows that there is an obligation to find out who was responsible and “get them”.


The problem with organisational accidents of this kind is that often no one individual is responsible for the whole catastrophe: accidents in complex systems result from a causal chain in which holes in one or more defences have coincided.


Prof. James Reason has been researching and writing on this subject for many years, most notably in his book “Managing the Risks of Organisational Accidents,” written ten years ago and based on earlier work stretching back much further than that.

Some examples:

  • Omissions during reassembly are the single most common form of maintenance lapse;

  • Many aircraft maintenance jobs are signed off as complete without a thorough inspection;

  • Human fallibility, like gravity, weather and terrain, is just another foreseeable hazard in aviation.  The issue is not why an error occurred, but how it failed to be corrected.


People everywhere, in all walks of life, forget things, make mistakes, or don’t do as thorough a job as they might.  The difference in aircraft maintenance, of course, lies in the consequences of such shortcomings.


In aircraft maintenance, and indeed in the maintenance of all complex systems, this is a given, and will always be with us.  The important thing, then, is to build systems that take account of normal human fallibility so that such lapses do not go undetected.


Applied to this case, the fact that a technician made an error is, in a sense, neither here nor there.  From the description provided in the report, it appears he improvised and did more or less what he thought necessary to get the job done.


Of far greater significance is the role of those who were responsible for inspecting the work, and most pointedly, for certifying that the required inspection was carried out to confirm that the critical components of the flight control system were correctly installed.


Beyond that, the causal chain leads further up the chain of command and into the broader operational environment, to the question of how much the situation the main actors found themselves in – most importantly, the desire to reach Nias and render assistance to the stricken population – contributed to the unfortunate sequence of events.


Indeed, the account of the events leading up to this accident appears to have all the common hallmarks of such accidents: unclear chains of responsibility, operational pressures, tired workers, the classic incomplete handover to a new shift halfway through a maintenance operation, and the desire to get the job done.


But perhaps this is sounding a bit too much like making excuses for those concerned.  At the core of this sequence of events, we have people taking short cuts and improvising with flight controls.  The consequences for the individuals concerned will be apparent to all very soon.


A second major issue in this case relates to the subject of crashworthiness.  And in relation to this, it is worth keeping in mind the Volvo problem.  If we accept, for the moment, the idea that Volvos are the safest cars available, why is it that not everyone drives a Volvo?  After all, how could anyone put a price on safety?


The obvious answer is that most of us make purchase decisions based on a range of factors, including performance, utility, price, and, for our cars at least, style.  We then must operate and maintain our asset for a number of years, until we eventually choose to replace it with something more suitable at that future time.  It’s no different when buying fighters, submarines, or helicopters.  From the day the purchase is made, there will always be something newer and better on the various performance parameters, including safety.  But as custodians of an asset we have paid for, there is an obligation to get the best out of it for a reasonable period.


Of course there are always improvements that can be made, and in hindsight one can always point to this or that improvement which might have made a difference.  The difficult choice for asset managers is to try to be prudent and rational in deciding which innovations are most worthwhile, trading off effectiveness against affordability.


In hindsight, it is easy to say what should have been done.  The real challenge for asset managers, maintainers and operators, as always, is “What do we do next?”



PC9 Risk      ABN 47250501581