The modern world is definitely in love with its noun-based activities. Each week, a paradigm-shifting approach to some human endeavour is announced with a title like value-based health care or outcome-based education. When I delve into the details, I am generally left confused as to either what they are selling or how it differs at all from what came before.
Regulation is no different. Just plugging “based regulation” into Google yields, on the first page alone, principle-based, results-based, performance-based, outcomes-based and output-based regulatory approaches.
In aviation circles, the noun of choice appears to be performance but you still get a couple of hits on risk, outcome and objective. I must confess, however, that for all the reading I’ve done and the courses I’ve attended, I don’t think I can concisely tell you what these systems actually are or how their regulations are drafted.
I know that we need to move away from the purely prescriptive approach of the past. The aviation environment has gotten too complex for us to think we can regulate how something is done in a manner which considers all the variables.
The Australian airport industry is a great example of this. Not too long ago, all the airports in Australia were either owned or heavily directed by the federal government. They were also either major international airports, regional jet or turboprop airports, or other. Nowadays, airports are operated by a variety of public and private entities, and we have rapidly expanding and brand-new airports as well as stagnant and declining ports. The industry is dynamic, volatile and complex.
The Move from Prescription
By now, you might have noticed that I can be a bit pedantic, especially with words. So I went on a hunt for a definition of prescription which suited my understanding of what prescriptive meant in the context of regulation. Perhaps unsurprisingly, the hardline definitions don’t quite match general usage.
But first, let’s establish the common ground that regulation means rules. We need rules to govern and by rules, I mean an explicit, documented expectation to which people will be held accountable. Those rules can be open (e.g. don’t crash) or they can be narrow (e.g. you must not operate an aircraft unless … (1) …. (2) …. (n)). At the very least, that is the minimum any model of government I am familiar with involves.
So when we say that, traditionally, aviation regulation has been prescriptive, what do we mean? Those hardline definitions talk about setting rules and that creates a bit of a tautology – rule-based rules.
There was one definition that I found which plays in nicely to a recent epiphany I had. Prescription is “not subject to individual determination”.
Context is Everything
The moment of clarity I had the other day was that the question of regulatory approaches was about context. Specifically, the level of contextual variability considered by the rules put forward.
My mental model of this concept was a scale of context ranging from universal, where no context is considered other than the industry as a whole, to individualistic, where each organisation’s or individual’s situation must be considered.
Now, I don’t think this is earth-shattering just yet. At one end you have the ol’ prescriptive approach of “this is how we will permit you to commit aviation” and at the other, you have Safety Management Systems (SMS) and all that that entails. What I do think is pretty cool about looking at it this way is recognising that it is a continuum. It’s not two sides of a coin or two pillars supporting a bridge.
We will need regulations utilising slightly different approaches along the entire continuum. We will never get to the “fully performance-based” regime because, I think, there will always be a need to set some universal regulations. Some examples include the need for licensing (maybe not the actual requirements for each licence but the basic requirement for one and the process) and for standardisation of information (such as visual cues, acronyms and operational data).
In the aerodromes world, we will always have the need to standardise (i.e. make common) the markings and lights on an aerodrome. We might not have to standardise all aspects (size and font) but fundamental cues like layout and colour will need to be consistent across the industry.
I think the above example shows what I’m talking about but I might be missing one piece of the puzzle.
What About Risk?
That’s actually quite easy. Risk must be the basis of all decisions to regulate. Without a risk, why set a rule? I think, in the past, regulations have always been written to address a risk, real or perceived. The problem has been that sometimes, or often, these risks are not explicit in the documented regulation. In fact, sometimes they are not even implicit; they are downright opaque.
The industry needs to get better at including a statement of the risk that a regulation is designed to address. That way, should a new method or approach to addressing that risk be discovered, an avenue exists to see it implemented without the bureaucratic mess we often see now.
The next challenge is to take this concept and use it to make decisions. Specifically, how to use risk to decide on a regulation’s location along the continuum and what a regulation at point x on that continuum looks like. I shall keep pondering these questions and get back to you.
Recently, I sat in on a presentation on a subject I know quite a bit about. I like doing this as it is typically good to get a different perspective on a familiar subject. In this instance, it wasn’t so much the actual subject matter but a couple of associated topics which got stuck in my mind.
The actual discussion point being put forward by this presenter was that instrument approach plates provide very little or no information to pilots on the level of clearance provided to them between their flight path and ground obstacles.
From this, the concept which got stuck in my mind was trust. Pilots have no choice but to trust, almost blindly, that the instrument approach procedure designer has done their job and provided sufficient clearance for them to arrive safely on the ground.
This, of course, is not earthshakingly new or insightful. There are plenty of such relationships in aviation; ATC to pilot and maintenance engineer to pilot are two other examples. What got stuck in my mind is that we rarely talk this concept up (i.e. promote it) as an essential part of our culture, one on which we rely every day. We also tend to see it as a frontline operator thing and not a management, regulator or social concept.
Also recently, I was speaking to an aerodrome inspector from another country about our regulatory approach to aerodrome certification in Australia. She was quite surprised by Australia’s system, which can see a large certified aerodrome make major changes to its facilities (e.g. build a new runway) without approval from the regulator.
In the aerodrome sphere at least, Australia has created a system by which the operator of a certified aerodrome has earned the regulator’s trust. CASA has granted that certificate in the belief that the operator has the ability to make safety decisions on their own. This is facilitated by safety management system regulations and CASA’s approach to surveillance. Once an aerodrome operator has their certificate, they are, in many ways, masters of their own destiny.
Nothing should or does stop CASA, or any other interested party, from asking a certified aerodrome operator to provide an account of their actions and decisions. And through the operator’s SMS, this should be easy to provide.
Not all parts of Australia’s aviation system are structured this way, but I think it is the way of the future. To get there, we need to continue to work on SMS as a concept and a practice, as well as reforming our regulations to focus on the decision-making process rather than prescriptive requirements.
Trust is such an essential part of our system from the frontline to the halls of government, but it is so rarely discussed in simple and plain terms. It might be time to go back to basics and discuss with industry participants what we are willing to trust and in whose hands. From there we could structure our regulatory system appropriately. It might be a bit of a dream at the moment but I think we’ll get there.
* I might have muddled that one up. It’s been a few months since I’ve watched Spider-Man.
Recently, I have felt like I’m in danger of becoming complacent with the bedrock of my chosen field. I’ll admit that in the past, I’ve been fairly vocal about this bedrock’s limitations and mantra-like recitation by aviation safety professionals the world over. But the recent apparent abandonment of this concept by one of the first Australian organisations to go “all-in” on it, gave me cause for reflection.
But it wasn’t a critical review of “Reason” that was on my mind. Instead, I started to think about whether we had embraced it enough to allow us to move on.
For me, being a “cheese-head” has just been part and parcel of being in the aviation safety game. Human factors was mother’s milk during my first year of uni with CRM and organisational accidents the solids of second and third years. From there, I’ve continued along the modern system safety trajectory of culture, SMS and so on. I’ve never known it any other way.
But how has the rest of the world taken to it? The general public, I mean. The great-unwashed ;).
To examine this, I thought I’d look at MSM coverage of aviation accident investigations in Australia. So, I took to Google and searched for pages related to three accidents in the days following the release of each related accident investigation report. I was looking at how the news reported the “causes” of the accident.
The three accidents I chose were:
- Lockhart River – Australia’s worst air disaster in the last 40 years or so and an investigation I knew did follow the accident causation chain right up to the regulator.
- Pel-air – The trigger for all the current controversy and, in opposition to the above, a report that is generally said not to follow the causation chain beyond the frontline operators
- R44 @ Jaspers Brush – The most recent investigation report to be issued which would have received media coverage and also a relatively small accident in which organisational factors might be hard to identify.
In Lockhart River’s case, I could really only identify three stories in the immediate aftermath of the investigation report’s release. One from the SMH, one from ABC News and one from Lateline (ABC as well). Overall, I thought the reporting was quite good. All three pieces discussed multiple contributory factors and generally shied away from the word “cause” – except for Tony Jones’ intro to the Lateline piece which was actually more concerned with the regulator’s role. However, the headlines for the SMH and ABC News stories were old school all the way – “Pilot error blamed for Lockhart River (plane) crash” – I guess we can blame the sub-editors for these ones.
For Pel-air, Google yielded only one real MSM link, with a couple of others stemming more from the 4Corners story shown a couple of days after the report’s release and many coming from aviation industry outlets. The Australian‘s story was fairly consistent with the characterisation of the ATSB report in that it focussed on the crew’s actions, but it did briefly mention more upstream factors. The other stories were quite critical of the ATSB report for its perceived lack of analysis beyond the Unsafe Acts and Local Workplace Factors levels.
In the final accident, the two MSM stories I found (The Australian & Fairfax Media) put a real emphasis on the “what happened” aspects and ventured little beyond that. In this case the operation was private and, I’m sure, some would argue that “Reason” doesn’t apply. The fleet grounding and safety recommendation for a change to the fuel tank were mentioned.
At the very least, I’m sure more could have been said about the human factors aspects related to the event. And more could definitely be said about the aircraft and crashworthiness standards for aircraft. As I said a couple of weeks ago, no man is an island. Even private pilots, and even aircraft designers and manufacturers. Imagine the impact this investigation could have had if its analysis had shown deficiencies in how aircraft structural certification processes consider post-crash fire.
I don’t know whether it would have, but if there is no such deficiency, why then do we need to change the R44’s tanks? This is not a high-level systemic fix. What is there to stop another aircraft type from having this problem in the future?
Okay, I’ll admit that we can’t go on a mass analysis expedition with every accident investigation and we have to select those investigations that have the potential to yield the greatest safety benefit. As an idealist, I do have trouble with the finiteness of the real world, even though I have to deal with it in my day job.
But where does this leave “Reason”?
Well, the Lockhart River articles were (save for the subs) quite heartening and even the Pel-air coverage (overall) tried to encapsulate the complexity of an aviation system breakdown. I guess the disappointment is more the ATSB report which, as we saw with Lockhart River, can drive the media coverage.
I’d like to see the organisational accident or system failure approach remain fundamental to all aviation safety analysis and investigation. In fact, it should be extended to try to capture the non-linear, close-coupled nature of complex socio-technical systems like aviation. The “Post-Reason” world may be upon us but I don’t think it is based on the approach offered by the ATSB’s chief:
If we want to go to Professor Reason’s model of investigation—though we think we have come a long way since Professor Reason’s initial work in the 1990s—there is error and there is violation. While the focus of our investigations is on error and understanding error—how to prevent it, how to detect it and how to deal with its consequences—there was also in this case an element of what, in Professor Reason’s model, would be viewed as violation; and that is principally the responsibility of the regulator.
Reason’s error types fit well within his larger model and, to be honest, I don’t see the ATSB-error/CASA-violation distinction. There’s a whole other blog post on that one!
I still quite like the distinction I made in my other post on this subject, where I considered the very high-level intent of the operator. If the intent of the operator was to get people safely from A to B on their aircraft, it falls within both CASA’s and the ATSB’s courts. While the operator may intend to break a specific regulation or company policy, their overall intent remains getting their pax on the ground. If the intent of the operator is anything else, then it actually becomes a criminal matter for the police and OTS.
To analogise where I think we are at, “Reason” was a mud hut for safety professionals. It gave us a basic structure and shelter to develop the field a little more. Unfortunately, we’ve outgrown the hut and we need something more. Maybe a hard floor, doors, windows, who knows? There are quite a few options on the table to take us to the next level; it’s only a matter of time before someone puts it together in a package as neat as the “Reason Model” was. It’s an exciting time to be a safety professional.
I’ve been out in the “real” world for the past six months or so and in that time, my thinking on risk management has changed a little bit. So here it comes, a confession…
I have been using a PIG (a probability-impact graph) recently and I feel its use has probably helped with effective management of overall risk.
How can that be? Don’t you despise PIGs with every fibre of your being? Well, yes. I still do but let me provide two little points which might put this confession in context.
Firstly, the company I work for doesn’t rely solely on the product of impact and probability to assess risk. They also score risk on maximum foreseeable loss. That scale puts most aviation activities into the highest risk bucket straight away. No complex probability calculations required.
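To make that first point concrete, here’s a toy sketch of scoring a risk both ways. The scales, dollar thresholds and example numbers are my own illustrative assumptions, not my company’s actual framework:

```python
# Toy sketch: a PIG-style score alongside a maximum-foreseeable-loss (MFL)
# rating. All scales and thresholds here are invented for illustration.

def pig_score(probability: int, impact: int) -> int:
    """Classic probability-impact grid: both factors on a 1-5 scale."""
    return probability * impact

def mfl_bucket(max_loss_dollars: float) -> str:
    """Bucket a risk by its maximum foreseeable loss, ignoring probability."""
    if max_loss_dollars >= 100_000_000:
        return "extreme"
    if max_loss_dollars >= 10_000_000:
        return "high"
    if max_loss_dollars >= 1_000_000:
        return "moderate"
    return "low"

# A runway excursion: middling on the PIG, but the worst foreseeable
# outcome (hull loss, fatalities) drives the MFL bucket straight to the top.
print(pig_score(probability=2, impact=4))   # 8
print(mfl_bucket(250_000_000))              # extreme
```

No probability estimate is needed for the second score, which is exactly why most aviation activities land in the top bucket straight away.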
The second point is that no business is solely interested in safety. Now, I know that is extremely obvious and on some level I knew it, but I don’t think I appreciated it much when I was Mr “All-About-Safety”. That’s not the way it is anymore; I have other things to think about and my superiors expect me to provide a picture of the overall operation at my airport.
So, now that the business knows that aviation is one of its highest risks, what now? If it is “red” all the time, how do you manage that?
Okay, now we are back on track. How does one assess the complex safety environment which exists within the aviation risk of the business? Well, I’ve been exploring the how-to-do-it bit on here for a while and I’m getting closer to tying it up but lately I’ve been thinking more about how this fits into the bigger picture.
The best I can come up with is to propose that this type of risk analysis be categorised as intra-risk analysis.
I have been trying to avoid segregating safety risk analysis from general risk analysis but in order to progress the concepts I’ve been working on within my real work, I feel the need to put the whole grand unifying theory of risk to one side.
PIGs and the like have a strong foothold in existing risk management frameworks and pragmatically, it makes sense to create a space in which these concepts can develop.
That’s what I’m going to run with for now. I’ll have my generally PIG-based risk register for the entire operation and, within it, an intra-risk register for aviation safety using a framework based on the concepts of criticality, exposure and control.
Using this approach, I hope to develop a way of informing senior managers what the picture of risk is within that large “red” box labelled aviation and how they can be assured that the risk is both acceptable and as low as reasonably practicable.
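The nesting might look something like this. The field names, scenarios and ratings are purely illustrative, not my actual register:

```python
# Sketch of the nesting idea: one "red" aviation entry in the company
# PIG-based register, with an intra-risk register hanging off it.
# All names and values here are invented for illustration.

company_register = {
    "aviation operations": {
        "pig_rating": "red",   # always lands in the top bucket
        "intra_risk_register": [
            {"scenario": "runway excursion",
             "exposure": "high", "criticality": "medium", "control": "effective"},
            {"scenario": "wildlife strike",
             "exposure": "medium", "criticality": "low", "control": "partial"},
        ],
    },
}

# Senior managers see one line; the safety detail sits one level down.
entry = company_register["aviation operations"]
print(entry["pig_rating"], len(entry["intra_risk_register"]))   # red 2
```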
I’ll let you know how it goes.
I’ve been a bit out of the loop over the past couple of months as I try to get a handle on my new job and the (almost overwhelming) responsibility that goes along with it. But I can’t ignore the action over at the Federal Senate’s Rural and Regional Affairs and Transport References Committee’s inquiry into Aviation Accident Investigations.
Before I comment, some disclaimers – I’m not going to comment on the particulars being discussed at the Senate hearings. While I worked with many of those involved, I never worked on anything associated with the accident event (before or after), but if I were to comment, it might look as though I have inside information, am bearing a grudge or am just being a stirrer. I don’t, I’m not and maybe just a little ;).
I do, however, want to comment on the philosophy surrounding some of the issues at hand.
The particulars of the situation on which I would like to comment are, basically, that an accident occurred and the resulting investigation focussed on the operating crew. In the 15th February hearing, two comments by Senator Fawcett struck me as warranting further examination. They were:
One thing the committee wants to put on the table upfront is we accept the contention by CASA that there were errors made on behalf of the pilot in command of the flight. There seems to have been some concern raised that this inquiry is all about exonerating an individual and shifting blame elsewhere. That is not the case. We accept the fact that in the view of some it was even a violation as opposed to error. (p. 1)
With the concept of a systems approach, whereby not only the operator and the piloting command but also the regulator are key parts of the safety system… (p. 3)
For all the other problems we seem to be having in this scenario, we still seem to be stuck on the basics.
Part of a Complex System
Senator Fawcett’s second quote there, and numerous others throughout the course of the hearing, show that he is quite familiar with the concept of a safety system, but he, and I think a large part of the industry, can’t escape the concept of personal responsibility associated with criminal law.
The language of “exonerate” and “shift blame” suggests strongly that the old approach to investigations and safety improvement is still alive. We seem to have slid back into the days of pointing the finger at the front-line operator, stamping the label of “cause” upon them, punting them into touch, dusting our hands and declaring the world a safer place.
Okay, I’ll admit that this could be a harsh analysis of what is possibly a “throw-away” line but the language could indicate a deep-seated belief in the very concepts we are supposed to have left behind. I’m also not singling out Senator Fawcett. I think we all fight these traditional ideas, conditioned within us since an early age. How many of us still use the word “cause” despite its often misleading level of direct influence and independence?
Exonerate, Exshmonerate; Blame, Shame.
It’s a hard thing to let go of but, I think, we have to let go of the criminal view of personal responsibility when we are dealing with accidents in complex socio-technical systems, such as aviation. I’m just going to come out and say it:
No one, who participates in the aviation system, should ever go to jail, be fined or sanctioned as a criminal. Ever. Regardless of the error, violation, failing, mistake, slip, lapse, omission, commission, faux-pas, foul-up, whatever.
If we accept that aviation is indeed a system – a complex set of individuals, machines, procedures, tools, organisations – all working to achieve the objective of moving stuff from A to B – then no single part of that system can be singled out as having “failed”.
As a system there are, or should be, feedback loops. Sub-systems for checking and re-checking. There should be self-correction. If one part has failed, more parts have failed; in fact, the whole system has failed.
If you are going to blame one, you need to blame all. Jail one, jail all. Fine one, fine all.
Whoa Warden, Don’t Open that Door Yet
I am definitely not advocating some criminal reform agenda that would see society’s jails shut-down and personal responsibility disappear. I am arguing for a clear distinction between how we view undesirable events within the aviation endeavour and in society at large. I don’t think it is appropriate to look at the aviation industry as a sub-set of society and apply the same thinking.
The big differences between aviation and society are choice and intent. Pilots, ATC’ers, LAMEs, AROs and many others choose to be part of the aviation system with the intent of achieving the industry’s objective of moving stuff from here to there safely.
Society on the other hand is, really, all encompassing. By definition, we don’t really have a choice to join. You could run off into the woods, build a log cabin and live as a hermit but you’d still be a part of society in the broadest sense and still, more importantly, be subject to various laws governing human relationships.
What to do with a broken part?
A while back the industry tried “no-blame” and it didn’t work. I think it was because the concept suggested there would be no ramifications, no consequences to behaviour which contributed to undesirable outcomes.
And this, of course, is untenable. If the system experiences an undesirable state or outcome, it should be able to correct its performance.
The response was to abandon “no-blame” as going too far but I think the problem was that the concept of blame actually ceases to have any meaning within a safety system approach. Much like one cannot meaningfully discuss events “before” the big bang, because time began at the big bang.
So What’s the Lesson?
The tiny lesson I’m trying to get at here is that we need to try harder to fully integrate the system approach into our thinking. It’s not so much that we can’t identify frontline operators as contributors to accidents but that there will (not might) be more to the story. Someone else, actually numerous people, will have contributed, in every case.
And in taking this approach, in identifying as many contributory factors as possible, the actions we take with respect to those people, tools, equipment, etc. will be and be perceived as appropriate. It will support actions like suspending a licence, grounding a fleet or withdrawing a certificate.
Without it, homing in on a frontline operator and booting them out of the system will never look justified, regardless of how necessary it is.
PS – Criminal Offences Against Aviation
There should still be criminal offences relating to aviation. For example, morons who shine lasers at aircraft should be tried as criminals because they have not chosen to be part of the aviation system and do not intend to support its objective. The same goes for those who wish to use civil aviation as a weapon.
I can’t lie to you. I have been turning myself inside out trying to get a handle on risk evaluation in the aviation safety sphere for close to five years now and I still don’t feel any closer to an answer.
And I say “an” answer and not “the” answer. Since you are always assessing risk in terms of your objectives, there can and will be multiple approaches to assessing the risk of the same scenario depending on whether you are considering your safety, financial or legal objectives.
The Perpetual Problem?
The “problem” with aviation safety risk evaluation popped its head up again for me in a recent discussion. Without going into too much detail I was discussing the impact of an aerodrome defect with a non-aviation colleague.
We both identified safety as the key impact area and then our company process required us to assess the impact according to a scale (not quite a matrix ;)). We couldn’t escape the top box, the highest level category, because as soon as the safety of an aircraft is called into question, you can’t escape the possibility of complete disaster.
When pondering this problem, I keep coming back to the idea that aviation, from a safety perspective, is inherently perilous. You can’t commit aviation without being “all in”. As such, the risk-level question tends to end up as a probability continuum from negligible impact to catastrophe.
Alright, let’s stop there. I’m pretty sure I’ve discussed this stuff before. So, let’s take it as read that I am, essentially, only interested in the probability of the worst case.
That simplifies things, doesn’t it? Unfortunately, my recent readings of Dekker and Taleb have primed me for skepticism when complex systems appear simple. In the last BT post I wrote, I did highlight that a bow-tie diagram is only ever a model of reality. I think it would be inappropriate to evaluate it using an approach more complex than the model itself.
How to Murder an Analogy
When you want to see something in the dark, it is best not to look directly at it. Due to the biology of the eye, low-light receptors (rods) are more prevalent in the area of the retina away from the central focal point (the fovea). Therefore, you will better see an object in the dark if you aren’t looking directly at it!
I’m proposing something similar. If you want to evaluate the risk of the bow-tie scenario, don’t look at the top event – look around the top event.
Around the top event, I consider there to be three primary things – threats, consequences and controls (including defeating factors and secondary line controls).
Therefore, I propose we assess a BT based on:
- our exposure to the threats;
- the criticality of the consequences; and
- the effectiveness of the controls.
Exposure is a common word in the risk management game and I really like it. In fact, I think it is underused. What I like about it is the implicit idea that risk exists everywhere, at all times, but that the context in which we are operating may vary.
If you take my boring predictable runway excursion BT example, those threats really do exist at all airports. All aircraft have the potential to carry out an unstable approach, all runways have the potential to be contaminated but not all contexts have the same exposure to these threats.
Why not use probability or likelihood?
Well, probability tends to convey an air of accuracy and mathematical legitimacy which is rarely justified. Likelihood, not so much, but it is often tied to the occurrence of a discrete event. Linguistically, for me at least, exposure is better attuned to both discrete events and persistent conditions.
So, step one is to assess one’s exposure to the identified threats.
On the other side of the top event, let’s look at the criticality of the consequences. In an earlier post, I had used the term influence to encompass the concepts of pathways and proximity of events to the final condition (absolute destruction). I’ve had a rethink and today, I’m going with criticality.
Think of the relationship between each consequence and the potential final outcome. Are there many ways this situation can go pear-shaped? Or is this consequence a LOL-cat’s whisker away from disaster itself?
Step two is to assess the criticality of the outcomes.
Once you’ve plugged the holes with your controls, identified new holes, plugged them up again and so on, you will need to sit back and critically assess the effectiveness of those controls.
Without a BT diagram, this could get very hard but the diagrammatic approach can help and some software makes things even easier. Once you have your measure of effectiveness, I think you’ve got all you need to make an assessment of risk, all without actually assessing the top event.
Step three, assess the effectiveness of controls.
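To pull the three steps together, here’s a minimal sketch of how those ratings might be recorded against a bow-tie, deliberately without committing to a way of combining them. The rating scales and example values are made up:

```python
# Sketch: recording the three proposed ratings (exposure, criticality,
# control effectiveness) against a bow-tie scenario. Scales (1-5) and
# example entries are invented for illustration; how to combine the
# ratings into a single risk picture is left open.
from dataclasses import dataclass, field

@dataclass
class BowTieAssessment:
    top_event: str
    threat_exposure: dict = field(default_factory=dict)        # threat -> 1-5
    consequence_criticality: dict = field(default_factory=dict)  # consequence -> 1-5
    control_effectiveness: dict = field(default_factory=dict)    # control -> 1-5

    def weakest_controls(self, threshold: int) -> list:
        """Flag controls rated at or below a threshold for attention."""
        return [c for c, r in self.control_effectiveness.items() if r <= threshold]

bt = BowTieAssessment("runway excursion")
bt.threat_exposure["unstable approach"] = 4          # high exposure at this airport
bt.consequence_criticality["veer-off into RESA"] = 2
bt.control_effectiveness["go-around policy"] = 3
bt.control_effectiveness["runway friction testing"] = 1
print(bt.weakest_controls(threshold=2))   # ['runway friction testing']
```

Even without a combination formula, a structure like this lets you interrogate each side of the top event on its own, which is the whole point of looking around the top event rather than at it.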
How to actually assess exposure, criticality and effectiveness and how to put them together are questions I have not yet answered. But the brain matter is continually churning and as soon as I know (or think I know), I’ll post it here.
1. I’m sorry. I’ve been reading a few obtuse academic texts lately and perhaps the language is rubbing off on me.
When I joined the aviation safety regulator I was introduced to the concept of systems-based auditing (SBA). Before this I had been carrying out aerodrome inspections and I thought becoming an Aerodrome Inspector for the government was going to be more of the same. How wrong I was! Even after four years, my concept of systems-based auditing is still evolving.
I am coming to discover, and it seems everything I read will attest, that most things in life tend to be more complex than we initially think – SBA is no different.
For those not familiar with the concept, let’s look at the features and benefits of this approach.
SBA is often compared to its predecessor, which I will call product-based auditing. This approach involved comparing examples of finished products to the standards laid out. The image often conjured up is of an auditor, checklist in hand, ticking off the compliant aspects of the product.
The problem with this is, of course, that it can only ever make an assessment of the selection of products observed. The auditor can, perhaps, infer that future products will also meet the standard involved, but they haven’t really assessed that to a point where a true judgement can be made.
And that is what SBA sets out to do. By looking at the system that produces the product, the auditor is making an assessment of the operator’s ability to consistently achieve the required output standard.
This approach comes to the fore when systems are brittle but the environment hasn’t yet put pressure on the system’s weaknesses. The products have all met the requirements so far but you can see that problems will arise if just one small thing falls out of place – say a key person leaves the company. It also works really well for systems that are rarely put into action – such as an aerodrome emergency response plan at a small aerodrome.
When I’ve discussed systems with colleagues before, we have sometimes descended into a semantic quagmire. Depending on one’s field, education, experience, what constitutes a system differs in the mind. Sometimes a definition can be so restrictive that discussion is pointless and others are so loose that discussion is impossible.
Let’s aim for the middle, at something useful.
A system is some form of endeavour that seeks to convert some input(s) into some predetermined output(s) in a consistent and predictable manner.
Now, that could be a similar definition for a process, task, element, activity, etc., but I am going to stick with system to cover any single change of state, or collection of human or mechanical changes of state, which results in an input becoming an output.
You may note that I haven’t defined the scale of the “endeavour”. It could be making toast or it could be shipping 10,000 new Furbies from their factory in Taiwan to Walmart stores on the west coast of America. The scale of the system is simply that which is to be and can be audited.
Yes, you could define an airline system with a whole bunch of inputs, a single box of action and then one output, the safe transport of people and cargo from A to B. But it wouldn’t really be possible to audit that in one go and make a judgement as to the ability of the system to consistently produce the desired output.
We need, therefore, to break down the overall system into smaller chunks to make auditing manageable. And we will need to consider the interrelationships between these chunks. So, how can we do this?
In Need of a Model
There are plenty of system models out there but I’ve grown fond of the SADT or IDEF0 approach. I’ll say fond because we’ve only really flirted so far; maybe a quick dance but definitely not a slow dance or any real alone time.
I stumbled on to this model initially through some reading on the Structured Analysis & Design Technique (SADT). I can’t say I remember much about the actual technique. It was the graphical representation of a system that caught my eye. So in the beginning, it was all about looks.
I did some more reading into the graphical approach and I’ve made a few tweaks to suit the socio-technical systems of which an aviation organisation is likely to consist.
Now, let me introduce you…
I’ll go through the components of the model first and then give you an example to help explain.
- Inputs – These are the things which are converted into the outputs. They are fundamentally changed by the process and become a constituent part of the output.
- Resources – These are things used within the system to transform the inputs into the outputs. They are not changed by the process but while they are being used in one system they are not available for use by another system or instance of the same system.
- Controls – These things guide the system in its process. They too are not changed by the process but they can be used by multiple systems or instances.
- Outputs – These are the primary products of the system. They are the system’s raison d’être and, in a larger sense, should meet the objective of the designer of the system.
- By-Products – These are also outputs of the system but they are not necessarily made up of the inputs, nor are they the primary objective of the system. This feature is something I’ve only just thought of and doesn’t appear in the SADT approach. It is a significant tweak I’ve included to help with the amalgamation of systems into a larger system – more on this later.
Let’s get to a simple example: Baking a cake.
Here’s the breakdown:
- Inputs – The ingredients – flour, sugar, eggs, etc. The ingredients are fundamentally changed by the system – they become the cake.
- Resources – The bowls, spoons, cups, cake tin, oven, the chef, etc. These things are used within the system but do not become part of the cake. However, while they are part of the system, they can’t be used to bake another cake or cook something else.
- Controls – The recipe, procedures, etc. Typically, controls are data or information. They can be used over and over again by many people at once.
- Outputs – The cake. Hopefully, it meets our objective and we weren’t trying to bake a potato.
- By-Products – This one is a bit harder in this simple example but let’s say that this system is part of a professional kitchen. The chef will probably be trying to improve their skills and would be taking notes on the performance of the system. These notes are an output of the system but are not made up of the inputs. They are however, an important part of the larger kitchen system.
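The cake example can be sketched as a small data structure. This is just my own illustrative encoding of the tweaked model – the `System` class and its field names are my invention, not part of SADT or IDEF0:

```python
from dataclasses import dataclass, field

@dataclass
class System:
    """A hypothetical sketch of the tweaked SADT model: inputs are
    consumed, resources are borrowed, controls are shared, outputs are
    the point, and by-products are everything else produced."""
    name: str
    inputs: list        # fundamentally changed; become part of the output
    resources: list     # used exclusively while the system runs, then released
    controls: list      # guide the process; shareable across many systems
    outputs: list       # the primary products, the system's raison d'etre
    by_products: list = field(default_factory=list)  # secondary outputs

bake_a_cake = System(
    name="Bake a cake",
    inputs=["flour", "sugar", "eggs"],
    resources=["bowl", "spoon", "cake tin", "oven", "chef"],
    controls=["recipe"],
    outputs=["cake"],
    by_products=["chef's performance notes"],
)
```

Nothing profound happens in the code itself; the value is that each of the five arrow types gets an explicit slot, which pays off once you start wiring systems together.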
Putting It All Together
Two big selling points of the SADT approach are the ability to link systems together and to fold them up or drill down into super- or subsystems, as desired.
In my academic reading at the moment, I’m coming across quite a bit of push back against reductionism and modelling. It is thought that the complexity of the world is being masked by oversimplification and that people are failing to consider the bigger picture.
The SADT approach is still reductionist in a sense, but when you start tying systems together, you can begin to identify interrelationships and dependencies between them. Modelling will always be a simplification; it’s about striking a balance.
I haven’t yet dived into mapping out a large super-system, like an aerodrome, but it is on my to-do list. What I have done so far has highlighted the model’s ability to capture some of the complexity in managing a large socio-technical system. I found that it didn’t take long to see how outputs from one system become inputs, controls or resources for another.
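That linking step can itself be sketched mechanically. Assuming the same kind of simple record for each system as above (my own invention, not part of SADT), a few lines of Python can scan a collection of systems and report wherever one system's output or by-product turns up as another's input, control or resource:

```python
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    inputs: list
    resources: list
    controls: list
    outputs: list
    by_products: list = field(default_factory=list)

def find_links(systems):
    """Return (producer, item, consumer, role) tuples wherever an output
    or by-product of one system feeds another system in some role."""
    links = []
    for producer in systems:
        for item in producer.outputs + producer.by_products:
            for consumer in systems:
                if consumer is producer:
                    continue
                for role in ("inputs", "controls", "resources"):
                    if item in getattr(consumer, role):
                        links.append((producer.name, item, consumer.name, role))
    return links

baking = System("Bake a cake", ["flour", "eggs"], ["oven", "chef"],
                ["recipe"], ["cake"], ["performance notes"])
review = System("Kitchen review", ["performance notes"], ["head chef"],
                ["review checklist"], ["updated recipe"])

print(find_links([baking, review]))
# The by-product of baking (the notes) feeds the review system's inputs
```

In a real audit the matching would be a judgement call rather than a string comparison, but even this toy version shows how the by-product slot lets a secondary output of one system become a first-class arrow into another.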
What’s in the Box?
Let’s call time there. This post is already getting too long and I’m still, very much, in the middle of processing these ideas into something useful. As the subheading notes, I haven’t really looked inside the box. We’ve got arrows going in and coming out, but what actually happens in the box is still a mystery.
I also haven’t really identified a way of assessing the various components of the model, so you can’t do much with it yet.
I’m in danger of becoming a tease on this blog. I’m always signing off with a promise of more to come. Please remember that this blog is more of a journey than a destination and I thank you for coming along for the ride.
More to come…