Once upon a time, I went around the countryside auditing aerodrome safety management systems and dutifully asking SMS-related questions of all and sundry. It didn’t matter who they were, I asked them what they knew about the aerodrome’s SMS, how they managed risks, and what they did to make sure everything was being well managed. I didn’t ask everyone the exact same questions, like asking the guy mowing the grass how he ensured enough resources were available to manage safety, but I did bang the SMS gong at anyone who was around or would listen.
I’m not so sure that was the right approach.
Turning the Tables
Now that I am an aerodrome manager myself, I might be seeing things differently or at least from a slightly different perspective. It came home to me a couple of months ago when an auditor made a slightly adverse comment about my recently re-vamped SMS*. The auditor suggested that my SMS was deficient because, in part, when questioned, my aerodrome reporting officers knew nothing about it. My immediate response was: why should they?
This was an idea I had brewing in the last months of my time with the regulator and it’s something I have put, or am trying to put, into practice. My idea was that safety management probably wasn’t pitched at the right level of an organisation (or maybe it was pitched at too many levels). It might have been better to call them Safety Governance Systems. After all, so much of an SMS hinges on the Accountable Executive, with risk management and assurance the two key functions of an SMS, both guided by and reporting back to the upper echelon of the organisation.
Now don’t get me wrong. I’m not suggesting that frontline operators can whizz around without a care in the world ignoring risks and having no accountability themselves. Nor am I advocating an industrial cleansing of safety managers across the globe.
I’m suggesting that an SMS should serve the needs of the executive of an organisation and, as it flows down to the frontline, it should become embedded into the everyday activities of the organisation’s do-ers.
I like to think of an SMS like a circulatory system. It sits, generally, deep within the body. On the outside, you see very little – the odd streak of blue, a pulse. But as you go deeper, you see capillaries, veins, arteries, major blood vessels and finally the heart. The fingers don’t really need to understand the complex firing of the heart muscles to use the blood that flows through the system.
Much like the front-line operator doesn’t need to understand all the machinations of an SMS to use the processes and procedures developed by the organisation to achieve and measure a safe operation. If the SMS is working, those processes will mitigate/control/address the operation’s safety risks and will include feedback loops for assurance. To the front-line operator, safety becomes part of the way we do things around here.
Culture Just Is
That last phrase is deliberate. “The way we do things around here” is often the definition of culture used by people when they don’t have a definition of culture. And from a pragmatic point of view, it’s not too bad in this circumstance.
By moving away from a focus on the system for the frontline, they get back to doing what they do. Only now, with the focus of governance being on safety, what the frontline does is inherently driven towards the organisation’s safety objectives.
Just part of the culture.
* I know, right? How dare he😉.
The modern world is definitely in love with its noun-based activities. Each week, a paradigm-shifting approach to some human endeavour is announced with a title like value-based health care or outcome-based education. When I delve into the details, I am generally left confused as to either what they are selling or how it is different at all.
Regulation is no different. Just plugging “based regulation” into Google yields, on the first page alone, principle-based, results-based, performance-based, outcomes-based and output-based regulatory approaches.
In aviation circles, the noun of choice appears to be performance but you still get a couple of hits on risk, outcome and objective. I must confess, however, that for all the reading I’ve done and the courses I’ve attended, I don’t think I can concisely tell you what these systems actually are or how their regulations are drafted.
I know that we need to move away from the purely prescriptive approach of the past. The aviation environment has gotten too complex for us to think we can regulate how something is done in a manner which can consider all the variables.
The Australian airport industry is a great example of this. Not too long ago, all the airports in Australia were either owned or heavily directed by the federal government. They were also either major international airports, regional jet or turbo-prop airports, or other. Nowadays, airports are operated by a variety of public and private entities, and we have rapidly expanding and brand-new airports as well as stagnant and declining ports. The industry is dynamic, volatile and complex.
The Move from Prescription
By now, you might have noticed that I can be a bit pedantic, especially with words. So I went on a hunt for a definition of prescription which suited my understanding of what prescriptive meant in the context of regulation. Perhaps unsurprisingly, the hardline definitions don’t quite match general usage.
But first let’s set a common ground that regulation means rules. We need rules to govern and by rules, I mean an explicit, documented expectation to which people will be held accountable. Those rules can be open (e.g. don’t crash) or they can be narrow (e.g. you must not operate an aircraft unless … (1) …. (2) …. (n)). At least that is the minimum any model of government I am familiar with involves.
So when we say that, traditionally, aviation regulation has been prescriptive, what do we mean? Those hardline definitions talk about setting rules and that creates a bit of a tautology – rule-based rules.
There was one definition that I found which plays in nicely to a recent epiphany I had. Prescription is “not subject to individual determination”.
Context is Everything
The moment of clarity I had the other day was that the question of regulatory approaches was about context. Specifically, the level of contextual variability considered by the rules put forward.
My mental model of this concept was a scale of context ranging from universal, no consideration of context other than the industry as a whole, to individualistic, each organisation’s or individual’s situation must be considered.
Now, I don’t think this is earth-shattering just yet. At one end you have the ol’ prescriptive approach of “this is how we will permit you to commit aviation” and at the other, you have Safety Management Systems (SMS) and all that that entails. What I do think is pretty cool about looking at it this way is to recognise that it is a continuum. It’s not two sides of a coin or two pillars supporting a bridge.
We will need regulations utilising the slightly different approaches along the entire continuum. We will never get to the “fully performance based” regime because, I think, there will always be the need to set some universal regulations. Some examples include the need for licensing (maybe not the actual requirements for each licence, but the basic requirement for one and the process) and for standardisation of information (such as visual cues, acronyms and operational data).
In the aerodromes world, we will always have the need to standardise (i.e. make common) the markings and lights on an aerodrome. We might not have to standardise all aspects (size and font) but fundamental cues like layout and colour will need to be consistent across the industry.
I think the above example shows what I’m talking about but I might be missing one piece of the puzzle.
What About Risk?
That’s actually quite easy. Risk must be the basis of all decisions to regulate. Without a risk, why set a rule? I think, in the past, regulations have always been written to address a risk, real or perceived. The problem has been that sometimes, or often, these risks are not explicit in the documented regulation. In fact, sometimes they are not even implicit; they are downright opaque.
The industry needs to get better about including a statement of the risk that a regulation is designed to address. That way, should a new method or approach to addressing that risk be discovered, an avenue exists to see it implemented without the bureaucratic mess we often see now.
The next challenge is to take this concept and use it to make decisions. Specifically, how to use risk to decide on the regulation’s location along the continuum and what a regulation at a given point on that continuum looks like. I shall keep pondering these questions and get back to you.
Recently, I sat in on a presentation on a subject I know quite a bit about. I like doing this as it is typically good to get a different perspective on a familiar subject. In this instance, it wasn’t so much the actual subject matter but a couple of associated topics which got stuck in my mind.
The actual discussion point being put forward by this presenter was that instrument approach plates provide very little or no information to pilots on the level of clearance provided to them between their flight path and ground obstacles.
From this, the concept which got stuck in my mind was trust. Pilots have no choice but to trust, almost blindly, that the instrument approach procedure designer has done their job and provided sufficient clearance for them to arrive safely on the ground.
This, of course, is not earthshakingly new or insightful. There are plenty of such relationships in aviation; ATC to pilot and maintenance engineer to pilot are two other examples. What got stuck in my mind is that we rarely talk this concept up (i.e. promote it) as an essential part of our culture on which we rely every day. We also tend to see it as a frontline operator thing and not a management, regulator or social concept.
Also recently, I was speaking to an aerodrome inspector from another country about our regulatory approach to aerodrome certification in Australia. She was quite surprised by Australia’s system, which can see a large certified aerodrome make major changes to its facilities (e.g. build a new runway) without approval from the regulator.
In the aerodrome sphere at least, Australia has created a system by which the operator of a certified aerodrome has earned the regulator’s trust. CASA has granted that certificate in the belief that the operator has the ability to make safety decisions on their own. This is facilitated by safety management system regulations and CASA’s approach to surveillance. Once an aerodrome operator has their certificate, they are, in many ways, masters of their own destiny.
Nothing should or does stop CASA, or any other interested party, from asking a certified aerodrome operator to provide an account of their actions and decisions. And through the operator’s SMS, this should be easy to provide.
Not all parts of Australia’s aviation system are structured this way, but I think it is the way of the future. To get there, we need to continue to work on SMS as a concept and a practice, as well as reforming our regulations to focus on the decision-making process rather than prescriptive requirements.
Trust is such an essential part of our system from the frontline to the halls of government, but it is so rarely discussed in simple and plain terms. It might be time to go back to basics and discuss with industry participants what we are willing to trust and in whose hands. From there we could structure our regulatory system appropriately. It might be a bit of a dream at the moment but I think we’ll get there.
* I might have muddled that one up. It’s been a few months since I’ve watched Spider-Man.
Recently, I have felt like I’m in danger of becoming complacent with the bedrock of my chosen field. I’ll admit that in the past, I’ve been fairly vocal about this bedrock’s limitations and mantra-like recitation by aviation safety professionals the world over. But the recent apparent abandonment of this concept by one of the first Australian organisations to go “all-in” on it, gave me cause for reflection.
But it wasn’t a critical review of “Reason” that was on my mind. Instead, I started to think about whether we had embraced it enough to allow us to move on.
For me, being a “cheese-head” has just been part and parcel of being in the aviation safety game. Human factors was mother’s milk during my first year of uni with CRM and organisational accidents the solids of second and third years. From there, I’ve continued along the modern system safety trajectory of culture, SMS and so on. I’ve never known it any other way.
But how has the rest of the world taken to it? The general public, I mean. The great-unwashed😉.
To examine this, I thought I’d look at MSM coverage of aviation accident investigations in Australia. So, I took to Google and searched for pages related to three accidents in the days following the release of the related accident investigation report. I was looking at how the news reported the “causes” of the accident.
The three accidents I chose were:
- Lockhart River – Australia’s worst air disaster in the last 40 years or so and an investigation I knew did follow the accident causation chain right up to the regulator.
- Pel-air – The trigger for all the current controversy and, in opposition to the above, a report that is generally said not to follow the causation chain beyond the frontline operators.
- R44 @ Jaspers Brush – The most recent investigation report to be issued which would have received media coverage and also a relatively small accident in which organisational factors might be hard to identify.
In Lockhart River’s case, I could really only identify three stories in the immediate aftermath of the investigation report’s release. One from the SMH, one from ABC News and one from Lateline (ABC as well). Overall, I thought the reporting was quite good. All three pieces discussed multiple contributory factors and generally shied away from the word “cause” – except for Tony Jones’ intro to the Lateline piece which was actually more concerned with the regulator’s role. However, the headlines for the SMH and ABC News stories were old school all the way – “Pilot error blamed for Lockhart River (plane) crash” – I guess we can blame the sub-editors for these ones.
For Pel-air, Google yielded only one real MSM link, with a couple of others stemming more from the 4Corners story shown a couple of days after the report’s release and many coming from aviation industry outlets. The Australian‘s story was fairly consistent with the characterisation of the ATSB report in that it focussed on the crew’s actions but it did briefly mention more upstream factors. The other stories were quite critical of the ATSB report in its perceived lack of analysis beyond the Unsafe Acts and Local Workplace Factors levels.
In the final accident, the two MSM stories I found (The Australian & Fairfax Media) put a real emphasis on the “what happened” aspects and ventured little beyond that. In this case the operation was private and, I’m sure, some would argue that “Reason” doesn’t apply. The fleet grounding and safety recommendation for a change to the fuel tank were mentioned.
At the very least, I’m sure more could have been said about the human factors aspects related to the event. And more could definitely be said about the aircraft and crashworthiness standards for aircraft. As I said a couple of weeks ago, no man is an island. Even private pilots, and even aircraft designers and manufacturers. Imagine the impact that this investigation could have had if its analysis had shown that aircraft structural certification processes carried deficiencies in post-crash fire considerations.
I don’t know whether it would have, but if not, why then do we need to change the R44’s tanks? This is not a high-level systemic fix. What is there to stop another aircraft type from having this problem in the future?
Okay, I’ll admit that we can’t go on a mass analysis expedition with every accident investigation and we have to select those investigations that have the potential to yield the greatest safety benefit. As an idealist, I do have trouble with the finiteness of the real world even though I do have to deal with this in my day job.
But where does this leave “Reason”?
Well, the Lockhart River articles were (save for the subs) quite heartening and even the Pel-air coverage (overall) tried to encapsulate the complexity of an aviation system breakdown. I guess the disappointment is more the ATSB report which, as we saw with Lockhart River, can drive the media coverage.
I’d like to see the organisational accident or system failure approach remain fundamental to all aviation safety analysis and investigation. In fact, it should be extended to try to capture the non-linear, close-coupled nature of complex socio-technical systems like aviation. The “Post-Reason” world may be upon us but I don’t think it is based on the approach offered by the ATSB’s chief:
If we want to go to Professor Reason’s model of investigation—though we think we have come a long way since Professor Reason’s initial work in the 1990s—there is error and there is violation. While the focus of our investigations is on error and understanding error—how to prevent it, how to detect it and how to deal with its consequences—there was also in this case an element of what, in Professor Reason’s model, would be viewed as violation; and that is principally the responsibility of the regulator.
Reason’s error types fit well within his larger model and, to be honest, I don’t see the ATSB-error/CASA-violation distinction. There’s a whole other blog post on that one!
I still quite like the distinction I made in my other post on this subject where I considered the very high-level intent of the operator. If the intent of the operator was to get people safely from A to B on their aircraft, it falls within both CASA’s and the ATSB’s courts. While the operator may intend to break a specific regulation or company policy, their overall intent remains getting their pax on the ground. If the intent of the operator is anything else, then it actually becomes a criminal matter for the police and OTS.
To analogise where I think we are at, “Reason” was a mud hut for safety professionals. It gave us a basic structure and shelter to develop the field a little more. Unfortunately, we’ve out-grown the hut and we need something more. Maybe a hard floor, doors, windows, who knows? There are quite a few options on the table to take us to the next level; it’s only a matter of time before someone puts it together in a package as neat as the “Reason Model” was. It’s an exciting time to be a safety professional.
I’ve been out in the “real” world for the past six months or so and in that time, my thinking on risk management has changed a little bit. So here it comes, a confession…
I have been using a PIG recently and I have felt its use has probably helped with effective management of overall risk.
How can that be? Don’t you despise PIGs with every fibre of your being? Well, yes. I still do but let me provide two little points which might put this confession in context.
Firstly, the company I work for doesn’t rely solely on the product of impact and probability to assess risk. They also score risk on maximum foreseeable loss. That scale puts most aviation activities into the highest risk bucket straight away. No complex probability calculations required.
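To make the idea concrete, here is a purely illustrative sketch of how a scoring scheme like that might hang together. To be clear, this is not my company’s actual tool: the scales, thresholds, bucket names and the rule that a catastrophic maximum foreseeable loss (MFL) overrides the probability-impact product are all invented for the example.

```python
# Illustrative sketch: a classic probability x impact (PIG) score combined
# with a separate maximum-foreseeable-loss (MFL) rating. All scales and
# thresholds here are invented for the example.

def pig_score(probability: int, impact: int) -> int:
    """Classic probability-impact graph (PIG) score: the simple product.

    Both inputs are rated 1 (low) to 5 (high).
    """
    return probability * impact

def risk_bucket(probability: int, impact: int, mfl: int) -> str:
    """Assign a risk bucket, letting a catastrophic MFL dominate the PIG.

    mfl is rated 1 (trivial loss) to 5 (loss of aircraft / life).
    """
    if mfl >= 5:
        # Aviation-type activities land here straight away: no amount of
        # probability arithmetic talks the risk out of the top bucket.
        return "red"
    score = pig_score(probability, impact)
    if score >= 15:
        return "red"
    if score >= 8:
        return "amber"
    return "green"

# A rare but potentially catastrophic aviation hazard goes straight to red,
# while a frequent, modest hazard is ranked by the PIG product alone.
print(risk_bucket(probability=1, impact=3, mfl=5))  # red
print(risk_bucket(probability=4, impact=2, mfl=2))  # amber
```

The design point is the early `return` on MFL: the bucket is decided before any probability is consulted, which is exactly the “no complex probability calculations required” effect described above.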
The second point is that no business is solely interested in safety. Now, I know that is extremely obvious and on some level I knew that but I don’t think I appreciated it that much when I was Mr “All-About-Safety”. That’s not the way it is anymore, I have other things to think about and my superiors expect me to provide a picture of the overall operation at my airport.
So, now that the business knows that aviation is one of the highest risks, what now? If it is “red” all the time, how do you manage that?
Okay, now we are back on track. How does one assess the complex safety environment which exists within the aviation risk of the business? Well, I’ve been exploring the how-to-do-it bit on here for a while and I’m getting closer to tying it up but lately I’ve been thinking more about how this fits into the bigger picture.
The best I can come up with is to propose that this type of risk analysis be categorised as intra-risk analysis.
I have been trying to avoid segregating safety risk analysis from general risk analysis but in order to progress the concepts I’ve been working on within my real work, I feel the need to put the whole grand unifying theory of risk to one side.
PIGs and the like have a strong foothold in existing risk management frameworks and pragmatically, it makes sense to create a space in which these concepts can develop.
That’s what I’m going to run with for now. I’ll have my generally PIG-based risk register for the entire operation and within it, I’ll have an intra-risk register for aviation safety using a framework based on the concepts of criticality, exposure and control.
Using this approach, I hope to develop a way of informing senior managers what the picture of risk is within that large “red” box labelled aviation and how they can be assured that the risk is both acceptable and as low as reasonably practicable.
I’ll let you know how it goes.
I’ve been a bit out of the loop over the past couple of months as I try to get a handle on my new job and the (almost overwhelming) responsibility that goes along with it. But I can’t ignore the action over at the Federal Senate’s Rural and Regional Affairs and Transport References Committee’s inquiry into Aviation Accident Investigations.
Before I comment, some disclaimers – I’m not going to comment on the particulars being discussed at the Senate hearings. While I worked with many of those involved, I never worked on anything associated with the accident event (before or after) but if I were to comment, it might look as though I have inside information, am bearing a grudge or am just being a stirrer. I don’t, I’m not and maybe just a little😉.
I do, however, want to comment on the philosophy surrounding some of the issues at hand.
The particulars of the situation on which I would like to comment are, basically, that an accident occurred and the resulting investigation focussed on the operating crew. In the 15th February hearing, two comments by Senator Fawcett struck me as warranting further examination. They were:
One thing the committee wants to put on the table upfront is we accept the contention by CASA that there were errors made on behalf of the pilot in command of the flight. There seems to have been some concern raised that this inquiry is all about exonerating an individual and shifting blame elsewhere. That is not the case. We accept the fact that in the view of some it was even a violation as opposed to error. (p. 1)
With the concept of a systems approach, whereby not only the operator and the piloting command but also the regulator are key parts of the safety system… (p. 3)
For all the other problems we seem to be having in this scenario, we still seem to be stuck on the basics.
Part of a Complex System
Senator Fawcett’s second quote there and numerous others throughout the course of the hearing show that he is quite familiar with the concept of a safety system but he, and I think a large part of the industry, can’t escape the concept of personal responsibility associated with criminal law.
The language of “exonerate” and “shift blame” suggests strongly that the old approach to investigations and safety improvement is still alive. We seem to have slid back into the days of pointing the finger at the front-line operator, stamping the label of “cause” upon them, punting them into touch, dusting our hands and declaring the world a safer place.
Okay, I’ll admit that this could be a harsh analysis of what is possibly a “throw-away” line but the language could indicate a deep-seated belief in the very concepts we are supposed to have left behind. I’m also not singling out Senator Fawcett. I think we all fight these traditional ideas, conditioned within us since an early age. How many of us still use the word “cause” despite its often misleading level of direct influence and independence?
Exonerate, Exshmonerate; Blame, Shame.
It’s a hard thing to let go of but, I think, we have to let go of the criminal view of personal responsibility when we are dealing with accidents in complex socio-technical systems, such as aviation. I’m just going to come out and say it:
No one, who participates in the aviation system, should ever go to jail, be fined or sanctioned as a criminal. Ever. Regardless of the error, violation, failing, mistake, slip, lapse, omission, commission, faux-pas, foul-up, whatever.
If we accept that aviation is indeed a system – a complex set of individuals, machines, procedures, tools, organisations – all working to achieve the objective of moving stuff from A to B – then no single part of that system can be singled out as having “failed”.
As a system there are, or should be, feedback loops. Sub-systems for checking and re-checking. There should be self-correction. If one part has failed, more parts have failed; in fact, the whole system has failed.
If you are going to blame one, you need to blame all. Jail one, jail all. Fine one, fine all.
Whoa Warden, Don’t Open that Door Yet
I am definitely not advocating some criminal reform agenda that would see society’s jails shut-down and personal responsibility disappear. I am arguing for a clear distinction between how we view undesirable events within the aviation endeavour and in society at large. I don’t think it is appropriate to look at the aviation industry as a sub-set of society and apply the same thinking.
The big differences between aviation and society are choice and intent. Pilots, ATC’ers, LAMEs, AROs and many others choose to be part of the aviation system with the intent of achieving the industry’s objective of moving stuff from here to there safely.
Society on the other hand is, really, all encompassing. By definition, we don’t really have a choice to join. You could run off into the woods, build a log cabin and live as a hermit but you’d still be a part of society in the broadest sense and still, more importantly, be subject to various laws governing human relationships.
What to do with a broken part?
A while back the industry tried “no-blame” and it didn’t work. I think it was because the concept suggested there would be no ramifications, no consequences to behaviour which contributed to undesirable outcomes.
And this, of course, is untenable. If the system experiences an undesirable state or outcome, it should be able to correct its performance.
The response was to abandon “no-blame” as going too far but I think the problem was that the concept of blame actually ceases to have any meaning within a safety system approach. Much like one cannot meaningfully discuss events “before” the big bang, because time began at the big bang.
So What’s the Lesson?
The tiny lesson I’m trying to get at here is that we need to try harder to fully integrate the system approach into our thinking. It’s not so much that we can’t identify frontline operators as contributors to accidents but that there will (not might) be more to the story. Someone else, actually numerous people, will have contributed, in every case.
And in taking this approach, in identifying as many contributory factors as possible, the actions we take with respect to those people, tools, equipment, etc. will be and be perceived as appropriate. It will support actions like suspending a licence, grounding a fleet or withdrawing a certificate.
Without it, homing in on a frontline operator and booting them out of the system will never look justified, regardless of how necessary it is.
PS – Criminal Offences Against Aviation
There should still be criminal offences relating to aviation. For example, morons who shine lasers at aircraft should be tried as criminals because they have not chosen to be part of the aviation system and do not intend to support its objective. Same goes for those who wish to use civil aviation as a weapon.
I can’t lie to you. I have been turning myself inside out trying to get a handle on risk evaluation in the aviation safety sphere for close to five years now and I still don’t feel any closer to an answer.
And I say “an” answer and not “the” answer. Since you are always assessing risk in terms of your objectives, there can and will be multiple approaches to assessing the risk of the same scenario depending on whether you are considering your safety, financial or legal objectives.
The Perpetual Problem?
The “problem” with aviation safety risk evaluation popped its head up again for me in a recent discussion. Without going into too much detail I was discussing the impact of an aerodrome defect with a non-aviation colleague.
We both identified safety as the key impact area and then our company process required us to assess the impact according to a scale (not quite a matrix ;)). We couldn’t escape the top box, the highest level category, because as soon as the safety of an aircraft is called into question, you can’t escape the possibility of complete disaster.
When pondering this problem, I keep coming back to the idea that aviation, from a safety perspective, is inherently perilous. You can’t commit aviation without being “all in”. As such, the risk-level question tends to end up as a probability continuum from negligible impact to catastrophe.
Alright, let’s stop there. I’m pretty sure I’ve discussed this stuff before. So, let’s take it as read that I am, essentially, only interested in the probability of the worst case.
That simplifies things, doesn’t it? Unfortunately, my recent readings of Dekker and Taleb have primed me for skepticism when complex systems appear simple. In the last BT post I wrote, I did highlight that a bow-tie diagram is only ever a model of reality. I think it would be inappropriate to evaluate it using an approach more complex than the model itself.
How to Murder an Analogy
When you want to see something in the dark, it is best not to look directly at it. Due to the biology of the eye, the low-light receptors (rods) are more prevalent in the area of the retina outside the central focal point. Therefore, you will see an object in the dark better if you aren’t looking directly at it!
I’m proposing something similar. If you want to evaluate the risk of the bow-tie scenario, don’t look at the top event – look around the top event.
Around the top event, I consider there to be three primary things – threats, consequences and controls (including defeating factors and secondary line controls).
Therefore, I propose we assess a BT based on:
- our exposure to the threats;
- the criticality of the consequences; and
- the effectiveness of the controls.
Exposure is a common word in the risk management game and I really like it. Even so, I think it is underused. What I like about it is the implicit idea that risk exists everywhere, at all times, but that the context in which we are operating may vary.
If you take my boring predictable runway excursion BT example, those threats really do exist at all airports. All aircraft have the potential to carry out an unstable approach, all runways have the potential to be contaminated but not all contexts have the same exposure to these threats.
Why not use probability or likelihood?
Well, probability tends to convey an air of accuracy and mathematical legitimacy which is rarely justified. Likelihood less so, but it is often tied to the occurrence of a discrete event. Whereas, linguistically, for me at least, I find exposure better attuned to both discrete events and persistent conditions.
So, step one is to assess one’s exposure to the identified threats.
On the other side of the top event, let’s look at the criticality of the consequences. In an earlier post, I had used the term influence to encompass the concepts of pathways and proximity of events to the final condition (absolute destruction). I’ve had a rethink and today, I’m going with criticality.
Think of the relationship between each consequence and the potential final outcome. Are there many ways this situation can go pear-shaped? Or is this consequence a LOL-cat’s whisker away from disaster itself?
Step two is to assess the criticality of the outcomes.
Once you’ve plugged the holes with your controls, identified new holes, plugged them up again and so on, you will need to sit back and critically assess the effectiveness of those controls.
Without a BT diagram, this could get very hard but the diagrammatic approach can help and some software makes things even easier. Once you have your measure of effectiveness, I think you’ve got all you need to make an assessment of risk, all without actually assessing the top event.
Step three, assess the effectiveness of controls.
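Purely to give the three steps a concrete shape, here is one naive way they might be rolled together. This is not a settled method, and certainly not the answer to how the pieces combine; the scales, the weights and the dividing-by-effectiveness rule are all invented for this sketch.

```python
# One naive, illustrative way to roll exposure, criticality and control
# effectiveness into a single bow-tie rating. Scales and the combining
# rule are invented for this sketch.

from dataclasses import dataclass

@dataclass
class BowTieAssessment:
    exposure: int       # 1 (rarely in this context) .. 5 (constantly exposed)
    criticality: int    # 1 (far from disaster) .. 5 (a whisker away)
    effectiveness: int  # 1 (controls barely work) .. 5 (robust and layered)

    def rating(self) -> str:
        # Exposure and criticality push the risk up; effective controls
        # pull it down. Dividing by effectiveness is the naive part: it
        # assumes controls scale the threat-consequence pressure linearly.
        raw = (self.exposure * self.criticality) / self.effectiveness
        if raw >= 8:
            return "unacceptable"
        if raw >= 3:
            return "tolerable - monitor controls"
        return "acceptable"

# A runway-excursion scenario at a wet, short-runway airport with
# well-maintained controls (all numbers invented): 4 * 4 / 3 ~ 5.3.
bt = BowTieAssessment(exposure=4, criticality=4, effectiveness=3)
print(bt.rating())  # tolerable - monitor controls
```

Note that nothing in this assessment touches the top event itself: the rating is built entirely from what sits around it, which is the point of the look-beside-it approach.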
How to actually assess exposure, criticality and effectiveness and how to put them together are questions I have not yet answered. But the brain matter is continually churning and as soon as I know (or think I know), I’ll post it here.
1. I’m sorry. I’ve been reading a few obtuse academic texts lately and perhaps the language is rubbing off on me.