Elusive Etiological Risks ~ Every once in a while, especially when a specific situation resonates with society at large, partial intellectuals come forth to put everything into perspective. Unless they are careful with their 'etiological typecasting', however, they risk further misleading everyone, including themselves.
Why are people so bad at assessing risk?
The infographic of popular things most likely to kill you is etiologically flawed [LINK], but before we examine why, let's run with the intellectually plebeian narrative it puts forward, just for a moment. You are statistically far more likely to die from a heart attack than from an act of terrorism, and while the average person may consider terrorism a bigger threat than obesity or, say, car accidents, they are, and deep down probably know it, horribly mistaken.
The inability of people to weigh two or more threats correctly when they are compared probabilistically has been studied for decades, perhaps longer. Daniel Kahneman received a Nobel Prize for the research he conducted with Amos Tversky into this area of human psychology, and their prospect theory [LINK] depicts this human dynamic well.
"Prospect Theory describes the way people choose between probabilistic alternatives that involve risk, where the probabilities of outcomes are known."
So given all of this, the infographic shown to the left in the diagram below should not come as a surprise to you.
However, this infographic is busted in ways beyond the obvious, not least because it has been extracted, copied and adapted from an original source (Peter Ubel) incorrectly and without comprehension of the purpose that original source carried.
So what's the problem?
Well, the thing is, the causal categories are not mutually exclusive when they are represented in a diagram in such a manner. You could die from any number of the listed risks, but if none of them kills you, a heart attack eventually will; one would have thought that obvious. That renders heart attacks mutually inclusive rather than exclusive with respect to the other risks they are compared against, and it puts the entire schematic into a questionably misleading position.
There is also another problem with time effects in the diagram, a latent risk variable so often ignored and very much so in this picture. Heart attacks are a lifelong disorder in the making for every individual, while terrorism is perhaps a matter of being in the wrong place at the wrong time for any one of us; that material difference makes the two incomparable in this format.
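A quick sketch shows why the time dimension matters: an acute, one-off exposure and a chronic hazard that compounds over a lifetime cannot sit side by side on the same chart. The hazard rates below are hypothetical, chosen only for illustration.

```python
# Why per-year and per-lifetime risks aren't directly comparable.
# Both annual hazard rates below are hypothetical illustrations.

def lifetime_prob(annual_hazard, years=80):
    """P(event occurs within `years`), assuming a constant annual hazard."""
    return 1 - (1 - annual_hazard) ** years

acute = 1e-6      # a "wrong place, wrong time" style annual risk
chronic = 0.005   # a "lifelong disorder in the making" style annual risk

print(lifetime_prob(acute))    # stays tiny even over a full lifetime
print(lifetime_prob(chronic))  # compounds into a substantial probability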
Not all risks can be compared alongside each other in a 'naive manner', because they may have a predisposition or set of features that handicaps (used in the sporting sense, i.e. a golf handicap LINK) their inference on the final outcome being assessed. These predispositions have to be recognised and disclosed.
Most concerning of all, the diagram encourages people to focus narrowly on death as the only outcome worth considering from each risk category, when the quality of life under each scenario carries important effects that also need attention but are here reduced to insignificance. This lack of etiological insight often leads managers into morally maladjusted decisions when they analyse flawed risk reports such as the infographic shown above.
For example, say our biggest threat last year was Market Risk because we suffered a three-million-dollar loss in the treasury department, but say we also had no casualties in the domain of Occupational Health and Safety.
Considering these two risk threats alone (Market Risk and OHS), can we draw the following conclusions?
[1] All our risk resources should be directed to Market Risk, and
[2] As OHS risks are at best intangible and insignificant when compared with Market Risk, is there any need to control OHS threats at all? Perhaps we can save money by firing all the OHS control staff?
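A single observed year is not the risk. As a hedged sketch, the frequency and severity figures for OHS below are hypothetical assumptions (only the three-million-dollar treasury loss comes from the example above), yet they show how a low-frequency, high-severity exposure can rival a realised market loss even after a casualty-free year.

```python
# A single year's outcome is not the risk. The market loss is from the
# worked example in the text; the OHS frequency and severity figures
# are hypothetical assumptions for illustration only.

market_realised_loss = 3_000_000   # last year's treasury loss

# Hypothetical OHS exposure: one fatality-class incident every 20 years
ohs_annual_frequency = 1 / 20
ohs_severity = 50_000_000          # hypothetical cost of such an incident

ohs_expected_annual_loss = ohs_annual_frequency * ohs_severity
print(ohs_expected_annual_loss)    # comparable to the realised market loss
```

Under these assumptions the OHS expected annual loss sits in the same order of magnitude as the market loss, which is why "no casualties last year" cannot justify conclusion [2].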
These Market Risk / OHS conclusions fail just as surely as the fake diagram reducing terrorism to insignificance beside heart attacks is busted. Both situations are etiologically flawed, and at best they support only partial intellectual thinking.
Etiological Typecasting
There are various 'Critical Thinking' questions risk analysts can put to their sources of risk data that should give them insight into whether they are drawing dangerously inappropriate comparisons when analysing two or more pools of uncertainty, and we have listed some of these questions below.
We are not saying that you must never compare risk factors that fail to meet all the criteria listed above; in many cases, these criteria can't be entirely satisfied. However, you should not compare anything without being aware of the limitations (sporting handicaps) of what you are assessing, and "our infographic on things most likely to kill you" is just that: flawed.
Nice article Martin. In the end there's only the perception of risk, even when people look at the facts! Facts also need to be contextualized and interpreted. But that should never be an excuse for not managing the risks we perceive.
Thank you for your kind remarks, and I agree with your comment: risk is very much a part of a "Stakeholders' Perception", as you correctly describe it, and people, even whole communities, can be biased when analysing information that is presented to them.
I have even observed some enterprise risk managers source their entire risk framework information sets from nothing more than heavily opinionated perceptions drawn from surveys or questionnaires about what threatens an organisation.
This biased approach to risk assessment alone is flawed for several reasons including:
[1] It lacks supporting evidence
[2] It fails to test belief
[3] It misses a comparable baseline
[4] The 'risk opinions' aren't connected to objectives, detaching the risk detail from its opportunity cost
Who would give business managers only threats without purpose? In this we have one of the reasons why risk management fails.