As another semester ends at the University, I wonder if students truly understand that risk analysis is more of a process than a result. Risk analysis is necessary for decision-making under varying levels of uncertainty. Some things we know will occur within an organization: turnover, product defects, and missed performance goals. But perhaps we focus too much on single-unit measurements and skip the bigger picture altogether. In this article, we will discuss three factors to consider when analyzing the results of any risk assessment.
The first factor is that risk and probability are not synonymous. Yes, the definition commonly accepted within institutions is that risk equals the probability of an event times the event’s severity. But these two variables are different, mathematically speaking. Probability is a mathematical construct built from observations over some period, and only some events have been observed within our timeline of recorded history. Moreover, we cannot estimate the probability of an emerging event, since the scenario has never occurred: probability is assigned to known things, and unknown events are unknowable.
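The textbook definition above can be sketched in a few lines. This is a minimal illustration, not a recommended model; the function name, the example events, and every number are hypothetical.

```python
# Minimal sketch of the conventional formula: risk = probability x severity.
# All event names and figures below are hypothetical, for illustration only.

def risk_score(probability: float, severity: float) -> float:
    """Expected-loss style risk score: probability of the event times its severity."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * severity

# Two hypothetical events: a frequent low-impact one and a rare high-impact one.
frequent_minor = risk_score(0.30, 10_000)    # e.g. routine product defects
rare_severe = risk_score(0.01, 500_000)      # e.g. a major outage

print(frequent_minor)  # 3000.0
print(rare_severe)     # 5000.0
```

Note what the formula quietly assumes: a defensible probability for every event. As the article points out, for an emerging scenario that has never been observed, that input simply does not exist.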
We use ordinal constructs such as high, medium, or low to account for imprecise measurements. When we can measure something in observations but then discount it to an ordinal value for uniformity in the model, we diminish the value of the data. Still, not everything can be measured, and we must sometimes reduce uncertainty to something workable, such as high, medium, or low levels. But in doing so, we must remember that these labels do not preserve the distances between values. We can no longer say one event is twice as likely to occur as another. Instead, we must speak qualitatively: there is a significantly greater chance of event x occurring than event y.
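The information loss described above is easy to demonstrate. In this sketch the bucket thresholds are invented for illustration; any real risk framework would define its own.

```python
# Sketch: mapping measured frequencies onto an ordinal scale discards the
# distances between values. The thresholds here are hypothetical.

def ordinal_rating(annual_frequency: float) -> str:
    """Collapse a measured annual event frequency into high/medium/low."""
    if annual_frequency >= 1.0:
        return "high"
    if annual_frequency >= 0.1:
        return "medium"
    return "low"

# Two events whose measured frequencies differ by a factor of eight...
a, b = 0.8, 0.1
print(ordinal_rating(a), ordinal_rating(b))  # medium medium
# ...collapse to the same label, so "a is eight times as likely as b" is lost.
```

Once both events are just “medium,” no arithmetic on the labels can recover the ratio, which is exactly why ordinal results should be discussed qualitatively.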
The second point is that normal distribution models are subject to tail effects. I will assume that most of us were just happy to finish a college statistics class – I am no different in this. Most college statistics classes teach the Gaussian distribution, commonly referred to as the bell curve. This approach is rational, considering that much of math, science, and engineering centers on normal probability distributions. The problem with Gaussian distributions is tail risk.
Now, what is a tail risk? Tail risk is the potential for an extreme event within a normal distribution model. These risks sit out beyond the second or third standard deviation. We often discount them as not going to happen because they fall outside the expected range, yet they can be very costly to an organization because they tend to have widespread effects on the system – think systemic risks. Stress testing and scenario analysis are the usual tools for managing these risks, and organizations generally handle many tail risks through diversification or hedging.
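To make “beyond the second or third standard deviation” concrete, the standard normal tail probability can be computed directly with the standard library. The function name is mine; the formula is the standard complementary-error-function identity.

```python
import math

# Sketch: how thin Gaussian tails are. Under a normal model, the probability
# of an observation beyond k standard deviations (one side) is
# 0.5 * erfc(k / sqrt(2)).

def tail_probability(k: float) -> float:
    """One-sided probability that a standard normal exceeds k sigma."""
    return 0.5 * math.erfc(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"beyond {k} sigma (one side): {tail_probability(k):.5f}")
# A normal model treats a 3-sigma loss as roughly a 0.1% event. Real loss
# distributions often have fatter tails, so such events arrive more often
# than the bell curve promises - the core of the tail-risk problem.
```

This is why relying on the bell curve alone understates the extreme events the article warns about: the model assigns them near-zero weight by construction.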
But anyone who does value-at-risk modeling should know the model’s flaw: the lack of time dependency. Most risk analyses assume that risk is static and consistent over time. This couldn’t be further from the truth. Risk changes constantly under the influence of external factors – and possibly correlations among those factors. As these changes occur, the data we collect changes too. When we conduct a risk analysis, we should ask: is this variable a leading or a trailing indicator in the model?
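The static-over-time flaw can be shown with the simplest form of value-at-risk, the historical method. Everything here is a hypothetical sketch: the loss figures are invented, and a real implementation would use far more data and interpolated quantiles.

```python
# Sketch: historical value-at-risk (VaR) computed over two hypothetical
# windows of daily losses. Positive numbers are losses; figures are invented.

def historical_var(losses, confidence=0.90):
    """Smallest observed loss that at least `confidence` of observations fall at or below."""
    ordered = sorted(losses)
    index = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[index]

calm = [0.2, 0.5, 0.1, 0.8, 0.3, 0.4, 0.6, 0.2, 0.7, 0.5]
stressed = [1.5, 4.0, 2.2, 6.5, 3.1, 5.0, 2.8, 7.2, 3.9, 4.4]

print(historical_var(calm))      # 0.8  -> risk looks modest
print(historical_var(stressed))  # 7.2  -> same model, new regime
```

The model itself never changed; only the window did. A VaR fitted to the calm period badly understates the stressed one, which is the time-dependency problem in miniature.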
A leading indicator is a variable or metric that precedes another metric and signals early movement within a system. In a cybersecurity assessment, a leading indicator might be the number of completed patch updates: a low count within a specific time frame indicates a greater likelihood that a known threat can exploit the system. A trailing indicator is a metric or variable that shows what has already occurred; common trailing variables are performance measures and incident response metrics. Knowing whether a variable is forward-looking or backward-looking is necessary to understand how best to respond in reducing uncertainty.
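The patching example above can be sketched as a leading indicator. The function names, the 90% threshold, and the counts are all hypothetical choices for illustration.

```python
# Sketch of a leading indicator from the patching example. Names, counts,
# and the 0.9 threshold are hypothetical.

def patch_compliance(completed: int, required: int) -> float:
    """Fraction of required patches applied in the window (a leading indicator)."""
    return completed / required if required else 1.0

def exposure_flag(compliance: float, threshold: float = 0.9) -> bool:
    """Signal elevated exploit likelihood *before* any incident occurs."""
    return compliance < threshold

rate = patch_compliance(42, 60)
print(round(rate, 2), exposure_flag(rate))  # 0.7 True
```

A trailing counterpart would instead count incidents already closed out: it tells you what happened, while the compliance rate warns about what may happen next.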
The last point is that no matter how objective we strive to be in risk analysis, the results are always subjective to the organization. I have 20 years of risk analysis experience from serving in the military and working in corporate America. As a scholar, though, the question that continues to wreck my analysis is: so what?
Risk only matters if it impacts or influences people, processes, and places. No matter how sophisticated our models may be, the result still comes down to one question: does this matter to the organization? In most cases, it may not matter to the executive management team, and if it does matter, you probably already know about it.
To communicate risks successfully, you must understand what matters to your boss and their boss, all the way up to the chief executive officer. Your results are far more likely to be noticed if you know what matters to them. Risks will always exist; we know many things are certain, or at least highly likely, to happen within the business life cycle, and many firms purchase insurance and establish loss retention funds to pay for such occurrences. Instead, to drive improvement and visibility of the risk analysis process, stop devoting so much attention to the things that will happen anyway. Focus on building models that help answer the “so what” question.
With all the uncertainty in today’s business world, finding the signal in the noise can be tricky, and many things compete for your limited resources and attention. To deliver high-quality risk analysis results, remember: risk and probability are not synonymous; think about the tails of your models; and answer the “so what” question before sounding the alarm. The risk analysis process is as important as, if not more important than, the results. Work to improve the risk decision-making process, and the results will improve as a consequence.