What Is Wrong With a Typical Risk Register?
I recently presented at an IIA Qatar Zoom meeting on the topic of "Risk Management for Success." At one point, I shared an example of a risk register I had found on the web. I explained how it was removed from the context of achieving objectives (i.e., risk to what?) and that periodically managing a list of risks is not sufficient for effective risk management.
The Problem With Risk Registers
During the Q&A session, somebody asked how the risk register could be improved.
There are multiple problems that need to be overcome, including:
- As mentioned above, it is a static list of risks, updated occasionally. Managing a list of what could go wrong is not the same as considering how best to achieve objectives. That requires understanding what might happen as part of every decision and that changes often, which requires more than a periodic discussion. However, there is a measure of value in the periodic review of those sources of potential harm that need to be addressed, and typically monitored, on a continuing basis. I will come back to that.
- Also as noted above, it's unclear what these are risks to, and what the devil a “high” rating actually means. It doesn't help us understand how an adverse event would affect the objectives of the organization. That question is not addressed at all, potentially leading those who review a risk register to note it with interest but not know how important the issues are, especially when compared to other matters needing their time and money.
- A risk register leads to managing and mitigating individual risks in silos instead of considering all the things that might happen — a.k.a. the big picture — to determine the best course of action and how much to take of which risks.
- A list of risks focuses only on what might go wrong, ignoring the possibilities of things going well. For example, excellent performance by the project team might lead to early completion of the project.
The risk register has more problems, but I want to talk about one that seems to confound many risk practitioners: that risks (and opportunities) are not a single point; there is a range of potential effects or consequences and each point in that range has its own likelihood.
A Risk Register Scenario: What Could Go Wrong?
Take the first “risk” in the register above: “Project purpose and need is not well-defined” and ask the people involved in the project for their “risk assessment.”
- The business unit manager considers the meetings she has attended with the project team. She believes there is a 15% possibility that they have misunderstood her people’s needs, which could be quite significant. If that is the case, she can see a combination of revenue and cost impacts that she estimates as $300,000 over the next quarter, more and for longer if the problems are not corrected promptly. If you asked her to rate the likelihood and impact, she would say that is medium likelihood and medium impact, for a medium overall severity.
- The COO tells you that he has confidence in both the business and IT people working on the project and there is a very low probability, maybe 5%, of an issue that he says would not amount to more than $100,000 (the cost of additional work) and would not affect revenue goals. He rates that as low likelihood and impact, for a low overall severity.
- The project leader exudes confidence. He is 100% confident there will not be any serious issues. He dismisses the idea of small snags as something that always happens. He also assesses likelihood, impact and severity as low.
- The analyst responsible for working with the vendor to identify and implement any customizations is reluctant to give her estimate. Eventually, she admits there is a 30% chance that something will go wrong and it would cost up to $1,000 per day of consultant time to make corrections. She doesn’t know how that might affect the business. When pushed, she whispers that the likelihood is high, effect is medium, and she doesn’t know how to assess overall severity from her junior position.
Are they wrong? Or, are they all right? How can they have different answers?
In all likelihood (pun intended) they are all right.
Gradients of Failure
Like those who only see or touch one part of an elephant, each person has a different perspective, bias and interest. They also have different information and insight.
A typical risk practitioner would report either the most likely effect and its likelihood, ignoring the others, or the most severe and its likelihood. Some would try to come up with an average of some sort.
That would mean that they would pick the assessment of 30% and $1,000 per day, or 15% and $300,000. But that would then run into a problem when more senior management, the COO, tries to overrule those who don’t (in his opinion) see the big picture. (This is something I have encountered multiple times in my career, but that’s not the topic today.)
Attempting to boil these different answers down to one value for likelihood and impact isn't what I consider part of effective risk management. (I describe that as addressing what might happen every day so you can have an acceptable likelihood of achieving your objectives.) It is also questionable whether you can calculate ‘severity’ at all, either by multiplying likelihood and impact or by using a heat map.
The fact is that there is no single point. There may be different gradations of "failure," each with its own level of consequence and its own likelihood.
The risk register talks about the likelihood of the risk event when it should be talking about the likelihood of the effect. When you can have multiple levels of effect, you have a range.
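To make that concrete, here is a minimal sketch of what expressing a risk as a range of effects might look like. It is purely illustrative; the figures are hypothetical, loosely drawn from the scenario above, not from any real register.

```python
# A purely illustrative sketch: the "risk" expressed as a range of effects,
# each with its own likelihood, instead of a single point. Figures are hypothetical.

# (effect in dollars, likelihood of that level of effect)
effect_range = [
    (0,         0.70),  # everything goes well
    (100_000,   0.15),  # minor rework (roughly the COO's view)
    (300_000,   0.10),  # significant cost and revenue impact (the manager's view)
    (1_000_000, 0.05),  # prolonged problems that are not corrected promptly
]

expected_effect = sum(effect * p for effect, p in effect_range)
chance_over_250k = sum(p for effect, p in effect_range if effect > 250_000)

print(f"Expected effect:                ${expected_effect:,.0f}")
print(f"Chance of losing over $250,000: {chance_over_250k:.0%}")
```

Even this crude table tells a decision-maker more than a single "medium" rating: with these illustrative figures, the expected effect is about $95,000, but there is a 15% chance of losing more than $250,000.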
A Better Approach to Risk Registers
A better approach involves bringing all the players (and there would likely be more than these four) into a room and asking these and other questions to come to a shared assessment that makes business sense — recognizing that this is just one of several risks and opportunities to consider.
- Why is this project needed? How does it relate to enterprise objectives? Why does it matter and how much does it matter? What is important about it?
- How would a failure to define the “purpose and need” affect the business? What would happen if the project is, for example, delayed? What about if it doesn’t deliver all the required functionality?
- How should we measure the consequences? Are traffic light ratings (high, medium, low) meaningful? Should we use a dollar figure, for example in estimating additional costs and revenue losses? Would that help us make the right business decisions? How about making the assessment based on how one or more enterprise objectives would be affected, such as how a failure would change the likelihood that they are achieved?
- What is the worst that could happen? Now, what is its likelihood?
- How likely is it that everything is perfect?
- Assuming that we are using a dollar figure to estimate potential consequences, what is the likelihood of a $300,000 impact? (This would be modified if instead we are assessing based on the effect on objectives.)
- How about $100,000?
- ... and so on until a range of potential effects (or consequences) and their likelihoods are agreed upon.
There are tools (such as Monte Carlo simulation) that can calculate a value for the range of effects and their likelihoods. However, while it is possible to produce a single value, I would ask the consumers of risk information, the decision-makers, whether they want to see a single value or understand the full range of possible consequences.
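As a purely illustrative sketch of what such a tool does (the probability of a problem and the cost distribution below are hypothetical assumptions, loosely based on the estimates in the scenario, not figures from any real project), a Monte Carlo simulation simply samples the range many times:

```python
# A minimal, purely illustrative Monte Carlo sketch. The 20% chance of a problem
# and the triangular cost distribution are hypothetical assumptions loosely based
# on the estimates in the scenario above.
import random

random.seed(42)       # fixed seed so the sketch is reproducible
TRIALS = 100_000

def one_trial() -> float:
    """Simulate one quarter: does the problem materialize, and if so, at what cost?"""
    if random.random() < 0.20:  # assumed likelihood that "purpose and need" was misdefined
        # random.triangular(low, high, mode): a cost somewhere between $50k and $1m,
        # most likely around $300k
        return random.triangular(50_000, 1_000_000, 300_000)
    return 0.0

outcomes = sorted(one_trial() for _ in range(TRIALS))

def percentile(p: float) -> float:
    return outcomes[int(p * (TRIALS - 1))]

print(f"Chance of any impact:     {sum(o > 0 for o in outcomes) / TRIALS:.0%}")
print(f"Median impact (P50):      ${percentile(0.50):,.0f}")
print(f"Severe-case impact (P90): ${percentile(0.90):,.0f}")
```

The output is a distribution rather than a point: decision-makers can be shown the chance of any impact at all, a typical outcome, and a severe-case outcome, which is exactly the kind of range discussed above.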
This is only the assessment of a single source of risk, and other risks and opportunities would likely have to be considered before agreeing (a) whether the situation is acceptable, and (b) what actions to take if it is not.
Even though I talk about risk management providing the information about what might happen (both risks and opportunities) that is required for informed and intelligent decisions, there is still value in the periodic taking stock (to quote my friend, John Fraser) of those risks and opportunities that are so significant they merit a more continuing level of attention.
But such a list has to show why these risks and opportunities are important. Saying it is “high” means nothing. It is imperative to explain how it relates to the achievement of objectives.
It is also imperative to show the range of potential effects or consequences. The only exception I would make is where the decision is made that only the likelihood of particularly severe consequences needs to be monitored.
As I explain in my books, what makes the most sense (in addition to the continuous enabling of decision-making) is reporting the likelihood of achieving objectives considering all the things that have happened, are happening, and might happen.
This is actionable information that helps leaders understand whether they are likely to achieve what they have set out to achieve. They can determine whether that likelihood is acceptable and decide what actions are needed, if any.
Where Does All of This Leave Us?
This is my recommendation:
- Ensure appropriate attention is given on a daily basis to what might happen (both for good and harm) as part of both strategic and tactical decision-making.
- Monitor on a regular basis the likelihood of achieving objectives, considering what has happened, what is happening, and what might happen.
- Monitor on a continuing basis those risks and opportunities that merit attention because of their potential to affect the business and the achievement of its objectives, both short and longer-term.
I welcome your thoughts.
About the Author
Norman Marks, CPA, CRMA is an evangelist for “better run business,” focusing on corporate governance, risk management, internal audit, enterprise performance, and the value of information. He is also a mentor to individuals and organizations around the world and the author of World-Class Risk Management, and he publishes regularly on his own blog.