1 Unit
CPD technical article
The number of risk hot spot updates has grown considerably over the years, but there are catches with using them, and you need to exercise quality control over their use.
Reading this article and answering the related questions can count towards your verifiable CPD if you are following the unit route to CPD and the content is relevant to your learning and development needs. One hour of learning equates to one unit of CPD. We suggest you use this as a guide when allocating yourself CPD units.
It's an excellent time to offer a few reflections on this topic because a whole host of "risk hot spot" publications have recently come out or are due in the next few months. In addition, the number of risk hot spot updates seems to have grown considerably over the years, so it's worth taking a quick look back before looking ahead. You can also listen to this article as a podcast.
Background to risk/issue “hot spot” polling
Risk updates gained prominence in the 1950s, 1960s, and 1970s, when strategy consulting firms wanted to promote the benefits of "horizon scanning" and "taking a step back" as a way of encouraging better strategic planning. Out of this arose numerous CEO surveys by those firms, and then the big four accounting and advisory firms (and other boutique consulting firms) started to engage in this arena.
The idea was to poll CEOs, carry out some one-to-one interviews (to get some soundbites), and then summarise the things "keeping them up at night" in terms of short-term issues and risks, as well as things "on their radar screen" with a longer-term time horizon. The power of this approach was that the consulting firms had less of an axe to grind, because this was peers sharing what they were thinking. As time went on, CEO and senior executive polling evolved so that sector-specific risk updates (in mining, oil and gas, technology, pharmaceuticals, financial services, etc.) became available.
Since then, surveys have engaged other members of the C-suite (e.g., CFO, CIO and CRO). And Chief Audit Executives have been consulted by consulting and accounting firms and via the IIA CBOK process and, more recently, through the ECIIA "Risk in Focus" survey.
Why are these surveys interesting/important?
As readers will appreciate, many senior managers receive a lot of reports from their organisation. These can be updates on business performance (with a KPI focus) as well as updates on projects, programmes, and other activities where Key Risk Indicators (KRIs) are presented. And as technological solutions improve, there is increasing use of analytics and "data visualisation" capabilities so managers can drill down into the details of a topic of interest or concern. This is good, but it can lead to "information overload."
This is why risk and issue “hot spot” reviews are so appealing. In a short report, you can see the critical issues that peers are thinking about and rapidly cross-check whether these issues are on your radar screen. All being well, you’ll get a sense of comfort that nothing important has been missed, but – from time to time – you will get a prompt that triggers the need to do a more in-depth analysis on a topic. Therefore, the external input can act as a "jolt" to stop executives from developing risk blind-spots.
Is there a catch?
Despite the clear benefits of risk/issue hot spot updates, there are potential shortcomings to be aware of:
- Risks and issues may be captured in generic terms - this depends on the polling/survey methodology, but sometimes respondents are given a list of generic risks to consider (e.g., cyber security, digitalisation, and supply chain resilience). These may make sense in overall terms, but the problem is that key risks can end up in a rather "motherhood and apple pie" form and may not capture the specific, complex challenges individual companies face.
- Bounded rationality - an associated problem is that if you have been reading articles on popular risk topics and these then feature in a poll, you may have already been "primed" to answer in a particular way. Linked to this is the risk of "groupthink", where respondents raise the same popular issues. Consequently, it's essential to understand the processes used to gather key risks, including whether there are "free text" options that allow new or different risks to be flagged. Likewise, it can be interesting to understand "minority report" issues that were not raised by the majority but were flagged by a few, because these may be the ones most relevant to your organisation.
- Confirmation bias - a further limitation is that some leaders may quickly scan key risks and think they are "on the radar screen" and "under control" – but with limited cross-checking. In other words, there can be something "self-serving" about looking at risk updates in a frame of mind to "tick off" the risks listed, when a more detailed analysis of specific controls would reveal gaps.
- Limited evaluation of the robustness of the risk/issue polling process - this is the most important thing to bear in mind: is there evidence that these predictions are insightful? Take CV19 as an example. I've seen high-profile risk surveys carried out over several years in which the risk of a pandemic was judged to be declining right up to and including the beginning of 2020. Then CV19 hit, and the following year (2021) pandemic preparedness and resilience became number one in the charts! It's the same with the war in Ukraine and the associated inflation risk. If you are not careful, the poll of risks doesn't register something as critical until it's already a real issue. A "hot spot analysis" that is (mostly) a year behind the pace is not much of a horizon-scanning tool.
Please don't misunderstand me here. I fully appreciate how difficult it is to forecast the future; just predicting the weather a week ahead is a challenge. But at least weather forecasters are open and honest about "uncertainty in our models", with a "60% chance that the storm will head our way". Further, weather forecasters regularly review the robustness of their forecasts and use this feedback loop to learn for the future. As I see it, we should expect the same learning mindset from hot spot surveys, with more explicit reporting of how well past predictions fared and how they can be improved from year to year.
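To make that feedback loop concrete, here is a minimal Python sketch of the kind of "how did last year's list fare?" check a survey publisher could run, using a simple Brier-style score (the standard way weather forecasts are scored). This is my own illustration, not something the surveys themselves publish; the risk names, probabilities, and outcomes below are entirely made up.

```python
# Illustrative only: score last year's "hot spot" predictions against what happened.
# Probabilities might be derived from each risk's ranking in the survey.
past_predictions = {
    "Pandemic / health crisis": {"predicted_prob": 0.10, "materialised": True},
    "Cyber attack":             {"predicted_prob": 0.80, "materialised": True},
    "Geopolitical conflict":    {"predicted_prob": 0.30, "materialised": True},
    "Talent shortage":          {"predicted_prob": 0.60, "materialised": False},
}

def brier_score(predictions: dict) -> float:
    """Mean squared gap between predicted probability and outcome (0 = perfect)."""
    gaps = [
        (p["predicted_prob"] - (1.0 if p["materialised"] else 0.0)) ** 2
        for p in predictions.values()
    ]
    return sum(gaps) / len(gaps)

print(f"Brier score for last year's hot spot list: {brier_score(past_predictions):.2f}")
```

Tracked year on year, a score like this would show whether a survey's "horizon scanning" is genuinely ahead of events or merely reporting them after the fact.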
Quality control over risk hot spot updates
My personal view is that if we are to make risk hot spot updates more robust, we need to blend stakeholder polling with other, more objective intelligence/information. For example:
- Public disclosures on risks and governance matters – e.g., take a sample of organisations like your own and look at what they say their key risks are.
- Understand how the scope and depth of public disclosures are changing – e.g., the Grant Thornton Corporate Governance review in the UK.
- Map any risk hot spots to a relevant "value chain", so you can see more clearly where any "hot spot" gaps might be.
- Consider a range of other sources of information and map these against one another, as set out in the following two illustrative diagrams (a simple cross-referencing sketch in code also follows this list).
- Finally, use incident information reported in the press, information about regulatory fines, and other "weak signal" sources to see what might be coming along.
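By way of illustration only, here is a short Python sketch of the kind of cross-referencing described above. The source names and risk labels are invented; the point is simply to show how comparing several lists can surface risks that are flagged externally but missing from your own register.

```python
from collections import Counter

# Hypothetical "hot spot" lists captured from different sources.
sources = {
    "CEO survey":             {"cyber security", "talent", "supply chain", "inflation"},
    "Annual report sample":   {"cyber security", "climate transition", "supply chain"},
    "Regulatory fines/news":  {"data privacy", "cyber security", "conduct"},
    "Internal risk register": {"cyber security", "talent", "data privacy", "legacy IT"},
}

# How many sources flag each risk, and which ones.
coverage = Counter(risk for risks in sources.values() for risk in risks)
print("Risks ranked by how many sources flag them:")
for risk, count in coverage.most_common():
    flagged_by = [name for name, risks in sources.items() if risk in risks]
    print(f"  {risk}: {count} source(s) - {', '.join(flagged_by)}")

# Risks flagged externally but absent from the internal register are candidate gaps.
internal = sources["Internal risk register"]
external = {r for name, risks in sources.items() if name != "Internal risk register" for r in risks}
print("Flagged externally but not on our register:", sorted(external - internal))
```

In practice the value lies less in the tooling and more in the discussion it prompts about why a gap exists.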
Note that I am not saying Internal Audit should carry out all these steps itself. I am highlighting that for a risk hot spot update to be of value, you need to understand the "quality control" process it goes through before it is published.
Not just an annual process
Linked to all of this, it seems clear that annual hot spot updates should probably move to a six-monthly or quarterly basis. This would likely mean more "short and sweet" risk hot spot updates. This happened during CV19, when there was a series of reports throughout 2020 and 2021 on the risks and challenges of working in the virtual world; I think this should be a template for the future (think about the latest issues unfolding from the Ukraine conflict). Of course, increased frequency would have to be balanced against the quality control points I have already discussed.
Turning to Internal Audit "risk hot spots"
It follows from the foregoing that I believe risk hot spot updates can be a valuable tool in the internal audit planning process and more broadly, but with the important caveat that we take these updates "with a pinch of salt" and adapt them to the specific context of our own organisation.
In high-level terms, I recommend:
1. Use a range of risk hot spot summaries (from CEOs, CROs, and CAEs; sector-specific and general) and other sources of intelligence (external and internal) as a discussion tool with risk colleagues and senior stakeholders. For example: "If these other organisations think this might be a risk, why do we think this doesn't apply to us, and/or why would this be just a small risk for us?" and "If this problem hit one of our competitors (and has been in the news), why might this not happen to us?"
2. Explicitly incorporate a risk hot spot review as part and parcel of any audit planning process. I wrote a detailed paper on internal audit planning for ACCA in 2019, in which I explained and mapped out an effective audit planning process – see the link above and the diagram below.
3. Go through three steps before putting a risk hot spot on your audit plan (a simple triage sketch in code follows these steps):
- First, based on a reasonable worst-case scenario, could this risk be important to our organisation? If so, who is accountable for this risk area, and what do they think our specific challenge is? (IIA standards 2010 and 2110)
- Second, are there any risk mitigation/process improvement activities underway that mean the risk will be better managed in 3, 6, or 9 months' time? In other words, is this a key risk for us both now and looking ahead? (IIA standards 2120 and 2130)
- Third, what are the current sources of assurance we have concerning the risk, and is there anything to suggest there would be value in Internal Audit looking at the area, either on an advisory basis (especially at the early stages), through an assurance review (mid-way through improvements), or through an audit assignment (perhaps if the issue is claimed to have been fixed)? (IIA standards 2000 and 2050)
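To show how these three questions might hang together as a simple decision flow, here is an illustrative Python sketch. The class, field names, thresholds, and response wording are hypothetical; they are not drawn from the IIA standards or the ACCA planning paper, and in practice auditor judgement drives the real decision.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HotSpot:
    name: str
    material_in_worst_case: bool        # Step 1: could it matter on a reasonable worst case?
    accountable_owner: Optional[str]    # Step 1: who owns the risk area?
    mitigation_underway: bool           # Step 2: are improvements already in flight?
    existing_assurance: bool            # Step 3: do other assurance sources cover it?

def triage(risk: HotSpot) -> str:
    """Hypothetical triage of a hot spot into a possible Internal Audit response."""
    if not risk.material_in_worst_case:
        return "Monitor only - not material on a reasonable worst-case scenario"
    if risk.accountable_owner is None:
        return "Escalate - no clear risk owner identified"
    if risk.mitigation_underway:
        return "Advisory engagement alongside the improvement programme"
    if risk.existing_assurance:
        return "Assurance review - test the reliance placed on other sources"
    return "Audit assignment - limited assurance over a material risk"

print(triage(HotSpot("Supply chain resilience", True, "COO", True, False)))
```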
Concluding remarks
I trust this brief review highlights the value we can get from risk hot spot reviews, but also the importance of going through a process that tests the relevance, importance, and insight of these updates and – most of all – recognises Internal Audit's place in the three lines model. It's the job of management and the risk function to identify key risks and manage them effectively in line with an agreed risk appetite, and it's Internal Audit's job to advise/assure as appropriate.
And finally, I hope these reflections highlight why organisations need to go beyond generic phrases about becoming more "resilient" and "agile" to address the specific challenges and barriers behind those words. After all, some capabilities and contingency plans will require a lead time to implement if they are to be effective, especially where there are key dependencies on people or infrastructure. This is just the sort of insight a good risk or audit function can help senior leaders think through, developing "warning mechanisms" and "contingency plans" so that there is less firefighting.
I conclude by listing a range of general links which I hope colleagues will find useful, recognising that sector-specific surveys can also be found via a Google search.
James C Paterson is a former head of internal audit, a consultant and trainer (face-to-face and webinars), and the author of Lean Auditing. www.RiskAI.co.uk
CEO level
PwC's 25th Annual Global CEO survey
Winter 2022 Fortune/Deloitte CEO survey
The Conference Board C-Suite Outlook 2022 - Reset and Reimagine
Risk
World Economic Forum Global Risks Report 2022
Protiviti Top Ten Risks for 2022: A Global Perspective
MIT Sloan Management Review - How to Make Sense of Weak Signals
Internal audit
ECIIA Risk in Focus 2022: hot topics for internal auditors
Gartner 2022 Audit Plan Hot Spots
MNP - Risk trends in 2022 and beyond: what Internal Audit must assess