AI has unsurprisingly infiltrated internal audit conversations. While I don't profess to be an expert on AI, and I am a retired rather than an active head of internal audit (CAE), I don't think I would be doing everything I see people doing with AI.
Let me preface my comments by saying that before I became a CAE (a position I held for ~20 years), I was a VP in IT management for a couple of financial institutions. Before that I was with one of the Big Four audit firms and led the Los Angeles IT audit team, and going back even further I was the senior manager responsible for technical IT audit in London. That’s in addition to being a Chartered Accountant and then a CPA.
So I have some understanding and appreciation of technology and technology risks. When I was a CAE, IT auditors made up as much as 25% of my staff.
What Do I See People Doing With AI?
1. Most internal audit professionals recognize that AI introduces risks to any business using it. The information it delivers may be incomplete, inaccurate, out of date, and not tailored to the specific environment in which it is being used. These IA shops are spending a lot of time making sure management recognizes and addresses those risks.
2. Plans are in the works to use AI to expand from testing a sample of transactions to testing 100% of activity. This may include monitoring or analyzing current and past transactions to identify anomalies requiring investigation (a simple sketch of that kind of full-population testing follows this list).
3. IA shops plan to use AI to identify risks that management has not considered, using that information as they build and maintain their audit plans.
4. People are dedicating full-time resources to building AI tools and techniques to streamline or improve their audit processes, from planning to reporting.
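To make item 2 concrete, here is a minimal, illustrative sketch of full-population anomaly testing in Python. The file name, column names, and the contamination setting are my assumptions, not anyone's actual tooling; a real engagement would choose the features and tune the thresholds with the audit and data teams.

```python
# Illustrative only: score 100% of a transaction extract for anomalies
# instead of testing a judgmental sample. File and column names are assumed.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical extract of all payment activity for the period under review.
txns = pd.read_csv("payments_extract.csv")  # txn_id, vendor_id, amount, posted_hour

# Two simple features: payment amount and the hour it was posted.
features = txns[["amount", "posted_hour"]]

# An unsupervised model scores every transaction; 'contamination' is a guess at
# the share of outliers and would be tuned with the audit and data teams.
model = IsolationForest(contamination=0.01, random_state=42)
txns["flag"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

# The flagged items, not the model, are what auditors (or management) investigate.
print(txns.loc[txns["flag"] == -1, ["txn_id", "vendor_id", "amount", "posted_hour"]])
```

The point is not the particular model: scoring every transaction is what moves testing from a sample to 100% of activity, and investigating the flags remains human work.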
My first step would be to understand how management is already using and is planning to use AI in the business.
How much usage will there be (with a focus as always on the risks of today and tomorrow), and how serious would the consequences be if the controls over its use were inadequate?
In addition, how great is the opportunity and is the management team collaborating and sharing information and experiences among themselves? (I am reminded of my team member Debra Davies, who told an in-house reporter that internal audit is like a bee, sharing the pollen from flower to flower. We can have that opportunity, sharing stories of success and spreading ideas about AI use.)
The level of risk and opportunity would determine who I would assign to do what and for how long to provide assurance and advice (and maybe insight) on the business’ use of AI.
I would also make sure my team monitors changes in actual and planned use.
How I Would Think About AI Use
My next step would be to consider point #2 above.
My focus has always been to provide assurance on the processes, systems, controls, and organization that management relies upon to manage risks to enterprise objectives. Using AI to check whether there were errors in the past does not give me a basis for that assurance.
I need to perform audit procedures that enable me to provide assurance on today’s and tomorrow’s management of risks and opportunities.
In other words, assurance that the system of internal control provides reasonable assurance that risks will be managed and opportunities seized. Maybe AI can test controls in a way that provides continued assurance; I have yet to see sufficient evidence of that.
I am far more interested in seeing whether management has recognized and is seizing the opportunity to use analytics and AI as detective controls. This could both improve and streamline processes.
During an audit, my team might use AI to test an automated control or to see if somebody took advantage of a control weakness (a simple sketch of such a check follows). I would then see if we could get management to adopt that or a similar technology as a detective control.
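As an illustration only: a check my team might run once during an audit, which management could then schedule as a detective control. The file name, field names, and approval limit are hypothetical.

```python
# Illustrative only: a one-off audit test that management could re-run daily
# as a detective control. File name, fields, and the limit are hypothetical.
import pandas as pd

APPROVAL_LIMIT = 10_000  # assumed single-approver limit, for illustration

payments = pd.read_csv("daily_payments.csv")  # txn_id, amount, approver_count

# Did anyone take advantage of the weakness: large payments with one approver?
exceptions = payments[
    (payments["amount"] > APPROVAL_LIMIT) & (payments["approver_count"] < 2)
]

if not exceptions.empty:
    # In practice this would be routed to the control owner, not printed.
    print(f"{len(exceptions)} payments over {APPROVAL_LIMIT} had a single approver:")
    print(exceptions[["txn_id", "amount", "approver_count"]])
```

The same query that found past exceptions becomes, when run routinely by the control owner, a detective control over future activity.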
The third point might be of interest, but again my concern is more whether management is using or planning to use AI for this purpose.
Turning to #4, I don’t buy a tool until I know how I am going to use it. I would watch what others in the audit world are doing and steal ideas that appeal. Maybe people on my team would come up with great ideas (they usually did). But I would spend more time seeing how management is using it.
Management would blaze the trail for me, selecting great tools and implementing controls (with our help) to address the related risks. Then I would get on their bus and use the tools they have shown to work.
That’s what I am thinking. Your thoughts?