I’m not going to get involved in the debate about whether internal audit should be leaping (hopefully forward) to leverage AI in our work.
I remain convinced that we should understand the more significant risks to enterprise objectives, identify the audits we want to perform, and only then select the best tools for the job – which may or may not include AI.
AI may be great at detecting errors, or even fraud and cyber breaches. But that is management's job, not internal audit's.
Our job is to provide assurance, advice, and insight.
That can include:
- Whether they have appropriate controls and security over the use of AI
- Whether they are optimizing the use of technology in general
- Whether they know when to use which tool
With that last point in mind, I am sharing two pieces you might enjoy. Here are just a few nuggets:
- Incorrectly used, AI may make up facts, be prejudiced, and leak data. In board packs, this means a real risk for directors of being misled or failing to discharge regulatory duties.
- …we can easily mistake it for an “everything” tool and use it on the wrong problems. And when we do, our performance suffers. A Harvard study showed this in action, taking smart, tech-savvy BCG consultants and asking them to complete a range of tasks with and without generative AI tools. The consultants were 19 percentage points less likely to reach correct conclusions when using generative AI on tasks that appeared well-suited for it but were actually outside of its capabilities. In contrast, on appropriate tasks, they produced 40% higher quality results and were 25% quicker. The researchers concluded that the “downsides of AI may be difficult for workers and organizations to grasp.”
- …because AI models reflect the way humans use words, they also reflect many of the biases that humans exhibit.
- …while AI is great at making its answers appear plausible and written by a human, the way they’re generated means that they’re not necessarily factually correct — the model simply extrapolates words from its training data and approximates a solution. As Dr Haomiao Huang, an investor at renowned Silicon Valley venture firm Kleiner Perkins, puts it: “Generative AI doesn’t live in a context of ‘right and wrong’ but rather ‘more and less likely.’” (A short sketch after this list illustrates the point.)
- …in leading the finance function, the CFO can’t implement gen AI for everyone, everywhere, all at once. CFOs should select a very small number of use cases that could have the most meaningful impact for the function.
- The best CFOs are at the vanguard of innovation, constantly learning more about new technologies and ensuring that businesses are prepared as applications rapidly evolve. Of course, that doesn’t mean CFOs should throw caution to the wind. Instead, they should relentlessly seek information about opportunities and threats, and as they allocate resources, they should continually work with senior colleagues to clarify the risk appetite across the organization and establish clear risk guardrails for using gen AI well ahead of the test-and-learn stage of a project.
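To make Huang's "more and less likely" point concrete, here is a minimal Python sketch of how a generative model picks its next word. The vocabulary and probabilities below are invented purely for illustration; a real model learns such weights from its training data. The mechanism, though, is the one he describes: sample from a probability distribution, with no check on whether the answer is true.

```python
import random

# Toy next-word probabilities a language model might assign after the
# prompt "The capital of Australia is". These numbers are invented for
# illustration; a real model derives them from patterns in its training data.
next_word_probs = {
    "Sydney": 0.55,    # plausible and common in text, but wrong
    "Canberra": 0.40,  # correct, yet (in this toy example) less likely
    "Melbourne": 0.05,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick a word in proportion to its probability: 'more and less
    likely', not 'right and wrong'."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))
```

Run it a few times and it will usually return the statistically likely answer rather than the factually correct one, which is exactly the plausibility trap the quote warns about.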
Is management sufficiently ‘intelligent’ to know when and where to use AI for maximum ROI?
Are you helping? Or are you auditing them after the fact, shooting the wounded?