AI and risk management - 5 actions your peers are taking now
Public awareness of artificial intelligence has exploded in 2023, as technologies that have bubbled away in the background for years are suddenly becoming the focus of international debate. Yet with disruption comes opportunity: artificial intelligence and machine learning are changing the world, and there is plenty of upside risk for organisations to consider.
In a recent network meeting, risk leaders at some of the world's largest technology companies discussed how they are preparing for both the threats and the opportunities these technologies present.
We've extracted five key actions risk leaders can take now to prepare for tomorrow.
1. Develop an ethical position with your executive and board
One risk leader advised other members to step back and think about AI from an ethical standpoint:
“Instead of starting down in the weeds, trying to solve each operational issue as it arises, convene a session now with your board and executive and get ahead of the game.”
Risk Leadership Network member
CRO at FTSE organisation
Decide as a company what you will stand for in this space. If you don't have in-house expertise, hire a facilitator to run a workshop. Be specific about the output you're looking for.
2. Define your AI risk appetite
Companies that have defined their AI risk appetite are more likely to innovate in this space; three of the seven organisations in our discussion had already done so.
One risk leader noted that the AI risk appetite statement had also enabled him to say 'no' to a couple of projects, while offering practical guidelines for proposing alternative solutions.
3. Establish AI governance
There is no need to implement anything too advanced at this stage, but setting up a regular forum to discuss AI with key stakeholders can be a useful first step. This may help avoid 'broom cupboard' projects taking off beyond your control.
A few members said they had published internal AI guardrails; one risk leader presides over a formal AI governance committee, while another, whose company is more advanced in developing AI capabilities, has a full Responsible AI team.
Advice from experienced practising CROs and heads of risk at some of the world's largest tech firms

This insight came from a network discussion among seven of our members at large technology firms around the world. Many of them had approached Risk Leadership Network for support in managing AI-related risk, so we arranged a collaborative virtual meeting where they could discuss how they're tackling AI and what lessons they've learned so far. We'll continue to work with our members on AI-related issues. To get involved, please fill in this form.
4. Read the EU draft legislation
All risk leaders on the call agreed that global AI regulation will follow Europe's lead. The European Union has released a draft Artificial Intelligence Act, which is worth reading. Risk leaders drew parallels between this Act and the EU General Data Protection Regulation (GDPR), and some commented that the two may overlap, as AI intersects with data and privacy.
5. Separate tactical from existential (but both could end your business)
Several members said that while their boards were focused on strategic, existential questions about the future of the company and AI, directors might be missing the fact that a tactical AI misstep could just as easily end the business. Some members proposed creating two separate workstreams to adequately prioritise each component.
Collaborate with your peers on AI