
How AI is changing the way we think about risk

This article was first published by Oliver Wyman.


The advantages of AI have become apparent, and no one can afford to be left behind in its uptake. Many institutions cite a lack of confidence in governance as a barrier to adopting AI, and AI risk is drawing increasing attention from governments, regulators, and the media, despite no real consensus on what AI or AI risk actually is. In Oliver Wyman's recent census of 33 US and EU financial institutions, all reported they were using AI, but two-thirds were concerned about the associated risks.

AI exacerbates existing risks, and its impact is not confined to model risk. AI calls for a cross-cutting approach that extends beyond model risk to technology, data, process, legal, compliance, regulatory, ethical, reputational, and cybersecurity risk. It sits more naturally under operational or enterprise risk, but it still requires the analytical capability to assess model performance and future reliability.

Regulatory risk

Government agencies and regulators have increased their focus, with guidelines or new AI regulations proposed in the EU, Hong Kong, Japan, Saudi Arabia, Singapore, the UK, and the US.

The EU AI Act (2021) proposes fines of up to 6% of global turnover for inadequate governance. This year's EU AI Liability Act makes clear that developers, producers, and users are responsible not just for errors in AI but for any potential impact the AI can have, smoothing the way for EU-wide class actions. With AI, there is no longer a requirement for the injured person to prove a negligent or intentionally damaging act or omission.

Before we can govern AI, we have to identify it. How does a tool get into the inventory if no one understands its scope? AI extends our idea of a frontline modeler beyond data scientists. For example, employees in human resources need to be clear that the third-party CV filter they've deployed, or the aggregation tools they use, are in fact high-risk AI.

The regulations give us many definitions of AI. The first EU definition encompassed every computational system in existence; objections from member states led to compromise texts and increasingly complex descriptions, but these have not materially narrowed the scope.

Ultimately, our long-term goal should be streamlined risk tiering rather than exclusive definitions. Policies need consistent technical definitions, quality standards, and management processes, but this is an upgrade and modernization of our risk capabilities, and one that applies beyond AI.
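As a rough illustration of what risk tiering could look like in practice, the sketch below classifies an inventoried use case into tiers loosely modeled on the EU AI Act's categories (unacceptable, high, limited, minimal). The attribute names and tiering rules are hypothetical, not regulatory definitions.

def risk_tier(use_case):
    # Hypothetical tiering logic; each question maps to a policy definition.
    if use_case.get("prohibited_practice"):          # e.g. social scoring
        return "unacceptable"
    if use_case.get("affects_access_to_services"):   # e.g. credit, hiring
        return "high"
    if use_case.get("interacts_with_humans"):        # e.g. customer chatbot
        return "limited"
    return "minimal"

# A third-party CV filter touches access to employment, so it tiers as high.
print(risk_tier({"affects_access_to_services": True}))  # -> "high"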

Ethical and reputational risk

AI exacerbates concerns around bias and explainability, increasing ethical and reputational risk; however, these topics are not entirely new. The regulatory obligation to check for bias and the media scrutiny in this area have increased, but our intention not to discriminate, and our legal obligation to offer equal opportunities, were already there.

Monitoring fairness requires a specific metric, a targeted group, a threshold, and a resulting action. These choices must be justified and approved: which of the mutually exclusive fairness metrics to use, which groups to prioritize for protection, and what trade-off between fairness and accuracy is appropriate. Similarly, interpretability tools and explainability expectations have advanced, but no gold standard has emerged for KPIs, making policies essential yet complex to create.
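A minimal sketch of what such a check could look like, using demographic parity difference as the metric. The group labels, the 10% threshold, and the escalation action are hypothetical policy choices, not prescribed values; in practice each would be set through the approval process described above.

def demographic_parity_gap(outcomes, groups, protected):
    # Difference in positive-outcome rates between the protected group
    # and everyone else (1 = favorable decision, 0 = unfavorable).
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    return abs(sum(prot) / len(prot) - sum(rest) / len(rest))

# Illustrative loan decisions: 1 = approved, 0 = declined
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

THRESHOLD = 0.10  # maximum tolerated gap, set by policy and risk appetite
gap = demographic_parity_gap(outcomes, groups, protected="B")
if gap > THRESHOLD:
    print(f"Fairness breach: parity gap {gap:.2f} exceeds {THRESHOLD:.2f}")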

Bias and interpretability metrics must to an extent be bespoke, but they should not rest on one person's ad hoc decision. Considered standards, policies, and risk appetite are key to standardizing and streamlining what is acceptable, from developer to board level, and how often we need to challenge results and look for other potential impacts. There is also a commercial argument: when we misunderstand a subgroup, we leave money on the table.

Privacy, technology, and data risks

These more advanced models have heightened privacy and security concerns, and with them cross-cutting process, technology, and data risks. The need to govern data is not new, but AI has increased our access to unstructured data, such as images, voice, and text, and these must now be explicitly covered in governance, audit, process, and storage standards.

Our ability to use large volumes of data and fit models to the level of the individual data point is powerful, but it undermines many methods of anonymization. Differential privacy, minimizing cross-border data flows, and homomorphic encryption have become increasingly important.
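To make differential privacy concrete, here is a minimal sketch of the Laplace mechanism: calibrated noise is added to an aggregate query so that no single record can be reverse-engineered from the output. The salary data, clipping bounds, and epsilon value are illustrative assumptions, not recommended settings.

import random

def laplace_noise(scale):
    # Laplace(0, scale) noise as the difference of two exponentials.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_mean(values, lower, upper, epsilon):
    # Release the mean with epsilon-differential privacy. Clipping each
    # value to [lower, upper] bounds the sensitivity of the mean at
    # (upper - lower) / n, which calibrates the noise scale.
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

salaries = [42_000, 55_000, 61_000, 48_000, 73_000]  # hypothetical records
print(private_mean(salaries, lower=0, upper=100_000, epsilon=1.0))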

The large volumes of data used for dynamic calibration, and the high processing power needed to run many of these models, also stretch technical and storage capabilities, and existing tech debt, left ignored for years, has become an urgent issue.

To meet climbing complexity and expectations in data maintenance, traceability, and audit, organizational standards need data quality, bias, minimization, privacy, and security monitored across the full data-lineage pipeline, not just at the model stage. These issues can only be tackled with a holistic approach, including a dedicated risk appetite statement and a framework that proactively manages this risk.
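The sketch below illustrates one way to run the same quality check at every stage of a lineage pipeline rather than only at the model stage. The stage names, the completeness check, and the 95% threshold are hypothetical examples of what an organizational standard might specify.

PIPELINE_STAGES = ["ingestion", "transformation", "feature_store", "model_input"]

def completeness(records, field):
    # Share of records where `field` is present and non-empty.
    return sum(1 for r in records if r.get(field)) / max(len(records), 1)

def run_lineage_checks(records_by_stage, field, threshold=0.95):
    # Apply the check at every stage and report any breaches for escalation.
    breaches = []
    for stage in PIPELINE_STAGES:
        score = completeness(records_by_stage[stage], field)
        if score < threshold:
            breaches.append((stage, score))
    return breaches

records = {
    "ingestion": [{"customer_id": "c1"}, {"customer_id": "c2"}],
    "transformation": [{"customer_id": "c1"}, {"customer_id": ""}],
    "feature_store": [{"customer_id": "c1"}],
    "model_input": [{"customer_id": "c1"}],
}
print(run_lineage_checks(records, field="customer_id"))
# -> [('transformation', 0.5)]: the gap is caught upstream of the model.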

If we recognize that AI has arrived, then we are not primarily talking about new risks, merely about upgrading our approach to close vulnerabilities that were always there. There is some upskilling needed and some jargon to learn: validators need to understand hyperparameters, and CROs need to be comfortable distinguishing unsupervised from dynamic. However, a tighter, cross-cutting network that prevents risks from falling through the cracks is a modernization that was already long overdue. AI brings the potential for a more accurate and equitable approach, but when it goes wrong, it goes wrong in a bigger and faster way, and we need to assess and monitor it more closely.

Exhibit 1: Focus Area