This report explores some of the risks that could arise as the use of artificial intelligence (AI) becomes increasingly prevalent in financial services – not just at the fringes, but at mainstream institutions that touch billions of people worldwide.
The report identifies three principal ‘risk drivers’:
Opacity and complexity: A trade-off at the heart of many AI models is that the more effective the algorithms, the more difficult they are to scrutinise.
Distancing of humans from decision making: AI is different from previous ‘rule-based’ forms of automation because it enables many actions to be taken without explicit instruction.
Changing incentive structures: The benefits to successful firms and the risks of getting left behind create powerful incentives to implement AI solutions faster than may be warranted.
ML models are just as fallible as rule-based ones.
New ethical challenges include algorithmic biases that could lead to discriminatory practices. These biases can be extremely difficult to root out because ML excels at finding complex ‘hidden’ relationships in data.
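The proxy effect behind such hidden bias can be illustrated with a minimal sketch. Everything here is hypothetical and not from the report: a protected attribute is excluded from a model's inputs, yet a strongly correlated proxy feature (here labelled 'postcode') lets even a trivial scoring rule reproduce the disparity.

```python
import random

random.seed(0)

# Hypothetical data: a protected group membership (never shown to the
# model) and a proxy feature that correlates strongly with it.
data = []
for _ in range(1000):
    group = random.random() < 0.5                       # protected attribute
    proxy = random.random() < (0.9 if group else 0.1)   # e.g. postcode band
    data.append((group, 1 if proxy else 0))

# A 'model' that scores only on the proxy feature.
def score(postcode):
    return 0.8 if postcode == 1 else 0.2

# Average score per group: a large gap emerges even though the
# protected attribute was never an input to the scoring rule.
avg_a = sum(score(p) for g, p in data if g) / sum(1 for g, p in data if g)
avg_b = sum(score(p) for g, p in data if not g) / sum(1 for g, p in data if not g)
print(round(avg_a - avg_b, 2))
```

Because the bias enters through a seemingly neutral feature, simply dropping the protected attribute does not remove it, which is why such relationships are hard to root out in practice.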
A purported benefit of AI is that it dispassionately draws conclusions from data, without prejudice. In practice, however, the beliefs and values of the people who build the models affect the outcomes.
AI systems can perform poorly in previously unencountered situations – potentially amplifying the impact of ‘black swan’ events.
ML-driven solutions may undermine social benefits.
In insurance, greater risk differentiation could lead to high-risk individuals being priced out of the market, even though they may be the ones most in need of insurance.
ML’s ability to combine data on individuals from diverse sources might challenge our concept of fairness, as well as raising privacy concerns.
More personalised financial products could come at the expense of price transparency.
AI could contribute to a future financial crisis.
The full report is available on the CSFI website.