This white paper has been developed by AFME’s AI Task Force to consider how to approach transparency in AI/ML, which is a key factor in demonstrating and ensuring the safe and effective deployment of trustworthy AI/ML in capital markets.
As the adoption of Artificial Intelligence (AI) and Machine Learning (ML) in capital markets continues at pace, attention is increasingly focused on how capital markets firms can demonstrate a responsible approach to their use of the technology. In any use of AI/ML, transparency is important to a wide range of stakeholders, as it can demonstrate how an AI/ML model has been developed, how it will be used and monitored, and how it can stand up to scrutiny and challenge. This is crucial for building trust in the technology, both within a firm and with external stakeholders such as clients and regulators. To meet these stakeholder needs, the paper suggests a technology-neutral, principles-based approach to transparency, built around the assumptions used in developing AI/ML models and the testing of those models.
Discussions on transparency often quickly narrow to concepts such as explainability, which involves expressing the complex internal mechanics or workings of an AI/ML model. This is problematic because, while currently available explainability techniques are useful in certain scenarios, in most cases they provide only a partial understanding of complex AI/ML models. In AFME’s view, therefore, mandating a certain level of accuracy and validity of technical explainability is likely to unnecessarily limit the use of the technology: it would restrict the breadth and complexity of AI/ML models that can be used, and could also lead to the provision of ‘explanations’ that are misleading and therefore counterproductive.
Instead, AFME proposes that AI/ML transparency should be considered more broadly, as a framework built around (i) qualitative and quantitative assumptions and (ii) testing. Such frameworks should be tailored to the individual risk profile of the AI/ML application and to the needs and knowledge of the various internal and external stakeholders. The framework should also be evaluated and updated throughout the application’s lifecycle.
Operating in a heavily regulated sector, capital markets firms recognise that their use of AI/ML must be consistent with their obligations in key areas such as governance, accountability, duty to clients and data protection. The existing regulatory framework for capital markets is largely technology-neutral and principles-based. AFME suggests that this approach should be maintained for AI/ML, but that a gap analysis should also be performed to ensure that regulations (both existing and new) do not place unnecessary constraints on a firm’s use of the technology, or contain granular provisions that may quickly become obsolete as the technology continues to develop.
The proposed approach to transparency as a way of meeting stakeholder needs fits with our suggested focus on a technology-neutral and principles-based regulatory framework within capital markets. It will enable the demonstration of a responsible and ethical approach to AI/ML and support the development of the technology to the maximum benefit of the industry and its clients. AFME looks forward to working with regulators and the industry to further discuss the issue of transparency in AI/ML and to embed this approach.