
Current and future regulatory landscape for AI and machine learning in the investment management sector

On Tuesday this week, Mark Lewis, senior consultant in IT, fintech and outsourcing at Macfarlanes, took part in an event hosted by The Investment Association covering some of the use cases, successes and challenges faced when implementing AI and machine learning (AIML) in the investment management industry.

Mark led the conversation on the current regulatory landscape for AIML and on the future direction of travel for the regulation of AIML in the investment management sector. He identified several challenges posed by the current regulatory framework, including those caused by the lack of a standard definition of AI, both generally and for regulatory purposes. This creates the risk of a “fragmented regulatory landscape” (an expression used recently by the World Federation of Exchanges in the context of the lack of a standard taxonomy for fintech globally), as different regulators tend to use different definitions of AIML. The result is a risk of over- or under-regulating AIML, which is thought to be inhibiting firms from adopting new AI systems. While the UK Financial Conduct Authority (FCA) and the Bank of England seem to have settled, at least for now, on a working definition of AI as “the use of a machine to perform tasks normally requiring human intelligence”, and of ML as “a subset of AI where a machine teaches itself to perform tasks without being explicitly programmed”, these working definitions are too generic to be of serious practical use in approaching regulation.

The current raft of legislation and other regulation that can apply to AI systems is uncertain, vast and complex, particularly within the scope of regulated financial services. Part of the challenge is that, for now, there is very little specific regulation directly applicable to AIML (exceptions include the GDPR and, for algorithmic high-frequency trading, MiFID II). The lack of understanding of new AIML systems, combined with an uncertain and complex regulatory environment, also has an impact within businesses as they attempt to implement these systems. Those responsible for compliance are reluctant to engage where there is insufficient evidence of how the systems will operate and how great the compliance burden will be; better explanations from technologists may go some way to assisting in this area. Overall, regulated firms are concerned to ensure that their current systems and governance processes for deployments of technology, digitisation and related services remain fit for purpose when extended to AIML, and they are seeking reassurance from their regulators that this is the case. Firms are also looking for informal, discretionary regulatory advice on specific AIML concerns, such as the disclosures required to be made to customers about the use of chatbots.

Aside from the sheer volume of regulation that could apply to AIML development and deployment, there is complexity in the sources of regulation. For example, firms must also have regard to AIML ethics and to ethical standards and policies. In this context, Mark noted that, this year, the FCA and The Alan Turing Institute launched a collaboration on transparency and “explainability” of AI in the UK financial services sector, which will lead to the publication of ethical standards and expectations for firms deploying AIML. He also referred to the role of the UK government’s Centre for Data Ethics and Innovation (CDEI) in the UK’s regulatory framework for AI and, in particular, to the CDEI’s AI Barometer Report (June 2020), which identifies several key areas that are likely to require regulatory attention, some with significant urgency. These include:

  • difficulties in ensuring that algorithms remain free from human and machine-learned bias;

  • digital exclusion, because some in society cannot be as active online as others and so do not leave substantial “data footprints”, meaning that algorithms cannot qualify them for the same range of products, services and prices as are offered to those with a greater digital presence;

  • data monopolies, because a small number of larger firms hold substantial, varied and reliable data sets that would enable them to offer products and services to customers that other firms cannot;

  • transparency in, and data subject consent to, new data types, where firms use the personal data of customers or potential customers derived from social media without being transparent about that use, thereby denying those data subjects the chance to consent to it; and

  • higher-impact cyber attacks.

In the absence of significant guidance, Mark provided a practical 10-point governance plan, set out below, to assist firms in developing and deploying AI in the current regulatory environment. He highlighted the importance of firms keeping watch on regulatory developments, including what regulators and their representatives say about AI, as this may indicate the direction of travel in the absence of formal advice. He also warned that firms ignore ethics considerations at their peril, as these will be central to any future regulation; in particular, for the reasons given above, he advised keeping up to date with reports from the CDEI. Other topics discussed in the session included lessons learnt for best practice in the fintech industry and how AI has been used to solve business challenges in financial markets.

Watch this: https://www.youtube.com/watch?v=Z5vxRC8dMvs
