Artificial intelligence, and more specifically generative AI, continues to gain significant traction across the capital markets landscape, offering new opportunities for business efficiencies from the back office to the front office of financial firms. At the same time, generative AI provokes both excitement and concern as firms seek the best ways to harness this revolutionary yet uncertain technology.
This week, GreySpark analyst Elliott Playle sat down with GreySpark director and subject matter expert Charles Mo to discuss all things generative AI, including how the technology functions, how firms can best optimise its use, and the regulatory concerns associated with it.
Elliott: It is no secret that generative AI is one of the key buzzwords in the capital markets space at the moment. By generative AI (Gen AI), I am referring to the process whereby a machine trained on large volumes of data creates unique text - or other outputs - in answer to an instruction. We are starting to see large financial institutions such as BlackRock and Morgan Stanley deploy their own proprietary generative AI models. For example, BlackRock allows its clients to extract information from its portfolio management software, Aladdin, using Gen AI. Charles, how do you see the use of Gen AI in investment banks evolving?
Charles: I think we need to be quite careful about how we define generative AI, because the term can be taken to mean quite a lot of different things. We can think of AI as a container for variants of the technology, such as large language models (LLMs) and natural language processing (NLP). The overlap between those two technologies is what we see today as generative AI. LLMs are basically the toolkit for understanding and comprehending written text, and are able to, essentially, estimate the next word. NLP helps machines interpret human text by breaking it down in ways that are understandable to them. Together, they create generative AI. Banks have been using AI for years. Generative AI is a newer development that is yet to be fully explored, but it is the area of AI that is talked about most, and it has created much debate and conceptual thinking around what it could be used for. An obvious benefit of generative AI is its ability to provide sentiment analysis and summarise market data reports, which is especially useful for asset managers and fund managers who need to quickly gather lots of information on a particular market. The other side of the coin is coding. Gen AI is very good at generating test code, which can be used by developers in DevOps frameworks.
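To make the next-word estimation Charles describes concrete, here is a minimal, self-contained sketch - a toy bigram model over an invented corpus, nothing like a production LLM - showing the basic idea of predicting the most likely next word from observed frequencies:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text an LLM is trained on.
corpus = (
    "the market rallied today . the market dipped today . "
    "the fund outperformed the market ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def estimate_next_word(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following[word]
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(estimate_next_word("the"))   # 'market' - the most frequent continuation
print(estimate_next_word("fund"))  # 'outperformed'
```

A real LLM replaces these raw frequency counts with a neural network conditioned on a long context, but the underlying task - estimating the next word - is the same.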
Elliott: What do you mean by test code?
Charles: In order to test code and ensure it is doing what needs to be done, you create a model that is used to essentially exercise the piece of code that was developed, right? Generative AI is actually an answer to that. Let's say you create a very complex statistical model, for maybe some value-at-risk (VaR) calculations or something like that, and you want to test it. You could use generative AI to interrogate that code and look at all the kinds of inputs and outputs in order to generate test cases. You can build up a library of test models that you can apply to the actual code under test. This aspect of generative AI is easily missed, but it can be just as important as other, more mainstream use cases.
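As a hypothetical illustration of that workflow (not any firm's actual framework), the sketch below pairs a simple historical VaR calculation with a generated library of test cases; random scenarios stand in here for the cases a Gen AI model might propose, and each case asserts an invariant the code must satisfy:

```python
import random

def historical_var(returns: list[float], confidence: float = 0.99) -> float:
    """Historical value-at-risk: the loss threshold exceeded in
    (1 - confidence) of the observed return scenarios."""
    ordered = sorted(returns)
    index = int((1.0 - confidence) * len(ordered))
    return -ordered[index]

# Build a library of test cases and exercise the code under test.
# A Gen AI model would propose these; random scenarios stand in here.
random.seed(42)
for case in range(100):
    returns = [random.gauss(0.0, 0.02) for _ in range(500)]
    var_99 = historical_var(returns, 0.99)
    var_95 = historical_var(returns, 0.95)
    # Invariant: a higher confidence level must never give a smaller VaR.
    assert var_99 >= var_95, f"case {case}: VaR not monotonic in confidence"

print("all generated test cases passed")
```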
Elliott: That’s interesting, and probably a generative AI feature that most people are not typically aware of. In terms of generative AI as part of a wider system in a financial firm, I am interested to know how this would work - are there any integration and interoperability considerations that firms should be aware of?
Charles: When it comes to AI models, it is not necessarily an engine that sits outside of the trading system and needs to be integrated in order to utilise it; it often comes as part of an entire trading system. For example, settlement systems might have a bit of AI to help process settlements more quickly. Risk reporting might use AI to highlight insights. The trading API may have some AI to help with notifications, and so on. However, these AI components do need to somehow link to each other, which requires a feed in order for each component to work properly. So, for example, let's say you have an AI engine that does sentiment analysis on news. That is probably designed very specifically to do only that. You might have another AI component upstream that has been built just to sift public information and look for a wide variety of company reports, right? This creates a chain of AIs, with the information eventually needing to be used by the operator downstream. Firms should be aware of how these chains may interact.
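A minimal sketch of such a chain might look like the following; the component names are entirely hypothetical, and simple keyword matching stands in where real AI models would sit behind each interface:

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

# Hypothetical upstream component: sifts public information for
# relevant company reports. A real system would wrap an AI model here.
def sift_reports(documents: list[Document]) -> list[Document]:
    return [d for d in documents if "earnings" in d.text.lower()]

# Hypothetical downstream component: sentiment analysis on the filtered
# feed. Again, a real model would sit behind this interface.
def score_sentiment(document: Document) -> float:
    text = document.text.lower()
    positive = sum(w in text for w in ("beat", "growth", "record"))
    negative = sum(w in text for w in ("miss", "loss", "decline"))
    return float(positive - negative)

# The chain: each component's output is the next component's feed,
# with the result eventually consumed by the operator downstream.
feed = [
    Document("newswire", "Q3 earnings: record growth, estimates beat"),
    Document("blog", "Ten tips for better spreadsheets"),
    Document("newswire", "Q3 earnings miss; full-year loss widens"),
]
for report in sift_reports(feed):
    print(report.source, score_sentiment(report))
```

The point of the sketch is the interface between the stages: if the upstream filter is narrow or faulty, the downstream sentiment scores inherit that limitation without the operator necessarily seeing it.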
Elliott: At this early stage, are there any obvious risks or uncertainties that you envision from the use of generative AI in financial institutions? I know in our recent generative AI report we touched on the concern regarding data bias, which is when the data used to train a generative AI model is incomplete or inaccurate, leading to poor outputs. Are there any major concerns you have about generative AI?
Charles: Yes, data bias is a big concern. The concern is really how the bias gets into the system, so firms should identify the potential sources of bias. The bias could come from different places: it could be individuals, or it could be bias coming from another AI model. For me, it is about identifying where bias has been injected into the value chain and understanding what impact that bias could have. Bias could also come from looking at only one data source rather than multiple data sources. If, broadly speaking, you were studying sugar content in drinks and you aimed the study only at fizzy drinks, you would get a higher, biased estimate of sugar content, right? There is also hard-coded bias, where people have introduced bias directly into the programmed layers of the AI code itself. That is potentially harder to see, because you need to literally go into the code. Then, you might have smaller AI models feeding off a larger AI model, with the bias filtering down and getting amplified. So there is a lot of concern around that.
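Charles's fizzy-drinks analogy can be shown in a few lines (the figures are invented and purely illustrative): restricting the sample to one segment skews the estimate that a model trained on that data would inherit:

```python
from statistics import mean

# Illustrative sugar content in grams per 100ml (invented numbers).
drinks = {
    "cola":         10.6,
    "lemonade":      9.8,
    "energy drink": 11.0,
    "orange juice":  8.9,
    "still water":   0.0,
    "iced tea":      4.5,
    "milk":          4.8,
}
fizzy = {"cola", "lemonade", "energy drink"}

# Sampling only fizzy drinks versus sampling the whole population.
biased_sample = [sugar for name, sugar in drinks.items() if name in fizzy]
full_sample = list(drinks.values())

print(f"fizzy-only mean: {mean(biased_sample):.1f} g/100ml")  # ~10.5
print(f"all-drinks mean: {mean(full_sample):.1f} g/100ml")    # ~7.1
```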
However, another risk is blindness. You might get a perfect AI model with no bias in it at all. It does the job so well that there is a lack of regulatory oversight on it. Then, over time, people add more responsibility and input more risk into the system, when in reality the system could falter, especially if it encounters new scenarios that have not been fully tested. Another problem is hallucination. Generative AI ultimately creates content, and it can only draw on the data it has been given. If the model is asked to produce an output based on something it has not been fully trained on, it will ‘guess’ and come up with a convincing answer that the user believes to be correct but that will not necessarily be correct. What you think the model knows, it may not actually know. That creates a big issue and produces erroneous data, which can then be passed on to another AI model. Although banks currently have little dependency on generative AI models, the worry is that, going forward, the influence of generative AI will undoubtedly grow. That explains why the EU AI Act has come into focus.
Elliott: This brings me nicely onto my next point about AI regulation, actually. I know that AI (including Gen AI) is generally regulated in patchwork across different regulations, such as MiFID II, without being defined under a specialist, comprehensive AI regulatory framework. I’d just like to get your take on the EU AI Act. It is obviously a very generic regulatory framework and not necessarily only for financial firms. Do you think the legislation, if it comes into force (which is likely to be in 2026, if it does), will provide adequate protections for financial firms, or will more be needed?
Charles: I think the EU AI Act is the most comprehensive set of AI rules the industry has so far seen. It is definitely a framework to be built on and, like generative AI itself, it is at an early stage. At the moment, I would say it is sufficient for today. The EU AI Act is wide-encompassing and is not just for capital markets; it is for the different firms and businesses that utilise AI in their services. It is a broad-brush approach. It has some fundamental principles, such as risk oversight and management, under which AI tools need to be classified into different levels of risk, ranging from unacceptable to low risk. Banks should be aware of this anyway, but the regulations will force a firm to tangibly do something to ensure its use of AI is compliant.

The EU AI Act proposals have come at the right time - capital markets is definitely lagging behind other industries in the AI revolution, such as the pharmaceutical and medical sectors, which have been deploying it at scale for years. There are statistics showing how good those industries have been at integrating AI throughout the whole operational process. Banking and back office systems are way behind in that regard; banks have only really implemented pockets of AI. The EU AI Act will give the banks a chance to put something in place while not yet being overwhelmed by AI in the technology stack. So, I think the proposals now are good enough to get the ball rolling. There is more scope for the back and middle offices to utilise AI, where operational processes can become more efficient. Personnel will need to have enough knowledge of the technology to recognise where the risks are and mitigate any issues, and monitoring will be key. The legislation will no doubt develop over time and become more sophisticated in line with this. It is worth noting that we see a lot more evidence of AI being utilised in the fintech market. In some ways, the legislation could be much more applicable to fintechs to begin with, so we might see them showing increased interest in the EU AI Act over the coming months and possibly reconsidering their AI usage.
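The risk-tiering Charles mentions could be imagined, very roughly, as a compliance inventory. The sketch below is a hypothetical illustration only - the tier labels approximate the Act's published categories, and the tools and their classifications are invented - not a legal classification tool:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Tiers approximating the EU AI Act's classification levels.
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    LOW = 1

@dataclass
class AITool:
    name: str
    use_case: str
    tier: RiskTier

# Hypothetical inventory of a bank's AI components.
inventory = [
    AITool("news-sentiment", "market colour for the front office", RiskTier.LOW),
    AITool("credit-scorer", "retail creditworthiness assessment", RiskTier.HIGH),
    AITool("test-generator", "DevOps test-case generation", RiskTier.LOW),
]

# Flag anything in a tier that triggers heavier oversight obligations.
for tool in inventory:
    if tool.tier.value >= RiskTier.HIGH.value:
        print(f"review required: {tool.name} ({tool.use_case})")
```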
End of transcript.