
The FCA has declared this year as the year of evaluating AI in surveillance

In the UK, the FCA is preparing to scrutinise the systems and processes firms have in place around the adoption of AI technology, to ensure regulatory expectations are met.

Dr. Shlomit Labin, VP of Data Science at Shield, didn’t mince words as she spoke with industry influencer and founder of Thought Networks, Jess Jones, about the escalating importance of AI in the eyes of the regulator.

Here are some key points from the discussion. To see the whole interview, click here.

The age of AI maturity

Jess: Is AI technology sufficiently mature right now to really fulfill the needs of compliance and surveillance professionals?

Shlomit: AI is everywhere now, which is why customers and clients simply expect firms to use it professionally. It should be applied because of its great advantages, but it also has to be applied wisely, according to the capabilities and limitations of what it can and cannot do.

In the domain of compliance and surveillance, not only can we monitor more, we can also understand far more nuance. Where once we tried to catch specific words or phrases, or trained on specific examples, we can now identify far more sophisticated risks and suspicious exchanges within a conversation that used to be too elusive for us.

Jess: Can you give us a flavour of your excitement specifically around some of the ways it can really assist in surveillance and compliance?

Shlomit: It can be leveraged for the classical surveillance tasks as well as being incorporated into solutions for financial risk detection. More than that, it can provide additional layers that help compliance teams do their work better.

I think, in general, we need to look at the role of an AI capability as an assistant to an expert and not a replacement. At every step of the AI revolution, I hear concerns from people about what it might replace. Will it take their job? And I say, no, it’s the other way around. It will make your job more interesting. Because all the tedious things you needed to do before, no longer need to be done by you.

As a simple example in the day-to-day work of compliance, you no longer need to filter out a lot of irrelevant alerts: AI can do it for you. But will AI understand what the overall underlying risk is? No. That will be the work of an expert, of you.
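To make that concrete, here is a minimal sketch of how an LLM could pre-filter surveillance alerts before they reach a human reviewer. The `call_llm` helper, the prompt wording and the fail-safe rule are all illustrative assumptions, not Shield’s production logic.

```python
# Hypothetical sketch: LLM-assisted triage of surveillance alerts.
# call_llm() is a placeholder for whatever chat-model endpoint a firm
# actually uses; it is not a real Shield API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

TRIAGE_PROMPT = """You are assisting a compliance reviewer.
Alert text:
---
{alert}
---
Answer with exactly one word, RELEVANT or IRRELEVANT, judging whether
this alert plausibly indicates market-abuse risk."""

def triage(alerts: list[str]) -> list[str]:
    """Return only the alerts the model flags as worth expert review."""
    kept = []
    for alert in alerts:
        verdict = call_llm(TRIAGE_PROMPT.format(alert=alert)).strip().upper()
        # Fail safe: anything other than an explicit IRRELEVANT goes to a human.
        if verdict != "IRRELEVANT":
            kept.append(alert)
    return kept
```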

Conversing with your data

Jess: I’ve heard you say in the past that you can have a conversation with the data. Can you tell us what that process of conversing with data looks like?

Shlomit: I think that there are two major innovations in the world of generative AI. Firstly, that it can understand human language, so I can talk to it like I talk to you. We no longer need to write scripts, queries, or code to program models. You can talk to it like a human being.

Secondly, you can have a continuous conversation. When I ask something and the AI answers me, I can ask a follow-up question, and it can be programmed to remember the history and the previous answer and do a follow-up analysis. This human-level understanding and conversational mode lets us dig deeper into what we can establish from the data.
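As an illustration of that conversational mode: most chat-style APIs have no memory of their own, so “remembering the history” means resending the accumulated messages with every follow-up. The sketch below uses the OpenAI Python client purely as a familiar example; the model name and system prompt are assumptions, and any chat model with a role/content message format works the same way.

```python
# Minimal sketch of "conversing with your data": the accumulated history
# travels with every request, which is what gives the model its "memory".
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "system",
            "content": "You answer questions about the supplied trade communications."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=history,      # full history, not just the new question
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Follow-ups can refer back to earlier answers because the history is resent:
#   ask("Which calls yesterday mention moving a conversation off-channel?")
#   ask("For the second one, who initiated it?")
```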

Hallucinations and other risks

Jess: Everyone’s heard about the hallucinations of generative AI. How will that be excluded from everything that’s going on in compliance and surveillance? Is that a risk?

Shlomit: It’s a major risk. And not only hallucinations: generative AI is a closed model, so we don’t really know what is happening inside it. We can only judge it by its performance. Governance and evaluation of models will shift towards assessing performance rather than how a model is built. And still, every single model has its limitations and blind spots, the same way a single person may overlook things or have gaps in their knowledge. So it might look perfect in 99% of cases, but fail on something it was not trained on that is slightly different from what it knows.

In the world of risk analysis, the way to handle this is to use multiple models – to have all kinds of checks and balances. At Shield, we use a multi-layered solution, which means we look at each inquiry, question or evaluation from different angles, using different models trained in different ways. This way we avoid the blind spots, and we are constantly validating against hallucinations, wrong answers and the like.
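A minimal sketch of that checks-and-balances idea is below: the same question goes to several independently trained models, and any disagreement is escalated to a human instead of trusting a single answer. The model names and the `ask_model` helper are hypothetical placeholders, and requiring unanimity is just one possible policy.

```python
# Hypothetical multi-model cross-check: disagreement between models is
# treated as a signal to escalate, not something to paper over.
from collections import Counter

def ask_model(model_name: str, question: str) -> str:
    raise NotImplementedError("placeholder for each provider's client")

MODELS = ["model_a", "model_b", "model_c"]  # illustrative names

def cross_checked_verdict(question: str) -> str:
    votes = Counter(ask_model(m, question) for m in MODELS)
    answer, count = votes.most_common(1)[0]
    # This policy requires unanimity; a majority threshold is another option.
    if count < len(MODELS):
        return "ESCALATE_TO_EXPERT"
    return answer
```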

Regulatory engagement and transparency

Jess: Do you think the current process is going to be sufficient for regulators to deem it a transparent system?

Shlomit: I think regulation is also something that shifts. Eventually, as new technology rises, it forces adoption on everyone – even on the ones we call ‘late adopters’. Regulators, especially in the compliance industry, are conservative, and they want proof for any technology that is being used. The breakthrough in AI technology forces them not only to acknowledge it, but also to expect it – because it brings far better results. I know that the FCA has declared this year a year of evaluating AI in surveillance. And they will no longer only evaluate; they will also expect compliance teams to have it in place.

Jess: The regulations and the products are incredibly complicated. Can AI be trained to understand multiple languages, jurisdictions, complexities and jargon? And what stage are we at now – how mature is that?

Shlomit: The technology is there, but it’s up to the vendors to make sure that when you offer such a solution, you adapt it to the specific use case of your customers. You cannot just take ChatGPT out of the box, for example, and provide it as a solution.

I’ll give you a simple example of what we have experienced. The first thing that generative AI was good at was summarising calls. And we said: what a great thing, let’s give surveillance officers a summary of each conversation. Sounds excellent, but what happened was that when it summarised a communication between, for example, two traders, it omitted the exact thing that was actually the risk in the communication. And that’s because it was not the gist of the call.

The gist of the call was about a lot of other things, but the outlier was one trader telling another to call them later about that issue, as they didn’t want to talk about it on the phone. That detail is not important for a general summary. So if you are producing a summary for compliance purposes, you as a vendor must make sure it includes the important elements that are relevant to your customer.
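One pragmatic fix, sketched below, is to stop asking for a generic summary and instead instruct the model to always surface compliance-relevant outliers. The prompt wording and the risk signals listed are illustrative assumptions, not the prompt Shield actually ships.

```python
# Illustrative prompt design: a generic "summarise this call" request can
# drop the one risky sentence, so the instruction explicitly names the
# signals that must survive the summary.
COMPLIANCE_SUMMARY_PROMPT = """Summarise the conversation below for a
surveillance officer. In addition to the overall gist, ALWAYS quote any
passage where a participant:
- suggests moving the discussion to another channel,
- avoids putting something in writing or saying it "on the phone",
- references sharing non-public information.

Conversation:
---
{transcript}
---"""

# Usage with any chat model, e.g.:
#   call_llm(COMPLIANCE_SUMMARY_PROMPT.format(transcript=text))
```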

Jess: Financial institutions have a lot of regulation in terms of data and data privacy. Once data has gone to ChatGPT, is that sent outside the institution?

Shlomit: ChatGPT is just one of the generative AI models. It was the first, and it deserves respect for being the first, but there are already some good competitors. Google’s models are, I think, almost as good, and Claude is also keeping pace with ChatGPT.

All users of such models need to make sure they are using them securely: that their data is not being used for training, that GDPR rules are being followed, and that data does not leave the region. There are ways to deal with this, and it is something buyers should check – no regulator will allow data to leave the institution uncontrolled.

The excitement and the fear

Jess: To what extent do you think there’s increased scepticism from regulators and financial institutions because some vendors have brought things to market too early, or implemented things that weren’t at the right maturity level? Do you think that’s made people more wary about using GenAI?

Shlomit: I think trust in new technology takes everyone some time to build. With ChatGPT, at first there was a lot of excitement, and then there was a lot of fear. Some countries decided to ban ChatGPT. Universities decided not to allow it.

And now the trend is: when is the next version coming? Can it already write code for me? Can I already incorporate it into other things? Adapting to something new takes time, but I do believe that this kind of technology is here to stay. It is a real revolution. I think this technology will make everybody’s life easier and more comfortable, and that’s why it will eventually be adopted.

Jess: Does AI have the potential to identify market abuse better than people? Or will it be at the same level but just done by smaller teams more quickly?

Shlomit: It’s a good question. It depends on how it is used. There are a lot of obstacles in the way. For example, all GenAI models are trained with restrictions not to perform illegal acts or give illegal answers. So, can it detect market abuse? You need to know which questions it will agree to answer and which it won’t. It is very cautious, so used out of the box it might create a huge number of false alerts, because it will be suspicious of everything. So it is a tool that needs to be combined with additional tools to make sure you have the best solution in place.
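A hedged sketch of that “combined with additional tools” point: cheap deterministic rules make a first pass over the stream, and the (overly cautious) language model is only asked to score what the rules already flagged. The patterns, threshold and `score_with_llm` helper are all illustrative assumptions.

```python
# Hypothetical two-stage pipeline: rules narrow the stream first, so the
# model's suspicion of everything does not flood reviewers with alerts.
import re

RULES = [
    re.compile(r"call me (on my cell|later)", re.I),
    re.compile(r"(don'?t|do not) want to talk .* (phone|here)", re.I),
]

def rule_hit(text: str) -> bool:
    return any(r.search(text) for r in RULES)

def score_with_llm(text: str) -> float:
    raise NotImplementedError("placeholder: return a model risk score in [0, 1]")

def review_queue(messages: list[str], threshold: float = 0.7) -> list[str]:
    """Only rule-flagged messages reach the model; only high scores reach a human."""
    return [m for m in messages if rule_hit(m) and score_with_llm(m) >= threshold]
```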

No more alert fatigue!

Jess: What do you think compliance teams should be optimistic about?

Shlomit: There’s a phrase I heard recently that I really believe is true in this case: AI is helping us move from being an army to being a team of experts. This again refers to the fact that we will need fewer people, but those real experts will do their work in a far more sophisticated way.

No more issues like alert fatigue. No more high turnover in this profession, because the work will be far more sophisticated. All the tedious work will be done for you, and you can do your expert thing.

Note: the above text has been summarised and edited, and may differ slightly from the delivered version
