AI in Financial Services – What are the Rules?

Financial services and market fraud: two phrases you'd never expect to hear uttered in the context of a technology wild, wild West. Yet that's the picture painted by the recent RFI (Request for Information) posted by five of the largest US financial regulators: the Federal Reserve, the Consumer Financial Protection Bureau, the Federal Deposit Insurance Corp., the National Credit Union Administration, and the Office of the Comptroller of the Currency. Probing deeper, the RFI explicitly seeks to learn how artificial intelligence (AI) and machine learning (ML) are currently being used by financial firms – because the regulators don't know. That in and of itself raises an alarm bell. And without any clear insights or standards related to the application of these technologies in RegTech, the question they really want answered is: has any unconscious bias inadvertently been built into the models?

Indeed, that potentially exposes a murkier side of financial transactions, one we haven't seen since Enron's headquarters was raided by the FBI nearly 20 years ago. "Do they, or don't they?" has once again become the burning question. This time, however, it's being posed to the financial industry as a whole. And it's not only the financial regulators asking. Today, that burning question is front and center in initiatives by watchdog groups like the American Civil Liberties Union, which have publicly called out predatory lending and the implicit bias now built into the lending models upheld by every bank in America (and beyond its borders).

Implicit Bias in Machine Learning

Let's step back for a moment for a quick primer on how implicit bias has apparently crept into financial risk assessment models. First, machine learning (ML) is an applied form of AI. The term describes how the machine (in this case, a mathematical algorithm rather than a physical machine like a robot) learns to associate X with Y after being exposed, thousands or millions of times, to data that explicitly "states" X = Y.

Think about it this way: when you do a Google Images search, you're actually on the receiving end of an application of AI and ML. The software engineers behind that technology chose which data sets would initially be presented to the search algorithm. This first step is known as "training the algorithm." As is the case for training of any sort, you learn whatever you're trained to learn. Once the algorithm has been trained, it is then tested on randomized data.
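To make that train-then-test cycle concrete, here is a minimal sketch in Python using scikit-learn. The two "image attributes" and the cat/dog labels are invented stand-ins for illustration, not anything from Google's actual pipeline.

```python
# A minimal sketch of the training-then-testing loop described above.
# Features and labels are synthetic toy data, not a real image pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy labeled data: each row is an "image" summarized by two made-up
# attributes; label 1 = cat, 0 = dog.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# "Training the algorithm" on the data the engineers chose...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

# ...then testing it on randomized, held-out data.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The key point the sketch illustrates is that the model can only ever reflect whatever patterns exist in the data it was handed.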

Results of those randomized tests inform how the algorithm is refined until its prediction accuracy reaches an "acceptable rate." In the case of the Google Images search, the algorithm "learned" that images with specific attributes like fur, whiskers, pointy ears, long tails, and little noses are cats and not dogs. Comparatively, in the case of financial risk assessment algorithms, the training data included an extensive collection of loan defaults correlated with the historically low incomes of people of color. Hence, implicit bias has likely been built into all of these models. What the RFI is attempting to elucidate is how deep that bias runs and what the "acceptable rate" of accuracy is in financial risk assessment models, a rate assumed to vary widely from one financial institution to another.
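One common way to probe that question is to compare a model's approval rates across groups. The sketch below uses entirely synthetic data and a hypothetical protected-attribute flag; it is illustrative only, not any institution's actual risk model.

```python
# Illustrative bias check: compare approval rates across groups and
# compute a disparate-impact ratio. All data here is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000

# Synthetic applicants: income correlates with a hypothetical group
# flag, mimicking the historical skew described above.
group = rng.integers(0, 2, size=n)                      # protected attribute
income = rng.normal(50_000 - 10_000 * group, 12_000, size=n)

# A naive "risk model" that approves purely on income: the skew in the
# historical data flows straight through to the decisions.
approved = income > 45_000

df = pd.DataFrame({"group": group, "approved": approved})
rates = df.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio; the four-fifths rule is a common reference point.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f} (flag if below ~0.80)")
```

A model can score well on raw accuracy and still fail a check like this, which is exactly why "acceptable rate" is the contested term.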

A Call for Governance

The wild, wild West analogy here refers to a lack of oversight and self-policing that leads to an "anything goes" environment. Even in an industry as tightly regulated as financial services, there is no existing set of standards or rules governing how AI and ML (and the underlying training data) can be used in the assessment of financial risk.

The EU is carrying the torch for the future of AI and ML in finance with its proposal to govern the application of these technologies. Financial risk assessments have been flagged as "high-risk AI systems" due to the known bias inherited from historical training data. Governance requirements include implementing a risk management system with mitigation and control measures, training, and regular testing so that protections are in place whenever potential implicit bias is flagged. Technical documentation, record-keeping, transparency, human oversight, and cybersecurity will be essential components of adherence, and financial firms are working out how best to prepare for this level of exposure. US institutions are spared – for now. But it likely won't be long before the US implements a similar governance structure.
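In practice, the record-keeping and human-oversight requirements tend to start with logging every automated decision in a reviewable form. The sketch below shows one simple way to do that; the field names and the JSON-lines format are assumptions for illustration, not anything mandated by the proposal.

```python
# Minimal sketch of decision record-keeping for later human review.
# Field names and file format are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_decision(log_path, applicant_id, features, score, decision,
                 model_version):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_version": model_version,
        "features": features,          # inputs the model actually saw
        "score": score,                # raw model output
        "decision": decision,          # approve / decline / refer-to-human
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with hypothetical values
log_decision("decisions.jsonl", "A-1023",
             {"income": 52_000, "loan_amount": 15_000},
             score=0.71, decision="refer-to-human",
             model_version="risk-model-2.3.1")
```

Even a log this simple makes audits, bias testing, and individual appeals possible after the fact, which is the intent behind the record-keeping requirement.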

What’s Next?

The RFI will likely not probe deeply into how financial firms are flagging transactions to alert compliance officers to potential fraud and money-laundering schemes. Nor is it likely to elucidate how firms are analyzing unstructured data for insights that point to suspected instances of market abuse. Doing so would potentially threaten the competitive advantage one bank holds over another. And in this era of real-time analysis of billions of annual transactions, even decimal-point differences in accuracy can tilt millions of dollars per year toward profits – or losses.
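For a sense of what the simplest end of that alerting spectrum looks like, here is a toy rule that flags possible "structuring" (repeated just-under-threshold cash deposits). It is purely illustrative; real surveillance models are far richer and, as noted, closely guarded.

```python
# Toy compliance alert: flag accounts with several near-threshold
# deposits inside a short window. Thresholds and window are illustrative.
from collections import defaultdict
from datetime import timedelta

REPORTING_THRESHOLD = 10_000   # e.g., the US currency transaction report level
NEAR_THRESHOLD = 9_000
WINDOW = timedelta(days=3)
MIN_HITS = 3

def flag_structuring(transactions):
    """transactions: iterable of (account_id, timestamp, amount), sorted by time."""
    recent = defaultdict(list)
    alerts = []
    for account, ts, amount in transactions:
        if NEAR_THRESHOLD <= amount < REPORTING_THRESHOLD:
            # keep only the hits still inside the rolling window
            hits = recent[account] = [t for t in recent[account] if ts - t <= WINDOW]
            hits.append(ts)
            if len(hits) >= MIN_HITS:
                alerts.append((account, ts))
    return alerts
```

Production systems layer ML-driven anomaly scoring on top of rules like this, and it is precisely that layering the regulators have little visibility into.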

Ultimately, the goal is a better understanding of, and rules governing, how AI and ML should be applied to alert compliance officers to suspected market abuse. Along with the permission to operate and utilize such technologies comes the responsibility and accountability to report whether such practices are implicitly biased against certain populations.

Your AI can no longer remain a black box. 
