Do You Trust Black Box Compliance?

Another way to ask this is: can you entrust your compliance needs to a “Black Box” solution offered by a vendor? Perhaps the answer depends on your appetite for risk.

As artificial intelligence (AI) gains prominence in every aspect of our lives, an interesting phenomenon is playing out. On the one hand, more AI solutions are being purchased every day to address a growing range of use cases. On the other hand, there’s persistent push-back on deploying solutions that rely on decisions made by AI alone. Even so, as AI continues to make inroads across industries, financial services firms are increasingly tapping it to help them manage compliance.

The problem with familiarity is that it breeds complacency. And that’s precisely the last thing that you want when you’re tasked with ensuring compliance. Fair enough that your leadership team and staff are comfortable with the “magic” inside the proverbial “AI Black Box,” but pause for a moment to consider how the regulatory authorities are going to react when you tell them, “We don’t know how it works. It just does.”

It’s definitely not going to go over very well …

Why were AI solutions for compliance resisted for so long?

Here’s the problem. For years now, opponents of AI have questioned whether anyone can truly know if the decisions an algorithm makes, based on its training data, are correct. Given the complexity of the deep neural networks typically at the heart of the AI Black Box, it’s almost impossible to understand how those decisions are made, let alone whether they’re right. As mere mortals, we simply can’t fathom geometric patterns in that many dimensions, so we can’t say for certain that the algorithm’s judgment about perceived intent or non-compliance is accurate.

Yet that certainty is essential before we impose penalties for compliance infractions; that’s why human compliance officers must remain part of the solution. Context is critical, and black box AI just works; it never reveals why or how it works. That’s a showstopper for regulatory authorities, because they require that a bank demonstrate how its compliance system is “fair, safe, and explainable.”

Deloitte tackled the problem head-on in their Future of Risk in the Digital Era report. This quote sums up their key findings: “As algorithms become more powerful, pervasive, and sophisticated, the methods for monitoring and troubleshooting them lag behind adoption. Organizations should consider seeking transparency and accountability in how decisions are made by algorithms … and adopt new approaches to effectively manage the novel risks introduced by complex AI algorithms.”

Said another way, with explainability comes confidence. And trust. If RegTech vendors utilizing AI made it a best practice to document examples and case studies that demonstrate how their Black Box compliance works, financial firms and the regulatory authorities who govern them would likely agree that the applied AI offers numerous advantages. The challenge is lingering skepticism, and the only way to nip those doubts in the bud is for everyone to be more transparent about how their solutions work.

A key point regarding the application of AI in compliance is that users remain in control. AI algorithms don’t generally run amok behind a bank’s firewall. Employees may not fully understand how the algorithms work, often because their vendors did not adequately explain the technology, but they can stay in control. More specifically, they can use AI as a screening tool to whittle suspicious events down to a manageable caseload, then retain the final say on the matter of compliance. Many compliance officers also monitor transactions manually to cross-check the accuracy of the algorithms they depend on.
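As a minimal sketch of what that screening-plus-human-review pattern might look like in practice (the event fields, the 0.10 cutoff, and the audit sample size below are illustrative assumptions, not any vendor’s actual implementation):

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Illustrative cutoff; a real value would come out of model validation.
AUTO_CLEAR_BELOW = 0.10

@dataclass
class Event:
    event_id: str
    description: str

def triage(events: List[Event], risk_score: Callable[[Event], float]):
    """The model only shrinks the caseload; humans keep the final say."""
    review_queue, auto_cleared = [], []
    for event in events:
        score = risk_score(event)  # opaque model output in [0, 1]
        if score >= AUTO_CLEAR_BELOW:
            review_queue.append((event, score))  # an officer decides these
        else:
            auto_cleared.append((event, score))
    # Riskiest first, so analysts spend their time where it matters most.
    review_queue.sort(key=lambda pair: pair[1], reverse=True)
    # Manual cross-check: sample some auto-cleared events for human audit,
    # mirroring the spot-checking compliance officers already do.
    audit_sample = random.sample(auto_cleared, min(5, len(auto_cleared)))
    return review_queue, audit_sample
```

The design choice that matters here is that the model only ranks and routes: every consequential decision, plus a sample of the auto-cleared events, still lands in front of a person.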

The issue around Black Box compliance has recently gone mainstream. Earlier this year, the Consumer Financial Protection Bureau (CFPB) affirmed that credit card providers must adequately explain to applicants why they’ve been denied, per its updated guidance on the Equal Credit Opportunity Act (ECOA). Here’s an interesting twist to this story: regulatory authorities are relying on whistleblowers to expose the inadequacies of banks’ black box solutions. Andrew Morris, NAFCU senior counsel for research and policy, recently cautioned those adopting AI Black Box solutions, warning that a regime requiring “a deconstruction of AI-driven models to satisfy regulatory curiosity will be unsustainable for all but the largest and most sophisticated financial technology companies.”
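To see how a firm might satisfy that explanation requirement in practice, here is a hedged sketch of one common approach: translating a model’s per-feature attributions into the plain-language reasons an adverse-action notice needs. The upstream explainer, the sign convention, and the mapping table are all hypothetical illustrations, not a method prescribed by the CFPB or ECOA:

```python
# Sketch: turning per-feature attributions into adverse-action reasons.
# Assumes an upstream explainer (e.g., SHAP-style values) has already
# produced a contribution per feature; the mapping table is hypothetical.

REASON_CODES = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Delinquency on past or present credit obligations",
    "history_length": "Length of credit history is insufficient",
    "recent_inquiries": "Too many recent inquiries for credit",
}

def adverse_action_reasons(contributions, top_n=4):
    """Return the features that pushed the decision furthest toward denial.

    Convention assumed here: a negative contribution pushed the score
    toward denial. Real explainers document their own sign conventions.
    """
    negatives = [(f, c) for f, c in contributions.items() if c < 0]
    negatives.sort(key=lambda pair: pair[1])  # most harmful first
    return [REASON_CODES.get(f, f) for f, _ in negatives[:top_n]]

# An applicant denied chiefly on utilization and delinquency history:
print(adverse_action_reasons({
    "utilization": -0.42,
    "delinquencies": -0.31,
    "history_length": -0.05,
    "recent_inquiries": 0.02,
}))
```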

To be clear, the takeaway isn’t that AI solutions are bad. They’re good.

Great, even. They can cut costs dramatically, boost productivity, and analyze volumes of financial transactions that no human team could, even given a lifetime. So financial firms need to be diligent in their adoption of AI solutions: they must ensure the solutions meet regulatory requirements and are understood well enough to provide the transparency needed to maintain trust across all stakeholders. Customer communication must also be optimized to foster clarity and further that trust.

Bias is inherent in so many of our business operations. Take recruiting: resumes are rarely redacted, and studies show that the “Whiter” a resume, the more likely the candidate is to get a callback. Facial recognition is an infamous example of bias in machine learning training; in fact, recent efforts were so controversial, and so hard to explain, that Google shut its facial recognition technology program down.

So, with all this, should you risk your firm’s reputation and the potential for massive non-compliance fines all for the sake of an AI solution that promises to reduce the cost of your surveillance program and increase its productivity? That’s a loaded question.

Start by assessing the level of “explainability” the vendor offers. Then gauge the depth of their experts’ understanding: make sure the company has competency across all the skills needed to develop, train, and maintain the algorithms in use. Evaluate how collaboratively your own experts can work with the vendor you’re bringing in. Don’t be shy, ask questions, and if you don’t get the answers you’re looking for, move on!
