
AI In Compliance: Automate And You Will Reduce Risks

Learn how to use AI in compliance to detect illegal trades and flag texts containing racist or toxic language.

The dark side of artificial intelligence (AI) is fraught with dystopian visions of technology gone bad. Some zealots are so vulnerable to the persuasion and influence of others that they’ve spent their lives wearing tinfoil hats and adopting other seemingly crazy behaviors to “protect themselves.” You might ask, “protect themselves from what?” Or you could draw parallels with the burden carried by FINRA, the SEC, and all the other regulatory authorities.

More specifically, these regulators have the unenviable responsibility of protecting consumers from each other – and from themselves. You can program AI to sniff out illegal trades and flag texts or emails that contain inappropriate racial or sexual language. Platforms can now even anticipate the likelihood of a “worker going bad” scenario based on an escalation in message tone and other behaviors, enabling intervention before catastrophic damage is done to a financial firm’s brand or its customers’ collective savings. Mind-blowing!

If you can program AI for good, then surely you can program AI to do illicit things in ways that go undetected. This is the new threat upon us, and there’s more to it than a sinister undertone.

AI and machine learning, specifically, are both a product of our wildest imaginations and the output of our intelligence.

Our world’s best software engineers can devise code to do just about anything. We are all products of our environment: those of us raised in dire situations, feeling we have no choice but to embrace dark choices, were seeded with circumstances that drove us to behave that way. With data and algorithms, it’s a whole lot more cut and dried, but situational all the same.

We can develop algorithms that seek out specified situations, data ranges, and so on. We can hone those algorithms to exploit certain anomalies in pre-defined patterns by training them on controlled data sets. Aberrations can be detected and then used to refine the algorithm by tightening the range, adding exclusions, and applying other conditions (i.e., rules) that dictate what it does when it’s activated against a database.
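
To make this concrete, here is a minimal sketch of the kind of rule-based detection described above. The field names, thresholds, and exclusion list are hypothetical assumptions for illustration – not any particular vendor’s rules – and refining the algorithm amounts to tightening these ranges or adding exclusions.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    # Hypothetical fields for illustration; real surveillance data is far richer.
    trader_id: str
    notional: float
    minutes_before_close: int

# Tunable rule parameters: the "ranges" the text describes tightening over time.
MAX_NOTIONAL = 1_000_000                  # flag unusually large orders
CLOSE_WINDOW_MINUTES = 5                  # flag activity crowded against the close
EXCLUDED_TRADERS = {"MARKET_MAKER_DESK"}  # an example exclusion rule

def is_anomalous(trade: Trade) -> bool:
    """Apply pre-defined pattern rules; refine by tightening ranges or adding exclusions."""
    if trade.trader_id in EXCLUDED_TRADERS:
        return False
    too_large = trade.notional > MAX_NOTIONAL
    near_close = trade.minutes_before_close <= CLOSE_WINDOW_MINUTES
    return too_large and near_close

trades = [
    Trade("T-001", 2_500_000, 3),   # large and near the close -> flagged
    Trade("T-002", 50_000, 120),    # ordinary -> passes
]
flagged = [t for t in trades if is_anomalous(t)]
print(f"{len(flagged)} trade(s) flagged for review")
```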

Digging in a little further, software developers tasked with designing compliance or RegTech solutions source transactions that have been flagged – and vetted by a human expert – as the data to train their algorithms on. In this way, everyone can be confident that the nefarious behaviors, validated as non-compliant, are the examples to watch out for. Whenever they’re observed, they can be flagged as suspicious and automatically trigger an investigation – by a human.
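
A minimal sketch of that supervised approach, assuming invented feature values and a simple logistic-regression classifier (real systems use much richer features and models): human-vetted non-compliant transactions become the labeled training set, and new activity above a score threshold opens a case for human review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [notional, minutes_before_close, messages_to_counterparty]
# All values are hypothetical, for illustration only.
X_train = np.array([
    [2_500_000, 3, 14],   # vetted non-compliant
    [1_800_000, 2, 9],    # vetted non-compliant
    [50_000, 120, 1],     # vetted compliant
    [75_000, 95, 0],      # vetted compliant
])
y_train = np.array([1, 1, 0, 0])  # 1 = validated as non-compliant by a human expert

model = LogisticRegression().fit(X_train, y_train)

# New activity is scored; anything above a threshold triggers a human investigation.
new_activity = np.array([[2_100_000, 4, 11]])
risk = model.predict_proba(new_activity)[0, 1]
if risk > 0.5:
    print(f"Suspicious (score={risk:.2f}): open a case for human review")
```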

This is the essence of most RegTech solutions today

Yes, human intervention is always coupled with AI to ensure that the person(s) suspected of market abuse receives an evaluation that is fair and accurate. With each positive confirmation, the model is reinforced for accuracy, and everyone involved can feel more confident about the predictive ability of the algorithm. At some point, the solution may become so finely tuned that human intervention is unnecessary. However, it’s a bit unnerving (and dystopian) to think about our freedom and rights being determined by a bot. Everyone deserves due process, and there is always some margin of error.
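
Continuing the earlier sketch, the human-in-the-loop cycle can be as simple as folding each analyst verdict back into the labeled set and refitting – a toy illustration of reinforcement, not production retraining practice.

```python
# Reuses `model`, `X_train`, and `y_train` from the previous sketch.
def record_verdict(features, analyst_confirmed_abuse: bool):
    """Fold a human verdict back into the training data and refit the model."""
    global X_train, y_train, model
    X_train = np.vstack([X_train, features])
    y_train = np.append(y_train, 1 if analyst_confirmed_abuse else 0)
    model = model.fit(X_train, y_train)

# An alert fires, a human investigates, and the confirmed outcome reinforces the model.
record_verdict(np.array([[2_100_000, 4, 11]]), analyst_confirmed_abuse=True)
```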

On the flip side, it’s not a stretch to think about training an algorithm (or a bot) on the same data set. The only difference is that you train it to give you, the bad actor, a warning that you’re at the edge of “safety” and that your next move is likely going to trigger an alert. Wisely, now that you’ve been warned, you stop everything.

You have a few options regarding your next move

You could test the algorithm to see if it has a “memory” – whether it continues monitoring you from wherever it left off, or whether the timer starts anew. Then you’d know if you simply had to wait X number of hours or days before making the illicit move you were planning anyway. Knowing that your account has triggered a higher level of scrutiny, you could tap someone (or something, like a fake identity with a digital wallet) to execute the same move and thereby distance yourself from the non-compliant behavior. Or, in the gray space we currently occupy when it comes to regulatory definitions around AI, you could be brazen and go to the authorities, citing undue attention and profiling. What then?
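
From the monitoring side, the “memory” question has a straightforward answer in system design. Here is a sketch of one approach – a scrutiny score that decays gradually instead of resetting, so waiting a fixed number of hours never quietly returns an account to a clean slate. The half-life and thresholds are illustrative assumptions.

```python
import time

class AccountMonitor:
    HALF_LIFE_SECONDS = 7 * 24 * 3600  # assumption: scrutiny halves roughly weekly

    def __init__(self):
        self.score = 0.0
        self.last_update = time.time()

    def _decay(self):
        elapsed = time.time() - self.last_update
        self.score *= 0.5 ** (elapsed / self.HALF_LIFE_SECONDS)
        self.last_update = time.time()

    def record_near_miss(self, weight: float = 1.0):
        """Borderline behavior raises scrutiny; it fades slowly, never snaps to zero."""
        self._decay()
        self.score += weight

    def under_heightened_scrutiny(self) -> bool:
        self._decay()
        return self.score > 0.5

monitor = AccountMonitor()
monitor.record_near_miss()                   # the warned trader backed off at the edge
print(monitor.under_heightened_scrutiny())   # True: the clock didn't reset
```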

That latter situation is a bit of a conundrum right now, given where things are – and where they’re not – with respect to the legal definition of intent as it pertains to market abuse and the use of AI for monitoring. It’s a fuzzy space that bad actors can currently exploit. As with all cases of illicit trading activity, there’s always someone, and some way, to push the envelope in pursuit of ill-gotten monetary gain.

But here’s where things get interesting. For every bad actor out there trying to hustle or game the system, there are at least ten times as many good actors toiling away every day to design systems that shut the bad behavior down. As with just about everything, you may not get caught this time – but, in time, you will.

Source: LinkedIn
