All female, all AI: Discussions around governance, ethics and innovation

When leading women in tech talk AI, compliance gets interesting—fast. Forget buzzwords and boardroom lingo. Our recent panel pulled back the curtain on how AI is actually being built, challenged, and put to work in the real world of risk and regulation. Spoiler: it’s not always pretty, but it is pretty powerful.
The discussion was moderated by Jess Jones, Surveillance SME for Thought Networks, and the panel featured Dr. Shlomit Labin, Shield’s VP of Data Science, Kay Firth-Butterfield, CEO of Good Tech Advisory, and Erin Stanton, the AI & Data Lead at Virtu Financial.
If you missed the webinar, we’ve captured the highlights for you below (and if you want to tune in, click here).
Transparency and self-governance in AI
ChatGPT has 300 million weekly active users, and it launched only two years ago. While this level of adoption presents countless opportunities, it also brings complex ethical challenges. Unlike traditional rule-based systems, today’s machine-learning models are notoriously difficult to govern.
With AI growing fast and regulation still playing catch-up, all the panelists agreed that firms need to protect themselves by updating their internal governance strategies. Labin specifically urges firms to put guardrails in place to address declining AI explainability.
GenAI results should be validated against external knowledge, or with more traditional technologies such as Google search. Compliance teams can regain oversight by prompting LLMs to explain their chains of reasoning and by comparing answers from multiple models.
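To make the cross-checking idea concrete, here is a minimal Python sketch of the pattern; the ask_model helper is a hypothetical placeholder for whatever LLM provider a firm actually uses, not a real API.

```python
# Illustrative sketch only: ask_model() is a hypothetical stand-in for a
# call to whichever LLM provider(s) a firm actually uses.

def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder for an LLM API call; wire this to your own provider."""
    return f"[{model_name}] answer to: {prompt}"

def cross_check(question: str, models: list[str]) -> dict[str, str]:
    """Ask several models the same question, requesting their reasoning,
    so a reviewer can compare answers and spot disagreement."""
    prompt = (
        f"{question}\n\n"
        "Explain your chain of reasoning step by step, then state your final answer."
    )
    return {model: ask_model(model, prompt) for model in models}

if __name__ == "__main__":
    answers = cross_check(
        "Does this communication pattern suggest potential market abuse?",
        models=["model-a", "model-b"],
    )
    for model, reply in answers.items():
        print(f"--- {model} ---\n{reply}\n")
    # Divergent answers are a cue to validate against external sources
    # (e.g., a traditional search) before relying on the output.
```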
Another concern raised is security. While there are no easy answers, the simple act of being transparent about data inputs and outputs can boost public perception. AI model-builders like Stanton are setting the stage for a more open-source industry by leading with transparency and publishing the data they use and, more importantly, the data they wish they had. She explains that calling out shortfalls encourages data sharing and helps to bridge gaps in datasets.
Firth-Butterfield notes that awareness of shortfalls extends beyond the data itself to the sustainability of the technology, period. “LLMs are extremely thirsty for energy, consuming a quarter of a litre of water every time you ask a question.”
Being open about a technology’s inefficiencies also creates space to actively teach users how to navigate those areas. In this case, raising awareness of AI’s resource consumption reduces waste, because users may opt for a more resource-efficient option instead.
AI’s role in Financial Services
AI’s ‘black box problem’ poses issues for compliance teams, especially in high-stakes industries like finance. Compliance teams must understand how decisions are made, and Stanton’s team at Virtu supports this by documenting every dataset inclusion and exclusion, as well as every algorithm used. They then share this information in layman’s terms so that everyone can understand and challenge the model’s logic. “Our compliance team loves that we’ve built this into every step of our process,” she says.
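As an illustration of the kind of decision log Stanton describes, a lightweight record per inclusion, exclusion, or algorithm choice might look like the sketch below. Field names and the sample entry are hypothetical, not Virtu’s actual schema.

```python
# Hypothetical decision-log structure; illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelDecision:
    """One documented choice: a dataset included/excluded or an algorithm used."""
    decision: str      # what was included, excluded, or chosen
    rationale: str     # plain-language reason a non-specialist can challenge
    decided_on: date   # when the choice was made
    owner: str         # team accountable for the choice

# Example entry (invented for illustration).
decision_log: list[ModelDecision] = [
    ModelDecision(
        decision="Excluded low-quality legacy transcript data",
        rationale="Transcription errors would skew the model's alert scoring.",
        decided_on=date(2024, 3, 1),
        owner="data-science",
    ),
]
```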
Stanton explains that developers have an ethical and social responsibility to place guardrails inside their models, as LLMs will learn bias straight from the data if left unchecked. She makes the point that model builders ultimately bear responsibility for what they deploy: “even if I’ve spent a year building this model, if I don’t love how it works then I just won’t deploy it.”
For example, if a dataset lacks strong representation from a region like Asia, developers can block outputs for that geography to avoid unreliable predictions.
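A minimal sketch of that kind of geographic guardrail, assuming a hypothetical predict() function and an illustrative list of well-covered regions:

```python
# Illustrative guardrail: refuse predictions for regions the training data
# under-represents. Region names and the model call are assumptions.

WELL_REPRESENTED_REGIONS = {"north_america", "europe"}

class CoverageError(Exception):
    """Raised when a request falls outside the model's reliable coverage."""

def predict(features: dict) -> float:
    """Placeholder for the underlying model; returns a dummy score here."""
    return 0.42

def guarded_predict(features: dict, region: str) -> float:
    """Serve a prediction only when the region is well represented in training data."""
    if region.lower() not in WELL_REPRESENTED_REGIONS:
        raise CoverageError(
            f"No prediction served: training data lacks coverage for '{region}'."
        )
    return predict(features)
```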
The bias problem: Inclusion in AI
The panel didn’t shy away from discussing AI’s diversity gap. LLMs are trained on internet data, data that overwhelmingly reflects the perspectives of white males from the Global North. “Just the fact that this is an all-female panel,” said Firth-Butterfield, “helps to diversify the data pool.”
For people of color, representation is even worse: roughly a third of the global population isn’t connected to the internet, so none of their data is represented at all.
And the challenge is getting worse. With increasing reliance on AI-generated data, we’re witnessing a phenomenon called ‘model cannibalism’, where AI models are trained on their own outputs, compounding bias over time. It’s estimated that as early as mid-2026, there will be more AI-generated data than human-created data! The EU AI Act and other international AI policies aim to reduce risks stemming from these biases; for example, creating a risk profile around a person is now prohibited because vendors can’t ensure that their AI models will be free of bias.
Shield’s role in the future of AI
The panel agreed that while AI is revolutionizing compliance, the challenge lies in how we as a community govern its use. But with the right voices at the table, we can work towards a future where AI is inclusive, accountable, and free of bias.
We’re committed to creating space for important conversations to happen, because the future of AI isn’t just about models and data—it’s about people.