AI Adoption in Australia's Regulated Industries: How Do You Move Fast and Avoid Risk?
- Sarah Stempowski

What Australia's superannuation leaders are learning about AI adoption.
AI adoption in regulated industries across Australia was always going to be complex and slow, and this is compounded in the superannuation space. The regulatory environment, the obligations to members, the scrutiny from government and media, all of it creates a context that is materially different from less regulated industries charging ahead with AI. And yet the expectation to keep up is just as real. The questions in the room were ones most executives in this space are quietly carrying: what are others actually doing? Are we ahead or behind? Which use cases are worth pursuing, and how do we move forward without creating regulatory, member, or reputational risk?
In March, we brought together leaders from across the superannuation industry for two intimate lunches to explore exactly that.
Sydney Leaders Lunch at Cafe Sydney, and the Melbourne Leaders Lunch at Rockpool Bar & Grill. The Nuj team, left to right: Matthew McKenzie, CEO & founder of Nuj Super; David Hyman, co-founder and former CEO of Lendi Group; Sarah Stempowski, Head of Brand, Nuj Super; Jason Were, Head of Sales, Nuj Super.
We are on this journey too. At Nuj, we are actively exploring how AI can help us do more for our clients and improve the way we work. We use it for planning, building, automating, content creation, and rapid product prototyping, with a goal of spinning up ideas quickly and getting user feedback on whether they deliver value before we commit further. We are also about to launch Nujie Beta, the first version of what will ultimately become a tool for interrogating your data in plain English. We are starting with APRA's publicly available product structure data and expanding from there, deliberately, in stages, with security front of mind at every step. None of this has come from a standing start. It has come from the same process we apply to any change: challenge, test, tweak, challenge, test again. So these lunches were as much about our own learning as they were about creating a valuable forum for the industry.
Joining us at the table was David Hyman, co-founder and former CEO of Lendi Group. David led the AI transformation of Australia's largest digital property platform in a regulated, high-stakes environment, and brought that hard-won perspective to the conversation when it was called for.
Here is what stayed with us.
The pressure is real. So is the uncertainty.
Across both cities, the starting point was the same: there is significant pressure to move on AI, from boards, from government, from the competitive environment, and most leaders are still working out exactly how to do it well.
It reflects the genuine complexity of operating in a regulated industry with real obligations to members. Boards want AI action without risk, and that is a hard brief to deliver on. The tools are available. The capability is there. What takes longer to build is organisational readiness, governance structures, cultural shift, and the clarity of purpose that makes AI adoption sustainable rather than reactive.
The most grounding reframe from our conversations: this is not primarily a technology challenge. It is a business, culture, and compliance challenge. Funds that are making meaningful progress treat it that way.
Regulation as a design parameter
For those navigating AI adoption in regulated industries, this framing is worth sitting with.
APRA, ASIC, the ATO, and the Privacy Act define the environment in which funds operate with AI. The instinct is often to treat regulation as a reason to wait. The more useful approach is to treat it as a design parameter, something you build around from the start, rather than trying to navigate around it after the fact.
That means embedding policy into workflow, not just documenting it. And it means treating information governance as part of the build, not a prerequisite that delays it. AI applied to poor data governance does not fix the problem; it amplifies it. Getting both right together is exactly where the opportunity sits.
The nexus between speed and safety is real. But it is navigable. The funds that are building the most confidence are those that establish clear guardrails early, stay lightweight and iterative, and treat their compliance posture as a foundation rather than a ceiling.
Engagement over instruction
A key takeaway from both lunches was this: experiencing AI directly in a room is far more effective than simply hearing about it.
David shared that, early in Lendi's transformation, they chose not to brief the board through strategy papers and progress updates. Instead, they introduced NED, an AI-powered non-executive director, directly into the boardroom. This model could be questioned, challenged, and engaged with in real time. It worked because it replaced abstraction with experience. People who might have remained sceptical or uncertain from a distance found their own way in.
The same principle applies at every level of an organisation. Executives who felt most confident about AI were those who had rolled up their sleeves and experimented, even informally. In contrast, the least equipped were those who had only observed it from afar.
You cannot think your way to AI literacy. You have to use it. And the reality is, people already are. It is far better to know how it is being used across your organisation and ensure it is done safely than to find out later that it has been happening without your knowledge. Creating safe spaces to experiment at every level and sharing what you find are among the most practical ways a fund can build capability and confidence right now.
Know what your AI is doing
Something that came through clearly in both cities: observability is not optional.
In a regulated environment, you need to know which decisions your AI influences, where it is used, and what happens when something goes wrong. And something will go wrong at some point. The question is whether you are set up to catch it quickly, respond clearly, and maintain trust with your members and your regulator.
David's advice was practical: build your failure response protocol before you need it. Run it as an executive exercise. Know who is accountable, what the escalation path looks like, and how you communicate. That kind of readiness is what separates a manageable incident from a reputational one.
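Observability of this kind can start very simply. The sketch below is a minimal, hypothetical audit record for AI-influenced decisions; the field names, the in-memory store, and the helper functions are our illustration of the idea, not a prescribed schema or anyone's production system.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical sketch: a minimal audit record for AI-influenced decisions.
# A real build would write to an append-only, access-controlled store and
# would never log raw member data.

@dataclass
class AIDecisionRecord:
    use_case: str          # e.g. "member query triage"
    model_id: str          # which model and version produced the output
    prompt_summary: str    # redacted summary, never raw member data
    output_summary: str
    human_reviewer: str    # the accountable person who signed off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

AUDIT_LOG: list[dict] = []

def record_decision(rec: AIDecisionRecord) -> None:
    """Append a snapshot of an AI-influenced decision to the log."""
    AUDIT_LOG.append(asdict(rec))

def decisions_for(use_case: str) -> list[dict]:
    """Answer 'where is AI used, and who reviewed it?' for one use case."""
    return [r for r in AUDIT_LOG if r["use_case"] == use_case]
```

Even a record this small answers the questions a regulator or a board will ask first: where AI was used, which model was involved, and who was accountable for the output.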
AI plays a supporting role in regulatory reporting
It is worth being direct about where AI fits in the regulatory reporting space, because the stakes here are different.
Member data and fund reporting carry obligations around security, accuracy and integrity that are absolute. This data cannot be compromised, tampered with or exposed in any way. The consequences for members, for funds, and for regulatory standing are too significant to treat this space the same as other areas of AI experimentation.
AI can play a valuable supporting role: assisting with data preparation, surfacing anomalies, and improving the efficiency of review processes. But it does not own the output. The accountability for what is submitted rests entirely with the people and the fund, and that line needs to be clear and consistent across the organisation.
That is not a constraint on progress. It is the right foundation to build from.
On quality, hallucinations, and getting the architecture right
Hallucinations were on everyone's mind. The concern is valid and widely shared. David's approach at Lendi was to run multiple AI agents in parallel on high-stakes tasks and to use a further model to review and reconcile the outputs. The cost of running multiple models is lower than most people assume, and the confidence it buys on high-stakes outputs is well worth it.
The broader principle: in a regulated environment, the model itself is only part of the picture. The architecture around it, the review layers, the human checkpoints, and the output standards matter just as much. AI is getting better quickly. But quality assurance cannot wait for the technology to be perfect. It needs to be designed now.
There was also a useful caution raised about the risk of AI generating volume over value. More output is not better output. Setting clear expectations for quality standards and ensuring people understand they are accountable for what they submit, regardless of how it was produced, is foundational.
Where to start with AI adoption in regulated industries?
The most consistent practical guidance across our conversations pointed in a few directions.
Start with internal, controlled use cases. Team-facing tools with human review layers carry lower risk, build organisational confidence, and generate real learning without member-facing exposure.
Look for your grassroots innovators. In both cities, there was a consistent observation that the people most actively experimenting with AI tend to be further down the organisation. Creating structured opportunities for them to share what they are doing is one of the fastest ways to build momentum and surface what actually works in your context.
Move fast on reversible decisions. David's approach at Lendi was to treat technology stack decisions as two-way doors: choices you can walk back through if the landscape shifts. On vendor contracts, his advice was to tread carefully with long-term commitments to providers who are not yet AI-capable. The landscape is moving quickly enough that flexibility has real value right now.
And if you are not sure where to start as an executive, start with your own practice. Use AI in your day-to-day work. Prototype something small. The fastest path to understanding what AI can do for your organisation is understanding what it can do for you personally.
What we took away
AI does not replace good judgment. It frees up the people who have it.
The question is not whether AI has a role in your organisation. It does. The question is how to find the right one, and that only comes from getting your hands on it, sharing what works, and learning from what does not.
These conversations are ongoing. If you would like to be part of the next one, we would love to have you at the table.
At Nuj, we work with superannuation funds on regulatory reporting compliance. We are actively building AI into how we serve our clients, from rapid prototyping on the platform to Nujie, our upcoming natural language tool for interrogating APRA superannuation data. If you would like to talk through how any of this intersects with your own journey, get in touch.
Ready to see how we can help?
Book a demo or get in touch to chat about your reporting challenges. We're here to help.
P.S. Spotted something in the news we should cover? Please let us know; we're always keen to hear what matters to you. Email team@nujsuper.com






