On 27 February, media reported that Commonwealth Bank of Australia had self-reported concerns to police and the corporate regulator regarding potential mortgage fraud estimated at around $1 billion.
While Australia’s largest bank has not publicly detailed the alleged fraud, reports suggest the lender identified issues in home loan applications introduced through both brokers and introducers.
The bank’s investigation reportedly began in July last year, but intensified after rival big four lender National Australia Bank faced around $150 million in suspected fraud linked to an operation known as the Penthouse Syndicate.
The developments come amid growing concern across the broking sector. According to research from Equifax, almost three-quarters of Australian mortgage brokers said they had been impacted by scams or fraud in the 12 months to September 2025 – a sharp increase from 26 per cent in the same period the previous year.
While brokers are increasingly aware of the risks, identifying fraudulent documents can still be difficult.
Speaking on Broker Daily Uncut, Eva Loisance, principal at Finni Mortgages, said obvious inconsistencies can often be identified through standard checks, but increasingly sophisticated forgeries can be far harder to detect.
“I have seen very obvious mistakes where the numbers just don’t add up and with a few checks we can see that something is wrong,” Loisance said.
“But now, if they’re so good that they’re making sure everything ticks the box, how would I know it’s a fake?”
Technology driving both fraud and detection
In the case of CBA, reports suggested AI may have been used to forge documents. As the technology matures, both fraud techniques and detection methods are advancing rapidly.
Brett Spencer, chair of the Finance Brokers Association of Australia and founder and CEO of AI-driven document and data verification platform DocuScan, said fraudsters are increasingly using widely available AI tools to generate convincing documents.
“Fraud is 100 per cent becoming more sophisticated,” Spencer said.
“Often that’s quite community driven, through the use of generic AI technology to create documents that look real and feel real, but are not.”
One method used to detect fraudulent documents involves analysing metadata – the underlying data describing how a document was created or modified.
“There’s various websites that will create pretty convincing fake bank statements and other documents, but when you look into the metadata layer of that document, this type of forgery becomes quite a blunt tool because the metadata won’t add up,” Spencer said.
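The kind of metadata check Spencer describes can be sketched in a few lines. This is a minimal illustration, not DocuScan's actual implementation: the field names and heuristics are hypothetical, and assume the document's metadata has already been extracted into a dictionary.

```python
from datetime import datetime

def metadata_red_flags(meta: dict) -> list[str]:
    """Flag common metadata inconsistencies in a purportedly bank-issued
    document. Field names are illustrative only."""
    flags = []

    # A bank statement produced by a consumer editing tool is suspicious.
    producer = meta.get("producer", "").lower()
    if any(tool in producer for tool in ("photoshop", "canva", "word")):
        flags.append(f"unexpected producer: {meta['producer']}")

    # A creation date later than the modification date cannot occur naturally.
    created, modified = meta.get("created"), meta.get("modified")
    if created and modified and created > modified:
        flags.append("creation date is later than modification date")

    # A statement period ending after the file itself was created "won't add up".
    period_end = meta.get("statement_period_end")
    if created and period_end and period_end > created:
        flags.append("statement period ends after the file was created")

    return flags

# Illustrative metadata from a hypothetical forged statement
suspect = {
    "producer": "Adobe Photoshop",
    "created": datetime(2025, 9, 1),
    "modified": datetime(2025, 8, 15),
    "statement_period_end": datetime(2025, 10, 31),
}
print(metadata_red_flags(suspect))
```

Run against the sample above, all three checks fire; a genuinely bank-generated statement would typically pass them all.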
AI can also analyse data points within documents at scale, identifying inconsistencies that may not be immediately obvious to a human reviewer.
“What an AI platform can do is cross-reference the bank account details on a payslip, verify the calculations, and check the dates and data,” Spencer said.
“It can also look beyond the typical 90-day window to determine whether income is short-term or ongoing.”
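The cross-referencing Spencer outlines, verifying a payslip's calculations and dates, amounts to a set of internal consistency checks. A minimal sketch follows; the field names are hypothetical and would in practice be populated from extracted document data.

```python
from datetime import date

def payslip_checks(slip: dict) -> list[str]:
    """Cross-check the internal arithmetic and dates of a single payslip.
    Field names are illustrative, not any vendor's actual schema."""
    issues = []

    # Net pay should equal gross pay minus tax and deductions.
    expected_net = slip["gross"] - slip["tax"] - slip["deductions"]
    if abs(expected_net - slip["net"]) > 0.01:
        issues.append(f"net pay {slip['net']} does not match calculation ({expected_net})")

    # The pay period must end on or before the payment date.
    if slip["period_end"] > slip["pay_date"]:
        issues.append("pay period ends after the payment date")

    # Year-to-date gross can never be less than the current period's gross.
    if slip["ytd_gross"] < slip["gross"]:
        issues.append("YTD gross is less than the current period's gross")

    return issues

slip = {
    "gross": 5000.00, "tax": 1200.00, "deductions": 300.00, "net": 3500.00,
    "period_end": date(2025, 9, 14), "pay_date": date(2025, 9, 15),
    "ytd_gross": 15000.00,
}
print(payslip_checks(slip))  # → [] for an internally consistent payslip
```

A forged payslip that alters one figure, say inflating net pay, without recomputing the rest breaks this arithmetic, which is exactly the kind of inconsistency an automated pass can catch at scale.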
DocuScan, which uses AI to process and extract information from financial documents, is also developing tools designed for brokers.
The company said it is launching two broker-focused platforms – ComplyX and FraudX – that allow brokers to upload client files for automated compliance, fraud, and verification checks.
At the aggregator level, AI is also increasingly being used to identify patterns of suspicious behaviour across loan applications.
Shirley Elliot, head of compliance at AFG, said the technology had significantly improved the organisation’s ability to detect fraud trends.
“AI has proven genuinely useful in identifying fraud trends across high application volumes, something that simply wasn’t scalable before,” Elliot said.
“Where it adds the most value is in pattern recognition: flagging applications that share characteristics with previously substantiated fraud cases and escalating those for further investigation.”
She added that newer machine learning tools are capable of analysing entire groups of applications at once to identify suspicious patterns.
“What’s more interesting is where this is heading. We’re now seeing machine learning models that don’t just assess applications individually – they look across entire queues to surface suspicious patterns, like clusters of applications sharing IP addresses or suspiciously similar email formats,” Elliot said.
“That’s a meaningful shift in capability.
“Another particularly valuable advancement is the emergence of fraud detection software that streamlines the process of verifying documents submitted by customers for loan applications. By automating these checks, lenders and mortgage originators can efficiently identify potential fraud, reducing manual effort and ensuring greater accuracy in their assessments.”
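The queue-level pattern detection Elliot describes, surfacing clusters of applications that share an IP address, can be illustrated with a simple grouping pass. This is a toy sketch with hypothetical field names and thresholds; production models draw on far more signals (device fingerprints, email formats, document similarity).

```python
from collections import defaultdict

def find_suspicious_clusters(applications: list[dict], min_size: int = 3) -> dict:
    """Group applications by submitting IP address and return clusters
    large enough to warrant investigation. Fields are illustrative."""
    by_ip = defaultdict(list)
    for app in applications:
        by_ip[app["ip"]].append(app["id"])

    # An unusually large cluster from a single IP is a pattern no
    # application-by-application review would surface.
    return {ip: ids for ip, ids in by_ip.items() if len(ids) >= min_size}

apps = [
    {"id": "A1", "ip": "203.0.113.7"},
    {"id": "A2", "ip": "203.0.113.7"},
    {"id": "A3", "ip": "203.0.113.7"},
    {"id": "A4", "ip": "198.51.100.2"},
]
print(find_suspicious_clusters(apps))  # → {'203.0.113.7': ['A1', 'A2', 'A3']}
```

The point of the example is the shift in scope: each application looks unremarkable on its own, and the signal only emerges when the whole queue is examined together.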
Human oversight still essential
Despite the growing use of AI in fraud detection, Elliot warned brokers against relying solely on automated tools.
“Brokers need to be careful not to outsource their judgement to AI,” she said.
“Under the NCCP Act there’s a clear obligation to verify that the information you’re relying on is accurate and free of anomalies.
“AI, at least as brokers are currently using it, isn’t going to catch everything. Salary staging is a good example of the kind of fraud that won’t show up on an automated check. That still requires a human eye and the right questions being asked.”
Spencer similarly emphasised the need for human oversight alongside automated verification tools.
“The decision still needs to be a human-in-the-loop process,” he said.
“AI can assist with fraud detection, document verification and data verification, but someone still needs to look at something and say, ‘That just doesn’t look right’.”