
Overcoming ethical and security risks when integrating AI 

Data and AI executive Silvio Giorgio discusses the key challenges of AI adoption ahead of his session at the Broker Innovation Summit 2025.

AI-related fake online content, cyber attacks, and loss of privacy are most likely to impact Australians over the next 10 years, according to a survey by the University of Queensland.

Ahead of his session at the Broker Innovation Summit 2025, Silvio Giorgio, data and AI executive, shared what controls brokers should implement as ethical and security safeguards to minimise these risks.

“Increased attention is being placed on the safe and responsible use of AI, in particular the use of our data or personal information,” he told Broker Daily.

“In addition, the evolution of AI capabilities into image and audio is expanding where we need to add protections to our identifiers, for example our likeness or our voice, which can be manipulated into fakes for harm.”

Australia is one of the world’s largest adopters of AI, according to 9news, with recently released data from Minister for Industry and Innovation and Minister for Science Tim Ayres showing that 41 per cent of small and medium-sized enterprises are currently adopting the technology.

The technology has immense benefits for businesses, with 22 per cent of adopters reporting improvements in decision-making speed and another 18 per cent noting increased productivity.

While the potential of AI is vast, integrating the technology comes with significant risks.

“Australia is adapting to the risk of using data and AI, with caution,” said the data and AI leader.

“Our government has provided organisations with ethical standards to guide the development and use of artificial intelligence.

“Privacy reform places increased controls on the use of personal information in automated decision-making technologies that significantly affect individual rights, with stricter consequences for getting it wrong.

“Voluntary AI Safety Standards have been designed to help organisations safely and responsibly develop, deploy and use AI systems across different risk levels.”

In an AI-driven future, Giorgio warned brokers to be cautious when adopting the technology, highlighting deepfakes and misinformation as key concerns.

“We will find it increasingly difficult to know what’s real. Images developed by AI look real; so do the videos they generate,” he said.

“Voice generated by AI sounds like a real person, and can be made to sound exactly like you.

“Deepfakes have caused harm, and their potential to cause greater harm increases as the technology gets better.

“There is a lot of discussion in this space, from clearly labelling when people are engaging with an AI agent or viewing AI-generated content, to creating better awareness of scams and how they can take place with advertising, to people taking greater precautions over how they protect their visual likeness, voice and video recordings.”

To hear Giorgio speak more on how to prepare for the ethical and security risks of AI adoption, come along to the Broker Innovation Summit 2025.

Run in partnership with principal partner NextGen, the event will take place on Wednesday, 25 June 2025, at the Sydney Customer Insight Centre in Sydney. Click here to buy tickets and don’t miss out.

To learn more about the summit, including the agenda and speakers, click here.
