“Whether it’s helping with learning, work, or planning itineraries, AI is a game-changer.”
Closing that gap in Australia will require organisations like Optus to partner with companies such as Perplexity to provide an “easier stepping stone” into AI, with the right guardrails in place, he noted.
That’s why Optus has spent “a lot of time working through” its approach to responsible AI, said the telco’s head of artificial intelligence solutions and strategy, Jesse Arundell.
“From the birth of an idea to an AI system operating in production, [it’s about] making sure that we’re actually understanding the risks associated with the use of AI, and then also controlling those risks. To me, what that means is really ensuring that our systems are fundamentally safe, sound and secure,” he said.
This means being able to look at the use of AI in software applications, as well as making sure that Optus has strong technology controls in place around data risk management, cyber security, data sovereignty, and availability and resilience, so that what is being built is trustworthy, said Arundell.
“For me, [it’s about] being able to ensure that what we’re building [with AI] is trustworthy,” he said. “Inherently, [our systems] are compliant by design and, where possible, [so is] the use of AI, because applications and use cases [need to be] ethical, grounded [and] fair.”