Using a swearword in your Google search can stop that annoying AI overview from popping up. Some apps let you switch off their artificial intelligence.
You can choose not to use ChatGPT, to avoid AI-enabled software, to refuse to talk to a chatbot. You can ignore Donald Trump posting deepfakes, and dodge anything with Tilly the AI actor in it.
As the use of AI spreads, so do concerns about its dangers, and resistance to its ubiquity.
Dr Kobi Leins – an AI management and governance expert – chooses to opt out when medical practitioners want to use AI.
She told a specialist she didn’t want AI transcription software used for her child’s appointment, but was told it was necessary because the specialist was “time poor” – and that if she did not want it used, she would need to go somewhere else.
“You can’t resist individually. There is also systemic resistance. The push from the industry to use these tools above and beyond where it makes sense [is so strong],” she says.
Where is AI?
AI is spreading inexorably through digital systems.
It’s embedded in applications such as ChatGPT, Google’s AI overview, and Elon Musk’s creation Grok, the super-Nazi chatbot. Smartphones, social media and navigation devices are all using it.
It has also infiltrated customer service, the finance system and online dating apps, and is being used to assess resumes, job applications and rental applications – even legal cases.
It’s increasingly part of the healthcare system, easing the administrative burden on doctors and helping to identify illnesses.
A global study from the University of Melbourne released in April found half of Australians use AI on a regular or semi-regular basis, but only 36% trust it.
Prof Paul Salmon, the deputy director of the University of the Sunshine Coast’s Centre for Human Factors and Sociotechnical Systems, says it’s getting harder and harder to avoid.
“In work contexts, there is often pressure to engage with it,” he says.
“You either feel like you’re being left behind – or you’re told you’re being left behind.”
Should I avoid using AI?
Privacy leakage, discrimination, false or misleading information, malicious use in scams and fraud, loss of human agency and lack of transparency are just some of the 1,600 risks catalogued in the Massachusetts Institute of Technology’s AI Risk Repository.
It warns of the risk of AI “pursuing its own goals in conflict with human goals or values” and “possessing dangerous capabilities”.
Greg Sadler, the chief executive officer of the Good Ancestors charity and coordinator of Australians for AI Safety, says he often refers to that repository and, while AI can be useful, “you definitely don’t want to use AI in the times where you don’t trust its output, or you’re worried about it having the information”.
Aside from all those risks, AI has an energy cost. Google’s emissions are up by more than 51% since 2019, thanks at least in part to the electricity consumption of the datacentres that underpin its AI.
The International Energy Agency estimates that datacentres’ electricity consumption could double from 2022 levels by 2026, while separate analysis suggests they could account for 4.5% of global energy generation by 2030.
How can I avoid using AI?
AI overview has a “profanity trigger”. If you ask Google “What is AI?”, its Gemini AI interface will deliver you a potted (and sometimes inaccurate) answer. It’s functioning as an “answer engine” rather than a “search engine”.
But if you ask “What the fuck is AI?”, you will be delivered straight search results, linking to other pages.
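For those who would rather not swear at their search engine, there is a tidier route to the same result. The sketch below – a TypeScript illustration, not an official Google tool – builds a search URL using the udm=14 parameter, which requests Google’s plain “Web” results view with no AI overview (though Google could change or retire the parameter at any time).

```typescript
// Build a Google search URL that requests the plain "Web" results view.
// The udm=14 parameter selects link-only results with no AI overview;
// it is widely used today but is not guaranteed to be stable.
function plainSearchUrl(query: string): string {
  const url = new URL("https://www.google.com/search");
  url.searchParams.set("q", query);
  url.searchParams.set("udm", "14"); // "Web" filter: links only, no AI answer
  return url.toString();
}

// Prints: https://www.google.com/search?q=what+is+AI%3F&udm=14
console.log(plainSearchUrl("what is AI?"));
```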
There are various browser extensions that can block AI sites, images and content.
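Under the hood, most of these extensions do something like the minimal content-script sketch below. The CSS selectors here are hypothetical placeholders, not Google’s real markup – the actual element names change often, which is exactly why dedicated, regularly updated extensions exist.

```typescript
// Minimal content-script sketch: hide AI-generated blocks on a results page.
// The selectors are illustrative placeholders, not real Google markup.
const AI_SELECTORS = ["#ai-overview", "[data-ai-overview]"];

function hideAiBlocks(): void {
  for (const selector of AI_SELECTORS) {
    document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.style.display = "none"; // remove the block from view
    });
  }
}

hideAiBlocks(); // run once on load...
// ...then re-run whenever the page re-renders its results.
new MutationObserver(hideAiBlocks).observe(document.body, {
  childList: true,
  subtree: true,
});
```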
You can circumvent some chatbots and speak to a human if you repeatedly say “speak to a human”, use the words “urgent” and “emergency” or, according to one report, “blancmange” – a sweet dessert popular throughout Europe.
James Jin Kang, a senior lecturer in computer science at RMIT University Vietnam, writes in the Conversation that to live entirely without it means “stepping away from much of modern life”.
“Why not just add a kill switch?” he asks. The issue, he says, is that it is so embedded in our lives it is “no longer something we can simply turn off”.
“So as AI spreads further into every corner of our lives, we must urgently ask: will we still have the freedom to say no?
“The question isn’t whether we can live with AI but whether we will still have the right to live without it before it’s too late to break the spell.”
What’s the future of AI?
Governments around the world, including in Australia, are struggling to keep up with AI, what it means, what it promises and how to govern it.
As big tech companies seek access to material including journalism and books to train AI models, the federal government is under pressure to reveal how it plans to regulate the technology.
The Conversation has asked five experts where AI is heading.
Three of the five say AI does not pose an existential risk.
Queensland University of Technology’s Aaron J Snoswell says it is “transformative” and the risk isn’t AI becoming too smart; it is “humans making poor choices about how we build and deploy these tools”.
CSIRO’s Sarah Vivienne Bentley agrees it is only as good as its users, while the University of Melbourne’s Simon Coghlan says that despite the concern and hype there is “little evidence that a superintelligent AI capable of wreaking global devastation is coming any time soon”.
Australian Catholic University’s Niusha Shafiabady is more grave. She says today’s systems have only limited capacity but are gaining capabilities that make misuse more likely to happen at scale – and that this poses an existential threat.
Seyedali Mirjalili, an AI professor from Torrens University Australia, says he is “more concerned humans will use AI to destroy civilisation [through militarisation] than AI doing so autonomously by taking over”.
Leins says she uses AI tools where it makes sense, but not everywhere.
“I know what it does environmentally and I like to write. I have a PhD, I think through my writing,” she says.
“It’s about what is evidence based and makes sense. It’s not getting caught up in the hype, and not getting caught up in the doom.
“I think we’re complex and smart enough to hold both ideas at the same time – that these tools can be positive or negative.”