Advanced AI News
Using a swearword in your Google search can stop the AI answer. But should you?

By Advanced AI Editor · October 10, 2025 · 6 Mins Read


Using a swearword in your Google search can stop that annoying AI overview from popping up. Some apps let you switch off their artificial intelligence.

You can choose not to use ChatGPT, to avoid AI-enabled software, to refuse to talk to a chatbot. You can ignore Donald Trump posting deepfakes, and dodge anything with Tilly the AI actor in it.

As the use of AI spreads, so do concerns about its dangers, and resistance to its ubiquity.

Dr Kobi Leins, an AI management and governance expert, chooses to opt out when medical practitioners want to use AI.

She told a specialist she didn’t want AI transcription software used for her child’s appointment, but was told it was necessary because the specialist was “time poor”; if she did not want it used, she would have to go elsewhere.

“You can’t resist individually. There is also systemic resistance. The push from the industry to use these tools above and beyond where it makes sense [is so strong],” she says.

Where is AI?

AI is spreading inexorably through digital systems.

It’s embedded in applications such as ChatGPT, Google’s AI overview, and Elon Musk’s creation Grok, the super-Nazi chatbot. Smartphones, social media and navigation devices are all using it.

It has also infiltrated customer service, the finance system and online dating apps, and is being used to assess resumes, job applications and rental applications, even legal cases.

It’s increasingly part of the healthcare system, easing the administrative burden on doctors and helping to identify illnesses.

A global study from the University of Melbourne released in April found half of Australians use AI on a regular or semi-regular basis, but only 36% trust it.

Prof Paul Salmon, the deputy director of the University of the Sunshine Coast’s Centre for Human Factors and Sociotechnical Systems, says it’s getting harder and harder to avoid.

“In work contexts, there is often pressure to engage with it,” he says.

“You either feel like you’re being left behind – or you’re told you’re being left behind.”

Should I avoid using AI?

Privacy leakage, discrimination, false or misleading information, malicious use in scams and fraud, loss of human agency and lack of transparency are just some of the more than 1,600 risks catalogued in the Massachusetts Institute of Technology’s AI Risk Repository.

It warns of the risk of AI “pursuing its own goals in conflict with human goals or values” and “possessing dangerous capabilities”.

Greg Sadler, the chief executive officer of the Good Ancestors charity and coordinator of Australians for AI Safety, says he often refers to that database and, while AI can be useful, “you definitely don’t want to use AI in the times where you don’t trust its output, or you’re worried about it having the information”.

Aside from all those risks, AI has an energy cost. Google’s emissions are up by more than 51%, due at least in part to the electricity consumption of the datacentres that underpin its AI.

The International Energy Agency estimates that datacentres’ electricity consumption could double from 2022 levels by 2026, while analysis suggests they will account for 4.5% of global energy generation by 2030.

How can I avoid using AI?

Google’s AI overview has a “profanity trigger”. If you ask Google “What is AI?”, its Gemini AI interface will deliver a potted (and sometimes inaccurate) answer. It is functioning as an “answer engine” rather than a “search engine”.

But if you ask “What the fuck is AI?”, you will be delivered straight search results, linking to other pages.
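Swearing is not the only workaround. A widely reported (though unofficial and undocumented) Google URL parameter, `udm=14`, requests the plain “Web” results filter, which skips the AI overview entirely. As a minimal sketch, assuming that parameter keeps behaving as reported, you could build such a search link yourself:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL using the widely reported `udm=14`
    parameter, which selects the plain 'Web' results view (no AI overview).
    This is an unofficial parameter and could change without notice."""
    base = "https://www.google.com/search"
    return f"{base}?{urlencode({'q': query, 'udm': '14'})}"

# Example: a web-only search for the article's own question.
print(web_only_search_url("what is AI"))
```

Some users set this as their browser’s default search URL so every query bypasses the AI answer without any profanity required.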

There are various browser extensions that can block AI sites, images and content.

You can circumvent some chatbots and speak to a human if you repeatedly say “speak to a human”, use the words “urgent” and “emergency” or, according to one report, “blancmange”, a sweet dessert popular throughout Europe.

James Jin Kang, a senior lecturer in computer science at RMIT University Vietnam, writes in The Conversation that living entirely without AI means “stepping away from much of modern life”.

“Why not just add a kill switch?” he asks. The issue, he says, is that it is so embedded in our lives it is “no longer something we can simply turn off”.

“So as AI spreads further into every corner of our lives, we must urgently ask: will we still have the freedom to say no?

“The question isn’t whether we can live with AI but whether we will still have the right to live without it before it’s too late to break the spell.”

What’s the future of AI?

Governments around the world, including in Australia, are struggling to keep up with AI, what it means, what it promises and how to govern it.

As big tech companies seek access to material including journalism and books to train AI models, the federal government is under pressure to reveal how it plans to regulate the technology.

The Conversation has asked five experts where AI is heading.

And three out of five of them say AI does not pose an existential risk.

Of those who say it doesn’t, Queensland University of Technology’s Aaron J Snoswell says it is “transformative” and the risk isn’t AI becoming too smart, it is “humans making poor choices about how we build and deploy these tools”.

CSIRO’s Sarah Vivienne Bentley agrees it is only as good as its users while the University of Melbourne’s Simon Coghlan says despite the concern and hype there is “little evidence that a superintelligent AI capable of wreaking global devastation is coming any time soon”.

Australian Catholic University’s Niusha Shafiabady is more grave. She says today’s systems have only limited capacity, but they are gaining capabilities that make misuse at scale more likely, and that AI poses an existential threat.

Seyedali Mirjalili, an AI professor from Torrens University Australia, says he is “more concerned humans will use AI to destroy civilisation [through militarisation] than AI doing so autonomously by taking over”.

Leins says she uses AI tools where they make sense, but not everywhere.

“I know what it does environmentally and I like to write. I have a PhD, I think through my writing,” she says.

“It’s about what is evidence based and makes sense. It’s not getting caught up in the hype, and not getting caught up in the doom.

“I think we’re complex and smart enough to hold both ideas at the same time – that these tools can be positive or negative.”


