The Great Language Flattening – The Atlantic

By Advanced AI Bot | April 29, 2025 | 7 Mins Read


In at least one crucial way, AI has already won its campaign for global dominance. An unbelievable volume of synthetic prose is published every moment of every day—heaping piles of machine-written news articles, text messages, emails, search results, customer-service chats, even scientific research.

Chatbots learned from human writing. Now the influence may run in the other direction. Some people have hypothesized that the proliferation of generative-AI tools such as ChatGPT will seep into human communication, that the terse language we use when prompting a chatbot may lead us to dispense with niceties or writerly flourishes when corresponding with friends and colleagues. But there are other possibilities. Jeremy Nguyen, a senior researcher at Swinburne University of Technology, in Australia, ran an experiment last year to see how exposure to AI-generated text might change the way people write. He and his colleagues asked 320 people to write a post advertising a sofa for sale on a secondhand marketplace. Afterward, the researchers showed the participants what ChatGPT had written when given the same prompt, and they asked the subjects to do the same task again. The responses changed dramatically.

“We didn’t say, ‘Hey, try to make it better, or more like GPT,’” Nguyen told me. Yet “more like GPT” is essentially what happened: After the participants saw the AI-generated text, they became more verbose, drafting 87 words on average versus 32.7 in the first round. The full results of the experiment are yet to be published or peer-reviewed, but it’s an intriguing finding. Text generators tend to write long, even when the prompt is curt. Might people be influenced by this style, rather than the language they use when typing to a chatbot?

Read: The words that stop ChatGPT in its tracks

AI-written text is baked into software that millions, if not billions, of people use every day. Even if you don’t use ChatGPT, Gemini, Claude, or any of the other popular text-generating tools, you will inevitably be on the receiving end of emails, documents, and marketing materials that have been compiled with their assistance. Gmail offers some users an integrated AI tool that starts drafting responses before any fingers hit the keys. Last year, Apple launched Apple Intelligence, which includes AI features on Macs, iPhones, and iPads such as writing assistance across apps and a “smart reply” function in the Mail app. Writing on the internet is now more likely than even a year or two ago to be a blended product—the result of a human using AI somewhere in the drafting or refining phase while making subtle tweaks themselves. “And so that might be a way for patterns to get laundered, in effect,” Emily M. Bender, a computational-linguistics professor at the University of Washington, told me.

Bender, a well-known critic of AI who helped coin the term stochastic parrots, does not use AI text generators on ethical grounds. “I’m not interested in reading something that nobody said,” she told me. The issue, of course, is that knowing if something was written by AI is becoming harder and harder. People are sensitive to patterns in language—you may have noticed yourself switching accents or using different words depending on whom you’re speaking to—but “what we do with those patterns depends a lot on how we perceive who’s saying them,” Bender told me. You might not be moved to emulate AI, but you could be more susceptible to picking up its linguistic quirks if they appear to come from a respected source. Interacting with ChatGPT is one thing; receiving a ChatGPT-influenced email from a highly esteemed colleague is another.

Language evolves constantly, and advances in technology have long shaped the way people communicate (lol, anyone?). These influences are not necessarily good or bad, although technological developments have often helped to make language and communication more accessible: Most people see the invention of the printing press as a welcome development from longhand writing. LLMs follow in this vein—it’s never been easier to turn your thoughts into flowing prose, regardless of your view on the quality of the output.

Recent technological advances have generally inspired or even demanded concision—many text messages and social-media posts have explicit character limits, for instance. As a general rule, language works on the principle that effort increases with length; five paragraphs require more work than two sentences for the sender to write and the receiver to read. But AI tools could upset this balance, Simon Kirby, a professor of language evolution at the University of Edinburgh, told me. “What happens when you have a machine where the cost of sending 10,000 words is the same or roughly the same as the cost of sending 1,000?” he said.

Kirby offered me a hypothetical: One person may give an AI tool a few bullet points to turn into a lengthy, professional-sounding email, only for the recipient to immediately use another tool to summarize the prose before reading. “Essentially, we’ve come up with a protocol where the machines are using flowery, formal language to send very long versions of very short, encapsulated messages that the humans are using,” he said.

Read: The end of foreign-language education

Beyond length, the linguists I spoke with speculated that the proliferation of AI writing could lead to a new form of language. “It’s pretty easy to imagine that English will become more standardized to whatever the standard of these language models is,” said Jill Walker Rettberg, a professor of digital culture at the University of Bergen’s Center for Digital Narrative, in Norway. This already happens to an extent with automated spelling- and grammar-checkers, which nudge users to adhere to whichever formulations they consider to be “correct.” As AI tools become more commonplace, people may see their style as the template to follow, resulting in a greater homogenization of language: Just yesterday, Cornell University presented a study suggesting that this is happening already. In the experiment, an AI writing tool “caused Indian participants to write more like Americans, thereby homogenizing writing toward Western styles and diminishing nuances that differentiate cultural expression,” the authors wrote.

Philip Seargeant, an applied linguist at the Open University in the U.K., told me that when students use AI tools inappropriately, their work reads a little too perfect, “but in a very bland and uninteresting way.” Kirby says that AI text lacks the errors or awkwardness he’d expect in student essays and has an “uncanny valley” feel. “It does have that kind of feeling [that] there’s nothing behind the eyes,” he said.

Several linguists I spoke with suggested that the proliferation of AI-written or -mediated text may spark a countermovement. Perhaps some people will rebel, leaning into their own linguistic mannerisms in order to differentiate themselves. Bender imagines people turning off AI features or purposely choosing synonyms when prompted to use certain words, as an act of defiance. Kirby told me he already sees some of his students taking pride in not using AI writing tools. “There is a way in which that will become the kind of valorized way of writing,” he said. “It’ll be the real deal, and it’ll be obvious, because you’ll deliberately lean into your idiosyncrasies as a writer.” Rettberg compares it to choosing handmade goods over cheap, factory-made fare: Rather than losing value as a result of the AI wave, human writing may be appreciated even more, taking on an artisanal quality.

Ultimately, as language continues to evolve, AI tools will be both setting trends and playing catch-up. Trained on existing data, they’ll always be somewhat behind how people are using language today, even as they influence it. In fact, we may end up with AI tools evolving language separately to humans, Kirby said. Large language models are usually trained on text from the internet, and the more AI-generated text ends up permeating the web, the more these tools may end up being trained on their own output and embedding their own linguistic styles. For Kirby, this is fascinating. “We might find that these models start going off and taking the language that’s produced with them in a particular direction that may be different from the direction language would have evolved in if it had been passed from human to human,” he said. This, he believes, is what could set generative AI apart from other technological advances when it comes to impact on language: “We’ve inadvertently created something that could itself be culturally evolving.”


