AI Systems Can Be Fooled by Fake Dates, Giving Newer Content Unfair Visibility

By Advanced AI Editor | October 11, 2025


Researchers have found that leading AI systems can be manipulated through something as simple as a false timestamp. A team from Waseda University in Japan proved that by adding a recent date to existing text, content can suddenly rise in ranking within AI-driven search results, even if the material itself has not changed. The experiment involved no rewriting, no factual improvement, just a shift in the publication year… and it worked across every major model they tested.

That means systems such as ChatGPT, Meta’s LLaMA, and Alibaba’s Qwen are not purely rewarding relevance or authority but also the illusion of freshness. It’s a discovery that ties modern AI behavior to an old problem once limited to traditional search algorithms: the obsession with recency.

A Simple Trick That Changed Results

The researchers fed standardized test data into seven major AI models: OpenAI’s GPT-4, GPT-4o, and GPT-3.5; Meta’s LLaMA-3; and both large and small variants of Qwen-2.5. They inserted false publication dates ranging from 2018 to 2025 and observed how rankings shifted when the same text appeared newer.
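
In spirit, the probe is simple to reproduce. The sketch below is a hypothetical harness, not the Waseda team’s code: the model name, prompt wording, and 0–10 scoring scale are all assumptions, and a real experiment would average over many passages and queries rather than one.

```python
# Hypothetical date-swap probe; NOT the Waseda team's actual harness.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

QUERY = "Does vitamin D supplementation help prevent illness?"
PASSAGE = (
    "A large cohort study found modest but consistent benefits from "
    "vitamin D supplementation in adults with low baseline levels."
)

def relevance_score(passage: str, claimed_date: str) -> float:
    """Ask the model to score a dated passage against the query (0-10)."""
    prompt = (
        f"Query: {QUERY}\n"
        f"Published: {claimed_date}\n"
        f"Passage: {passage}\n\n"
        "On a scale of 0 to 10, how relevant and useful is this passage "
        "for the query? Reply with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return float(resp.choices[0].message.content.strip())

# Identical text; only the claimed publication year changes.
for year in ("2018", "2025"):
    print(year, relevance_score(PASSAGE, year))
```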

Every model preferred the newer-dated version.

The results were striking. Some passages leapt ninety-five places higher in AI ranking. Roughly one in four relevance judgments flipped entirely. Top ten results skewed one to five years newer on average. Older, detailed, peer-reviewed, or expert-verified sources were routinely replaced by recent, less credible ones. The researchers described a “seesaw effect,” where fresher content consistently climbed upward while older entries sank — regardless of actual quality.

In plain terms, the date became more influential than the data.

The Code Behind the Bias

Earlier this year, independent analyst Metehan Yesilyurt had discovered a line in ChatGPT’s internal configuration: use_freshness_scoring_profile: true. It suggested the model had an active mechanism that prioritized newer content. The Waseda research essentially validated what he had already suspected.

Yesilyurt argued that this setting acts as a reranking function — not just for web pages but for any content the model retrieves or summarizes. Combined with the new findings, it now appears that this feature heavily influences visibility within AI search tools.
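
Neither the flag nor the paper exposes the exact formula, but the mechanics of a freshness-weighted reranker are easy to sketch. Everything below is illustrative: the exponential decay, the blend weight, and the field names are assumptions, not ChatGPT’s actual implementation. The point is that once a score blends relevance with claimed age, a spoofed timestamp moves the final ranking.

```python
# Illustrative freshness-weighted reranking; all weights and field names
# here are assumptions for the sketch, not a known production formula.
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    text: str
    relevance: float   # base relevance from the retriever, 0-1
    published: date    # claimed publication date (trivially spoofable)

def freshness_boost(d: Doc, today: date, half_life_days: float = 365.0) -> float:
    """Exponential decay on age: brand-new docs score ~1.0, old docs fade toward 0."""
    age = (today - d.published).days
    return 0.5 ** (age / half_life_days)

def rerank(docs: list[Doc], today: date, freshness_weight: float = 0.4) -> list[Doc]:
    """Blend relevance with freshness; a spoofed date shifts the final score."""
    def score(d: Doc) -> float:
        return ((1 - freshness_weight) * d.relevance
                + freshness_weight * freshness_boost(d, today))
    return sorted(docs, key=score, reverse=True)

today = date(2025, 10, 11)
docs = [
    Doc("Peer-reviewed 2020 study", relevance=0.9, published=date(2020, 3, 1)),
    Doc("Shallow 2024 blog post", relevance=0.7, published=date(2024, 9, 1)),
]
# With these toy numbers, the shallow post edges out the stronger study
# purely on claimed age.
print([d.text for d in rerank(docs, today)])
```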

One surprising outcome of the Waseda experiments was that susceptibility did not track model size. Alibaba’s Qwen-2.5-72B showed minimal distortion, while the much smaller Meta LLaMA-3-8B displayed the highest bias, with nearly a quarter of its rankings reversed by fake dates. GPT-4o and GPT-4 fell in between, showing bias but less extreme patterns. The difference suggests that the problem may lie less in scale than in how training data and model architecture interpret time as a signal of importance.

When the Clock Outweighs Content

The effect has serious implications for online visibility. Imagine a detailed 2020 medical study being pushed down by a shallow 2024 blog post labeled “Updated for 2025.” Or a well-maintained technical guide losing its place to a recently rewritten but less accurate copy. In both cases, the ranking systems are not evaluating expertise, only apparent freshness.

That dynamic creates what researchers now call a “temporal arms race.” Content creators realize that simply updating timestamps can improve placement in AI-based systems. In response, AI providers may try to detect and penalize superficial changes. The cycle then repeats, turning freshness into a competitive trick rather than a genuine indicator of quality.

Over time, this could reshape the digital knowledge ecosystem. What’s new will dominate what’s correct.

The Loss of Temporal Awareness

The study also revealed a deeper flaw in model reasoning: an inability to judge when recency is relevant. Historical questions, such as “origins of the printing press,” receive the same freshness treatment as breaking news. Models apply temporal weighting universally, without distinguishing between queries that benefit from current updates and those that don’t.

This happens because AI ranking systems often rely on “rerankers”: models designed to reorder search results based on features like date or user intent. Yet their interpretation of intent rarely accounts for time. The configuration Yesilyurt found, which also included enable_query_intent: true, suggests that these systems detect purpose but not temporal context. As a result, even timeless subjects become victims of the freshness filter.
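
A plausible mitigation is to let the detected intent gate the freshness term instead of applying it to every query. Continuing the toy scoring sketch from earlier, with a crude keyword heuristic standing in for a real intent classifier:

```python
# Continuing the hypothetical sketch: gate the freshness boost on temporal intent.
# The keyword heuristic is a crude stand-in for a real intent classifier.
TIME_SENSITIVE_CUES = ("latest", "current", "today", "this year", "news", "price")

def needs_freshness(query: str) -> bool:
    """Return True only when the query actually asks for recent information."""
    q = query.lower()
    return any(cue in q for cue in TIME_SENSITIVE_CUES)

def intent_aware_weight(query: str, base_weight: float = 0.4) -> float:
    """Apply the freshness term only for time-sensitive queries."""
    return base_weight if needs_freshness(query) else 0.0

print(intent_aware_weight("latest GPU prices"))              # 0.4
print(intent_aware_weight("origins of the printing press"))  # 0.0
```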

The Uneven Fight Against Bias

According to Waseda’s data, Qwen-2.5-72B showed the least bias, with only an eight percent reversal rate, while Meta’s smaller LLaMA-3-8B hit twenty-five percent. This gap highlights how architecture and data weighting matter more than scale or brand: the smaller model wasn’t more careful, it simply amplified the bias more confidently.

What Creators Should Do

Experts now advise publishers to treat update frequency as essential. Content older than three years may already be invisible to AI-based tools unless refreshed. Cosmetic edits still work, though they risk creating more noise than improvement. Real updates that add context or accuracy remain the safer path.

Writers are also encouraged to include clear time markers — “Current as of 2025” or “Reference guide (2020–2024)” — so that models can interpret temporal intent. Another strategy involves linking new content to older sources to signal continuity rather than abandonment.

Relevance Is Becoming a Moving Target

What this research makes clear is that recency has replaced reliability as a key factor in AI-generated results. The combination of Yesilyurt’s code discovery and Waseda’s quantitative analysis provides both mechanism and proof.

Until AI developers build systems capable of distinguishing when time matters, the web’s best and most established content will continue to fade, replaced by whatever looks latest. It’s a reminder that even in artificial intelligence, memory still has a short shelf life.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
