What happens when AI cheats the market?

By Advanced AI Editor | July 1, 2007

Malevolent trading practices aren’t new. Struggles against insider trading, as well as different forms of market manipulation, represent a long-running battle for regulators.

In recent years, however, experts have been warning of new threats to our financial systems. Developments in AI mean that automated trading bots are not only smarter but also more independent. While basic algorithms respond to programmed commands, newer bots can learn from experience, quickly synthesise vast amounts of information, and act autonomously when making trades.

According to academics, one risk scenario involves collaboration between AI bots. Just imagine: hundreds of AI-driven social media profiles begin to pop up online, weaving narratives about certain companies. The information they spread isn’t necessarily fake; it may simply be an amplification of existing news. In response, real social media users start to react, echoing the bots’ chosen message.

As the market is tipped by the crafted narrative, one investor’s robo-advisor rakes in profits, having coordinated with the gossiping bots. Other investors, who didn’t have the insider information, lose out by badly timing the market. The problem is that the profiting investor may not even be aware of the scheme, which means charges of market manipulation may not stick, even if authorities can see that a trader has benefitted from distortive practices.

Alessio Azzutti, assistant professor in law & technology (FinTech) at the University of Glasgow, told Euronews that the above scenario remains a hypothesis, as there is not yet enough evidence to prove it is happening. Even so, he explains that similar, less sophisticated schemes are already taking place, particularly in “crypto asset markets and decentralised finance markets”.

“Malicious actors… can be very active on social media platforms and messaging platforms such as Telegram, where they may encourage members to invest their money in DeFi or in a given crypto asset, to suit themselves,” Azzutti explained.

“We can observe the direct activity of human malicious actors but also those who deploy AI bots.”

He added that the agents spreading misinformation may not necessarily be very sophisticated, but they still have the power to “pollute chats through fake news to mislead retail investors”.

“And so the question is, if a layman, if a youngster on his own in his home office is able to achieve these types of manipulations, what are the limits for the bigger players to achieve the same effect, in even more sophisticated markets?”


The way that market information now spreads online, in a widespread, rapid, and uncoordinated fashion, is also fostering different types of trading. Retail investors are more likely to follow crazes, rather than relying on their own analysis, which can destabilise the market and potentially be exploited by AI bots.

The widely cited GameStop saga is a good example of herd trading, in which users on a Reddit forum decided to buy up stock in the video game company en masse. Big hedge funds had bet that the price would fall and subsequently lost out when it skyrocketed. Many experts say this wasn’t a case of collusion, as no formal agreement was ever made.

A spokesperson from ESMA, the European Securities and Markets Authority, told Euronews that the potential for AI bots to manipulate markets and profit off the movements is “a realistic concern”, although they stressed that they don’t have “specific information or statistics on this already happening”.

“These risks are further intensified by the role of social media, which can act as a rapid transmission channel for false or misleading narratives that influence market dynamics. A key issue is the degree of human control over these systems, as traditional oversight mechanisms may be insufficient,” said the spokesperson.

ESMA highlighted that it was “actively monitoring” AI developments.

One challenge for regulators is that collaboration between AI agents can’t be easily traced.

“They’re not sending emails, they’re not meeting with each other. They just learn over time the best strategy and so the traditional way to detect collusion doesn’t work with AI,” Itay Goldstein, professor of finance and economics at the Wharton School of the University of Pennsylvania, told Euronews.

“Regulation has to step up and find new strategies to deal with that,” he argued, adding that there is a lack of reliable data on exactly how traders are using AI.
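What a new detection strategy might look like is not spelled out in the article. Purely as an illustration, and assuming a simplified feed of (account, time bucket, buy/sell) order records, the sketch below screens for behavioural fingerprints of coordination: account pairs whose trading directions agree far more often than chance would suggest, even though there is no communication between them to trace. Real market-surveillance systems are considerably more sophisticated.

```python
from collections import defaultdict
from itertools import combinations
from statistics import mean

# Hypothetical order records: (account_id, time_bucket, side),
# where side is +1 for buy, -1 for sell. Real surveillance data is far richer.
orders = [
    ("acct_A", 0, +1), ("acct_B", 0, +1),
    ("acct_A", 1, +1), ("acct_B", 1, +1),
    ("acct_A", 2, -1), ("acct_B", 2, -1),
    ("acct_C", 0, -1), ("acct_C", 1, +1), ("acct_C", 2, +1),
]

def net_direction_by_bucket(orders):
    """Aggregate each account's net trading direction per time bucket."""
    net = defaultdict(lambda: defaultdict(int))
    for account, bucket, side in orders:
        net[account][bucket] += side
    return net

def coordination_score(a, b):
    """Fraction of shared time buckets in which two accounts traded the same way."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return mean(1.0 if (a[t] > 0) == (b[t] > 0) else 0.0 for t in shared)

def flag_pairs(orders, threshold=0.9):
    """Return account pairs whose directional agreement exceeds the threshold."""
    net = net_direction_by_bucket(orders)
    flagged = []
    for (acct1, series1), (acct2, series2) in combinations(net.items(), 2):
        score = coordination_score(series1, series2)
        if score >= threshold:
            flagged.append((acct1, acct2, score))
    return flagged

print(flag_pairs(orders))  # e.g. [('acct_A', 'acct_B', 1.0)]
```

A screen like this only produces leads, not proof; accounts following the same public signal would also correlate, which is part of why regulators describe detection as hard.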

Filippo Annunziata, professor of financial markets and banking legislation at Bocconi University, told Euronews that the current EU rules “shouldn’t be revised”, referring to the Regulation on Market Abuse (MAR) and the Markets in Financial Instruments Directive II (MiFID II).

Even so, he argued that “supervisors need to be equipped with more sophisticated tools for identifying possible market manipulation”.

He added: “I even suggest that we ask people who develop AI tools for trading on markets and so on to include circuit breakers in these AI tools. This would force it to stop even before the risk of manipulation occurs.”
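The article does not describe how such a circuit breaker would be built. A minimal sketch, assuming the breaker’s only job is to halt a strategy once its own recent orders appear to move the price beyond a set tolerance, might look like this; the thresholds and the `get_mid_price` / `submit_order` helpers are invented for illustration.

```python
from collections import deque

class CircuitBreaker:
    """Halts an automated trading strategy when its own recent activity
    appears to move the market beyond a configured tolerance.
    Thresholds are arbitrary placeholders, not regulatory values."""

    def __init__(self, max_price_impact=0.02, max_orders_per_window=50, window=100):
        self.max_price_impact = max_price_impact          # e.g. a 2% self-attributed move
        self.max_orders_per_window = max_orders_per_window
        self.recent_orders = deque(maxlen=window)
        self.halted = False

    def record(self, order, price_before, price_after):
        """Log an order and trip the breaker if limits are exceeded."""
        self.recent_orders.append(order)
        impact = abs(price_after - price_before) / price_before
        if impact > self.max_price_impact or len(self.recent_orders) >= self.max_orders_per_window:
            self.halted = True
        return not self.halted

    def allow_trading(self):
        return not self.halted

# Hypothetical usage inside a trading loop:
# breaker = CircuitBreaker()
# if breaker.allow_trading():
#     price_before = get_mid_price()   # assumed market-data helper
#     submit_order(order)              # assumed execution helper
#     breaker.record(order, price_before, get_mid_price())
```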

In terms of the current legal framework, there’s also the issue of responsibility when an AI agent acts in a malicious way, independent of human intent.

This is especially relevant in the case of so-called black-box trading, where a bot executes trades without revealing its inner workings. To tackle this, some experts believe that AI should be designed to be more transparent, so that regulators can understand the rationale behind decisions.
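What “more transparent” means in practice is left open. One common interpretation, sketched here as an assumption rather than anything proposed by the experts quoted, is that every order carries an audit record of the model’s inputs and each input’s contribution to the decision, so a supervisor can reconstruct the rationale after the fact. A simple linear signal model makes those contributions trivial to compute; real systems would need dedicated explainability tooling.

```python
import json
from datetime import datetime, timezone

# Hypothetical linear signal model: each feature's contribution is weight * value,
# which makes the decision directly attributable.
WEIGHTS = {"momentum": 0.6, "news_sentiment": 0.3, "order_book_imbalance": 0.1}

def decide_and_log(features, threshold=0.5, audit_log="trade_audit.jsonl"):
    """Make a toy trading decision and append an auditable rationale record."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "BUY" if score > threshold else "HOLD"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "features": features,
        "contributions": contributions,   # per-feature rationale for the decision
        "score": score,
        "decision": decision,
    }
    with open(audit_log, "a") as f:
        f.write(json.dumps(record) + "\n")
    return decision

decide_and_log({"momentum": 0.8, "news_sentiment": 0.4, "order_book_imbalance": 0.2})
```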

Another idea is to create new laws around liability, so that the actors responsible for deploying AI could be held accountable for market manipulation, even in cases where they didn’t intend to mislead investors.

“It’s a bit like the tortoise and the hare,” said Annunziata.

“Supervisors tend to be tortoises, but manipulators that use algorithms are hares, and it’s difficult to catch up with them.”


