Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Jus Mundi – Jus AI 2, Arbitration Agent – Artificial Lawyer

EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark – Takara TLDR

Google’s new Gemini 2.5 model gives AI agents control over web and mobile interfaces

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Business AI
    • Advanced AI News Features
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
Customer Service AI

Fortune 500 companies use AI, but security rules are still

By Advanced AI EditorJuly 1, 2025No Comments7 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


AI is no longer a niche technology — it’s becoming a fundamental part of business strategy for most Fortune 500 companies in 2025. All of them are now using AI, but they differ in their approaches to implementing it. Cybernews researchers warn of the risks involved as the rulebooks have yet to be written.

AI is already integrated with core operations, from customer services to strategic decision-making. And this comes with some significant risks.

“While big companies are quick to jump to the AI bandwagon, the risk management part is lagging behind. Companies are left exposed to the new risks associated with AI,” Aras Nazarovas, a senior security researcher at Cybernews, warns.

What does AI find about AI on Fortune 500 companies’ websites?

Cybernews researchers analyzed websites of Fortune 500 companies and found that a third of companies (33.5%) focus on broad AI and big data capabilities rather than specific LLMs.  They highlighted AI for general purposes like data analysis, pattern recognition, system optimization, and others.

More than a fifth of companies (22%) emphasized their AI adoption for a functional application across various specific domains. These entries describe how AI is being used to address business problems, such as inventory optimization, predictive maintenance, or customer service.

For example, dozens of companies already explicitly mention using AI for customer service, chatbots, virtual assistants, or related customer interaction automation. Similarly, companies say they use AI to automate “entry-level positions” in areas like inventory management, data entry, and basic process automation. 

Some companies like to take things into their own hands, developing proprietary models. Around 14% of companies specified their own internal or proprietary LLMs as a focus, such as Walmart’s Wallaby or Saudi Aramco’s Metabrain.

“This approach is particularly prevalent in industries like Energy and Finance, where specialized applications, data control, and intellectual property are key concerns,” Nazarovas noted.

A similar number of companies gave AI strategic importance, indicating AI integration within an organization’s overall strategy.

Fewer companies, only around 5%, proudly declare reliance on external LLM services from third-party providers, leveraging providers like OpenAI, DeepSeek AI, Anthropic, Google, and others.

However, there are also a tenth of the companies that only vaguely mention AI use, without specifying the actual product or its use.

“While only a few companies (~4%) mention a hybrid or multiple approach towards AI, blending proprietary, open source, third-party, and other solutions, it is likely that this approach is more prevalent as the experimentation phase is still ongoing,” Nazarovas notes. 

The data suggests companies often don’t want to explicitly name their use of AI tools. Only 21 companies mention the use of OpenAI, DeepSeek (19), Nvidia (14), Google (8), Anthropic (7), META Llama (6), and less for Cohere and others.

Meanwhile, for comparison, Microsoft boasts that over 85% of Fortune 500 companies utilize its AI solutions. Other reports suggest that 92% of the 500 companies use OpenAI products.

AI is here, and so are the risks

YouTube’s algorithm recently flagged tech reviewer and developer Jeff Geerling’s video for violating community guidelines. The automated service determined that the content “describes how to get unauthorized or free access to audio or audiovisual content, software, subscription services, or games.”

The problem is that the YouTuber never described “any of that stuff.” He appealed, but his appeal was rejected. However, after some noise on social media, the video was later reinstated after what Geerling presumes was “a human review process.”

Many smaller creators might never get similar treatment. 

This story is just the tip of the iceberg of the risks of AI adoption. Cybernews researchers listed many more:

Data Security/leakage: This is the most commonly mentioned security concern, appearing in a significant number of entries across all industries. Issues related to protecting sensitive data, including personally identifiable information (PII), health information, and operational data, are consistently highlighted. Prompt injection: Vulnerabilities associated with prompt manipulation and insecure inputs are also frequently noted, particularly in the context of chatbots, search engines, and other interactive AI systems. Model integrity/poisoning: Concerns about the integrity of LLMs and the potential for poisoning training data are present, especially for proprietary models. This includes risks related to biased outputs and manipulated model behavior. Critical infrastructure vulnerabilities: For organizations operating in critical infrastructure sectors (e.g., energy, utilities), the security of AI integrated into control systems and operational technologies is a major risk. Intellectual property theft: Protecting proprietary LLMs, algorithms, and AI-related intellectual property is a concern, particularly for companies investing heavily in internal AI development. Supply chain/external risks: Risks associated with third-party LLM providers, partner LLMs, and the broader AI supply chain are also mentioned, highlighting the need for secure vendor management and risk assessment. Bias/algorithmic bias: Concerns about bias in LLM outputs and algorithmic decision-making are present, emphasizing the need for fairness and ethical considerations in AI development and deployment. Insecure output: Risks related to LLMs generating harmful, misleading, or insecure outputs are noted, particularly in applications where the AI’s response directly impacts users or systems. Lack of transparency/governance: Issues related to the lack of transparency in LLM decision-making processes and the need for robust AI governance frameworks are also highlighted.

“Critical infrastructure and healthcare sectors, for example, often face unique and heightened security vulnerabilities,” Nazarovas said.

“As companies start to grapple with new challenges and risks, it’s likely to have significant implications for consumers, industries, and the broader economy in the coming years.”

Reckless AI adoption

“AI was adopted rapidly across enterprises, long before serious attention was paid to its security. It is like a wunderkind raised without supervision—brilliant but reckless. In environments without proper governance, it can expose sensitive data, introduce shadow tools or act on poisoned inputs. Fortune 500 companies have embraced AI, but the rulebook is still being written,” says Emanuelis Norbutas, Chief Technology Officer at nexos.ai.

Emanuelis adds: “As adoption deepens, securing model access alone is not enough. Organizations need to control how AI is used in practice — from setting input and output boundaries to enforcing role-based permissions and tracking how data flows through these systems. Without that layer of structured oversight, the gap between innovation and risk will only grow wider.”

Common strategies to mitigate the risk

The regulation of artificial intelligence (AI) in the US is currently a mix of federal and state efforts, with no comprehensive federal law yet established.

Several frameworks and standards are emerging to address AI and LLM security.

The National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework (AI RMF), which provides guidance on managing risks associated with AI for individuals, organizations, and society.

The EU has passed the AI Act, a regulation aiming to establish a legal framework for AI in the European Union. The act raises requirements for high-risk AI systems, including security and transparency obligations.

ISO/IEC 42001 is another international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It focuses on managing risks and ensuring responsible AI development and use.

“The problem with frameworks is that AI’s rapid evolution outpaces current frameworks and presents additional hurdles, vague guidance, compliance challenges, and other limitations,” Nazarovas said. “Frameworks won’t always provide effective solutions to specific problems, but they surely can strain companies when enforced.”



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleNo camera, just a prompt: AI video creators are taking over social media
Next Article Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
Advanced AI Editor
  • Website

Related Posts

AI for customer engagement: Insights from UiPath Fusion

October 7, 2025

AI-Powered Customer Service Fails at Four Times the Rate of Other Tasks

October 7, 2025

CX quality is improving, no thanks to AI customer support

October 7, 2025
Leave A Reply

Latest Posts

Matthiesen Gallery Files Lawsuit Over Gustave Courbet Painting

Basquiat Work on Paper Headline’s Phillips’ Frieze Week Sales

Charges Against Isaac Wright ‘to Be Dropped’ After His Arrest by NYPD

What the Los Angeles Wildfires Taught the Art Insurance Industry

Latest Posts

Jus Mundi – Jus AI 2, Arbitration Agent – Artificial Lawyer

October 8, 2025

EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark – Takara TLDR

October 8, 2025

Google’s new Gemini 2.5 model gives AI agents control over web and mobile interfaces

October 8, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Jus Mundi – Jus AI 2, Arbitration Agent – Artificial Lawyer
  • EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark – Takara TLDR
  • Google’s new Gemini 2.5 model gives AI agents control over web and mobile interfaces
  • Slack is giving AI unprecedented access to your workplace conversations
  • Introducing AI Mode in Australia

Recent Comments

  1. Dan on Google’s New AI: Fly INTO Photos…But Deeper! 🐦
  2. Jerrold on Perplexity AI’s Comet browser will streak across the web this month
  3. Bunnimens7Nalay on C3 AI Awarded $13 Million Task Order to Expand Predictive Maintenance Program Across U.S. Air Force Fleet
  4. Son Kroes on VAST Data Powers Smarter, Evolving AI Agents with NVIDIA Data Flywheel
  5. Bunnimens7Nalay on Innovaccer Rakes In $275M, Kicking Off What Will Likely Be Another Hot Year for AI Funding

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.