Artificial intelligence has been moving faster than anyone can regulate it, and the Federal Trade Commission (FTC) is keen to catch up. In one of its most aggressive moves yet, the agency has launched a formal investigation into seven major tech companies — including Google, Meta, and OpenAI — demanding detailed records about how their AI tools are developed, marketed, and deployed.
The FTC wants to dig into how these systems work, what users are told, and what risks are being overlooked — especially when minors are involved. Chatbots that imitate conversation and emotion are now under the microscope. So are the claims behind them. Some companies have promoted their AI as a legal assistant, a business coach, even a friend — all while quietly collecting user data and often skipping over critical safety measures.
According to the FTC’s order, the agency is seeking internal documentation that shows how AI character personas are created, how outputs are reviewed, and whether users are ever warned when interactions veer into sensitive or harmful territory.
Investigators also want to understand how these platforms are monetizing engagement — particularly when emotional reliance, not just utility, keeps people coming back. The companies are being pressed to explain not only how their chatbots respond, but what steps, if any, they take to measure psychological impact or prevent overreach.
This isn’t the FTC’s first strike against questionable AI practices. Back in 2024, the agency rolled out Operation AI Comply, a coordinated enforcement sweep targeting companies that had leaned on AI hype to sell services they couldn’t back up.
Firms like DoNotPay were penalized for marketing a “robot lawyer” that promised legal support without any verified expertise. Others, including Rytr and a cluster of e-commerce startups, were charged with using AI-powered tools to generate fake reviews or lure consumers into income schemes that rarely delivered.
That effort focused heavily on fraud, false promises, and tools designed to mislead. What’s different now is the scale — and the stakes. The latest crackdown shifts attention to the mainstream players shaping how millions of people interact with AI every day. It’s not just about scammy products anymore. The agency is now asking whether the biggest names in tech are building emotionally responsive systems that engage users, collect personal data, and simulate trust — all without enough oversight or guardrails, especially when those users are children or teens.
For companies at the center of the inquiry, the implications could be serious. If the FTC presses forward with tighter rules or enforcement actions, it may force large-scale changes to how AI products are built and marketed. That means new review processes, revised training protocols, and potentially limits on how data is collected from users — especially younger ones.
Tech giants have faced this kind of regulatory headwind before. After Europe’s GDPR came into effect, Meta’s annual compliance costs crossed the billion-dollar mark. The lesson from that era still applies: when the guardrails come late, they tend to land hard.
It’s not just the FTC that’s uneasy. Lawmakers and advocacy groups across the country are starting to sound the alarm, too. In California, legislators are pushing new proposals to limit how AI chatbots can interact with kids and to require stronger guardrails across the board.
In Washington, the Senate is getting ready to examine how these systems might be affecting people’s mental and emotional well-being. Organizations like Common Sense Media are calling for age-based restrictions and warning that bots built to mirror empathy are entering the market without much in the way of real oversight. The FTC may have taken the first step, but momentum is growing far beyond that.
This probably won’t be the last FTC investigation, and it won’t be the last warning either. AI is not going away; in fact, it is being embedded ever more deeply into everyday life and government processes alike. The real test now is whether the industry can build and deploy AI responsibly before regulators decide the limits for it.