In a significant policy update, AI company Anthropic has announced new weekly usage limits for its Claude AI service. This move comes as part of the company’s broader effort to reduce misuse, including continuous background use of its coding assistant and unauthorized account sharing among users.
Starting August 28, all users on paid Claude AI plans, including the $20/month Pro tier and the premium $100/month and $200/month Max plans, will see new weekly caps introduced. These include limits on overall usage time as well as specific caps for Claude Opus 4, Anthropic's most powerful model.
The company made the announcement through direct emails to users and posts on social media. According to Anthropic, some users have been running Claude Code, its AI-powered coding assistant, round the clock, which has led to strain on system resources and several outages in recent weeks.
“Claude Code has experienced unprecedented demand since launch,” said Amie Rotherham, a spokesperson for Anthropic, in an email to TechCrunch. She added that “most users won’t notice a difference,” since the new limits are expected to affect fewer than five percent of subscribers.
While the current rolling usage limit (which resets every five hours) will remain unchanged, the weekly cap aims to manage demand and ensure a more stable experience for all users. Notably, users subscribed to Max plans will now have the option to purchase additional access after exhausting their weekly quota, with standard API pricing in place for these overages.
The revised usage breakdown is as follows:
♦ Pro Plan ($20/month): 40 to 80 hours of Claude Code usage with the Sonnet 4 model per week.
♦ Max Plan ($100/month): 140 to 280 hours of Sonnet 4 and 15 to 35 hours of Opus 4 per week.
♦ Max Plan ($200/month): 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4 per week.
Anthropic has not detailed how these usage hours are calculated, whether by time spent using the tool, tokens consumed, or computational load. The figures also suggest that the real gap between the $20 and $200 tiers may be closer to six times than the previously advertised 20 times: the $200 Max plan's 240 to 480 hours of Sonnet 4 is exactly six times the Pro plan's 40 to 80 hours (240 ÷ 40 = 6; 480 ÷ 80 = 6).
This decision mirrors a growing trend among AI platforms seeking to discourage policy violations and excessive usage. Earlier this year, Anysphere, the company behind the Cursor IDE, and Replit made similar changes to their usage policies, both of which drew mixed reactions from their user communities.
Anthropic’s changes appear to be pre-emptive and focused on preserving the quality and reliability of Claude AI, especially its in-demand Claude Code feature. By tightening access and discouraging non-compliant behaviour such as account sharing and reselling, the company hopes to strike a balance between accessibility and fair resource distribution.