News

Anthropic Partners with SpaceX for 220,000-GPU Colossus 1 Access to Boost Claude Capacity and Address User Limits

AI leader Anthropic has announced a partnership with Elon Musk's SpaceX, securing access to the compute capacity of SpaceX's Colossus 1 data center. Located in Memphis, Tennessee, Colossus 1 houses over 220,000 Nvidia GPUs, including H100, H200, and next-generation GB200 accelerators. SpaceX describes it as "one of the world’s largest and fastest-deployed AI supercomputers," designed for large-scale AI training, fine-tuning, inference, and high-performance computing workloads.

The primary goal of this collaboration is to address persistent user complaints regarding Claude's rapid usage limit exhaustion. Anthropic stated in its announcement blog post that accessing over 300 megawatts of compute capacity through Colossus 1 will be used to "directly improve capacity for Claude Pro and Claude Max subscribers."

Specifically, Anthropic is implementing several key changes:

  • Claude Code's five-hour rate limits will be doubled for Pro, Max, Team, and seat-based Enterprise plans. Additionally, the peak-hour limit reduction for Pro and Max users will be removed.
  • API rate limits for Claude Opus models are being raised substantially. For Tier 1 users, for instance, the maximum input tokens per minute will jump from 30,000 to 500,000, and the maximum output tokens per minute will increase from 8,000 to 80,000.
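To make the Tier 1 numbers concrete, here is a minimal client-side sketch of a token-bucket budget paced against a tokens-per-minute quota. The `TokenBucket` class is a hypothetical illustration for this article, not part of any Anthropic SDK; the two quota values are the old and new Tier 1 input-token limits cited above.

```python
import time


class TokenBucket:
    """Hypothetical client-side pacing helper for a tokens-per-minute quota."""

    def __init__(self, tokens_per_minute: int):
        self.capacity = tokens_per_minute
        self.available = float(tokens_per_minute)
        self.refill_rate = tokens_per_minute / 60.0  # tokens regained per second
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        # Top the budget back up in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.available = min(
            self.capacity,
            self.available + (now - self.last_refill) * self.refill_rate,
        )
        self.last_refill = now

    def try_acquire(self, tokens: int) -> bool:
        """Consume `tokens` from the budget if available; otherwise return False."""
        self._refill()
        if tokens <= self.available:
            self.available -= tokens
            return True
        return False


# Old Tier 1 Opus input limit (30,000 tokens/min) vs. the new one (500,000).
old_bucket = TokenBucket(30_000)
new_bucket = TokenBucket(500_000)

prompt_tokens = 40_000  # one large prompt
print(old_bucket.try_acquire(prompt_tokens))  # exceeds the old per-minute budget
print(new_bucket.try_acquire(prompt_tokens))  # fits comfortably in the new one
```

The same pattern generalizes to the output-token limit: a second bucket sized at 8,000 (old) or 80,000 (new) tokens per minute would gate completions rather than prompts.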

Elmer Morales, founder of koderAI, highlighted the impact on developer workflows: "The shift changes workflows from cautious prompt budgeting to deeper reasoning, bigger tasks, and more complete engineering output." Andy Pernsteiner, Field CTO at VAST Data, echoed this sentiment, suggesting the deal will enable developers "to use Claude Code to build richer applications and more advanced agents," freeing them from the need to "meticulously maintain context and reduce MCP use," which were previously workflow bottlenecks.

This agreement with SpaceX follows numerous complaints from Claude Code users who reported hitting usage limits much faster than anticipated; one Redditor, for example, claimed a single prompt consumed 10% of their limit, far exceeding the expected 0.5–1%. Anthropic's blog post also mentioned that it "trains and runs Claude on a range of AI hardware," including AWS Trainium, Google TPUs, and Nvidia, and continuously "explores opportunities to bring additional capacity online," positioning this SpaceX partnership as part of its ongoing compute capacity expansion efforts.
