Since there is no transparency around the metric behind these limits, they could easily change it and it would be hard for anyone to tell. I'm thinking about canceling my sub because there's no clear way for me to understand this metric, but I wanted to see what others thought about this.
What you are experiencing is called bait and switch... we should all be building on local, on-site models.
They want inconsistency so that you end up buying more usage. We are like 6mo-1y away from just running these models (looking at you, Kimi) on a Mac Studio and not having to pay another company that thinks it's building the machine god. Anthropic and co have less of a moat than you think.
It's a good opportunity for people to try Kimi and others, and soon we'll have an agentic harness similar to Claude Code as it gets rewritten in Rust... I guess let's look elsewhere?
My guess is that Anthropic is focusing on enterprise as it gets ready for an IPO, leaving solo devs behind.
I find it very unlikely that this will change unless they get more datacenters and capacity to handle the surge in demand they've seen recently.