But they're claiming it's more token efficient, so me switching my usage to the new model should _free up_ capacity.