Personally, working with timeseries data, my experience is that clients typically underestimate how much storage they need for a single state, let alone for historic versions. The decision-makers want more data, not more snapshots, for a given storage spend. But that's timeseries.
They want more data but they don't want to pay for it. People want lots of things; that doesn't mean they get it or deserve it.
I can't recall which it was, but one of the timeseries databases was bragging about the fact that there are certain situations where scanning a block of data is as cheap as maintaining finer-grained indexes on it, especially for ad hoc queries. They ran a bunch of benchmarks arguing that block scanning with compression and parallelism was workable.
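Roughly the idea, sketched in Python (illustrative only, not any particular database's implementation, and the names here are made up): chop the data into compressed blocks and answer an ad hoc query by decompressing and scanning blocks in parallel, instead of maintaining a fine-grained index.

    import json
    import zlib
    from concurrent.futures import ProcessPoolExecutor

    def make_block(rows):
        # rows: list of (timestamp, value) pairs, serialized then compressed
        return zlib.compress(json.dumps(rows).encode())

    def scan_block(args):
        block, lo, hi = args
        rows = json.loads(zlib.decompress(block))
        # ad hoc predicate applied during the scan; no index involved
        return [r for r in rows if lo <= r[1] <= hi]

    def parallel_scan(blocks, lo, hi):
        # fan the blocks out across worker processes and concatenate the hits
        with ProcessPoolExecutor() as pool:
            results = pool.map(scan_block, [(b, lo, hi) for b in blocks])
        return [r for chunk in results for r in chunk]

    if __name__ == "__main__":
        blocks = [make_block([(t, t % 97) for t in range(i, i + 10_000)])
                  for i in range(0, 100_000, 10_000)]
        print(len(parallel_scan(blocks, 10, 20)))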
And while compression typically leads to write amplification (or very poor compression ratios) when compressed data is modified in place, in a timeseries database, or a regular database architected in a timeseries-like fashion, modifying old data is deeply frowned upon. In fact, I've heard people argue for quasi-timeseries behavior precisely because modifying old records is so punishing, especially as the application scales. So as long as you can decide not to compress some pages, namely the new ones, this is not a problem.
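Something like this is what I mean (a toy sketch; the class names and page size are made up): the active page stays uncompressed and append-only, and a page only gets compressed once it is sealed and treated as immutable, so the compression work never has to be redone.

    import json
    import zlib

    PAGE_CAPACITY = 4096  # rows per page; arbitrary, for illustration

    class Page:
        def __init__(self):
            self.rows = []          # uncompressed, append-only while active
            self.compressed = None  # set once the page is sealed

        def append(self, row):
            assert self.compressed is None, "sealed pages are immutable"
            self.rows.append(row)

        def seal(self):
            # compress once, after writes have stopped: no write amplification
            self.compressed = zlib.compress(json.dumps(self.rows).encode())
            self.rows = []

    class Table:
        def __init__(self):
            self.pages = [Page()]

        def insert(self, row):
            if len(self.pages[-1].rows) >= PAGE_CAPACITY:
                self.pages[-1].seal()      # old data becomes read-only
                self.pages.append(Page())  # new writes go to a fresh, uncompressed page
            self.pages[-1].append(row)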