My take is that 96k MP3 “sux”, but I might tolerate listening to it on my watch in a noisy gym or while running on city streets. If you hear 128k MP3 compared to the original CD, the difference should be night and day, though you might still think it was OK. Somewhere between 192k and 320k the quality difference disappears, and even the phase relationships between the channels are well enough preserved that Dolby Pro Logic decoding still works correctly.
Now there is a huge difference between a good master and a mediocre master and often when people release on a fancy format they start with a better master. For instance a lot of CDs are victims of the loudness war and it isn’t hard to make a better release.
I agree. I believe that even with an immaculate audio consumption setup (fully wired, insulated Abyss headphones, some insane Schiit amp and DAC, a perfectly silent room, letting the listener pick the songs, the full nine yards), maybe 1% of the population could correctly discern a stereo lossless track from a stereo 320kbps encode more than 50% of the time over many songs. Genetics plays a role in getting into this 1%, but the far, far bigger factor is really just experience: it's having listened to the same tracks over, and over, and over again, until you've literally learned where the imperfections in this specific 320kbps encoding of this specific song are, rather than hearing imperfections inherent to the codec itself.
That's the "dirty secret" of lossless audio: ask even someone in this 1% to do the same thing, but with music the experimenter picks and that they've never heard before, and that 1% literally becomes 0%. Maybe there's some gigahuman audio codec engineer employed deep in a basement at Dolby HQ who knows exactly which classes of imperfections AAC encoding imprints into tracks and also has superhuman ears and such... but it's damn close to 0 humans who could do this.
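The "more than 50% of the time over many songs" framing maps onto a standard ABX test, where the statistics are just a binomial calculation. A quick sketch of why many trials are needed before a score means anything (the 16-trial / 12-correct threshold here is a common ABX convention, not something from the comments above):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring k or more
    correct answers out of n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# In a 16-trial ABX test, a pure guesser scores 12+ correct about 3.8% of
# the time, so 12/16 is a typical cutoff for claiming an audible difference.
print(round(p_at_least(12, 16), 4))  # 0.0384
```

Scoring 9/16 (just over 50%) happens by chance roughly 40% of the time, which is why "slightly better than a coin flip" over a handful of songs demonstrates nothing.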
What this says about why anyone cares that AM, Tidal, Qobuz, whatever offer lossless audio, or that Spotify doesn't, is certainly interesting. It does seem like a tremendous waste of bandwidth and local storage, to me. But, I totally and fully subscribe to the message of the original article: It matters to me that this quality is maintained, that it exists for those people who do care about it, and that what I am consuming is as close as possible to what the original artists intended (rather than letting a bunch of cooks into the kitchen with opinions about which hertz are more important than others).
I believe you are conflating kbit/s with kHz; the latter is the unit of the sample rate. The thing is, no human ear hears tones higher than about 17kHz, and Nyquist and Shannon taught us that a sample rate of double that is sufficient to reproduce any tone at or below it.
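The Nyquist-Shannon point is easy to demonstrate numerically: a 17kHz tone sampled at 44.1kHz (above the 34kHz Nyquist rate) is recovered exactly, while sampling the same tone below the Nyquist rate folds it down to a false frequency. A minimal sketch using NumPy (the specific rates here are just illustrative):

```python
import numpy as np

def dominant_freq(f_tone, f_sample):
    """Sample a pure tone for 1 second and return the strongest
    frequency bin in Hz (with 1 s of data, bin k is exactly k Hz)."""
    n = int(f_sample)
    t = np.arange(n) / f_sample
    x = np.sin(2 * np.pi * f_tone * t)
    spectrum = np.abs(np.fft.rfft(x))
    return int(np.argmax(spectrum))

# 17 kHz tone at a 44.1 kHz sample rate: recovered exactly.
print(dominant_freq(17_000, 44_100))  # 17000

# Same tone at a 30 kHz sample rate (below the 34 kHz Nyquist rate):
# it aliases down to 30000 - 17000 = 13000 Hz.
print(dominant_freq(17_000, 30_000))  # 13000
```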
I am using k in this particular message to mean kbps, but I think it does describe the menu of sound-quality options that people will hear.
I know people can't hear sine waves above 17kHz, but there are open questions about transient response, and particularly about how accurately you would need to represent phase to replicate how well people can locate the direction of sounds in the real world. (Notably, no "surround sound" technology of any kind would let a blind man with a gun shoot as accurately as he can in the real world.)
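To put rough numbers on the localization point: direction-finding in the horizontal plane relies partly on the interaural time difference (ITD), and the just-noticeable ITD is on the order of 10µs, which is smaller than one sample period at 44.1kHz. A back-of-the-envelope sketch using the Woodworth spherical-head approximation (the head radius and speed of sound are textbook values, not from the comments above):

```python
from math import sin, radians

HEAD_RADIUS_M = 0.0875   # half of a ~17.5 cm head width, a common textbook value
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    """Woodworth estimate of interaural time difference for a distant
    source: ITD ~= (r/c) * (sin(theta) + theta)."""
    theta = radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (sin(theta) + theta)

one_sample = 1 / 44_100  # ~22.7 microseconds per sample at CD rate

# A 1-degree shift in source direction changes the ITD by only ~9 us,
# well under one sample period, so localization accuracy depends on
# preserving sub-sample phase relationships between the channels.
print(f"ITD at 1 degree azimuth: {itd_seconds(1) * 1e6:.1f} us")
print(f"One sample at 44.1 kHz:  {one_sample * 1e6:.1f} us")
```

This is why the phase relationship between channels, rather than just the frequency content of each channel, matters for spatial accuracy.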