
It depends on what kind of curve the volume control uses, but assuming 50% volume is half the amplitude (-6 dB), you will only lose one bit of resolution. To lose half the bit depth you'd have to turn it down by 48 dB!

It's worth noting that, perceptually, half the volume is actually closer to -3 dB (a halving of energy), which is only half a bit of loss.

If a floating point audio pipeline correctly dithers the signal going into the DAC, it's unlikely anybody will notice any quality loss from a digital volume control (even at 16-bit). You might hear the hiss of the dither if you turn up the analogue portion of the chain, although you'd have to turn it up quite a lot.
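The dB-to-bits arithmetic above is easy to check with a small sketch (plain Python; the helper name is mine): each bit of PCM resolution is worth 20·log10(2) ≈ 6.02 dB of dynamic range.

```python
import math

def bits_lost(attenuation_db: float) -> float:
    """Bits of resolution sacrificed by a digital attenuation.

    One bit of PCM resolution corresponds to 20*log10(2) ~= 6.02 dB,
    so dividing the attenuation by that gives the (fractional)
    number of bits lost."""
    return attenuation_db / (20 * math.log10(2))

print(bits_lost(6.0))   # half the amplitude: ~1 bit
print(bits_lost(3.0))   # half the energy: ~0.5 bit
print(bits_lost(48.0))  # ~8 bits, i.e. half of a 16-bit word
```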



No. Volume sliders do not use a linear scale. A linear scale is all but useless for volume. Most volume sliders use a logarithmic scale, where 50% basically has no meaning except in relation to the minimum value chosen by the slider.

This is perfectly sensible, since our sense of hearing does not scale linearly from silent to loud. Our ears have a dynamic range of about 120 dB. On a linear scale, the 50% value would correspond to about -6 dB, which perceptually covers only about one twentieth of the full audible scale.

A sensible volume slider (on a PC) would span about 40-60 dB, since anything below -60 dB will be lost in background noise anyway. The 50% mark would then land somewhere around -20 to -30 dB, so this 50% setting would lose roughly 5 bits of information, not one.
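A minimal sketch of such a logarithmic slider, assuming a -60 dB floor (the constant and function names are mine): the position is interpolated linearly in the dB domain, then converted to a linear gain factor.

```python
MIN_DB = -60.0  # assumed slider floor; below this the signal drowns in noise

def slider_to_db(pos: float) -> float:
    """Map a slider position in [0, 1] to attenuation in dB,
    interpolating linearly in the logarithmic (dB) domain."""
    return MIN_DB * (1.0 - pos)

def slider_to_gain(pos: float) -> float:
    """Linear gain factor to multiply samples by."""
    return 10.0 ** (slider_to_db(pos) / 20.0)

print(slider_to_db(0.5))    # -30.0 dB, i.e. roughly 5 bits of resolution
print(slider_to_gain(0.5))  # ~0.0316, about 1/32 of full scale
```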

Note however that a reduced dynamic range at "half loudness" is usually just fine, since the full dynamic range can only be heard at high volume anyway.

(Also note that the bottom value of volume sliders usually mutes. Analogue equipment sometimes does not do that, which results in very faint signal playing even when turned all the way down.)

That said, the whole argument about losing resolution probably does not make sense anyway since the operating system volume sliders attenuate the sound hardware DAC gain instead of actually decreasing digital gain...


That said, the whole argument about losing resolution probably does not make sense anyway since the operating system volume sliders attenuate the sound hardware DAC gain instead of actually decreasing digital gain...

We were talking about reducing volume in the app. If the app is using the system volume, then the app->CoreAudio step is irrelevant (as nothing changes), and the CoreAudio->DAC step doesn't change either (full bit depth plus an out-of-band message to lower the gain), but the DAC->speakers analog step still has to output half the volume and thus reduce the range of the signal. For this not to matter, the OS would need to be able to change the gain in the speakers instead of the DAC.

Again, how much of an audible effect this actually has on quality beats me...


Well, kinda. The DAC would convert the signal at full power, then the pre-amp stage would boost it according to your OS loudness setting in the analogue domain. This is possible, but I don't know if sound cards are actually implemented that way.

I know professional mixing consoles are not (at least not exclusively), but they offset that by calculating everything in 32-bit float and using very high bit-depth DACs. Sound cards do have a pre-amp stage, but I don't know if it is software-controlled.


Well, kinda. The DAC would convert the signal at full power, then the pre-amp stage would boost it according to your OS loudness setting in the analogue domain. This is possible, but I don't know if sound cards are actually implemented that way.

That's fine, but what I was referring to was that after the digital-to-analog conversion and the pre-amp, the analog signal that goes out of the audio jack to the speakers has been reduced in range, so on that final path to the speakers you've lost some range.


No, analogue signals do not have a limited range. Analogue dynamic range is only limited by the noise floor of the cable and the resolution of the DACs/ADCs. The DACs/ADCs will probably do 24 bits.

Audio recordings typically do not go beyond 16 bits of dynamic range after mastering. And even before that, microphones can't deliver much more than 20 bits. Neither can our ears. So that part of the system is unlikely to be a problem.

Reducing the signal gain in the analogue domain does not decrease its dynamic range, it merely shifts it to a lower range of the same width. Of course, this will only be true in the operating range of the op-amps, but that is typically not a limiting factor.
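The "shift, not shrink" point can be illustrated with a toy model (levels in dB; the helper name is mine): an analogue attenuator moves both ends of the signal's level range down by the same amount, so the width of the range is preserved.

```python
def attenuate(range_db, att_db):
    """Model an analogue attenuator: both the signal's noise floor and
    its peak level drop by the same number of dB, so the width of the
    range (the dynamic range of the content) is unchanged."""
    floor, peak = range_db
    return (floor - att_db, peak - att_db)

sig = (-96.0, 0.0)           # 96 dB of content, e.g. a 16-bit source
out = attenuate(sig, 30.0)
print(out)                   # (-126.0, -30.0): still 96 dB wide
print(out[1] - out[0])       # 96.0
```

Of course, the shifted floor may now sit below the noise floor of the next stage, which is where range is effectively lost in practice.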

After the DACs, the signal will likely go through another pre-amp, then main amp in the sound system, then some analogue filters, then loudspeakers. All these are analogue and not usually limiting the dynamic range (though they will add some distortion). Finally, the signal will enter a room with noise aplenty, which will limit the effective dynamic range of the signal significantly. But that is out of control of that volume slider we talked about in the beginning ;-)


No, analogue signals do not have a limited range. Analogue dynamic range is only limited by the noise floor of the cable and the resolution of the DACs/ADCs. The DACs/ADCs will probably do 24 bits.

Of course analog signals have an effective limited range. As you yourself mention the noise floor makes sure of that. Only an idealized analog signal of infinite precision doesn't have a limited range.

After the DACs, the signal will likely go through another pre-amp, then main amp in the sound system, then some analogue filters, then loudspeakers. All these are analogue and not usually limiting the dynamic range (though they will add some distortion).

Precisely. So that's why keeping the signal at as high a level as possible without clipping all the way through the pipeline and only limiting at the end is an advantage. All those stages have their own noise floor. Several other people have mentioned on the thread that this is also the general recommendation for audio work.
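That gain-staging recommendation can be shown numerically with a toy model, assuming (arbitrarily) three analogue stages, each adding uncorrelated noise at a -100 dB floor; all names and numbers here are mine.

```python
import math

STAGE_NOISE_DB = -100.0  # assumed noise floor of each analogue stage
STAGES = 3               # e.g. pre-amp, main amp, filters (arbitrary count)

def db_to_lin(db: float) -> float:
    return 10.0 ** (db / 20.0)

def chain_snr_db(att_db: float, attenuate_first: bool) -> float:
    """Toy model of gain staging: every stage adds uncorrelated noise
    at a fixed floor (noise powers add). Attenuating before the noisy
    stages drops the signal towards each stage's noise; attenuating at
    the very end cuts the accumulated noise along with the signal."""
    sig = 1.0  # full-scale signal (0 dB)
    if attenuate_first:
        sig *= db_to_lin(-att_db)
    noise_pow = STAGES * db_to_lin(STAGE_NOISE_DB) ** 2
    if not attenuate_first:
        sig *= db_to_lin(-att_db)
        noise_pow *= db_to_lin(-att_db) ** 2
    return 20.0 * math.log10(sig / math.sqrt(noise_pow))

print(round(chain_snr_db(30.0, attenuate_first=True), 1))   # ~65.2 dB SNR
print(round(chain_snr_db(30.0, attenuate_first=False), 1))  # ~95.2 dB, 30 dB better
```

In this model, attenuating at the end preserves exactly the 30 dB of SNR that attenuating early gives away to the stages' noise floors.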

Finally, the signal will enter a room with noise aplenty, which will limit the effective dynamic range of the signal significantly. But that is out of control of that volume slider we talked about in the beginning ;-)

That's of course true, and again I admit my ignorance as to how much of a difference this really makes once it gets where it matters, your ears. Originally I was just responding to the idea that using floats in your audio framework eliminated all sources of reduction in precision.


Actually, it seems many software volume sliders do use a linear scale [1]. Although you're right that an ideal volume control should be logarithmic.

My main point still stands: at half the amplitude or half the energy (perceptually half the volume) you're only losing half a bit to a bit of resolution. And even at -20 to -30 dB, with 4-5 bits of resolution loss, you're probably not going to notice the degradation.

[1](http://www.dr-lex.be/info-stuff/volumecontrols.html)



