Hacker News
Is it better to turn up the volume in the software or on the speakers? (superuser.com)
276 points by ivoflipse on Oct 26, 2012 | 95 comments


I was surprised by the best answer as it seems to me to be more or less wrong.

Let's ignore the discussion about dynamic range and bit depth etc., and assume that the volume control on your operating system controls the DAC rather than doing the stupid thing of digital volume reduction. The fundamental issue is signal-to-noise ratio on the analog line. If you turn the volume too far down on the computer and turn the volume up on your speakers, the sound on the analog line is too low relative to the electrical noise and will be hissy. If you turn the volume up too much on the computer and turn the volume down on your speakers, then the signal will be so loud as to produce distortion either in the DAC or on the line itself. You're looking for a middle ground: as loud an output from the computer as you can produce without causing distortion in the loudest parts of your music. Once you've got that set, change the volume on the speakers to compensate.
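
To make the SNR tradeoff concrete, here is a minimal numeric sketch (the values are made-up assumptions: a fixed noise level on the analog line and an ideal, noiseless amplifier):

    import math

    def db(ratio):
        return 20 * math.log10(ratio)

    line_noise = 0.001   # fixed electrical noise on the analog line (arbitrary units)
    full_out = 1.0       # computer output near max, just below distortion

    # Option A: loud computer output, speakers attenuate to taste
    snr_loud_source = db(full_out / line_noise)       # noise is attenuated along with the signal

    # Option B: computer turned down 20 dB, speakers boosted 20 dB to match loudness
    quiet_out = full_out * 10 ** (-20 / 20)
    snr_quiet_source = db(quiet_out / line_noise)     # line noise is not reduced, and gets boosted too

    print(f"loud source, quiet amp: {snr_loud_source:.0f} dB SNR")   # ~60 dB
    print(f"quiet source, loud amp: {snr_quiet_source:.0f} dB SNR")  # ~40 dB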


A little late to the party here, but you're wrong. Unless something is being done to the signal (EQ, other processing, etc.) then you want the digital side to remain at max volume (i.e., no reduction in bit depth). The audio will have been mastered close to 0 dB with little or no clipping (exceeding the maximum amplitude possible) and should not be "too loud" for your amplifier. If it is, there is something wrong with the gain configuration of your setup.

As discussed, lowering the volume in software (which is what you likely mean by "controlling the DAC") is accomplished by reducing bit depth. By definition you're reducing the amount of signal present which makes your S/N worse. If by "controlling the DAC" you mean reducing the signal at the DAC's analog stage then there's nothing theoretically wrong with that (except that it's rarely if ever the case in computer sound cards or hi-fi DACs) as it is the equivalent to adjusting the volume on your (integrated) amp.

Probably the most common exception to the above is people using a digital EQ to attempt to improve the sound of a mediocre computer audio setup. Most digital EQs allow for both "boost" and "cut" -- increasing or decreasing the amplitude of individual frequency bands. If you EQ the music the proper way you should only cut frequency bands in order to emphasize the other bands to suit your listening tastes. For example if you want to try to increase the amount of bass you should cut the mids and the highs and turn up the volume (on your amp) to compensate. EQing this way ensures that you're not forcing the signal into clipping by boosting too much.

You still won't get great results because if you're lacking bass or highs then it's most likely because you're not running full range speakers. If the sound does change when you EQ it's most likely due to increased distortion, not because you're appreciably increasing the amount of bass present. That's not true for full range speakers, but if you have full range speakers you probably wouldn't need to EQ in the first place. (Full range speakers usually cover roughly 20-50 Hz up to 20 kHz.)

If it's still not clear then think of it this way: Standalone CD players don't have volume controls. Their output will be "max volume" of what's on the CD. You can think of your computer and sound card like a (crappy) CD player. Make sure it's at max volume and adjust the level on your external amp or amplified speakers.


'As discussed, lowering the volume in software ... is accomplished by reducing bit depth.'

No, reducing the amplitude of a digital signal does not change its bit depth. If the amplitude of the digital signal is reduced, then a couple of bits on the MSB side will be constant. However, reducing the bit depth would make bits on the LSB side constant.

To state it differently, a lower bit depth means fewer quantization levels (a coarser quantization step) and results in lower quality audio. Reducing the amplitude retains the same quantization step, and therefore causes no reduction in audio quality. (At least, not due to bit depth; it does change the SNR as you suggested.)
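
A toy sketch of the distinction, using a made-up 16-bit sample value: halving the amplitude leaves the MSBs unused, while truncating the bit depth zeroes the LSBs:

    sample = 0b0110_1011_0101_1101        # arbitrary 16-bit sample (27485)

    halved = sample >> 1                  # amplitude cut by ~6 dB: MSB side goes unused
    print(f"{halved:016b}")               # 0011010110101110

    truncated = sample & 0xFF00           # bit depth cut to 8 bits: LSB side forced to zero
    print(f"{truncated:016b}")            # 0110101100000000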


You're correct that lowering amplitude isn't the same as reducing bit depth. I'm not sure that lowering amplitude digitally keeps S/N constant, but I'm a bit out of my depth (excuse the pun) at this point.

To further muddy the waters, in most cases using software volume control won't noticeably reduce audio quality. The main point I was trying to make is that on most computer setups 100% volume should be the least noisy, most accurate signal that it's capable of.


> assume that the volume control on your operating system controls the DAC rather than doing the stupid thing of digital volume reduction

Is this a safe assumption though? As much as I consider myself knowledgeable in this area, this is way off my radar.

Even if this assumption does hold, the answer does make a good point when it comes to per-application volume controls, which are certainly performed digitally.

Your point on SNR is a very good one, particularly on the low-end, but I would question whether any computer at max volume would produce a signal strong enough to cause distortion. I generally run my computer at around 90% volume but that's merely a habit carried over from having an analog hifi stack.


Guy with some sound knowledge here. It doesn't matter if the attenuation is done digitally (as described by the top SE comment) or in analog fashion. The answer is always the same. In the music biz the general rule is: you want to put as much gain as early in the chain as possible. In other words, start at the source of the sound, and working your way towards the speakers, turn it up as loud as you can without distorting it.

Take the example of a guitarist. He turns his guitar up as high as he can, then the amp gain as high as he can without distorting it. Then at the sound console, and so on.

The reason for this is that the later in the chain you are, the more electronics and cable are involved, which adds noise to your signal. The highest signal-to-noise ratio is achieved by ensuring you have as much signal as you can running through from the get-go.

In the case of a computer, again it doesn't matter how volume changes are done- crank up your software as high as you need (but not beyond the point of distortion), then adjust it further at the speakers.


> start at the source of the sound, and working your way towards the speakers, turn it up as loud as you can without distorting it

In practice that will often be right. In theory it is not.

With an analog signal what you actually want to do is avoid gain, whenever possible. You don't want to add as much gain at every step and then attenuate later. Gain adds noise and removes dynamic headroom.

So, assuming a guitar with passive electronics, you want to turn it all the way up (which adds no gain, passive electronics only attenuate a signal) and put everything else at unity gain (i.e. no attenuation and no additional gain). That will be labeled "0" on mixer faders and is about 80% of the way up on the fader.

We'll ignore guitar amplifiers for the moment, where the distortion that happens when you run out of headroom (clipping, i.e. "fuzz") and the sonic properties of vacuum tubes when they're near the top of their dynamic range (i.e. "tube warmth") are desired effects.

Now, assuming you have a signal that you want clean all the way to the output, you've got things at unity all the way across the board, and it's not loud enough, you start looking at where to add gain.

This is where we get to the part about you being in practice correct for many cases.

What we're looking to avoid is adding noise. Cabling exposes signals to RF interference and attenuates the signal, since cables aren't perfect carriers. The more cables and interconnects down the road from the source we are, the more noise will have been introduced. Since we don't want to amplify the noise, it's usually best to amplify the signal at the point closest to the source since less noise will have leaked in. If you amplify a signal four connections down your signal path, there will be a lot of noise that you're amplifying along with your source's signal.

However, not all (pre-)amplifiers are created equal. Some, in fact, are pretty noisy themselves. There are certainly points where with a noisy pre-amp, you'll degrade the signal more by amplifying it in an earlier-in-your-signal-path pre-amp than you would boosting it at a very clean pre-amplifier one step down the chain. One of the big differences between cheapo mixers and high-end mixers is the quality of their first-in-the-signal-path pre-amps. A gain-adding stomp box is almost certainly going to be noisier than a Mackie pre-amp.

Now, since this started off talking about the sound coming out of consumer grade computer components (rather than a nice break-out box), it's worth noting that their analog amplifiers are pretty universally terrible. If you have a reasonably short and well-shielded cable, you'll generally be better off leaving it at "unity" (i.e. not boosting the pure signal that's coming out of the digital-to-analog converter), but adding it at the next step in your chain -- the mixer, receiver or powered monitors.

Now, to finish up, I'll circle back around to my first point -- since boosting the signal itself adds noise, if you're having to attenuate it later on, you're adding unnecessary noise. Your strategy of turning things up all the way until they clip is not a good one if you're having to then attenuate the signal later on for it to be at the correct levels for the mix. Ideally what you want is to have all of your faders at unity (again, labeled 0) and then to add gain at the quietest pre-amp available (usually early in the chain) until you hit the loudest volume that the channel will need to be in the mix. Fortunately, the folks designing mixers know that sometimes the situation on the ground is different in a live sound context and you'll actually need to go beyond unity, which is why mixer faders don't stop at unity, but allow you to boost the signal on an as-needed basis.

Edit: Minor addendum -- this assumes that you're not sending weak signals down long, unbalanced cables. If you are, that changes the calculus a little since they'll pick up a lot of RF on the way, and you hack around that by boosting before the cable and attenuating afterwards. But don't do that. That's what DI units are for.


It's not that simple. I've found in my rig, minimum noise is achieved with the guitar volume at 7 or so and preamp gain at minimum. If I turn up the volume on the guitar I can switch on the pad on the preamp, but this adds noise. I then add tons of distortion and gain in the software realm, so every dB of noise floor counts. Even clean guitar sounds are usually distorted.

The sound console is different. You want everything in the console to be trimmed to around 0 dB on the console's meters, and NOT as loud as possible without distorting. Trimming it to 0 dB gives you consistency between channels making it easier to work with the faders, and gives you however much headroom the console is designed to work with. "Loud as possible" will mean somewhere like +24 dB on some consoles, which is too hot to work with. Yes, they have that much headroom -- nominal line level is +4 dBu, around 1.2 V, and consoles have internal voltages in the 15-24V range.

The same goes for recording, you typically want things to peak at something like -18 dBFS. Making things peak "loud as you can without distorting" is the job of the mastering engineer, and it happens right before you stamp out CDs or MP3s or whatever.


> Take the example of a guitarist. He turns his guitar up as high as he can, then the amp gain as high without distorting it. Then at the sound console. etc

Without distorting it? Are you sure you have some sound knowledge? That doesn't sound like you know many guitarists... ;)

qv. https://www.youtube.com/watch?v=4gDsbOraiqg


I've worked in Austin as a live sound mixer (FOH engineer, as we say) for a while now, and opinion is evenly split: some will say to build your gain structure using exactly that method, turning up until you clip and then backing off from there.


How do you determine the highest possible volume that doesn't introduce distortion? My sense in trying to find the right volume ratio for listening to an iPhone in my car is that it's best to not turn the iPhone volume all the way up, usually I use 75% or so. Is turning it up until you start to hear something funny the only way to do this or is there a more reliable way?


In pro audio there's usually a meter you can watch, or at least a clip LED that turns red during clipping. Otherwise you use your ears, or if you're really serious, you can use an oscilloscope with test tones and watch for the shape of the waveform to change, or beyond that, you can use dedicated test gear (or software) that measures THD directly.


> Is this a safe assumption though?

No it's not -- I was just pointing out that he's going on about various stuff without hitting the critical meat of the issue at all.


It would be easy enough to test -- download Right Mark Audio (free, but Windows only), connect your audio input to output with a patch cable, and test at different volume levels.


I think you're absolutely right, and it's not the first time, and I'm sure it will not be the last, that a very verbose post (read: well structured, with references and technical terms) gets much more credit than it deserves.

It would be nice to have some statistics about the number of lines of a reply on SO or HN and the amount of 'upvotes'. I think a lot of people would be surprised...


> assume that the volume control on your operating system controls the DAC rather than doing the stupid thing of digital volume reduction.

What would be the difference? I've never seen a DAC with an analog volume control. Any DAC I've ever seen with volume control does so digitally, usually with togglable steps (+6 dB, +12 dB, etc.) or rarely with a programmable gain. Either way the DAC is adjusting the incoming digital signal prior to conversion. It would be really strange to try to make an analog adjustment to the output before the reconstruction filter.


First hit for an AC97 codec: the AC '97 SoundMAX® codec AD1981B has an analog mixer stage between the ADC/DAC and the analog inputs/outputs.

http://www.xilinx.com/products/boards/ml505/datasheets/87560...


That is a 10-year-old codec, not a DAC; a DAC is only one part of a codec.


Computers generally tend to have full-blown codec chips with analog mixers in them rather than standalone DACs, though.


Depends on the model; the Cirrus codecs used in almost all Macs do all their mixing digitally, for example.


I've seen a lot of comments that seem to confuse digital and analog audio.

Digital audio has a fixed maximum level, 0 dBFS (dB relative to digital "full scale"), thus you want the audio as close to full scale (0 dBFS) as possible. This is the reasoning behind what you've probably heard described as the "loudness war". 0 dBFS is the reference point that no signal can exceed.

Contrast that with analog audio, be it the DAC (digital-to-analog converter) on your soundcard that splits to a "line out" (likely a 3.5mm a/k/a 1/8" TRS connector outputting audio at -10 dBV, the "consumer" level) and "headphone jack" (i.e. line level signal -> headphone amplifier). Once the audio signal becomes analog, you must then be concerned about gain staging. If you have a crappy sound card, you shouldn't reduce the analog output, because, in order to achieve the same loudness, you will have to increase the gain on your 'speakers' (a/k/a "pre-amplifier"; often integrated into cheap 'all in one' computer audio playback systems). Remember that the noise floor remains the same regardless of gain settings, so by reducing the "line out" level you are only pushing the signal down toward the noise floor (i.e. worsening your signal-to-noise ratio).

For all of you geeks out there... If you are looking to cut through the bullshit and get a quality playback system, look into a setup that has a quality DAC and/or monitor controller, a separate preamp, and quality monitors. Benchmark has a great USB DAC, and Lavry Engineering is what the professional mastering studios use (their DAC costs well over $20k, and not because of crap like 'voodoo diamond dust covered gold-plated platinum wires', but, in part, because it has an internal oven and temperature monitoring circuitry to keep the internals at precise temperatures). You can get a great monitor controller from Dangerous Audio or Coleman. Bryston preamps are very popular in the professional setting, as are PMC and Bowers & Wilkins monitors.


Is it really possible to turn the volume up that high on a computer you're likely to encounter? I've yet to encounter a computer-like device that produces any noticeable distortion on the output when turned up all the way, if the other end has a volume control to make the final output happen at a reasonable level.


Lots of computers I've used allow audio gain to be set above unity, with noticeable distortion as a result. Next time you're in alsamixer on Linux, look at the fourth line down from the top, which says "dB gain". It's not uncommon for it to go all the way up to +6 dB or +12 dB, which will almost certainly distort with common source material.

This happens both on my desktop Linux system and my Linux laptop.
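
A small sketch of why above-unity software gain clips, assuming source material that already peaks near full scale (the +6 dB figure is just the example from above):

    import numpy as np

    gain = 10 ** (6 / 20)                          # +6 dB is roughly a 2x amplitude boost
    samples = np.array([0.2, 0.7, 0.99, -0.95])    # normalized samples, already near full scale
    boosted = samples * gain

    clipped = np.clip(boosted, -1.0, 1.0)
    print(np.count_nonzero(boosted != clipped), "of", len(samples), "samples clip")   # 3 of 4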


Most consumer hardware I know will have positive gain available on the computer output (all the built-in audio on my devices, and the PCI cards with the Ensoniq 1370 or 1371 I used earlier). On the other hand, a lot of the professional audio equipment I know has mixer applications with either 0 dB as a maximum setting, or clearly labeled 0 dB marks/buttons in the mixer application.

You can verify it easily by playing either a 0 dB sine-wave MP3 (first Google hit: http://www.dr-lex.be/software/testsounds.html) or by using a sine-wave test-tone generator. You'll hear the point where distortion kicks in pretty drastically as soon as you reach the clipping level. With a spectrum analyzer (e.g. FrequenSee on Android), even slight clipping will be very visible (peaks at 3x, 5x, 7x, ... the base frequency will suddenly appear).
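
A rough sketch of that spectrum-analyzer check, assuming a 1 kHz test tone: hard-clipping a sine adds exactly those odd harmonics (3x, 5x, 7x):

    import numpy as np

    fs, f0 = 48000, 1000                      # assumed sample rate and test-tone frequency
    t = np.arange(fs) / fs                    # exactly one second, so FFT bins are 1 Hz wide
    tone = 1.5 * np.sin(2 * np.pi * f0 * t)   # tone driven past full scale
    clipped = np.clip(tone, -1.0, 1.0)        # hard clipping, like an overdriven output stage

    spectrum = np.abs(np.fft.rfft(clipped)) / len(clipped)
    for n in (1, 2, 3, 4, 5):
        print(f"{n * f0:5d} Hz: {20 * np.log10(spectrum[n * f0] + 1e-12):7.1f} dB")
    # The odd harmonics (3 kHz, 5 kHz) stand well above the noise floor; the even ones do not.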


A DAC is unlikely to generate additional distortion on its own only because its volume is too high, as it is (usually) designed to obtain unity gain corresponding to the maximum level of digital signal at its input. If a 16-bit DAC maxes out its analog output when the input signal is - say - 2^14, then it means that it is effectively operating on a dynamic range of 14 bits, and should be advertised as such. Of course things are not so straightforward, as even the best DACs are not perfectly linear, but they definitely should not clip the output signal.


Let's try to put this discussion on the right track. (There don't seem to be any posts here from people who actually design analog circuits.) The SO post reduces to comparing two versions of noise: one from (A) software control of the DAC and the other (B) from hardware.

(A) Pretend that everything except the DAC was noiseless: The noise would be due to the nonlinearities and quantization in the DAC.

(B) Pretend that the DAC was perfect: The noise would be dominated by the noise-equivalent input-power introduced by the resistance present in the components (including the transistors used for amps).

In short: (A) is a function of how wide a range of bit codes you use. The smaller the range, the larger the noise component relative to the signal.

OTOH: (B) is a function of temperature: All of the noise power before the final dial to your amp is passed through as is the signal, so the ratio stays constant. There is also a constant noise power introduced after that final amp, but I would guess it is negligible compared to the amplified noise power.

So tl;dr = For a decent sound card, maximize the software volume and then use the analog dial.


Garbage in, Garbage out.

Max your software (usually this is 80% to prevent clipping and distortion), then attenuate speakers to 50% (analog boost is much worse than digital as it raises the noise floor).

Source: Mixing at studios for last 10 years


I came in to say the same thing (having been in bands and recorded quite a bit and working with sound engineers).

On a similar note, I learned and practised all this in another country, and I always thought it strange that when I, for example, flipped to American channels, the volume would increase dramatically. I think a similar phenomenon occurs in radio (Loudness Wars) and in malls, etc. Now I wonder if it occurs in American software.


It makes me very glad that Windows (either Vista or 7 and on) has allowed individual-app volume control.


The best answer neglects to address something I've noticed in the past: Many phones and portable media players seem to clip when you set their volume to maximum -- that is to say, what reads as "100%" sounds more like "120%". I haven't measured this effect, and I've never seen it documented anywhere, so I don't know whether or not it's just my imagination -- but I've personally observed it with pretty much every phone I've owned.

On the PC, though, I rarely set my system volume to anything other than 100%.


I've noticed this too. I suspect it is, at least in part, due to the poor quality of the DACs on most phones.

I used to own a Samsung Moment; trying to listen to music on it was almost unbearable due to the quality. It must have been dumping bits left and right during the conversion. I tried 320kbps MP3s, Broadcast WAVs, FLAC -- it didn't matter, everything sounded like it was crushed down to 32kbps.

I'd be interested in seeing a phone dedicated to high quality playback of music. As a somewhat audiophile-ish guy, I would snatch up such a piece of technology.


For some phones at least it is not digital clipping, but the speaker distorting the sound. A simple test would be to attach speakers or headphones. If it sounds bad both on the internal speaker and on external speakers, then, yeah, it's digital clipping. If only on the internal speaker, then it's simply too weak to handle the signal, or the phone chassis starting to resonate or something like that.


In my experience it's often that the output driver gets saturated. For example, if I plug my old iPod into the standard Apple earbuds, which have a very low impedance (~25 ohms at 100 Hz), I can easily push the output stage into clipping. Basically the supply rails can't push enough current and as a result the voltage of the rails sags down low enough to clip the output. On the other hand, if I plug in my Sennheiser HD580 headphones with an impedance around 300 ohms, I can't push the output into clipping (unless the source was clipped to begin with, of course).


Yes, some media players sound louder than others. I haven't verified, but I suspect that they achieve this by purposely increasing the gain and inducing clipping. That sounds dumb, but weak computer speakers can't compensate for low mastering. The proper thing would be to compress the dynamic range, but that's real work compared to just inducing some clipping. Play a song in iTunes at 100% and compare to VLC @ "33%".


The "best" answer seems wrong or at best misleading -- I would be very very surprised if the user-visible OS master volume control, which typically controls the sound card directly, was not directly controlling op-amp gain at some later stage of the sound card.

Assuming this is true, the correct option would be to maximize any application volumes (e.g. YouTube), to raise the master volume to a level just below where the sound clips (distorts) at the amplifier input, and to reduce the amplifier's pre-gain (if it has any) so the master volume control has a reasonable range.

This method will minimize the three (not just one) culprits of poor computer audio quality: quantization at the application layer, electronic interference over the physical connection, and clipping at the pre-amp.


In most sound card designs I've seen there is a 'boost' switch on the output. In other words, there is a toggle that enables a ~10dB boost on the output amplifier (or sometimes the DAC). The volume control is actually digital, but when you hit a particular threshold the external boost is toggled on and the digital volume is adjusted to maintain a smooth level transition. If you keep going 'up' with the volume control you're still changing the volume digitally, but you've now got an extra 10dB of gain on the output, which can drive the output into clipping if the input signal is full range.

EDIT: Just to add. I would be extremely surprised to see a sound card that isn't a full on pro model have true analog gain control. I've designed a couple products with such controls (for pre-amp gain, but a similar concept) and you need digitally controlled resistors to accomplish it. They are large, expensive and touchy to route without inducing noise. As an example: http://www.analog.com/en/digital-to-analog-converters/digita...
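
A hypothetical sketch of the volume logic described above: a fixed analog boost that gets toggled on past a threshold, with the digital level pulled back so the transition stays smooth (the 10 dB figure is just the example value from the comment):

    BOOST_DB = 10.0   # fixed analog boost the output stage can switch in (example value)

    def split_volume(target_db):
        """Map a requested level (dB relative to unity) onto the digital and analog controls."""
        if target_db <= 0:
            return {"analog_boost": False, "digital_db": target_db}
        # Past unity: enable the boost and back off digitally so the level changes smoothly.
        return {"analog_boost": True, "digital_db": target_db - BOOST_DB}

    print(split_volume(-12))   # {'analog_boost': False, 'digital_db': -12}
    print(split_volume(4))     # {'analog_boost': True, 'digital_db': -6}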


Alsamixer for Linux makes this much easier, as it shows the actual level in dB (at least for any sound card I've ever owned). For example, 0dB (maximum volume without clipping) is at ~74/100 on one of my cards, not 100/100.


This is true. This is hardware-specific, which you may have hinted at when you said "on one of my cards". The onboard Intel sound card on my Debian system shows 0dB at 100/100 in alsamixer.


This is spot on - thanks for posting this. You shouldn't be putting your OS volume at 100% - I never put mine past 50% when I'm sending the signal out.


> You shouldn't be putting your OS volume at 100%

Isn't that only true if you're using your sound card's analog output? If you're using a digital output, I would imagine you would want to keep it at 0dB (no gain and no attenuation). Am I right or am I missing something?


Exactly. Digital volume is just smoke and mirrors. Turn it as high as you like - it's not going to distort. Any time you're going through a DA converter and controlling the volume after the conversion (at which point it's an analog manipulation) - which most OS-level volume controllers do - you should generally stay in the 50-75% range depending on the quality of your hardware. Some soundcards might break up at 25%, others might be okay at 100%. I find that 50% is usually safe, and any distortion at 75% is subtle and not offensive.


If you can't tell the difference by listening to it, it doesn't matter.


I was thinking exactly this, but then remembered that this goes for all audiophile matters across the board, and people who are into that probably won't listen to this advice anyway.


Why do you guys think every sound application bothers to include a volume control? Wouldn't they figure every device that plays sound already has its own volume control?

For example if you built the YouTube player, what makes you think you need a volume control?


Maybe I want to listen to Youtube in the background while listening to something else at the same time? Per-application sound controls can be important at times.

Or if some Youtube video has obnoxious sound and I want to mute it, but want to keep my music playing from Spotify.


Immersion: if you have to go use the OS UI for volume rather than part of the player UI, that makes it more likely that you'll stop using the player and do something else.

That doesn't make it a good reason...


I've got a soundbar connected to a Dell U2711. If I play a piece of music on YouTube and the volume (of the YouTube player) is above 50%, the sound is distorted. Since I can't control the volume in Mac OS X, and changing the volume on the soundbar itself doesn't stop the sound from distorting, I'm dependent on the volume control of the application I'm using.


From "best" answer: Reducing volume in software is basically equivalent to reducing the bit depth

This is really only true when the audio system represents samples as integers rather than as floats, which is what CoreAudio uses.


I don't think that's the case. Here's your pipeline:

App->CoreAudio->HWSoundCard->Speakers(or amplifier)

By using floats in the sound API you don't reduce the bit depth on the first step but you still do on the next two steps. Imagine a scenario where you want to set the volume at half the maximum volume. Your two options are:

1) You set the app to 50%. Core Audio handles that as floats but still has to program the hardware to only output half volume, so if the HW itself doesn't take floats the bit depth is halved. And the HW will always have to output half the voltage so the "voltage depth" of the third step is always halved.

2) You set the speakers to 50%. All the pipeline functions at 100% bit/voltage depth all the way to the speakers/amplifier. Only then, at the final amplification stage, does the signal not get boosted to 100% and only to 50%.

How much this affects real world performance beats me...


It depends on what kind of curve the volume control uses but assuming 50% volume is half the amplitude (-6dB) you will only lose 1-bit of resolution. To lose half the bit depth you'd have to turn it down by 48dB!

It's worth noting that perceptually, half the volume is actually closer to 3dB (a halving of energy), which is only half a bit of loss.

If a floating point audio pipeline correctly dithers the signal going into the DAC it's unlikely anybody will notice any quality loss by using a digital volume control (even at 16-bit). You might hear the hiss of the dithering if you turn up the analogue portion of the chain, although you'd have to turn it up quite a lot.
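
The arithmetic behind those figures, using the usual rule of thumb that one bit of resolution corresponds to about 6 dB of dynamic range:

    import math

    DB_PER_BIT = 20 * math.log10(2)   # ~6.02 dB per bit

    for attenuation_db in (6, 10, 20, 30, 48):
        bits_lost = attenuation_db / DB_PER_BIT
        print(f"-{attenuation_db} dB of digital attenuation costs ~{bits_lost:.1f} bits")
    # -6 dB (half amplitude)  -> ~1 bit
    # -48 dB                  -> ~8 bits, i.e. half of a 16-bit word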


No. Volume sliders do not use a linear scale. A linear scale is all but useless for volume. Most volume sliders use a logarithmic scale, where 50% basically has no meaning except in relation to the minimum value chosen by the slider.

This is perfectly sensible, since our sense of hearing does not scale linearly from silent to loud. Our ears have a dynamic range of about 120 dB. On a linear scale, the 50% value would correspond to about -6 dB, which is perceptually only a small step down the full audible scale.

A sensible volume slider (on a PC) would span about 40-60 dB, since anything below -60 dB will be lost in background noise anyway. Thus, the 50% mark would be somewhere around -20 to -30 dB, and this 50% setting would lose roughly 5 bits of information, not one.

Note however that a reduced dynamic range at "half loudness" is usually just fine, since the full dynamic range can only be heard at high volume anyway.

(Also note that the bottom value of volume sliders usually mutes. Analogue equipment sometimes does not do that, which results in very faint signal playing even when turned all the way down.)

That said, the whole argument about losing resolution probably does not make sense anyway since the operating system volume sliders attenuate the sound hardware DAC gain instead of actually decreasing digital gain...


> That said, the whole argument about losing resolution probably does not make sense anyway since the operating system volume sliders attenuate the sound hardware DAC gain instead of actually decreasing digital gain...

We were talking about reducing volume in the App. If the app is using the system volume then the App->CoreAudio step is irrelevant (as nothing changes), the CoreAudio->DAC step doesn't change either (full bit-depth and an OOB message to lower the gain), but the DAC->Speakers analog step still has to output half the volume and thus reduce the range of the signal. For this not to matter the OS would need to be able to change the gain in the speakers instead of the DAC.

Again, how much this actually has an audible effect on quality beats me...


Well, kinda. The DAC would convert the signal at full power, then the pre-amp stage would boost it according to your OS loudness setting in the analogue domain. This is possible, but I don't know if sound cards are actually implemented that way.

I know professional mixing consoles are not (at least not exclusively), but they offset that by calculating everything in 32 bit float and using very high bit length DACs. Sound cards do have a pre-amp stage but I don't know if they are software-controlled.


> Well, kinda. The DAC would convert the signal at full power, then the pre-amp stage would boost it according to your OS loudness setting in the analogue domain. This is possible, but I don't know if sound cards are actually implemented that way.

That's fine, but what I was referring to is that after the digital-to-analog conversion and the pre-amp, the analog signal that goes out of the audio jack to the speakers has been reduced in range, so on that final path to the speakers you've lost some range.


No, analogue signals do not have a limited range. Analogue dynamic range is only limited by the noise floor of the cable and the resolution of the DACs/ADCs. The DACs/ADCs will probably do 24 bits.

Audio recordings typically do not go beyond 16 bits of dynamic range after mastering. And even before that, microphones can't deliver more than 20. Neither can ears. So that part of the system won't likely be a problem.

Reducing the signal gain in the analogue domain does not decrease its dynamic range, it merely shifts it to a lower range of the same width. Of course, this will only be true in the operating range of the op-amps, but that is typically not a limiting factor.

After the DACs, the signal will likely go through another pre-amp, then main amp in the sound system, then some analogue filters, then loudspeakers. All these are analogue and not usually limiting the dynamic range (though they will add some distortion). Finally, the signal will enter a room with noise aplenty, which will limit the effective dynamic range of the signal significantly. But that is out of control of that volume slider we talked about in the beginning ;-)


> No, analogue signals do not have a limited range. Analogue dynamic range is only limited by the noise floor of the cable and the resolution of the DACs/ADCs. The DACs/ADCs will probably do 24 bits.

Of course analog signals have an effective limited range. As you yourself mention the noise floor makes sure of that. Only an idealized analog signal of infinite precision doesn't have a limited range.

> After the DACs, the signal will likely go through another pre-amp, then main amp in the sound system, then some analogue filters, then loudspeakers. All these are analogue and not usually limiting the dynamic range (though they will add some distortion).

Precisely. So that's why keeping the signal at as high a level as possible without clipping all the way through the pipeline and only limiting at the end is an advantage. All those stages have their own noise floor. Several other people have mentioned on the thread that this is also the general recommendation for audio work.

> Finally, the signal will enter a room with noise aplenty, which will limit the effective dynamic range of the signal significantly. But that is out of control of that volume slider we talked about in the beginning ;-)

That's of course true, and again I admit my ignorance as to how much of a difference this really makes once it gets where it matters, your ears. Originally I was just responding to the idea that using floats in your audio framework eliminated all sources of reduction in precision.


Actually it seems many software volume sliders do use a linear scale[1], although you're right that an ideal volume control should be logarithmic.

My main point still stands that at half the amplitude or half the energy (perceptually half the volume) you're only losing half a bit to a bit of resolution. And even at -20 to -30 dB, with 4-5 bits of resolution loss, you're probably not going to notice the degradation.

[1](http://www.dr-lex.be/info-stuff/volumecontrols.html)


In theory yes, but our ears can't hear below ca. 0 dB, so the bits lost below 0 dB were wasted anyway.

So if you use a 24-bit DAC, those bits don't matter...


For digital signals, 0 dB usually means full power. That scale is typically denoted dB FS ("full scale"). Other scales are dB SPL ("sound pressure level"), which is what you were probably referring to.

So in other words, you are probably saying that reducing the signal loudness is reducing your signal-to-noise ratio and thus your audible dynamic range. However, your noise floor is probably far higher than 0 dB, more like 20-30 dB SPL (if you're lucky!). A normally-loud (that is, non-damaging) music playback will probably be at about 50-80 dB, so your usable dynamic range will be about 30-60 dB, which translates to about ten bits. Most environments will be worse.

0 dB is actually pretty hard to come by. Even well-insulated acoustical measurement chambers only go down to about 10 dB. Only several meters of acoustic foam, a solid foundation, and purpose-built air conditioning can get down to 0 dB. So, umm, 0 dB is usually a rather useless figure for non-scientific purposes.
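
Roughly the arithmetic behind that "about ten bits" figure, with made-up but typical dB SPL values:

    room_noise = 25       # quiet room noise floor, dB SPL (assumed)
    playback = 80         # comfortably loud playback level, dB SPL (assumed)

    usable_db = playback - room_noise
    usable_bits = usable_db / 6.02            # ~6 dB per bit
    print(f"~{usable_db} dB of usable range, roughly {usable_bits:.0f} bits")   # ~55 dB, ~9 bits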


That 24-bit DAC does nothing for you after the sound turns analog between the DAC and the speakers/amplifier. That's why I said the second step may not lose you bit depth if your DAC is floating point or as you say 24bit, but on the third step you will lose something.


This may be true for the intermediate steps, but the last step is still a DAC changing integers to analog voltage. If you are attenuating the signal before that point you are still losing bit depth (even if, by the magic of floating-point numbers, you are reducing the aliasing and other artifacts).


Anecdotally, I "feel" I get better definition in the music I'm listening to when my Mac volume is maxed and I adjust the listening volume at the speakers, rather than the other way around.


This is one of those questions I wondered about for decades but never looked up.


I'm surprised no one has discussed battery life yet.

One of my 'weird unverified theories of life' is that turning the volume on a portable device down (laptop/phone/MP3 player) and the volume on the speakers up saves the battery of the device itself. (For example when you're in a car.)


A small amount, perhaps. In all likelihood you're better off setting the volume on the device somewhere in the middle, but this will vary based on device. Two major things come into play:

1) The input impedance of powered speakers or any external processing stage will be high. ~5-10k ohms, compared to 80-500 ohms for an average set of headphones. This means regardless of the level you use on the player the current draw will be substantially lower than it would be if driving headphones. This means the level doesn't matter nearly as much.

2) The efficiency curve of the output amplifier. Depending on the topology of the output amplifier of the device it may be more or less efficient at different power levels. If the output driver is a class AB topology it actually gets more efficient as it nears its rated output (generally the relationship is logarithmic). As a result you can actually come out 'better' overall on power by using a higher power level at some points in the curve. You use more power to drive the output but you also increase efficiency, reducing your overall power usage. Class AB is pretty inefficient anyway, as you're only going to see 30-50% peak efficiency. With a class D output, which is becoming more popular, you can see upwards of 90%, but the efficiency-vs-output-power curve is generally sharper; you get much closer to peak efficiency much faster (depending on the exact design of course; most portable devices will be filter-less class D topologies).


From the link, it appears that to get 'lower' volume, you scale the output values if digital, whereas you may scale the output signal if analogue. If you have an analogue output (stereo jack), then you probably generate a lower-power output and therefore save energy by turning the volume down. If you use digital out, then the volume of your local device doesn't have any bearing on the battery life of that device, because you are effectively putting out numbers. But I could be wrong!


If you're on a Mac, you can obviate a lot of bit depth issues by opening /Applications/Utilities/Audio MIDI Setup.app. From there you can see all your audio input and output devices, and set their sample rates and bit depths. Most default to 44.1kHz/16-bit, but a saner setting is 44.1kHz/24-bit.

You can see the objective differences between 16-bit and 24-bit output in NwAvGuy's measurements of the 2011 MacBook Air's DAC: http://nwavguy.blogspot.com/2011/12/apple-macbook-air-5g.htm...


I may be wrong, but as far as I know setting the output to 24-bit only makes sense if your music actually is in 24-bit. Whatever you set in the Audio/MIDI configuration is used as the system-wide default output signal - including iTunes and so on. Of course you want to avoid any further conversion in your signal chain, which is why it is recommended to match the output configuration in Audio/MIDI settings with the bit depth and sample rate of your music library, which most likely will be 44.1 kHz/16-bit - unless, of course, you got your hands on higher quality masters.

Following your advice of setting the output to 24-bit while your audio collection is (most likely) encoded in 16-bit should trigger a conversion while outputting the audio signal. This conversion is useless, of course, as you can't turn this material into something it isn't (imagine upscaling 720p video to 1080p: new packaging, same old content/quality). Yet, and correct me if I'm wrong, the conversion still happens and alters the signal.

So, to avoid this, match those Audio/MIDI settings to your music library. There are also expensive iTunes alternatives that deal with this problem by adjusting the output settings to the currently playing audio file on the fly, in case your library consists of mixed bit depths/sample rates. I don't use those solutions though.


Upscaling from 720p to 1080p will not lose any information if it's done properly. Similarly, upscaling from 16-bit to 24-bit will not either, and running the audio output at 24-bit has the advantage that quantization noise from volume scaling is a non issue. This means software volume controls will not result in perceptual quality loss.


I am correcting you: Playing 16-bit audio with a 24-bit output device doesn't distort anything. No matter the bit depth of the audio file you're playing, CoreAudio converts it to 32-bit floating-point PCM. Then it converts down to something the output device can handle. I'm pretty sure Windows does this as well.
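
A quick check of that claim, assuming the straightforward divide-by-32768 conversion: round-tripping 16-bit samples through a float pipeline reproduces them exactly:

    import numpy as np

    pcm16 = np.array([-32768, -1234, 0, 5678, 32767], dtype=np.int16)

    as_float = pcm16.astype(np.float32) / 32768.0          # how a float pipeline holds the samples
    back = np.round(as_float * 32768.0).astype(np.int16)   # convert back for a 16-bit output device

    print(np.array_equal(pcm16, back))   # True: the float round trip itself loses nothing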


This seems like it should be a simple, straightforward issue, but I still have problems with occasionally getting clipped audio (or some kind of distortion, I'm no audio expert) on Windows 7 with Realtek's "HD Audio" chipset/driver.

As far as I can tell, I rarely if ever have this problem with the same hardware in Linux with PulseAudio (though I can intentionally cause it using alsamixer by pushing "Master" to 100%) and didn't have this problem in the past on Windows with Creative Labs soundblaster cards.


PulseAudio does nice things: it looks at the requested volume level for each input and output, and arranges the actual controls on each intermediate stage to keep the values as high as possible without clipping, while also making use of knowledge of each mixer's stepping values to give you finer control than your hardware would give you by default :). It does rely on your sound hardware correctly reporting its actual amplification characteristics, but it seems that most hardware is pretty good nowadays, or at least consistent enough to get worked around.
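
A rough sketch of that idea (not PulseAudio's actual code): keep the hardware level as high as the loudest stream needs and scale the other streams in software relative to it, so as little resolution as possible is thrown away digitally:

    def plan_volumes(stream_levels):
        """stream_levels: requested per-stream volumes in 0.0-1.0."""
        hw_level = max(stream_levels) if stream_levels else 1.0
        software_scales = [lvl / hw_level for lvl in stream_levels]
        return hw_level, software_scales

    hw, sw = plan_volumes([0.8, 0.2])
    print(hw, sw)   # 0.8 [1.0, 0.25] -> hardware at 80%, the loudest stream passes through untouched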


While we're at it - please use a logarithmic scale for your volume slider if you're implementing one.
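
A minimal sketch of such a slider, assuming a hypothetical 60 dB usable range with a mute at the bottom:

    import math

    def slider_to_gain(position, range_db=60.0):
        """Map a slider position in [0, 1] to a linear gain on a dB (logarithmic) scale."""
        if position <= 0.0:
            return 0.0                            # bottom of the slider mutes
        attenuation_db = (1.0 - position) * range_db
        return 10 ** (-attenuation_db / 20)

    for pos in (1.0, 0.75, 0.5, 0.25):
        print(f"{pos:.2f} -> {20 * math.log10(slider_to_gain(pos)):+.1f} dB")
    # 1.00 -> +0.0 dB, 0.75 -> -15.0 dB, 0.50 -> -30.0 dB, 0.25 -> -45.0 dB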


This works differently when devices contain a digitally controlled potentiometer between the DAC and the amp. I believe Macs do this, with the main system volume control actually stepping down the voltage of the signal being fed to the amp. Since this is an analog control, one doesn't lose bit depth when turning down the volume, and thus this is a fine system to use.

Not all machines work this way, though. One way to check is to hook up an external amp and headphones, turn the computer's volume way down, and turn the amp up to listening levels. If the quality is crap then it's probably just decreasing the bit depth. Or you can do a teardown on the sound pathway.

(Oh, if it isn't clear by this point, keep all your apps turned all the way up for best quality. Only turn them down on an individual, as-needed basis. All-software stuff has to decrease bit depth to decrease volume on a per-app basis.)


More of this on the frontpage please.


Other posts here are making the point that the range in digital controls often includes some dB gain by the time you get to MAX. That digital boost can be very useful when using a laptop with barely audible speakers. I like how VLC makes it explicit with a volume control that goes up to 200%. It would be nice if OS-level volume knobs worked the same way so you could always choose your level of distortion vs. gain.


In the software. Noise early in the signal chain counts for more than noise later. If you can get noise-free gain by changing a scalar value in a register, it'd be a mistake to turn it down in favor of increasing the gain in an analog stage later on.

For the case where an analog potentiometer immediately follows the DAC, of course, there's no practical difference.


Which reminds me, why isn't having a single volume control a solved problem?


Because you can have more than one application with sound output. I like listening to music while I play games; I set the music to 80%, and I set the game to 15-20% volume. This cannot be implemented in one single volume control.


It would be nice if all your running apps' volumes could be controlled from a single place though, rather than having to hunt around for menus in each app.


There does happen to be at least one operating system that supports this: Windows. ^___^


Yep, since at least Vista, I think. The UI guidelines recommend against in-app volume controls for everything except media players:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa51...


This doesn't work well with games. Yes, you could change the volume settings from there, but most 3D games are full screen and it's not convenient to alt-tab out of the game just to modify the volume settings.


On Linux (specifically Ubuntu, but I believe generally relevant) PulseAudio controls the volume of the loudest sound by default, but you can also get a panel showing each input and output from the volume menu -- two or three clicks, vs. two for Windows.


pavucontrol for PulseAudio


Pavucontrol will also let you assign apps to devices. Windows has the capability of switching app output devices but sadly only uses it to make them follow the default.


Because computers and software work by linking together components often without sufficient communication between the pieces to easily centralise operation.


And unfortunately a centralised volume control might require defining standards. I wonder if there is a way to hack around that.


Stupid idiot answers all.

Volume should always be controlled as close to the source as possible. Anything else is simply inefficient and a waste of processing power.

There is no reduction of bit depth. Total hooey.


I tried increasing the volume via VLC and my MacBook Pro speakers got fried; I'll stick with using external speakers and changing the volume on them.


That's because VLC lets you adjust the volume above 100%. Generally not a good idea. If you avoid that, you should be fine.

Also worth noting, if you attenuate the signal (in software) from the computer you generally won't attenuate the noise, meaning that at the same perceived loudness from your speakers the sound will include a lot more noise. On my 2007 MacBook Pro this is very audible.


I once thought about a naive streaming audio compression: If the client volume is 50% or less, send only the relevant bits.


+1 to the best answer-- always turn up your iPod before your speakers, folks. It makes all the difference.


If you have some cheap speakers, set the hardware volume somewhere over 50% and then never touch it. Use the software volume. That's because the hardware volume button will almost certainly break (mine were Logitech).


Isn't this one of those 'it depends' questions?



