Uh-oh—by me? OK, to be clear, my lengthy comment was to the top post. I’ve only had time to skim the rest of the thread. So I may not get some nuance of what you’re asking, but I’ll give it a try.
When you play guitar through a real amp, and you have some pedals that might be boosting the gain, or have active pickups that might be able to give ample input level, how would you set the gain…Well, first of all, you might recognize that you can’t get a crunchy guitar sound if your input (guitar level) is outright feeble. Worse, it will be noisy, because any background hiss is boosted. Clearly, we want to give it healthy level, so the better question would be how much is too much…
Well, if you want some range to your tone, you wouldn’t want the guitar tone to be in heavy distortion with the volume knob (or volume pedal if you use one) rolled back and the amp drive set to minimum. More likely, you’d want the tone there to be somewhat clean. But with enough level so that when you crank the drive knob it rolls on some good distortion. (If you’re performing, more likely you’ll set the drive so it’s crunching when you have the pedal down or volume knob on your guitar maxed, but be able to pull it back with the knob/pedal to less crunch.)
I hope that makes some kind of sense. Just realize that guitar amps are not traditionally fitted with level meters. You just get the sound you want by ear, and if you need some performance range you do as I described.
(Caveats: I’m not much of a guitar player, mainly keyboards. It’s kind of the same issue driving a tube-amped Leslie from a B3 clone, though, and I’m pretty familiar with that. Also, I wrote Amp Farm, the first serious guitar amp sim plugin, so I know the guts of 'em.)
If I understand the question, I think you’re asking if there would be clipping in the file due to inter-sample peaks. No, there shouldn’t be any reason for the DAW to clip saving to the file. But even saving as 16- or 24-bit fixed-point audio, ISPs aren’t a concern there either—just the actual sample peaks matter. ISPs are a playback issue. Not a particularly big issue, though. While it’s possible to make arbitrarily large ISPs if you try, it wouldn’t be music or anything you’d want to listen to.
OK, I need to separate two things here first. By lossy conversion, I assume you mean mp3 or AAC conversion, which is another issue. But on converting from 24-bit to 32-bit, there is no loss, no essential change in the data. 32-bit floating point gives you 25 bits (before I get corrected: 23 explicit mantissa bits, plus 1 bit implied by normalization, plus 1 bit due to sign = 25 bits) of precision (plus scaling—the scaling plays an essential role in calculations, but is not significant in final playback). So, the bits basically map one-to-one (you don’t stretch the 24-bit range to fill 25, you just copy to the most-significant 24 of the 25), with an extra least-significant bit left over—converting 24-bit audio to 32-bit float audio doesn’t change anything, effectively.
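If you want to convince yourself the mapping really is lossless, here's a quick Python sketch (pure standard library; the helper names are mine, not from any DAW or API). It round-trips every corner of the 24-bit range through an actual float32 representation:

```python
# Sketch: 24-bit fixed-point samples round-trip through 32-bit float
# with no loss, because float32 carries 25 bits of signed precision
# (the 23 + 1 + 1 count described above).
import struct

def int24_to_float32(x):
    """Map a signed 24-bit integer to a float32 in [-1.0, 1.0)."""
    f = x / 8388608.0  # divide by 2^23; exact in double precision
    # Force an honest round-trip through 32-bit float storage:
    return struct.unpack('f', struct.pack('f', f))[0]

def float32_to_int24(f):
    """Map the float back to a signed 24-bit integer."""
    return int(round(f * 8388608.0))

# Every representable 24-bit value survives the round trip intact.
for x in (-8388608, -1, 0, 1, 12345, 8388607):
    assert float32_to_int24(int24_to_float32(x)) == x
```

Scaling by a power of two only changes the float exponent, never the mantissa, which is why the sample values come back bit-exact.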
ISPs are always there. When you run the audio data through a D/A converter, the samples are converted to analog values, and run through a lowpass filter (mathematically, it’s stripping off the spectral images inherent in the digital data, but you can look at it as just an element we need to smooth the gaps between samples). If you output samples 0.1, 0.8, 0.8, 0.1…the output analog voltage will rise quickly from 0.1 to 0.8, overshoot, and come back to the next 0.8, and back down to 0.1. There is never a way to get rid of ISPs, because they don’t exist until you convert to analog. You can make them smaller by upsampling and playing back with higher-rate converters, but that doesn’t really change anything—you’ve simply filled in part of the overshoot with real samples. In fact, that’s exactly how meters that detect ISPs work—they do a sample rate conversion and see where the new samples lie.
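Here's a toy Python version of that metering idea (my own sketch, not any real meter's code): upsample by sinc interpolation and see how far the in-between values go past the sample peak, using the 0.1, 0.8, 0.8, 0.1 example from above.

```python
# Toy true-peak (ISP) meter: 4x sinc interpolation, then take the max
# of the new in-between values. A real meter would use a windowed,
# polyphase filter; this is just the idea, unoptimized.
import math

def _sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def true_peak(samples, oversample=4, taps=32):
    """Estimate the reconstructed peak via sinc interpolation."""
    n = len(samples)
    peak = 0.0
    for i in range(n * oversample):
        t = i / oversample  # fractional position, in original samples
        acc = 0.0
        for k in range(max(0, int(t) - taps), min(n, int(t) + taps)):
            acc += samples[k] * _sinc(t - k)
        peak = max(peak, abs(acc))
    return peak

# The example from the text, padded with silence on both sides:
sig = [0.0] * 16 + [0.1, 0.8, 0.8, 0.1] + [0.0] * 16
print(true_peak(sig))  # well above the 0.8 sample peak
```

The interpolated value midway between the two 0.8 samples lands near full scale, even though no stored sample exceeds 0.8.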
Like I said, it’s possible to create arbitrarily large ISPs, but then it wouldn’t be music. Real music has fairly small ISPs, and you can guard against them with just a little headroom that you most likely have anyway.
Inter-sample-peak. It basically means that even though none of your samples are going above 0dBFS, when your D/A converter upsamples it, the samples that fill in the gaps can go above 0dB and clip. I personally think the fear of ISPs is way exaggerated. By nature, they tend to happen on higher frequency stuff and/or sharp transients.
I do too, but if the industry standard (e.g. MFIT) is headed towards little or none, then I at least want to learn how to do that should it become necessary. No commercial recording I ever bought or downloaded was below 0db, so I’m wary of all these recommendations I see to master to -0.3db and even -3db. I want to master to 0db with no ISPs on playback.
I’m not really very conversant with all this zeros and ones stuff and I have no desire to be, I’m just a knob twiddler, so my understanding of Digital Land might be too simplistic, but it seems to me that if you convert 24bits to 32bit floating point, it will guarantee that playback or conversion to a lossy file will have zero (or very few) ISPs.
Think of a side view of a peak. The peak has a flat top. That flat top is at 0db. But when it is played back, the DA converter will ignore the flat top and try to estimate where the actual summit is. The summit is then clearly going to be above 0db.
So does it mean that you should try to master so that all the peaks fall within the 0db range? Hence the -0.3db which was spoken about in a previous reply?
Well, that’s what a lot of people are saying, yes. If you sniff around the interwebs you’ll find some people who even say you’re not safe unless you go down to -3db. I’m not convinced by any of it. Not until I see commercial releases done like that at least.
“To take best advantage of our latest encoders send us the highest resolution master file possible, appropriate to the medium and the project.
“An ideal master will have 24-bit 96kHz resolution. These files contain more detail from which our encoders can create more accurate encodes. However, any resolution above 16-bit 44.1kHz, including sample rates of 48kHz, 88.2kHz, 96kHz, and 192kHz, will benefit from our encoding process.”
[quote]I’m not really very conversant with all this zeros and ones stuff and I have no desire to be, I’m just a knob twiddler, so my understanding of Digital Land might be too simplistic, but it seems to me that if you convert 24bits to 32bit floating point, it will guarantee that playback or conversion to a lossy file will have zero (or very few) ISPs.
[/quote]
I think you’re getting tripped up here: If you have a higher rate master (192k/96k, etc.), and want to also provide a lower rate (44.1kHz), you have a chance of intersample peaks on loud peaks that could go over “1.0” in the output. For 24-bit (or 16-bit) results, you’d have to clip those (to 0.999…/-1.0). Floating point output of the SRC allows you to adjust the volume later. And of course the iTunes Store does normalize volume across songs, so a super-hot master that would run the most risk of ISPs would indeed be turned down later by iTunes.
But note that’s an intermediate process, done by Apple. If you supply them a 24-bit 96kHz wav file (non-floating point), they will process it and deal with the intermediate floating point that avoids ISP clipping—it’s not your problem.
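To make the fixed-vs-float distinction concrete, here's a toy Python sketch (the 1.02 peak is my own made-up number, not anything Apple specifies):

```python
# Toy sketch of the choice at the SRC output stage. Suppose the
# converter produced a post-SRC value of 1.02, i.e. an intersample-
# derived peak just over full scale.

def to_int24_clipped(f):
    """Quantize to signed 24-bit with hard clipping at full scale."""
    x = int(round(f * 8388608.0))
    return max(-8388608, min(8388607, x))

over = 1.02
fixed = to_int24_clipped(over) / 8388608.0  # clipped: overshoot gone for good
floating = over * 0.98                      # float path: just turn it down later

print(fixed)     # pinned at full scale; the peak shape is lost
print(floating)  # just under full scale; the waveform survives intact
```

The fixed-point path has to destroy information at the clip; the float path keeps the waveform and lets a later gain stage (like iTunes normalization) bring it inside full scale.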