Will 96k be the new standard?

Interesting video from Warren Huart; I have been watching his stuff lately. Something he mentions several times in this Q&A video is the inevitability of 96k (96k/48k) as the new standard, whenever the industry comes around on it.

At 36:00 (which should be cued up) and at 50:00, Warren talks about 48k being the current standard, but says we’re moving toward “a 96k world” for broadcasting standards, label deliverables, mastering engineers, and iTunes. Naturally, he suggests you “make sure your hardware can handle that”.

Is that a pursuit of “higher quality”, or is it simply that people think it will result in higher quality, and the technology is making the higher bandwidth more affordable? In other words, do they look dumb (in some people’s eyes) if they don’t do it?

Depends what he means by “standard”. CDs aren’t going anywhere anytime soon and they are 44.1 kHz.

Personally I think it’s all a marketing ploy. It may well become some kind of standard if certain companies or industries insist on it, but I can’t think of any good audio reason to produce a recording at 96 kHz.

1 Like

Right. He didn’t mention CDs or anything retail, really. More along the lines of “industry insider” processes. He mentioned iTunes, which might apply to retail, or maybe that was just in the context of “iTunes mastering specs”?

Perhaps. Whatever the market will pay for is what will be provided, though, when a dollar (or name your currency) is to be made. As he says, he doesn’t know the timeline; it’s fluid and shifting. He seems convinced it will happen, though.

Doing a quick perusal of some sites, it looks like they are out there offering “HD” tracks, from 96/24 to 192/24 and some other combinations. I saw these and found them interesting:

https://www.hdtracks.com/about-us

It looks like that second link describes this fairly well: the see-saw between the playback technology that’s available and providers’ willingness to take the risk is what determines the pace of change.

The shift from lossy mp3 to high-resolution music that exceeds CD-quality is under way, and in a big way. Major tech giants like Sony have begun cultivating new HD hardware, while Neil Young’s Kickstarter campaign for his HD PonoMusic player and downloading service became the third most successful ever to launch from the site, seemingly proving that there are plenty of people for whom high quality digital music is a priority.

However, even as a number of new high resolution AV receivers and portable music players continue to emerge, the solution as to where to go for high quality HD audio files has remained somewhat elusive. The issue stems from a lack of digital recordings at high sample rates and bit depth (starting at 48kHz/24 bit and up) needed to produce the crystalline sonic quality, as well as a reluctance by some artists to release their original recordings for re-mastering at the higher standard.

I do agree with you that there is some element of marketing ploy in it, à la the high-end gear perception that recording quality is immensely improved by it … especially the phrase “crystalline sonic quality”. Those adjectives always raise a red flag for me. Many people will believe what they are told by ‘authoritative sources’ and not question the logic. On the other hand, perhaps the pendulum is swinging back toward quality rather than dumbing music down to the lowest common denominator of iPods and earbuds. Sometimes “intent” is just as important as process.

When Warren mentioned 96/48, I assumed he meant 96k sample rate and 48-bit. He didn’t really specify though. Those sites I posted didn’t mention 48-bit that I could see. I see that Reaper allows for “32 bit FP” and “64 bit FP” (FP = floating point), but no 48-bit. So maybe he meant recording at 96k and then still mastering down to 48k?
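For what it’s worth, here’s a rough back-of-the-envelope on what bit depth buys you and why DAWs settle on floating point rather than something like 48-bit fixed (just a sketch; the ~6 dB-per-bit figure is the usual rule of thumb for integer PCM):

```python
# Back-of-the-envelope: dynamic range and storage cost per bit depth.
# Rule of thumb for integer PCM: roughly 6.02 dB of dynamic range per bit.
for bits in (16, 24, 32, 48):
    print(f"{bits}-bit int PCM: ~{6.02 * bits:5.1f} dB dynamic range, "
          f"{bits // 8} bytes per sample")

# DAWs like Reaper mix in 32- or 64-bit *floating point* instead: a 32-bit
# float carries a 24-bit mantissa (~144 dB of precision at any given level)
# plus an exponent that adds enormous headroom, which is a big part of why
# a fixed 48-bit integer recording format never caught on.
```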

I couldn’t tell the difference; if I could, it was so slight I might have been imagining it.

So to jump from 48k to 96k, I would lose half of my resources, leaving the computer a lot less capable, all to gain an advantage I can’t perceive?

2 Likes

I am too. Anyone who asks me for deliverables higher than 48 is not interested in hearing about how it’s not necessary, and they don’t care what my opinion is on whether they can hear it or not. I just do it for them.

A lot of classical music has been recorded at 192 for quite some time. No one really gives a good explanation for why.

I’m really not sure. I think we’re keeping the higher resolution on the back burner in case it ever needs to get used, which it occasionally does, if for no other reason than that a VO artist or an SFX editor wants it at 96. HDMI 7.1 surround bandwidth supports eight channels plus video all the way up to 192.

Does who look dumb? The engineers? Yup. You look pretty incompetent when someone asks for a deliverable at 96 and you can’t offer it, or you think you’re going to talk them out of it. Saying “Well, you really don’t need that… because of (insert your irrelevant opinion here)” will get you shit-canned from projects faster than you can launch your DAW.

The best thing to do (from a professional standpoint) is play along.

2 Likes

This is really what it comes down to. Our job as engineers isn’t to sit there projecting our opinions and ideals onto the customers. Our job is to provide what they ask for. If we can’t do it, it’s our own fault. If my computer can’t handle it, then I have the choice to get a new computer or pass up the job. That’s really all there is to it.

Now, if I’m doing my own personal stuff, I’ll use whatever I want. I don’t really care what the standard is. For now, whatever I want just so happens to be 44.1kHz, because in 99% of the delivery formats, that’s what it ends up being anyway, so there’s zero pressure to do otherwise.

But if the standards change (which I’m not so confident will happen any time soon), then I’ll change right along with them, because if the standards change, the gear being made to keep up with them will change right along with them too.

2 Likes

Bit rates will keep getting bigger and bigger as people search for more ‘quality’ and technology advances.
Then, when we are at some stupidly high rate, someone will go,
"Hey, why don’t we just do away with this digital thing and return to analog? After all, that’s what we’re trying to achieve sound-wise."

And i will cheer and raise a glass to them :clap::beer:

1 Like

Yes, I think this applies (if at all) mainly to commercial studios or demanding clients. If it does impact larger studios, it will in all likelihood take time to trickle down to the home studio level. What Boz has to say about it pretty much sums it up: if a paying client requests it, you have a choice.

And if there is some concern about it taking extra time or resources, requiring higher-spec recording computers to handle the recording latency, etc., you could decide to charge more for that ‘service’. As tacman7 points out, the difference is not inconsequential in terms of resources. High-end pro systems should already be there, but for home recording budgets it could be a consideration.

I think we’re already there. I believe most interfaces will handle 96k and have for some time, and higher-end stuff will usually do 192k. My Behringer ADA8000 will only do 48k, so it’s (potentially) a limiting step when piggybacked. I assume there is stuff out there now that will do 384k (or at least higher than 192k), but I haven’t come across it.

The main issue IMO has not been the sample rate, but the computer hardware needed to handle the latency of recording. The industry is dependent on computer standards and affordability (in most cases). That, and the fact that nearly everything touched by a higher sample rate takes longer and uses more storage. While hard disk storage has been very affordable for a long time, it takes roughly twice as long to write a 96k file to disk as a 48k one; twice as long to bounce/render a file, twice as long to do backups and archives, twice as long to upload/download over the internet, etc. All told, that adds up to a significant amount of extra time. That’s why I suggested to Jonathan that you may want to charge more, at least until it becomes a competitive necessity not to charge more.
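To put rough numbers on that, here’s a quick sketch (assuming uncompressed PCM; the four-minute song length is just an example):

```python
# Uncompressed PCM size: sample_rate * (bit_depth / 8) * channels * seconds.
def wav_size_mb(sample_rate_hz, bit_depth, channels, seconds):
    return sample_rate_hz * (bit_depth // 8) * channels * seconds / 1e6

minutes = 4  # a typical song
for rate in (44_100, 48_000, 96_000, 192_000):
    size = wav_size_mb(rate, 24, 2, minutes * 60)
    print(f"{rate / 1000:5.1f} kHz / 24-bit stereo, {minutes} min: {size:6.1f} MB")

# 96k is exactly twice the data of 48k, so disk writes, renders, backups
# and uploads all scale the same way, everything else being equal.
```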

I don’t think I see it becoming a standard. For it to be the standard, it will have to be the default playback format for some of the most common playback devices.

With everything becoming wireless and cloud-based, there’s still a huuuge push to make bit rates as small as possible. If you have a system that pushes sound around your house wirelessly, it’s far easier for it to do so using lossy compression and the lowest bitrate possible without significantly sacrificing the audio quality. The only places people listen to high-sample-rate audio are high-end movie theaters and recording studios. Those people are not determining the standards any time soon.
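The raw numbers make the point; here’s a rough comparison (the streaming figure is a typical value, not a fixed spec):

```python
# Raw PCM bitrate in kbps vs. a typical lossy streaming bitrate.
def pcm_kbps(sample_rate_hz, bit_depth, channels):
    return sample_rate_hz * bit_depth * channels / 1000

print(f"CD PCM (44.1k / 16-bit / stereo): {pcm_kbps(44_100, 16, 2):6.0f} kbps")
print(f"HD PCM (96k / 24-bit / stereo):   {pcm_kbps(96_000, 24, 2):6.0f} kbps")
print("Typical lossy AAC/Ogg stream:        256 kbps")
```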

2 Likes

Or they’ll be like “Hey, why’s my hard drive out of space?”

2 Likes

OMG! That would be ridiculously frickin funny!!!

Block session (8hrs recorded @ 44.1/48k) - $700
Block session (8hrs recorded @ 88.2/96k) - $1100
Block session (8hrs recorded @ 192k) - $1300
Block session (8 hrs UNLIMITED RESOLUTION) ~Special SALE~ $1400/day.
…track as high as your word clock and converter can handle!!

384kHz? A 10M atomic clock? No prob. We have you covered!
(external word clock and hi-res converter not included).

1 Like

Those are very valid points. I don’t think they necessarily apply to broadcasting (which is one aspect Warren mentioned), where it’s more a question of streaming bandwidth, and the production team can use whatever specs they want on the back end. You could probably choose a reception resolution for your signal on the user end, much as YouTube offers multiple video resolutions for streaming. There’s a drive to offer HD and the maximum resolution for people who want/demand that (the market), whether or not it’s even usable.

However, I’m not sure all that is as limiting as you might think. All technology tends to ‘creep’ to larger formats as innovation allows. Digital cameras started off at, say, 3 megapixels and then progressed to 15 megapixels and beyond. Can anyone tell the difference in the image? Maybe in some cases and when the image is blown up. The change was not necessarily due to consumer demand or frustration, but to competitive advantage and reduced cost per storage amount. Similarly, flash drives about 10 years ago were a bit pricey for just 3GB or 6GB, but now you can get 64GB or more for like $10! That’s nearly the size of a computer hard drive 15 years ago. Are those necessary or based on consumer demand, or is it a competitive race to be the top dog in the industry in terms of bang for buck?

Also, it could resemble the Loudness Wars. Humans tend to follow this competitive urge to outdo the next guy and vie for the consumer dollar. The ignorant consumer public doesn’t know the difference; they too get caught up in the latest-and-greatest, keep-up-with-the-Joneses mentality. None of this has to make any sense, it just has to be marketable, and someone will do it. I think a large part of it is economics and the natural progression of technological innovation.

But isn’t it true? From a business standpoint, if you’re having to invest in superior equipment to accommodate higher standards, you should pass that cost on to the consumer. Even if your equipment is up to snuff, as I mentioned earlier, all this potentially adds significant amounts of time to every stage of the process. Time is money. You’d be shooting yourself in the foot to take on more time-consuming jobs at the same rate. Capiche?

I dunno. True story: once upon a time, when I was living in Alpharetta, GA, I got a call to replace a keyboard player who wanted to charge a bandleader double for using both his left and right hands simultaneously, on the grounds that he had to practice more to do both.

I sort of see this like charging someone extra to mix with both the left AND right reference monitors. I guess when 192/384 kHz is built into an interface you already own, I’d think the same logic applies as with using both hands you already own lol.

But really, as @bozmillar pointed out, the norms and standards of an industry supersede what would seem to be a common sense practice.

1 Like

Ha. But isn’t that the point? I think itemizing everything is dumb, and this is about four steps beyond what I would consider ridiculous, but everybody else just hides those sorts of costs somewhere else so that it doesn’t look as dumb. This is a case of too much honesty making you look bad.

You aren’t going to tell someone that you will charge more for 96kHz. You will take into account how much more work or time it will be (if any) and factor that into how much you are willing to do the project for.

If a rapper came in and said he demanded 96kHz, I’d sigh a little inside and charge him a little extra. Not because it’s really more work, but because I know this is someone I’m going to have a hard time working with.

2 Likes

Depends who they are. Some clients only ask for such specs because it’s what they think they should be doing, not out of any reasoned requirement. If you’re the service provider, it’s your duty in those circumstances to explain the situation.

2 Likes

I kind of think you’re missing my point. When 48k is the norm and a session takes you X hours, and then somebody pops in wanting 96k and the session takes X + X/2 hours, that makes a difference you may not want to just give away. Once 96k becomes standard (if that truly happens), you can’t make it a special case anymore, and everybody adjusts their prices or tightens their belts.

Yes, if you’re doing project quotes you don’t have to go there. It’s the hourly rates where I think someone could take a hit. Maybe it’s not that big of a deal, but I’d like to see a time study of how much longer it takes.

Just hypothetically, say back in the ’70s one studio had a 24-track console and tape setup, and then their competitor bought a new 48-track console and tape setup (synced tape machines or whatever). Don’t you think the studio with the new 48-track could charge a higher rate? They just invested in bigger, better, and more expensive equipment. They can handle twice the tracks, easily. Isn’t there a difference there? And can’t they easily justify their higher rates? I know the analogy may not be perfect, but I have pointed out that some studios might be challenged to record at higher specs, even if the interface can handle the sample rate. They might have to upgrade. And the extra time involved at the higher (96k) resolution is kind of like the extra tape tracks. There’s just … more!

Yes, again it’s kind of like the Loudness Wars, where clients finally got educated instead of dragging producers and engineers around by a brickwall collar. I believe there is an education opportunity here, and if it’s not addressed but simply lumped into project fees, then it could take years to really look the phenomenon in the eye.

The question is: Will 96k be the new standard?

…new standard of what?

We can argue about the pros and cons of producing 96kHz masters, but will the process become a standard of some kind?

Possibly for the film/broadcast industry. Whatever the industry’s reasons for requiring such a standard, it seems to me that there is nothing the industry can do to prevent people recording at whatever sample rate they like, then simply exporting at 96kHz, which rather makes a mockery of it all.

So what about actually recording at 96kHz? Many plugins sound ‘better’ at 96kHz, and some say that since there is less aliasing at 96kHz, the sound is improved. We’re talking about micro differences here. Even if it is somehow worth all the extra CPU and storage, does any of it matter when you consider the spec you are going to master at? Does anybody here master at 96kHz as a general rule? If so, why? That’s not a rhetorical question, I would like to know why. What can you even do with a 96kHz master? And if you don’t master at 96kHz, doesn’t that negate any advantage you may gain by recording at 96kHz?
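On the aliasing point, the effect is easy to demonstrate with a toy example. Here’s a rough numpy sketch (the 15 kHz tone and the cubic waveshaper are made up purely for illustration): the nonlinearity creates a harmonic at 45 kHz, which folds back to an audible 3 kHz at a 48k sample rate but lands above the audible band at 96k, where it could be filtered off.

```python
import numpy as np

def nonlinear_peaks(fs, f0=15_000, dur=1.0):
    """Run a tone through a cubic waveshaper at sample rate fs,
    then report where the spectral peaks land."""
    t = np.arange(int(fs * dur)) / fs
    x = np.sin(2 * np.pi * f0 * t)
    y = x + 0.5 * x ** 3                # saturation-style nonlinearity:
                                        # adds a 3rd harmonic at 3 * f0 = 45 kHz
    spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    freqs = np.fft.rfftfreq(y.size, 1 / fs)
    strong = freqs[spec > 0.05 * spec.max()]
    # collapse adjacent FFT bins into single rounded peak frequencies
    return sorted({round(f / 100) * 100 for f in strong})

print("48 kHz:", nonlinear_peaks(48_000))   # ~[3000, 15000]  -> 3 kHz is the alias
print("96 kHz:", nonlinear_peaks(96_000))   # ~[15000, 45000] -> no alias
```

Whether that difference is audible in real material is exactly the question, of course, and oversampled plugins do the same trick internally without needing a 96k session.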

I can’t think of a good reason not to record and master at 44.1 kHz. Maybe that will change in the future, but I think that will be the result of heavy industry pressure and marketing, rather than any improvement in sound quality.

1 Like

That would require an awful lot of work and not really accomplish anything except shitting on someone’s work order for the sake of patting yourself on the back and saying, “Muahaha, I didn’t play by their rules.” Each stage in a film (turnover, music, ADR, Foley, editorial, mix, and final deliverables) is usually done by a different company, and you NEVER “consolidate/render/print/commit” audio clips to a single track. You need to deliver them as separate clips so the next person down the line can do their job. There are various methods for keeping everything time-striped and frame-aligned, the first of which is keeping everything at the same sample rate. Imagine trying to convert all of these, import them at 48k, then re-code, then export them.
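Just to illustrate the pain, here’s roughly what one batch-conversion pass would look like (a sketch using the soundfile and scipy libraries; the folder name and target rate are made up for the example):

```python
from math import gcd
from pathlib import Path

import soundfile as sf
from scipy.signal import resample_poly

TARGET_RATE = 48_000

# Hypothetical delivery folder full of clips at who-knows-what rates.
for clip in Path("turnover/audio_clips").glob("*.wav"):
    data, rate = sf.read(clip)
    if rate == TARGET_RATE:
        continue
    # Rational resampling, e.g. 96000 -> 48000 is up=1, down=2.
    g = gcd(TARGET_RATE, rate)
    data = resample_poly(data, TARGET_RATE // g, rate // g, axis=0)
    sf.write(clip.with_name(clip.stem + "_48k.wav"), data, TARGET_RATE)
    # ...and even then, embedded timecode/metadata can get stripped,
    # which is exactly why post workflows pin one rate and stick to it.
```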

I agree with you on this one. I have the same question. But the question of ‘why follow the rule?’ (for workflow purposes, in this case) turns out to be different from ‘what was the point of that rule to begin with?’. There are audio supervisors who say ‘shoot all audio at 96k’. However, I have yet to meet one who can give a good reason why they want, or think they need, anything higher than 48.

1 Like

I only listen to audio recorded in 64-bit at a sampling rate of 192 kHz. All that 24/96 shit is for amateurs. But for real, I think bit depth makes more of a difference to my ears. I can’t really tell the difference between 48 kHz and 96. I still record in 24/96 just to cover my ass. As for it being the new standard, I don’t think so. Even if storage space increases, it would take something revolutionary for people to switch all their music AGAIN from MP3s to something lossless. Maybe an exchange program where you could trade MP3s for FLACs? Most people stream nowadays anyway because they don’t value owning an album, and they don’t consider that the company could lose the rights to the band and their music could disappear at any time.

1 Like