The Quest for Hi-Res Audio in Recording

This subject can easily fly off on different tangents, but
I wanted to give it a go so that we can all learn from our collective experiences
along our many paths to producing quality music and beyond.

I have always felt the gap between what audiophiles require and
what music production engineers actually work with is a huge source
of dichotomy between the two camps. Can they coexist? Why haven’t they in the past?
Are there ongoing efforts to bridge the gap?

I believe there’s a distinction between the person who takes their music
listening seriously and the engineer who works within the confines of
their industry niche. For example, CD quality is 44.1 kHz at 16-bit, while
people who work in the AV industry, the video types, stay within their 48 kHz
world for an “industry standard” solution. Audiophiles these days, meanwhile,
are looking into higher-resolution audio: 96 kHz, 176.4 kHz, or 192 kHz at 24-bit,
or even 32-bit floating-point processing! Those are some staggering sample rates
that are a long way from home, considering that CDs just a few years ago were
all the rage.
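To put rough numbers on those formats, here is a quick back-of-the-envelope sketch (assuming stereo, uncompressed PCM; the rates and depths are the ones mentioned above):

```python
# Uncompressed PCM data rate: sample_rate * bit_depth * channels.
def pcm_bitrate_kbps(sample_rate_hz, bit_depth, channels=2):
    """Raw data rate of a PCM stream, in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

cd    = pcm_bitrate_kbps(44_100, 16)    # Red Book CD
video = pcm_bitrate_kbps(48_000, 24)    # the AV/video world
hires = pcm_bitrate_kbps(192_000, 32)   # 192 kHz / 32-bit float

print(f"CD 44.1/16   : {cd:8.1f} kbps")
print(f"AV 48/24     : {video:8.1f} kbps")
print(f"Hi-res 192/32: {hires:8.1f} kbps ({hires / cd:.1f}x CD)")
```

So a 192/32 stream carries roughly 8.7 times the raw data of a CD stream, which is part of why the jump feels so staggering.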

So why hasn’t there been a push by working engineers to jump on board
a higher-than-thou resolution rocketship? What’s the holdup? Who is
dictating the dumbing down of high-res audio resources for the masses?

There seem to be two camps in this venture: the believers and the unbelievers.
The unbelievers state that there is no difference if a product is upsampled
from a lower resolution to a higher one. Can the average person tell the difference?
Well, if that’s our criterion, then we deserve what we get. But a swift dismissal
is not the end of the story. There is a difference, a huge difference, when a musical piece is mixed and the two-bus is captured as a high-res master.

The piece of the puzzle, IMO, is this: say the individual instrument
tracks were recorded on a DAW at 48 kHz/24-bit. If we lay down a two-track
master at that same rate, we are compounding the limitations of that
rate onto the music. Whereas if we mix the song down to a better
hi-res mix, 192/32, then we have eliminated the limitations that are
introduced when audio equipment is forced to apply ceilings such as 44.1 or 48 kHz
to an audio or musical piece. The hi-res itself doesn’t really change anything
in the recording process. The mic is still the mic, the cable is still the cable,
but how we limit its input to fit 44.1, etc. is the key. Those filters and limiters
have introduced frequency hash and garbage into our recordings! We might not
be able to hear it, or should I say distinguish it as noise, but it is inherent
in lower-res recordings and it does kill a certain clarity within one’s mix.
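The bit-depth half of that claim is easy to sketch: quantizing to fewer bits leaves a measurable error signal behind, whether or not a listener can pick it out. Here is a minimal pure-Python illustration (the 997 Hz-style test tone is just a convention; theory predicts roughly 6.02 × bits + 1.76 dB for a full-scale sine):

```python
import math

# Quantize a full-scale sine to N-bit PCM and measure the error
# ("hash") that the word length alone introduces.
def quantization_snr_db(bits, n=100_000):
    step = 2.0 / (2 ** bits)            # quantizer step over [-1, 1)
    sig_power = err_power = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / n)
        q = round(x / step) * step      # round to the nearest level
        sig_power += x * x
        err_power += (x - q) ** 2
    return 10 * math.log10(sig_power / err_power)

print(f"16-bit SNR: {quantization_snr_db(16):.1f} dB")  # theory: ~98 dB
print(f"24-bit SNR: {quantization_snr_db(24):.1f} dB")  # theory: ~146 dB
```

Note that the sample-rate half of the story (anti-alias filtering near 22.05 kHz) is a separate effect that this sketch doesn’t model.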

The sooner we escape the clutches of the lower-res-is-just-okay-for-the-masses
philosophy, the better off we all will be.

Yes, there are DAWs and Interfaces that can handle higher resolutions
right here, right now! C’mon!

1 Like

Besides the high-res audio of Pulse Code Modulation
that we have all been accustomed to listening to on
all our devices, there are other recording formats, for lack
of a better term, that are leaps and bounds “better” than
PCM.

Yes, I know, better is subjective blah blah blah…
But what if I said that there are audio recorders
made by companies you know that actually record in
a format called DSD! Would you be interested in at least
hearing what it sounds like? Reading about it?

I know that Korg and Sony have had recorders as recently as last year
that have been for sale at places like Sweetwater, Musician’s Friend, etc.

The DSD format leaves behind the old kilohertz recording rates
and actually records all its audio in megahertz! Yes, megahertz!
The sound is beautiful, and if you are a mixer looking for
some depth in your two-bus mixes, here it is.

Some of us might recognize DSD from its other monikers over the years,
such as SACD, DXD, DoP, DSF, etc. Yes, companies like Sony have
developed products and then left them to blow in the wind when
sales weren’t to their liking. Such is life, right?
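For anyone curious about the actual numbers behind the “megahertz” claim: DSD is a 1-bit delta-sigma stream clocked at multiples of the CD rate. A quick sketch of the common tiers:

```python
CD_RATE = 44_100  # Hz, the Red Book base rate

def dsd_rate_mhz(multiple):
    """DSD sample rate at a given multiple of the CD rate, in MHz."""
    return CD_RATE * multiple / 1e6

for name, mult in [("DSD64 (SACD)", 64), ("DSD128", 128), ("DSD256", 256)]:
    print(f"{name:12s}: {dsd_rate_mhz(mult):.4f} MHz, 1 bit per sample")
```

So SACD’s DSD64 runs at 2.8224 MHz, trading word length (a single bit) for a sample rate 64 times that of a CD.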

But not all is dead for hi-res audio. Not in the least. Many
an artist and producer has jumped on the hi-res bandwagon
and is taking advantage of its spatial quality, kissing goodbye
the disadvantages of a lower-quality process. Especially, let’s all kiss
goodbye that CD 44.1 standard.

Of course, these are all just my opinions, and some of this has been
researched by top people whose names we should all know from their products.

My personal goal is to one day combine my audiophile desires and marry
them with a fully functional hi res recording setup. Thanks.

1 Like

Maybe you can watch this video in its entirety:

1 Like

This vid is shorter for sure:

1 Like

Cons?

The problem is that the auditory system in most humans cannot distinguish between 44.1 kHz/16-bit and higher resolutions, so a huge effort has not been put into hi-res recordings.

Who are these top people you are referring to? Those old men in your pro-hi-res videos most likely can’t hear beyond 12 kHz, and I’m guessing they would fail a blind test between 44.1 and 96 kHz sampling rates.
Even if they could hear a difference, it’s most likely because their jobs require trained hearing, and that is not applicable to the average person.

My audio interface has 32 tracks at 192 kHz with 32-bit resolution, and I record at 88.2 kHz/24-bit and mix down to Red Book standard 44.1 kHz/16-bit, and I’m just the owner of a shitty little home studio.

The audio world is overflowing with products that claim to improve sound. $12,000 power cords and some people even tape packets of pebbles to their audio interconnects or run their speaker cables on risers.

Show me the actual science (from somebody not trying to profit from hi-res recordings) and then I might pay more attention, but everything I have researched shows that very few people can even hear the difference.

2 Likes

Yes, even if we can’t hear a difference, there is still one being applied.

You are asking for proof, but you curtail the source criteria by
eliminating the very people who have done the research, namely those who have profited from the audio business.

If we utilize that standard for everything, then nothing can be measured.

For the audiophile mindset, it’s not always about what we can hear.
For those audiophile types, it’s more about measurement. Specs play a big role in that world. Their hope is that it’ll translate to a better sound, a difference some might hear, while others won’t be able to hear a difference and will therefore abandon the effort toward a higher-res system.

As for me, I’m always open to a better point of view.

There is a lot of snake oil being peddled out there. This is also my take on it. But we shouldn’t throw the baby out with the dirty diaper.

Before retirement, I provided live sound engineering all over the Western US. What we used was adequate for the job at hand. Did they have better gear out there if needed? Sure they did, but money dictated what you could use unless you brought your own piece of gear to supplement the rig.

There’s more to come on this subject so stay tuned please.

It’s not all snake oil though.
Sony did lots of research and implemented DSD in its SACD gear. Playback was even bundled into Blu-ray players, and it sounded superb!

How can you say it’s flawed just ’cuz someone made money?

I’ve found it’s the exact opposite in the audiophile world. They’re all about the subjective experience and shun measurements at all cost. Bring up blind listening tests on any audiophile website and you’re immediately attacked because it will definitively show that they are wrong.
From a research point of view, I will use the data coming from universities and non-profit research institutes before I listen to the subjective viewpoint of a recording engineer who probably has not a clue how the equipment they use on a daily basis even works.

I can’t say it’s completely flawed, but it does go against research done at institutes not looking to make a buck from selling hi-res gear. Once marketing gets hold of an item, it loses any objectivity and quickly slides into the subjective realm.

Thousand-dollar power cords sell because no one wants to think they pissed away a grand on a piece of wire that measures the same as one sitting in the bottom of a 5-gallon pail in their garage.

2 Likes

Wow, studio!!!
This information is tremendous, and yet very revealing.
The video, Classic Recording Techniques, which I am not finished viewing, is so interesting.
I am no technical expert in these recording matters, and the quest for Hi-Res Audio is truly a maze.
From this single video, a really good education is unearthed, at least for me.
From a conversation I had with studio, and simply talking about my recording set-up (which “studio” helped me set up), here are some of my points:

  • I am so surprised with how well my mics pick up the sounds of both vocals and guitar
  • I love the fact that nobody sees my two mics (studio made this fantastic visual effect)
  • I no longer plug in my guitars, just pure acoustic recording, which is much better to my ear
  • I love the room reverb from my reasonably priced Yamaha mixer
  • Although there is some noise in my recording, I told studio that I love it; it has that vinyl sound to it.
  • I also love the sound created from all the instruments hanging around my music room/recording space - got lucky on “room sound!!!”

I would say - many thanks to “studio” for my recording set-up, and yet the “quest” continues.
This thread is truly an education, at the master’s level!!!

1 Like

There are two things at play here: recording and mixing.

Bit depth and sample rate matter for recording engineers, not so much at the mixing desk. If the recording at the source is low resolution or poor quality, there is nothing a mixing engineer can do to “increase” sonic quality. There is no “sonic” advantage for a mixing engineer in “converting” a low-res recording to a higher sample rate for any purpose other than archiving and remastering.

Increasing bit depth allows remastering engineers to inject “new and fake” data into the sound file, allowing for more headroom to apply better effects.

But if you were to simply convert a sound file to a higher resolution without adding anything new, there is no one on this planet who could tell the difference between the two in a blind test.

Recording at 24-bit/96k is pretty common among industry professionals. Indie musicians don’t normally do it because of file-size bloat.

Now, there are definitely advantages to recording high-resolution audio right at the source if the goal is to capture every single nuance of the acoustics (for example, very high-end studios or orchestra halls).
If your recording environment is a basement full of bass traps, you are simply bloating your hard drives.

Because most music is coupled with video these days, it is indeed a good idea to record at 24-bit/96k or 24-bit/48k, so that video rendering and conversion does not double-convert your sound. That can cause audible artifacts.

I would not recommend recording in 16 bit these days.
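The headroom argument above can be sketched in a few lines. This is a deliberately simplified model (a single over-full-scale sample standing in for a hot transient):

```python
# A hot peak 40% over full scale (0 dBFS = 1.0).
# Fixed-point PCM hard-clips it; 32-bit float just stores the value,
# so it can be pulled down cleanly later in the mix.
peak = 1.4

as_fixed = max(-1.0, min(1.0, peak))   # integer PCM: flat-topped at 1.0
as_float = peak                        # float: value kept intact

print("fixed-point path:", as_fixed)         # clipped, distortion baked in
print("float path      :", as_float * 0.5)   # 0.7 after a ~6 dB trim
```

The fixed-point path has permanently lost the top of the waveform; the float path recovers an unclipped signal just by turning it down.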

And then there are the internet speeds and data plans on mobile devices, haha.

3 Likes

Just to be clear, I started this thread to be a discussion on the topic of Hi res audio.

I’m not willing to trade insults or get into pissing contests with anyone who has a vengeful bent in their posts.

There’s lots to talk about on this subject and we’ll get pretty far with peaceful and polite responses. Thanks.

Interesting take on the subject @FluteCafe !

Something I do with my recording clients these days is to provide them with 3 mixes, and have them choose the one they like.

I send them 192/24, 192/32, 48/24.

The source recordings were all done with mics and run through analog boards and tube gear before being DAWed at 48/24.

Here’s another video on hi res:
Thanks.

If I were a recording engineer today, I would probably buy a powerful enough system with a petabyte of storage and 64 GB or even 128 GB of RAM so I could go wild on tracks and effects, especially if I was also processing video. Taking advantage of new technology seems a logical direction, especially when the cost of a lot of it is getting fairly low. A single song could be 10 gigabytes in size, maybe ten times that, even if the final product for streaming will be 10 MB.

Even on a normal song one creates in Reaper, for instance, one might discover a need or desire to apply very granular control of volume, reverb, etc. to just a single track, and doing that can become very stressful on even a good system. So imagine a symphonic recording with maybe hundreds of tracks: the complexity becomes increasingly less manageable, and suddenly what was barely a concern is now a showstopping artifact or the like.

The desktop studio using a PC and an audio interface may not yet have risen to the level of the giant mixing boards of pro studios, but I think it is closing in. I imagine the digital as lots of little steps following an analog curve, and no matter what people can physically hear, the goal is to maintain the illusion of that stepless, smooth analog wave when playing and reprocessing it.

It sounds to me like the pros in this discussion are already convinced of the benefits of hi res audio recording and going a step or two up as it is, which probably addresses most of the mixing issues for most projects. I was reading about AI creating a clearer image translation with a smaller file size, so it is possibly like what is happening in sound files.

1 Like

I don’t think anyone is going to refute the purpose of recording in high resolution. It is the mixing, mix rendering, resampling, and oversampling that everyone questions. Artificially converting a 44.1 recording to 192 can potentially lead to issues: the data is simply not there in the recorded file, and the computer is adding fake data to it. If not done properly, it can be very audible.
On the other hand, converting 48 to 192 is much safer: a clean 4× multiplier, with less chance of errors by the computer.
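The “clean ×4” point can be made precise by writing the resampling ratio as an exact fraction: an integer ratio means each output sample lands on a simple grid, while 44.1 → 192 leaves an awkward rational ratio that demands far more interpolation math:

```python
from fractions import Fraction

def resample_ratio(src_hz, dst_hz):
    """Exact conversion ratio between two sample rates."""
    return Fraction(dst_hz, src_hz)  # auto-reduced to lowest terms

print(resample_ratio(48_000, 192_000))   # 4       -> clean integer multiple
print(resample_ratio(44_100, 192_000))   # 640/147 -> awkward rational ratio
```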

Recording at a high bit depth allows for easier gain staging and makes room for more processing later down the road. But audio recording pros who know how to properly gain-stage can achieve the same results he mentions in his video regarding noise and clipping issues at much lower bit depths, even 16-bit recordings.

32-bit allows us to be a bit lazy. Does that make it a bit better? (Probably, but it comes at the cost of file size.)

1 Like

@FluteCafe, okay, here’s one that mentions your lazy statement, I think.

I don’t think it’s being lazy to want something fixed that used to take a skilled mind to monitor. Remember the days when digital recall was a battle cry for the implementation of a DAW?
I do! Lol. Nowadays, ya can’t live without it!

Mistakes will be made along the way for sure. The overall result will hopefully be better audio.
Not in the distant future, but right now.

Thanks.

That is definitely going to be the case soon, but I’m not sure I’m getting the point across.
This is the job of the recording engineers. They are the ones defining the sample rates and bit depths, not the mixing and mastering engineers.

The job of mastering and mixing engineers is to work with what they get. There is no real advantage in converting 44.1 recordings to three formats for your clients (they will never be able to tell the difference) unless they plan on doing more post-production work on them.

You could simply give them 32-bit/192 mixes; that is the most flexible for both video and audio today. But be advised that you aren’t adding any additional sound quality or fidelity to those recordings during this conversion; it is simply easier math for computer chips.

1 Like

@FluteCafe ,
I totally try and stay away from 44.1/16

I do believe there is a sonic difference even when upconverting something from 48/24. I hear it in the versions I do.

There is a mask that is lifted by not mixing down again at the same lateral rate. I’m sure there are people who say there is no change by doing so, and this is where we are at.

There are anomalies inherent in lower-resolution files. We all agree on that. So why exacerbate them by utilizing the very same rates, when higher resolutions lift those hash signals out of the spectrum?

It may not be perceived by the average person listening on the couch, but it is measurable, for what it’s worth. Thanks.

Now, to be clear, even though I am currently working in the 48/24 and 192/32 frame of reference, my goal is to one day work in that DSD audio format.

Did you know that some Rolling Stones CDs were layered with a DSD (SACD) as well as a regular 44.1 layer? Yup!

If you have one of those discs, you could run it through a Blu-ray player that has SACD capability and listen to the DSD quality coming off the HDMI output into your DAC. Cool, huh!

1 Like

More tomorrow, but these Paul McGowan videos are worth watching. Thanks

Ha, that could apply to so many human endeavors!

I do find the technical aspects fascinating, and try to understand them.
Once upon a time, I did find FLAC and Neil Young’s PONO player proposal interesting.
My impression when I saw this thread, and the opposing opinions, was how some of us looked at this going back 10+ years on this forum and the previous one (that inspired this one).
Back in the early 2010s, there was a big debate about “expensive gear”, with its prodigious claims, flowery language, and high prices, versus the newer inexpensive options for home recording.
A few consensus opinions (more or less) evolved, such as this:

  1. High end gear and/or processing options might improve audio specs by 1-5%, at a cost sometimes 10X the inexpensive gear.
  2. For high-end studios and clients, those costs may make sense from an ROI standpoint, whether actual or perceived improvement in sound. Basically, if paying clients appreciate your high-end gear, you may be able to justify it.
  3. For the home recording enthusiast, or project studio, it all depends on cost versus results. Why spend big bucks on something that won’t make a perceived difference in your circle?
  4. The realization that most of the listening audience (consumers) is now using laptops and earbuds, or other relatively lo-fi audio playback. Do they really care about audiophile concerns?

There’s the technical side and the emotional side. I appreciate both, but tend to see the emotional side as the most compelling human element. People used to ‘jam’ to AM car radios back in the 1950s. It was huge. The Beatles did amazing things with 4-track mono/stereo recordings. Now you can do hundreds of tracks in DAWs, but music may not be better now than then.

Does any of the high-end technical stuff correspond to your extensive live-sound experience? Sure, modern live-sound PA arrays and subwoofers far exceed what was probably used at Woodstock over 50 years ago, but does the higher tech make people enjoy the music more? Just a question.

1 Like