Just an observation lol.
Feels like it. I can only speak for myself, but I feel like I’ve hit a bit of a roadblock. It’s hard to talk about audio when it’s the same 5 topics that come up over and over again, and nobody is asking those questions anymore because we don’t have new users coming in.
I don’t think it’s a bad thing. I’m grateful that various parts of the core group still check in from time to time. Not sure about everyone else, but for me personally there are only so many conversations you can have about a plugin or a DAW. Not that you’ve exhausted the possibilities of your software, more that you know how to make it do every single thing you require of it for the time being. There are just no questions left to ask that a 2-minute Google search can’t answer. It also seems that some of the experiences you have with tools, toys, or techniques start to feel too mundane to be worth sharing. I dunno.
I hit a point where sharing here what I was learning about sound design in college became so unrelated to music recording that it didn’t contribute anything meaningful to the group. Still happy to hang out, and if anyone needs help with stuff I’m glad to share anything I can.
Here’s a potential topic. I was a bit surprised when Warren Huart announced, probably 2 years ago, that the music/audio industry would soon be going to a 96k sample rate as a standard. I haven’t really heard much about it since, though. I believe the idea was that since technology keeps getting more robust and affordable, it was a no-brainer to jump to the next level. Presumably a bump in quality would be achieved, or you’d simply do it “because you can”. Related to that, would the increased resolution (oversampling) that plugins are capable of warrant increased sample and bit rates?
Or is all of this kind of silly, and no measurable improvement in marketable products and services would be achieved?
96k already is a standard, it’s just not default. You’d be hard pressed to run a studio that takes on industry projects without being able to work with 96kHz.
Now, if he’s talking about 96k as the standard for deliverable music, I think he’s not paying attention to the technology. As far as music goes, it’s all about data rates, not storage space. Nobody stores their music anymore, they stream it, and streaming companies are always going to push for the lowest bit rate possible. Unless somehow some marketing genius can convince millions of people that 96k will sound significantly better to the point where they’re willing to pay for it, that shift isn’t going to happen.
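To put some rough numbers on the “it’s all about data rates” point, here’s a quick back-of-the-envelope comparison of raw, uncompressed stereo PCM data rates (a simplification: real streaming services use lossy or lossless codecs at far lower rates, so these are just the upper bounds):

```python
def pcm_kbps(sample_rate_hz, bit_depth, channels=2):
    """Raw (uncompressed) PCM bit rate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

cd = pcm_kbps(44_100, 16)     # CD-quality delivery
hires = pcm_kbps(96_000, 24)  # "96k" high-resolution delivery

print(f"44.1 kHz / 16-bit stereo: {cd:.1f} kbps")
print(f"96 kHz / 24-bit stereo:   {hires:.1f} kbps")
print(f"ratio: {hires / cd:.2f}x")
```

So before any compression, a 96k/24-bit stream carries over three times the data of CD quality, which is exactly the cost a streaming service would have to be convinced to eat.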
Right, we’ve had the capability to record at 96k for quite some time. IIRC he was talking about recording at 96k as an industry standard, the big studios and big boy producers etc. I don’t know that he meant delivering the music at a different standard, though I got the impression that he thought it would be going in that direction eventually as a natural step.
As I understand it, there is a market for high resolution .wav files, it’s just not a big market. And maybe it never will be. Now those are most likely all downloads and not streaming. However, the 5G wireless technology will shift a lot of things, and current limitations on streaming could potentially become obsolete over time.
So I guess there are three criteria to look at: 1) Do higher sample and bit rates actually sound better (a noticeable upgrade in quality)? 2) Will new technology allow higher stream rates to perform as reliably as current streaming quality? And 3) Is it something people will pay for?
Other than streaming becoming the big new distribution platform, the digital technology standards we have now have been around for what … 20 years or so? It seems like at some point there will be some kind of change driven by technology and innovation. In media formats we’ve had roughly one transition per decade over the last 70 years or so - vinyl (mono), vinyl (stereo), 8-track tape, cassette tape, CD, MP3, streaming. As we approach 2020 it would seem like another shift is around the corner, and at least the digital technology standards would change/upgrade?
I think if people cared about audio quality, the first thing they would ask for is to get rid of that blasted watermarking that they put on all the songs. That stuff is orders of magnitude worse than any compression algorithm or sample rate reduction I’ve ever heard, and nobody complains about it. There are so many things we could do right now that are not a limitation of technology that we don’t do, I think it’s a little early to start predicting what sample rate people will use.
Are you talking about on streaming music services? I don’t want to sound ignorant, but I don’t think I’m aware of this.
A lot of the time, labels will give streaming services a watermarked version of a song. It has some funky noise going on in it that is very audible. It’s far more destructive to the song than any of the compression schemes. A lot of the time, the bad sound you get from streaming services is not the audio quality itself, it’s that watermark making the music sound awful. Once you notice it, it’s hard to not hear it.
Do you have any examples of this? You say people are not talking about it, and I haven’t heard about it either. Which streaming services, if you know? I don’t generally frequent them except YouTube. Is this to prevent piracy or something? I’m quite familiar with watermarking when sharing files for critique or review. We also had that conversation years ago on RecordingReview in regard to BTR. But for listening pleasure, yes that would be absolutely horrible.
I did find some info online now that I’m looking for it. Some old, some newer, like this one:
It maybe makes some sense on Soundcloud where people present their work for purchase in some cases, but not for other ones like Spotify or Apple Music subscriptions.
I’ll be gone all day tomorrow, but when I get back, I’ll try to find some examples. I hear it most in orchestral music. It’s sort of a warbly sound, but different from compression.
It’s an interesting dynamic… the willingness to trade quality for convenience. I wonder how many kids out there have never listened to ‘full quality’ audio (from vinyl, CD or a lossless format like FLAC)?
I admit that I listen to streamed audio for the convenience when at work or on-the-go. But when I’m at home and I want to listen to music (not just have it as background noise) I go straight to my collection of FLAC or pull out a CD (my vinyl collection hasn’t got much love lately). I definitely notice that when I go back to something with ‘full quality’ audio I have that “Woah, I almost forgot how GOOD this actually sounds” moment. It’s insidious how listening to sub-standard audio has become completely acceptable as the norm.
What do you listen to the most when you actually want to listen to music?
I’m guilty of using Spotify for casual listening. That being said, I signed up for Tidal recently and was really impressed with their HiFi/Master recording options… so for “better” listening and more critical listening I’ve moved over to Tidal (at least for things I don’t have in my own library).
I also recently have dipped my toes into the world of external DAC and headphone amplifiers… which makes listening to high quality audio that much more awesome!
When talk becomes redundant, creating with said audio is the answer.
Point of order: 96kHz is not ‘high resolution’.
High-resolution audio, also known as High-definition audio or HD audio, is a technical and marketing term used by some recorded-music retailers and high-fidelity sound reproduction equipment vendors. It refers to higher than 44.1 kHz sample rate and/or higher than 16-bit linear bit depth. It usually means 96 kHz (or even much higher), sometimes informally written as “96k”.
Yes, we all know what it is marketed as, but that does not make it so. 96kHz is increased headroom, and therefore a lower [than 44.1kHz] noise floor, but since the noise floor of 44.1kHz is undetectable by human hearing, to call 96kHz “high resolution” is at best a misnomer, and at worst salesroom snake oil.
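For reference, it’s bit depth (not sample rate) that sets the theoretical noise floor of linear PCM, via the usual approximation of roughly 6.02 dB per bit plus 1.76 dB; a quick sketch of that rule of thumb:

```python
def dynamic_range_db(bits):
    """Theoretical dynamic range of undithered linear PCM,
    using the standard ~6.02*N + 1.76 dB approximation."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
```

That puts 16-bit at roughly 98 dB and 24-bit at roughly 146 dB, which is why the 16-bit noise floor is already below audibility in normal listening conditions.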
Retracted. Drunken bullshit, sorry dudes. I am referring to bit depth here with my comments, not sample rate. I’ll come back when I’ve scoffed the obligatory curry and sobered up with an Alka Seltzer or two.
Boo. We got so close to being able to have a debate. Opportunity lost. You should have just stuck to your guns on this one and fought it to the end.
But, in all seriousness, you can get more dynamic range out of a high sample rate because you can use a heavier noise-shaping dither to increase the dynamic range in the audible band.
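As an illustrative sketch of the general technique (not any particular plugin’s algorithm): TPDF dither randomizes the quantization error, and a simple first-order error-feedback loop shapes that error toward high frequencies, where a higher sample rate gives you more room to dump it above the audible band.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_16bit(x, dither=True, noise_shape=True):
    """Quantize float samples in [-1, 1) to 16-bit steps, with TPDF
    dither and optional first-order noise shaping (error feedback)."""
    q = 1.0 / 32768.0            # one 16-bit quantization step (LSB)
    out = np.empty_like(x)
    err = 0.0                    # previous sample's quantization error
    for i, s in enumerate(x):
        if noise_shape:
            s = s - err          # feed last error back in (1st-order shaper)
        if dither:
            # TPDF dither: sum of two uniforms, +/- 1 LSB peak amplitude
            s = s + rng.uniform(-q/2, q/2) + rng.uniform(-q/2, q/2)
        y = np.round(s / q) * q  # snap to the 16-bit grid
        err = y - s              # error carried to the next sample
        out[i] = y
    return out
```

At 96 kHz the shaped noise can be pushed mostly above 20 kHz, so the in-band noise floor ends up lower than the plain 16-bit figure would suggest.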
So that sounds (no pun intended) like a benefit if that’s what you’re looking for. How would you recognize a heavier noise-shaping dither? Does the name of the dither algorithm give any indication, or would you have to look at some specs?
In my youth, I would have done so. I’m still a stubborn cantankerous git, but an older and wiser one who knows when to admit he’s wrong.