How to sound good on Streaming Platforms

First off, I feel like writing today! It's the best way for me to get over my stage fright and lockdown anxiety, and it works.
And today's topic is loudness :sweat_smile:

We've all been hearing a lot of discussion revolving around the "loudness penalty" on streaming platforms.

While it's true that streaming platforms turn loud masters down, it's not all bad news. All in all, it gets our mixing juices flowing and gets us to focus more on the music, because the platforms are, in a way, giving everyone a common ground.

First things first: imo, don't take the loudness guidelines literally. Even the loudness meter designers say that.

The next step is understanding how loudness is measured.
The LUFS (a.k.a. LKFS) metering used by most streaming platforms is defined in ITU-R BS.1770, which boils down to this formula:

Loudness = -0.691 + 10 * log10( sum over channels of G_i * z_i )  [LKFS]

where z_i is the mean square of the K-weighted signal in channel i over a 400 ms measurement block, and G_i is a fixed per-channel weighting.

Meaning loudness is measured in roughly half-second chunks (400 milliseconds of audio), with a new window starting every 100 milliseconds. The more energy from peaks and transients lands in those overlapping blocks, and the more often that happens across your song, the louder your mix will be rated.
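To make that windowing concrete, here's a minimal Python sketch of the block measurement. It deliberately skips the K-weighting pre-filter, channel weights, and gating stages of the full BS.1770 spec (a library like pyloudnorm implements all of that properly); the function name and the mono-input assumption are mine.

```python
import numpy as np

def block_loudness(x, sr):
    """Rough BS.1770-style momentary loudness: 400 ms blocks, 100 ms hop.

    x  : mono float signal in [-1, 1]
    sr : sample rate in Hz
    NOTE: skips the K-weighting pre-filter and gating of the real spec;
    it only illustrates the overlapping-window scheme described above.
    """
    block = int(0.400 * sr)  # 400 ms analysis window
    hop = int(0.100 * sr)    # new window every 100 ms (75% overlap)
    levels = []
    for start in range(0, len(x) - block + 1, hop):
        z = np.mean(x[start:start + block] ** 2)          # mean square of block
        levels.append(-0.691 + 10 * np.log10(z + 1e-12))  # block loudness, LUFS
    return np.array(levels)
```

Notice that with a 100 ms hop, a single loud hit lands in up to four consecutive 400 ms blocks, which is exactly why dense overlapping transients push the reading up.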
Now that we know the formula, the next question is: how do we beat it, right?? Spoken like a true hacker lol

Half a second is an eternity in sound mixing. This is bad news for fast-tempo songs and good news for slow ones, but it doesn't mean we all start writing slow ballads :laughing: … what we can do is identify whether we are (or have been) overdoing some things in practice. For example, in a fast-tempo song, the decay of a loud kick or snare may last longer than 400 ms, overlapping with several other samples and adding up to a whopping penalty. The streaming platform will then just turn your mix down, even if it doesn't sound loud, and you can say goodbye to your quieter parts (if you have any!).

So how do we beat it? And do we need to?
Groove it!
Maybe we can consider a groove instead: hit hard in the first measure, a bit softer in the second, and so on. Then, when needed, we can thump those double kicks in quick bursts. If nothing else, it might make us better songwriters in general. When we groove our loud parts, it not only fools the algorithm, it can create more emotion too.
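For illustration only, here's a toy MIDI sketch of that groove, assuming the pretty_midi package; the tempo, velocities, and note lengths are arbitrary choices, not a rule.

```python
import pretty_midi

# Toy "hit hard, then softer" groove: a kick on every beat, with loud and
# softer measures alternating. All values here are placeholders to taste.
BPM = 120
beat = 60.0 / BPM
pm = pretty_midi.PrettyMIDI(initial_tempo=BPM)
drums = pretty_midi.Instrument(program=0, is_drum=True)

for measure in range(8):
    velocity = 112 if measure % 2 == 0 else 88  # hard bar, softer bar, repeat
    for b in range(4):
        t = (measure * 4 + b) * beat
        drums.notes.append(pretty_midi.Note(
            velocity=velocity, pitch=36,  # GM kick drum
            start=t, end=t + 0.1))

pm.instruments.append(drums)
pm.write("groove_kick.mid")  # hypothetical output file
```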

Move it!
Focus on the dynamics of the ear-sensitive mid-high range. The richer the dynamics in this range, the more movement your listener will feel during commutes or workouts, where the mind isn't really focused on grabbing the entire picture of the soundscape and the finer details.
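One way to put a rough number on that "movement" is the crest factor (peak vs. RMS) of a band-passed copy of the mix. A small sketch, assuming scipy; the 2-8 kHz band edges are my own stand-in for "ear-sensitive mid-highs", not any standard.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def midhigh_crest_db(x, sr, lo=2000.0, hi=8000.0):
    """Crest factor (dB) of the mid-high band of a mono signal x.

    Higher readings mean more dynamic movement left in that range.
    The 2-8 kHz default band is an assumed range, chosen for illustration.
    """
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfiltfilt(sos, x)  # zero-phase band-pass
    rms = np.sqrt(np.mean(band ** 2))
    peak = np.max(np.abs(band))
    return 20 * np.log10(peak / (rms + 1e-12))
```

Comparing this reading before and after heavy compression gives a quick sanity check on how much of that movement survived.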

Delay it!
In streamed versions, the data lost first is usually ambience, reverb, and stereo spread.
Even though YouTube on a PC can stream audio at a couple of hundred kbps, on most phones HD is turned off by default, and you're lucky to get 128 kbps of sound quality. That's where these shenanigans get lost.

That doesn't mean we stop using our nice reverbs; just consider using more dry delays, which aren't lost in the low-quality versions, for a similar effect. Then you can still top it off with a nice reverb… or, as CLA does it, with 10 different reverbs! :wink: Though CLA EchoSphere is a decent plugin.
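For the curious: a dry delay is just a few discrete echoes of the signal, which is why it tends to survive low bitrates better than a dense, noise-like reverb tail. A bare-bones sketch in Python (the delay time, mix, and feedback values are arbitrary starting points):

```python
import numpy as np

def slapback_delay(x, sr, delay_ms=120.0, mix=0.3, feedback=0.25):
    """Plain feedback delay on a mono signal: discrete dry echoes, no diffusion.

    Parameter defaults are illustrative starting points, not a recipe.
    """
    d = int(delay_ms / 1000.0 * sr)  # delay length in samples
    echoes = np.zeros(len(x))        # running echo signal
    y = x.astype(np.float64).copy()
    for n in range(d, len(x)):
        echoes[n] = x[n - d] + feedback * echoes[n - d]  # echo feeds itself
        y[n] += mix * echoes[n]
    return y
```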

Think it through!
Last but not least, let's focus on our goal as songwriter-producers:
We need to understand our audience and write music for them instead of getting bogged down in geeky details. Good music and a good arrangement will sound good even when heard in mono through a subpar, un-hyped mix.

Reference: ITU-R BS.1770-3 (International Telecommunication Union whitepaper defining the LUFS measurement)
R-REC-BS.1770-3-201208-S!!PDF-E.pdf (1.5 MB)


So I did some experiments. I needed a track that was screaming loud but featured some emotional harmonies, and what better candidate than one of @takka360's mixes, "Piece of Me"? It is a powerful piece indeed, and it became my guinea pig. Though the track sounds amazing as is, I wanted to create a streaming 'remix' while experimenting with some of the points I mentioned earlier.

The purpose of this remix is not to show what an ideal streaming mix should be, but to highlight that we don't need to get so bogged down in details about losing quality that we lose focus on what the song delivered to begin with. This remix is scaled down in quality on purpose; it is not an industry-standard 'polished mix' per se, but it highlights the subtle things we need to focus on for streaming (or any version, really). It is far from perfect and has obvious issues: I remixed it from an mp3 (on purpose) and downscaled it further to lose a bit more quality. But it focuses on rebalancing the 'stream-friendly' mid-high frequency band, where the more emotional harmonies and subtle dynamics tend to hide. While we don't have loudness on our side to force the mix on the listener, we do have a bit of the casual listener's sound psychology on our side.

I compensated for lost reverb with a wee bit of delay on the entire mix, compensated for loudness with a bit of saturation targeted around the vocal frequencies in the "screaming" parts, and buried the vocals slightly at the level of the loud guitars to trick the mind into thinking the scream is louder than it is.

I did about 4 or 5 versions with different EQs, including some really slammin' ones, but this one had the most emotion, the least slamming, and the lowest quality. Yet it outshone my other remixes.

Check around 3:12–3:45 for the track's most emotionally heightened, tear-jerking parts, where I did some subtle rebalancing of the harmonies using a stock EQ and a bit of vocal automation. The subtle automation was in the mid-highs, to bring out what I was trying to accomplish here.
In all my other mixes, this emotion was lost to overhyped treatments.

This experiment shows how critical the musical nuances are: even a lo-fi mix can hold its own in playlists next to polished mixes. Something to keep in mind in our workflow while we polish things and chase clarity here, punch there, making sure everything is heard, etc. While those things are important, it's more important to keep the focus on what the track delivers.

This remix uses a stock Cubase EQ and a slight stock reverb send, topped off with @bozmillar's Big Clipper.


Fantastic mix.


Thanks for the rundown.

I can see how the "groove it" and "move it" parts would work great; however, I have some reservations about "delay it", based on my own experience only. I have made a number of null tests of wav vs. mp3 versions of the same master, and what I'm hearing is mostly highs and side content. While it is true that reverb and ambience often live there, it is not always the case, and choosing to alter your favorite treatment just for the sake of optimizing the result after conversion to a lossy format sounds like a wild move to me.

Here's a quick null test I just made on a short extract of my last song. It is the 96 kHz/24-bit wav file against the lossy "normal"-quality stream (which I believe is 160 kbps Ogg Vorbis on the desktop app) from Spotify with normalization off. This is the result of the null test (of course you're hearing a compressed version of the null test, since the forum doesn't allow wav streaming, but it gives a pretty good idea of the difference nonetheless).
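For anyone who wants to reproduce this kind of null test in code, here's a rough sketch, assuming librosa for decoding and assuming the two files start in sync (lossy encoders often add a few milliseconds of padding, so in practice you may need to time-align them first). Filenames are placeholders.

```python
import numpy as np
import librosa

SR = 44100  # decode both versions at the same rate

# Placeholder filenames: the lossless master and its lossy counterpart.
wav, _ = librosa.load("master.wav", sr=SR, mono=False)
mp3, _ = librosa.load("master.mp3", sr=SR, mono=False)

# Trim to the shorter file and subtract; what remains is what the codec changed.
n = min(wav.shape[-1], mp3.shape[-1])
residual = wav[..., :n] - mp3[..., :n]  # assumes sample-aligned starts

peak_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-12)
print(f"null residual peak: {peak_db:.1f} dBFS")
```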

Here's the same extract in its original form, for the sake of hearing what the lossy conversion does to the music.

And while we’re at it, here’s a comparison between the loudness readings of the original file vs the streamed version (same settings).

Original

Streamed

And since we're talking about normalization, I also took measurements of the same song streamed from Spotify, this time in "very high" quality (which I believe is 320 kbps Ogg Vorbis) and normalized (at the "normal" level, which is -14 LUFS).

It's interesting to notice that Spotify actually does almost nothing to your music's dynamics and loudness if you're already within their targets in the first place.
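You can check where your own master sits before uploading. A small sketch, assuming the pyloudnorm and soundfile packages; the filename is a placeholder:

```python
import soundfile as sf
import pyloudnorm as pyln

TARGET = -14.0  # Spotify's "normal" normalization reference, in LUFS

data, rate = sf.read("my_master.wav")       # placeholder filename
meter = pyln.Meter(rate)                    # BS.1770 meter
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS

gain_db = TARGET - loudness  # gain a -14 LUFS normalizer would apply
print(f"integrated: {loudness:.1f} LUFS -> normalizer gain: {gain_db:+.1f} dB")
# Near-zero gain means the platform barely touches your master,
# which matches the observation above.
```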

Anyway, back to the "delay it" thing: what I mean is that while it is important to understand what lossy conversion and normalization do to your music, I don't believe we should go to such great lengths to optimize the end result. The algorithms should match our standards, not the other way around. Besides, who knows what the streaming standards will be in 2, 5, or 10 years from now? All of this is changing so quickly.

And based on my experience (and that of several mastering engineers I've talked to), as long as your music sounds good in the first place and you didn't go over the top with true peaks and dynamic-range reduction, the algorithms work pretty well. The opposite is also true: songs that have been too heavily limited suffer significant deterioration when streamed.


Since I was looking into this, I also took measurements of a few other songs to gain more insight into what Spotify does (and doesn't do) to our music. I had already done this a few times before but hadn't saved the data.

I did that with 3 songs, which I consider roughly representative of the "conservative", "standard/loud", and "too loud" categories. Of course, we're only comparing loudness (and dynamics) here, nothing else, and certainly nothing artistic. How loud we perceive a song to be is subjective; though LUFS does an immensely better job of estimating it than RMS, it is still far from perfect and doesn't always exactly match what we perceive. In this particular case, however, what I was seeing on my screen matched what I was hearing very closely.

The three songs are Twisting the Knife by me (conservative), Lean by Fytakyte aka @ColdRoomStudio (standard), and The Day That Never Comes by Metallica, from the infamous Death Magnetic album, obviously as the "too loud" representative.

Apologies Andrew, I know you've made it clear that discussions on loudness are not your jam; however, I believe this small experiment is interesting, even for you. I promise I won't drag you into a discussion on this topic again after this one, unless it is at your initiative.

I don't know how loud the original master was for Metallica. Both Andrew Scheps and Greg Fidelman got mixing credits, but interestingly enough, the mastering engineer's name does not appear on allmusic.com (I bet he had it removed after everyone started using the album as a reference for what NOT to do). Anyway, it was Ted Jensen.
Based on the info on Bandcamp, Andrew's song was mastered by Chris Graham, whom I know a little, since I was part of a training program he was involved in. Being a little familiar with the way he works, I'm guessing the master was around -9 LUFS or so (actually a bit hotter than what would be considered "standard", but again, this is a song I knew, and I wasn't going to search the internet for hours for something more "standard").
Finally, I mastered Twisting the Knife at -14.2 LUFS for Spotify (and did an even softer version for Apple Digital Masters), so that's the conservative one.

Short reminder in case you haven't read the previous posts: the following audio was recorded from the desktop Spotify app with a premium account, quality on "very high" (320 kbps Ogg Vorbis, I believe, which is virtually indistinguishable from the original wav) and normalization at the "normal" level (-14 LUFS).

First, a general look at the waveforms is pretty self-explanatory as to the dynamic content of the normalized songs (from top to bottom: most conservative to loudest):

Apart from the obviously spiky nature of the upper waveform, it is interesting to note that the song that was originally the loudest (Metallica's) ends up being the least loud on Spotify, at least visually. Let's see what the measurements have to say about it.

Twisting the Knife:

Lean:

The Day that Never Comes:

This confirms that Metallica's song is only peaking at -6.8 dB after normalization; that's how much it has to be turned down to end up at -14 LUFS integrated overall. Based on the amount of audible distortion on the transients and cymbals in this song, it is safe to assume the original master had true peaks well over 0 dBFS, so that's a pretty big difference. Twisting the Knife, on the other hand, is peaking at -0.8 dB streamed while the wav file peaks at -1 dB, so that's a tiny 0.2 dB change (a slight boost, in fact, consistent with the -14.2 LUFS master sitting just under Spotify's -14 target).

I have to be fair and acknowledge that the comparison is not entirely relevant: while Lean and The Day That Never Comes are full-on rock songs with an all-in arrangement throughout, almost 50% of Twisting the Knife is mostly acoustic with little to no drums, so we are not exactly comparing like with like. But since it is my song, I knew for a fact it had been mastered conservatively, and I wasn't going to spend hours investigating until I found another rock song matching the conservative criteria. If you know one, I'd be happy to include it in this test.

Now the real test: what are my ears telling me? Pretty much the same thing I'm seeing and reading. When I play similar sections of the three songs (loud sections with drums, bass, and vocals), Twisting the Knife is WAY louder than Metallica's song. The test is easy to do in a DAW with the three files on hand, where you can instantly switch from one to the next, but even playing them from Spotify (the desktop app, since the browser player doesn't normalize) you can tell how much softer Metallica's song is in its loud sections, even though you have to spend some time switching from one song to the next and fumbling about until you find the right section.

I know that a lot (if not most) of mastering engineers still produce loud masters, most of the time under client pressure, but things are slowly changing. As I mentioned in another post, it is estimated that roughly 90% of the music listened to through streaming is normalized. Uploading a loud master when you know it is going to end up no louder (often softer) than the rest, and with a diminished dynamic range, doesn't sound like the right thing to do, at least to me.


Hey! Thanks for the awesome writeup and research.

All good points. One thing to add:
It's not always about what streaming platforms do to the music; it's also about the data-plan settings on most devices. If the video isn't being streamed in HD, or data saver is turned on, a lot of good ambience information is lost. You can test this by streaming a video on Facebook or YouTube at a lower quality.
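If you want to audition this without a phone, one option is to bounce a low-bitrate preview of your mix. A sketch, assuming ffmpeg is installed and on the PATH; the 64 kbps figure is my stand-in for a throttled data-saver stream, not a documented platform setting.

```python
import subprocess

# Render a low-bitrate AAC preview to hear roughly what data-saver
# listeners get. Input/output names and the bitrate are placeholders.
subprocess.run(
    ["ffmpeg", "-y", "-i", "my_master.wav",
     "-codec:a", "aac", "-b:a", "64k", "preview_lofi.m4a"],
    check=True,
)
```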

It's good practice to use delay in combination with reverbs; doing this not only makes your mixes more robust to lossy streaming but also creates depth.

Right; however, people streaming music at low quality are still a minority overall, judging by the statistics. That is a good thing! And they are likely to become even fewer in the future with the advent of 5G (regardless of what one thinks about 5G, this topic is not about health).

Loads of interesting stats here for those interested: https://www.businessofapps.com/data/spotify-statistics/

Good info, thanks!
One thing to note is that Facebook is currently developing into a very strong streaming platform as well, especially for music videos. Facebook videos can now be monetized, and it's striking YouTube at the heart: as artist revenues dwindle on YouTube, they are starting to grow on Facebook because of the way Facebook presents its feeds. It's likely going to stay that way. The thing about Facebook videos is that they are not HD by default, and the sound quality of every video watched through the smartphone app is very low (unless you voluntarily change the setting, which hardly any casual user does, because it eats up data). There is a huge population of smartphone users on the FB app. I don't see Facebook changing that default very soon, even with 5G, as they are not the "big" video guys yet. The lo-fi option hasn't escaped us completely, and likely won't for a very long time.

Right. It's going to be a tough battle between YouTube, which has the best video tools, and Facebook, which has an overwhelming superiority in terms of audience.

I wish technological progress would primarily benefit the users in terms of quality but it’s not that simple…


What a great thread! Lots to chew on here. Thanks much Michelle and Jean-Marc!!
