
The Loudness Wars and what I've learned



The first time I listened to Rush's Vapor Trails in 2002, I couldn't shake the sense that Rush were answering the critics who had belittled their musical ambitions early in their career. During the 1970s, Rush were known as a harsh, angry, rocking prog band that defied the industry in favor of their own direction. Their 1976 album 2112 was exactly that, and it became a staple in their discography that fans will never retire. Same with Moving Pictures. The latter is definitely a keystone rite of passage through which most new fans are introduced, whether by coercion or radio airplay. After all, Rush's biggest hit, Tom Sawyer, opens that album with a massive bang! Vapor Trails did this as well. Drummer Neil Peart (pronounced "Peert," for you drummers out there) had returned to the band and was getting on with his life after a massive family tragedy, so the opening of the album echoed his triumphant return. Every song displayed an anger toward the past and a lyrical weight never before heard on a Rush album. It had a lot to say, but the master was unfortunately over-cooked.


The album wasn't just loud. It was smashed to the core. Fans, especially those with some industry experience (myself included), could hear that it had been crushed in mastering. If you haven't heard the original master, the problems went beyond what you'd normally hear, even in today's crushed masters. In 2002, CD players were prone to digital clipping when a master was produced with a peak level of 0 dBFS. It didn't take long before complaints made the rounds on Rush fan pages. Some of the more astute studio nuts used tools like soft-clipper plugins to tame the master so it would play back nicely on older CD players. Howie Weinberg, who mastered the first revision of Vapor Trails, has said he likes to master loud when the material warrants it, but back then, I doubt even he knew exactly how this digital clipping would affect the released album.
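To make the clipping mechanics concrete, here's a minimal sketch (my own illustration, nothing to do with the Vapor Trails sessions) of why a master whose samples never exceed 0 dBFS can still clip on playback: oversampling approximates the waveform the DAC reconstructs, and on a squared-off signal that reconstruction overshoots full scale.

```python
# Minimal sketch: estimating inter-sample ("true") peaks by oversampling.
# Assumes NumPy and SciPy; the 997 Hz test tone and 4x factor are arbitrary choices.
import numpy as np
from scipy.signal import resample_poly

def sample_peak_db(x):
    """Peak of the stored samples, in dBFS."""
    return 20 * np.log10(np.max(np.abs(x)))

def true_peak_estimate_db(x, oversample=4):
    """Approximate reconstructed peak by upsampling, in dBFS."""
    return 20 * np.log10(np.max(np.abs(resample_poly(x, oversample, 1))))

sr = 44100
t = np.arange(sr) / sr
# A sine driven 3x over full scale and hard-clipped: every stored sample
# sits at or below 0 dBFS, but the reconstructed waveform rings above it.
crushed = np.clip(3.0 * np.sin(2 * np.pi * 997 * t), -1.0, 1.0)

print(f"sample peak: {sample_peak_db(crushed):+.2f} dBFS")
print(f"true peak (approx.): {true_peak_estimate_db(crushed):+.2f} dBFS")
```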


When I wore out Vapor Trails after a full week's worth of absorbing it, I began to notice odd headaches. It began to affect me like a cold air conditioner vent blowing directly onto my forehead for too long, or like ice-cream throat. Or, like when you ride for a long while on a roughly paved road (or gravel), and finally meet smooth pavement. I never realized the ear-fatigue I was experiencing until I stopped playback.


Now, remember, this was 2002: the floodgates of the CD loudness wars had finally opened, and the Waves L2 (hardware and plugin) was infiltrating mastering studios worldwide. Around this time, CDs were getting louder and louder thanks to heavier limiting. I sensed this trend stemmed largely from the massive success of Metallica's Black Album, Oasis' (What's the Story) Morning Glory?, and Nirvana's Nevermind. In fact, I believe Oasis were the crux of the realization that loud means 'terrible', but the industry wasn't interested in reining this in. Metallica's Death Magnetic later garnered the same complaints of being too loud and crushed.


I won't attempt to regurgitate the actual history of the loudness wars, because, really, not everyone agrees on when and how it started. Many argue it started in the 1960s with vinyl, when AM stations competed using more powerful transmission and harder limiting. Then, in the 1980s, the higher fidelity of FM radio gave way to even louder airplay. While albums released in the 1980s weren't heavily limited, there was a trend of mastering them so they would play louder on FM radio.


Through the 1990s and 2000s, at the height of the CD market, many people owned CD changers, and switching from one CD to another took less than 10-15 seconds. That meant, without adjusting the volume (because we've gotten lazy), a louder CD had more perceived impact. Needless to say, the industry was twice-baking the music in ways never done before: hard limiting, EQ-ing in more mids, adding distortion and saturation, and truncating transients. All of this meant the overall loudness of the CD mattered more than its sonic fidelity. "Ear fatigue" became a term used more and more widely among listeners and in reviews.
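As a toy illustration of why that recipe "works" on a meter (my own sketch, not any particular mastering chain), here's what happens when you keep driving a signal into a hard clipper: the sample peak never exceeds 0 dBFS, but the average level keeps climbing while the transients get flattened.

```python
# Toy example: hard clipping raises average (RMS) level without raising peak.
# RMS is only a rough stand-in for perceived loudness, but the trend holds.
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

sr = 44100
t = np.arange(sr) / sr
mix = 0.5 * np.sin(2 * np.pi * 110 * t)    # a sustained "mix"
mix[::4410] += 0.5                         # spiky transients every 0.1 s
mix /= np.max(np.abs(mix))                 # peak-normalized to 0 dBFS

for drive_db in (0, 3, 6, 9):
    drive = 10 ** (drive_db / 20)
    crushed = np.clip(mix * drive, -1.0, 1.0)   # brickwall by clipping
    print(f"drive +{drive_db} dB -> peak 0 dBFS, RMS {rms_db(crushed):+.1f} dB")
```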


Hopefully, we've seen the end


There are a number of audiophiles and mastering engineers who acknowledge the mistakes of the past, and the industry is now steering away from super loudness. In fact, the television broadcast industry now has regulations that dictate how loud productions can be, with penalties or fines for non-compliance. The audio industry isn't quite there yet, but I sense it's coming.


With the help of companies like Apple, Spotify, and YouTube, loudness has become less of an issue for the listener, as nearly all material is normalized to a level playing field on these streaming platforms. In fact, many clients now aren't striving for stupid-loud masters, but rather GREAT masters. Over 90% of my clients still enjoy a loud CD master, but for streaming, they just want a master that sounds amazing. Loudness is not an issue. Not a single client thus far has complained about how loud their music plays on streaming platforms.
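For the curious, the normalization itself isn't mysterious. Here's a rough sketch of the idea using the open-source pyloudnorm package; the file name and the -14 LUFS reference level are my own assumptions for illustration, not any platform's exact pipeline. The platform measures a track's integrated loudness and applies a static gain offset so everything plays back in the same ballpark.

```python
# Sketch of streaming-style loudness normalization (illustrative only).
# Requires: pip install soundfile pyloudnorm
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0   # commonly cited streaming ballpark; an assumption here

data, rate = sf.read("master.wav")            # hypothetical master file
meter = pyln.Meter(rate)                      # BS.1770-style loudness meter
measured = meter.integrated_loudness(data)    # e.g. -6.2 LUFS for a hot master

gain_db = TARGET_LUFS - measured              # negative for loud masters: turned DOWN
normalized = pyln.normalize.loudness(data, measured, TARGET_LUFS)

print(f"measured {measured:.1f} LUFS -> playback gain {gain_db:+.1f} dB")
```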


Still... this isn't stopping the industry yet. Very loud masters are still being pushed to streaming platforms, and many of them still sound louder, even though the platforms level them. How so? Creativity, to be sure, but for some, it's by ignoring best practices. As we know from history, whether building taller skyscrapers or pushing the limits of our space program, every boundary gets pushed at some point. I'm always impressed at how someone like Ted Jensen mastered Lamb of God's Checkmate or Gears from their self-titled album. It's heavy, clear, and punchy, but very loud compared to other metal albums, and it still sounds more controlled than, say, Rush's 2002 Vapor Trails. That's because the industry learns to push the limits further and further with ever-improving tools, and by learning from its mistakes.


I have mixed feelings about loudness. My CD masters are typically very loud, though less so if the music isn't "loud" by nature. I try to find the perfect balance between dynamic and loud on every master. When it comes to streaming, I'm always experimenting to find the right balance between dynamics and loudness, and I've discovered a few great tools that help, but the bottom line is that I don't rely on those tools to tell me when it's right. My ears tell me everything I need to know, but even then, emotion is a key element that overrides almost everything else.


I recently mastered a progressive-sounding metal instrumental for a great band who, just for fun, wanted a super-loud CD master to see how far we could push it. While metal can't compete with EDM for loudness on a meter, it can get quite loud if mastered carefully enough. Believe me, I've managed an EDM master at -0.1 dB LUFS loudness (not peak, loudness), and it still held together, though that isn't ideally what you'd want to release. The metal master landed at a loudness of roughly -4 dB LUFS and still held together nicely. I called it the Norma Jean master. The client was happy with the result. However, that was not the master sent for streaming; that one played at roughly -8 dB LUFS, 4 dB lower than the super-loud one. By the way, all of these played without digital clipping, thanks to lessons we've learned over the past 20 years about how not to clip a DAC (digital-to-analog converter). In fact, it's ironic: we sometimes deliberately clip converters going from analog to digital, but nobody wants the converter going from digital to analog to clip, EVER!
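If you do the back-of-the-envelope math under a normalized platform (assuming, say, a -14 LUFS reference, which is my assumption for illustration, not something any client specified), you can see why the super-loud master buys you nothing on streaming: it just gets turned down further.

```python
# Illustrative turndown math under a hypothetical -14 LUFS normalization target.
TARGET_LUFS = -14.0

for label, lufs in (("super-loud CD master", -4.0), ("streaming master", -8.0)):
    offset = TARGET_LUFS - lufs
    print(f"{label}: {lufs:.0f} LUFS -> platform applies {offset:+.0f} dB of gain")
# super-loud CD master: -4 LUFS -> platform applies -10 dB of gain
# streaming master: -8 LUFS -> platform applies -6 dB of gain
```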


So, why do it? Why try to see how much more we could squeeze out of it? It's not because a client wants to hit a target, and not always because a client simply wants to be louder than everyone else. It's because of the emotion behind it, and the laziness of managing a volume control. The client wants to try it out of fear of missing out on unrevealed detail, emotion, and "what if it's better?" That fear will never go away. And I want to be careful with the term "lazy": I don't mean people are lazy, but rather that setting a level and leaving it has become a much better way of listening. In the mastering world, a set monitoring level is something we routinely require to best evaluate each master, so that consistency is achieved from song to song. Our ears hear best at a certain volume level. Too low, and we miss critical details. Too loud, and our ears compress, which fools us into thinking something sounds better. Still, people prefer not to have to wrestle a volume control after every song.


So, the loudness wars continue


Yes, they will always continue. No technology, regulation, or human will ever end the fight over how loud music should sound. While technology and standards will regulate, humans will keep pushing the boundaries. It's what we do. The only hope we have is that listeners stop caring, and I mean EVERY listener. That's it. That's the only way the loudness wars will ever truly end. You can crush a metal tune for streaming only to have it turned down, but there will always be mix and mastering engineers studying the results to see how they can make them sound louder. Even if mastering a song to -25 dB LUFS turned out to produce the loudest, punchiest, most impactful playback on a streaming platform, engineers would study that and seek ways to make a -25 dB LUFS master sound louder than the others.









