for those who didn't read girglemirt's link on dynamic range compression, it is basically lessening the difference between loud and soft passages of music. i could see this happening in rock music for a couple reasons.
the first possibility is that bands simply aren't writing as dynamically as they used to. i don't think this would be the case with bands such as modest mouse, who i've heard are quite talented musicians (never got into them myself, so i'm going off hearsay), but there's a lot of stuff playing over and over on the radio these days that's pretty crappy, so i wouldn't be surprised.
the second possibility is, as david commented earlier, that the target audience for rock music generally isn't audioholics with hi-fi systems (although they may think their **** "surround sound" system in their car is hi-fi). the music will generally be played through headphones or in the car as they speed down the highway, which leads into my third possibility:
with the ever-growing market of digital distribution and mp3 player penetration, society at large is using headphones as its primary listening method. to keep the quiet passages loud enough to hear in noisy areas, and the loud passages from blowing your ears out when one suddenly arrives, more and more compression is added to keep the volume roughly constant. obviously this is going to have a bigger effect on music like rock, whose target audience overlaps heavily with the target audience of mp3 players, so i guess my second and third points sort of go together.
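to make the compression idea concrete, here's a rough python sketch of what a simple hard-knee downward compressor does to sample values. the function name, threshold, and ratio numbers are just made up for illustration, not how any real plugin works internally:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Toy hard-knee compressor: below the threshold, samples pass
    through unchanged; above it, the excess is divided by `ratio`,
    shrinking the gap between loud and quiet passages."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # only the portion above the threshold is reduced
            mag = threshold + (mag - threshold) / ratio
        # restore the original sign of the sample
        out.append(mag if s >= 0 else -mag)
    return out

# a quiet sample (0.2) is untouched, a loud one (0.9) is pulled
# down toward the threshold, so the dynamic range narrows
quiet_and_loud = compress([0.2, 0.9])
```

real compressors also smooth the gain change over time (attack/release) instead of acting per sample, but the squashing of peaks is the core idea.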
it'd be interesting to pick some bands who have been around for a while and listen straight through from their earliest album to their latest to see how the recording styles change over time with regard to the level of compression used.
Originally Posted by ebh
i was reading the wikipedia article on headphones (http://en.wikipedia.org/wiki/Headphones) and it mentions that a lot of people think that stereo and headphones don't really mix. of course others say the opposite.
i'm taking a physics of music class at the moment, and while a lot of it is a joke (100 level course for kids who know nothing...), we did talk a little about how stereo sound worked in the early days, which was kind of interesting. people initially created stereo by simply making a sound louder in one ear than the other, but there is a lot more to stereo imaging than volume levels (and timing, but i'm not going to talk about that). it turns out that the reflections of sound off your ears, shoulders, face, etc. actually boost or cut certain frequencies before they reach your eardrum, and depending on which direction the sound is coming from, different frequencies are affected. this is why we can tell the difference between a noise directly in front of us and one directly behind us. since everyone's body is different, these effects may be completely different for different people.
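the direction-dependent boosting and cutting described above can be modeled as filtering: convolve the sound with an impulse response measured for each direction. here's a tiny python sketch with made-up two-tap responses (the real measured responses are much longer and ear-specific):

```python
def fir_filter(signal, impulse_response):
    """Convolve a mono signal with an impulse response (plain FIR
    filtering). With a per-direction measured response, this is how
    direction-dependent frequency shaping can be simulated."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# hypothetical toy responses, purely illustrative:
front_ir = [1.0, -0.2]  # slight high-frequency emphasis, "in front"
back_ir = [0.6, 0.4]    # duller, smeared, as if bent around the head

click = [1.0, 0.0, 0.0]
from_front = fir_filter(click, front_ir)
from_back = fir_filter(click, back_ir)
```

filtering the same click with the two responses gives two differently-colored versions of it, which is (very roughly) the cue the brain uses to place a sound in front of or behind you.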
when this idea was first proposed, researchers tested it by building a fake human head and recording some music with a microphone placed on either side of it, where the ears would be, and then recording the same music with just two microphones and no fake head between them. what they found was that the stereo imaging sounded much more realistic in the recording made with the fake head between the microphones.
while there are sound mixing programs available that modify frequencies based on the common effects of this phenomenon to create stereo imaging, i'd be willing to bet there are plenty of sound engineers out there who don't use them, which may be why so many people feel that stereo and headphones don't mix well.
sorry this post was so long...
CMT-340SE2 Mains & Center, CBM-170SE Surrounds, Rythmik F15, Emotiva XMC-1, Emotiva XPA-5