What ever happened to surround sound music?


This post is based on a presentation I gave at the inaugural Indiana University Media School Graduate Student Conference.

About 10 or so years ago, it seemed like there was a new game in town: surround sound music! Of course, those of you old enough can recall that this wasn’t the first time such a promise was made. But this time, by golly, it was going to work! And if you believe that, then I have a 3D TV to sell you, too. And yet, much like 3D TV, surround sound music seems like such a natural evolution. But time and again, it has failed to launch.

Back in my undergrad I took a course where, for one assignment, we had to produce a 5.1 mix of a project one of our peers had recorded. Even while explaining the assignment, the professor seemed doubtful that surround sound music would really take off and that this would be a relevant skill to build. It sure was fun to play with for the assignment, even if my mix was terrible.

As much as I try to avoid jargon, this post is going to have some. So, before I really dive in here, I’m going to hit you with some definitions:

  • Mono: one channel of audio information. It might come out of one speaker or several, but when every speaker plays the exact same signal, it’s mono.
  • Stereo: two channels of audio information.
  • Surround: more than two channels of audio, with some channels positioned so that their sound reaches the listener from the sides, behind, above, or below.

Also, I think to contextualize my argument properly, I need to give a (painfully!) brief history of recorded music, too.

  • Sheet music: circa 2000 BCE, cuneiform tablets had musical notation on them
  • Mechanical reproduction: circa 9th century (!!!), a hydro-powered organ that performed music etched into interchangeable cylinders by the Banū Mūsā brothers

A diagram of the hydro-powered organ.

  • Phonograph: 1877, Thomas Edison. Wax cylinders that could have audio waveforms etched into them and played back later.
    • Recorded live; recording and playback on one mechanism.
  • Disc phonograph: 1889, Emile Berliner. Platters instead of cylinders.
    • 33 ⅓ rpm (the LP): 1948, Columbia Records
  • (Practical) stereo sound: Bell Labs, 1937
  • Surround first attempted by Disney’s premiere of Fantasia in 1940
  • First “big” consumer format was Quadraphonic in the very early 1970s
    • Actually 3 competing and not cross-compatible formats
    • Could be done on tape or vinyl
    • CDs could hypothetically contain quadraphonic sound (it is allowed, but under-specified, in the “Red Book”), but this was never commercially attempted
  • Once the DVD and home theater setups became largely ubiquitous, DVD-A was attempted (among others)

OK, now to the good stuff!

The Case Against Stereo

It’s hard to imagine given its ubiquity and seemingly obvious design, but stereo music was not met with a resounding embrace. Perhaps most understandably, the public needed to be convinced that it was more than a mere gimmick. But even musical luminaries like Brian Wilson of the Beach Boys and Phil Spector, architect of the famous “Wall of Sound” production aesthetic (and, much later, a convicted murderer), spoke out against stereo. Spector thought stereo would take control away from the producer, and power away from his Wall of Sound. It was an issue of scale: the Wall of Sound didn’t seem to work in stereo.

Wilson’s concerns were similar, but centered on the fact that stereo required trusting the public to set up their systems correctly. If the speakers weren’t placed right, the stereo image would be strange and the balance between the left and right sides of the music would be bizarre, or at the very least transformative to the recording. Contrast that with a mono system: you just plug it in and turn it on. There’s nothing to calibrate.

To make matters worse, when companies were pushing stereo, they needed stereo records to sell. So lots of recordings that were designed for mono were reprocessed into stereo. Back to Spector’s concerns: these recordings were never conceptualized for stereo, and even on a well-set-up stereo system, the result is ultimately a perversion of what the recording was meant to be. Unsurprisingly, audiences had mixed reactions to these “stereoized” recordings.

Surround Sound: more channels = more music?

It was only about 10 or 15 years after the initial foray into stereo music that surround sound first came to the consumer market, in the form of quadraphonic sound: four speakers positioned around the perimeter of the room with the listener in the middle. Just think about the physical reality of that for a moment! A few years earlier, a mono system could be plunked down wherever convenient: no wires running every which way, it sounded pretty good in a large portion of the room, and it was cheap. Then came two speakers, but the sweet spot was still pretty large and the wires were at least limited to one side of the room. But quad? It required an entire room dedicated to listening to music, and you couldn’t stray far from the sweet spot and still have it sound “good.” Wires had to run the perimeter of the room, too. And then there was the cost of four speakers and the specialized playback systems. Yes, systems: there were several competing quad formats that were not cross-compatible. Yikes. Couple that with quad-ized recordings and it was a bit of a mess.

All of that aside, there is a certain parity between the mono-to-stereo move and the stereo-to-surround move. But one worked and the other didn’t. Why?

Affordances of the Medium

Every medium is unique: Van Gogh’s Starry Night rendered in watercolor would be a different work, because watercolor and oil do different things. The same applies to music formats: each has a unique set of strengths and weaknesses. Things tend to be most interesting, it seems to me, when artists leverage these affordances of the medium to create something that only works in that medium. The concerns about surround sound delivery are becoming less and less pronounced: modern headphones can emulate surround, and even home theater soundbars can fake it reasonably well. But where’s the music?

Starry Night as painted by Van Gogh.

A watercolor re-interpretation.

I think that it has to do, largely, with the fact that not many artists need (or want) a surround sound space to do their work. In the West, our music listening traditions are deeply rooted in musicians being collected together in one area and the audience paying attention to them. (It hasn’t always been this way, but it has been for a few hundred years for the most part.) With our two ears in any physical space, we will hear stereo sound. So between our cultural practices of music and our built-in stereo receiver, stereo music works nicely.

Let’s go back to Spector’s Wall of Sound. The Wall didn’t scale well to stereo because it was built on the idea that by layering many, many recordings of a single part together, Spector could create an all-encompassing assault of music. Splitting this into stereo meant he would need to double what were already some of the largest, most complex recording sessions around. It just couldn’t be done effectively. Now recall that surround at least doubles the channel count yet again.

What does surround sound music even sound like?

Ever listen to early stereo recordings? You might hear the drums all the way in the left, the bass all the way in the right, and so on, maybe with extra reverb added to fill the space. It was a bit extreme, but it was a necessity: those sources weren’t recorded to be stereo, so all the engineers could do was place individual mono signals in different spots in the mix. And by golly, if people are paying for stereo, let’s make sure they hear it! It was also due to limitations of early stereo recording consoles, where panning (placing things in the stereo field) was reduced to “L-C-R”: a three-way toggle for left, center, or right. But back to surround… what should go in those additional channels?

“This 5.1 mix of Megadeth is so going to be worth it.”

The answer to this question was similar to the answer for early stereo: grab elements from recordings conceptualized for stereo and distribute them across the additional channels. The result is an emaciated surround mix, spread thin around the room. Crucial pieces are excised and hung out in the periphery. Even worse, sounds from beside or behind the listener have very different psychological meanings than sounds from in front of you: on a fundamental, animal level, sounds from sources we can’t see are startling.

Another approach was to take a stereo recording and make it sound like you were listening in an idealized listening environment: some kind of emulation of a space. It’s an interesting idea, but there’s no way to account for what the listener’s room already sounds like. Once more, this is ultimately noise. The signal is the music!


I don’t mean to universalize. There are some wonderful examples of surround sound music out there, but it’s very niche. That’s because it necessitates the entire process of recording the music (if not the conceptualization of the music itself!) being done, from the ground up, for surround sound. And it’s hard. It’s very, very hard, because there is so little basis for comparison. Part of successful artistic endeavor is pushing against the boundaries of the possible; in surround sound, those boundaries are so much more distant than in stereo or mono that it’s hard to even find them. For these reasons, I think surround sound music will never leave the niche. But if the content is good, people will find ways to jump through the hoops to listen to it.

A Recommendation

Even though I’ve been dumping on surround sound music, I don’t want you to think that I dislike it or think it’s dumb. Far from it! It’s just hard to find examples of surround sound music that sound like they should be surround or that they are doing something that can only be done in surround. But those examples do exist, and I’d like to recommend one:

The Flaming Lips: Yoshimi Battles The Pink Robots 

I recommend this one in particular because it’s a reasonably well-known recording in its own right, but also because the stereo and surround versions allow for a compare-and-contrast: the ‘Lips didn’t just release a surround version of the stereo mixes; they made different versions of the songs, with different elements and different vibes. The Flaming Lips have long played around with surround sound, so it only seems fitting that they knocked this one out of the park. And despite its age, it still sounds like the future – and that’s what surround sound is all about, right?

MP3s don’t matter (until they do)


I’ve written before on some of the differences between MP3s and WAVs, specifically how MP3s seem to evoke more negativity than WAVs in a blind test. I don’t know about you, but I thought those results were interesting and weird. So I thought it made sense to zoom out and try to get a bigger picture of this phenomenon.

A logical first step was to ask: can people even hear the difference between WAVs and MP3s in their day-to-day lives? If so, in what circumstances? As the title implies, people generally can’t tell in most circumstances, but once the compression is heavy enough that they can, it is a very pronounced shift.

The Experiment

I made an online experiment, asking people to listen to 16 different pairs of song segments and select the one they thought sounded better. There were 4 levels of MP3 compression: 320k, 192k, 128k, and 64k.

‘Why those levels of compression?’ you might be wondering. Amazon and Tidal deliver at 320k, Spotify premium does 192k, YouTube does 128k, and Pandora’s free streaming is 64k.

For each pair, one version of the segment was a WAV and the other was an MP3. (See below for more detail.) I also asked for basic demographic information, how participants usually listen to music, and how they were listening during the experiment. For example, a lot of people use Spotify regularly for music listening on their phones, and a lot of people used their phones to do the experiment. Running the experiment online gave up a lot of control over how and where people listened, but the goal was to capture a realistic listening environment.
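To make that design concrete, here’s a minimal Python sketch of how the trial pairs might be assembled. Everything here is illustrative: the `build_trials` helper and the placeholder song names are my own, not the actual experiment code, but the structure matches the description (16 pairs, one WAV and one MP3 each, four songs per bitrate, order randomized).

```python
import random

# Bitrates tested, in kbps; four song segments were assigned to each.
BITRATES_KBPS = [320, 192, 128, 64]

def build_trials(songs_by_bitrate, rng):
    """Pair each song's WAV with its MP3 version, shuffling within each
    pair so the WAV isn't always presented first, then shuffle trial order."""
    trials = []
    for bitrate, songs in songs_by_bitrate.items():
        for song in songs:
            pair = [(song, "wav"), (song, f"mp3_{bitrate}k")]
            rng.shuffle(pair)   # randomize which version is heard first
            trials.append(pair)
    rng.shuffle(trials)         # randomize the order of the 16 trials
    return trials

# Hypothetical placeholder titles; the real song lists appear below.
songs_by_bitrate = {b: [f"song_{b}_{i}" for i in range(4)] for b in BITRATES_KBPS}
trials = build_trials(songs_by_bitrate, random.Random(0))
print(len(trials))  # prints 16
```

Randomizing which version plays first is an assumption on my part; a fixed order would invite position bias.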

The Songs

I selected songs that are generally considered to be good recordings capable of offering a kind of audiophile experience. Also, I tried to choose “brighter” sounding recordings because they are particularly susceptible to MP3 artifacts. The thought behind this was to maximize the chance for identification of sonic differences, because I was doubtful there would be any difference until a very high level of compression.

I also split the songs into eras: Pre and Post MP3. I thought that maybe music production techniques might change to accommodate the MP3 medium, and maybe MP3s would be easier to detect in recordings that were not conceived for the medium.

The Song List by Era

Pre MP3 (pre 1993):

  1. David Bowie – Golden Years (1999 remaster)
  2. NIN – Terrible Lie
  3. Cowboy Junkies – Sweet Jane
  4. U2 – With Or Without You
  5. Lou Reed – Underneath the Bottle
  6. Lou Reed & John Cale – Style It Takes
  7. Yes – You and I
  8. Pink Floyd – Time

Post MP3:

  1. Buena Vista Social Club – Chan Chan
  2. Lou Reed – Future Farmers of America
  3. Air – Tropical Disease
  4. David Bowie – Battle for Britain
  5. Squarepusher – Ultravisitor
  6. The Flaming Lips – Race for the Prize
  7. Daft Punk – Giving Life Back to Music
  8. Nick Cave & The Bad Seeds – Jesus Alone

The Song List by Compression Level


  1. Cowboy Junkies – Sweet Jane
  2. Lou Reed – Underneath the Bottle
  3. Squarepusher – Ultravisitor
  4. Daft Punk – Giving Life Back to Music


  1. David Bowie – Golden Years (1999 remaster)
  2. NIN – Terrible Lie
  3. The Flaming Lips – Race for the Prize
  4. Air – Tropical Disease


  1. U2 – With Or Without You
  2. Lou Reed & John Cale – Style It Takes
  3. Buena Vista Social Club – Chan Chan
  4. Nick Cave & The Bad Seeds – Jesus Alone


  1. Pink Floyd – Time
  2. Bowie – Battle for Britain
  3. Lou Reed – Future Farmers of America
  4. Yes – You and I

The Participants

I had a total of 17 participants complete the experiment (and one more complete part of the listening task), plus a whole lot of bogus entries from bots… sigh. Here’s some info on the real humans who did the experiment:

Pie charts summarizing participant demographics (options with 0 responses are not shown).

The full phrasing of one question was, “Which best describes your favorite way to listen to music that you have regular access to?” I didn’t want everyone to think back to that one time they heard a really nice stereo!

The question about audio training carried a clarification: “This includes informal or self-taught training. Examples of this include – but are not limited to – musicians, audio engineers, and audiophiles.”


Unfortunately, the sample size wasn’t big enough to do any interesting statistical analyses with this demographic info, but it’s still useful for understanding who created this data set.

The Results

Participants reliably selected the WAVs as higher fidelity when the MP3s were 64k (“reliably” meaning the preference was statistically significant under a binomial test). Other than that, there was no statistical difference.





11 to 57 in favor of WAV, p < 0.001
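For the curious, that result can be reproduced with an exact two-sided binomial test against chance (the 50/50 split you’d expect if participants were guessing). Here’s a plain-Python sketch; the function name is mine, and the counts are the 11-vs-57 split above:

```python
from math import comb

def binomial_p_two_sided(k, n):
    """Exact two-sided binomial test against chance (p = 0.5): the
    probability of a split at least this lopsided, in either direction."""
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 57 WAV picks vs. 11 MP3 picks on the 64k trials
p = binomial_p_two_sided(57, 68)
print(p < 0.001)  # prints True: far beyond chance
```

scipy.stats.binomtest does the same job; this version only needs the standard library.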

When I first looked at the Pre/Post MP3 comparison, I was flummoxed. There is a statistical difference in the Post MP3 category… favoring WAVs.


That’s pretty counter-intuitive. It would be like finding that people preferred listening to the Beatles on CD instead of vinyl. It just doesn’t make sense. Why would recordings sound worse in the hip new medium that everyone’s using?

They don’t. My categorization was clumsy. Yes, I selected 8 songs that were recorded after MP3s were invented, but what I didn’t consider is that the MP3 was not a cultural force until about a decade later, and not a force in the music industry until even later than that. So I went back to the Post MP3 category and split it again.

Figuring out exactly when the MP3 became a major force in the recording industry was a rabbit hole I didn’t want to go down, so I used a proxy: Jonathan Sterne, a scholar who studies recording technology, published an article in 2006 discussing the MP3 as a cultural artifact. Using 2006 as the dividing line turned out to be fruitful, because none of the 8 songs in the Post MP3 category were released in or even near 2006: 5 came before and 3 after. When I analyzed those groups, there was a strong preference for WAV in the older recordings but not in the newest ones. This suggests that recordings made after a certain date are engineered to sound just as good as MP3s (of a certain quality) as they do as WAVs. Here’s the analysis:


25 to 60 in favor of WAV, p < 0.001



So, to sum up: in real-world situations, the WAV-vs-MP3 debate doesn’t matter for these participants UNTIL the compression gets extreme. And recordings designed for CDs rather than MP3s do sound better as WAVs than as MP3s, but the difference washes out for older recordings. If I had to guess, it’s because some of the limitations of the vinyl medium are similar to those of the MP3 (gasp! Heresy!), so recordings designed for vinyl work reasonably well as MP3s, too.

Show your students awful movies


I would rate Tommy Wiseau’s The Room on par with Kubrick’s 2001: A Space Odyssey. And I mean that seriously. Obviously, I need to qualify that a bit: The Room is one of the worst movies ever made and 2001 is one of the best. But not only do I rank my personal enjoyment of them as equal, I think they are equally great opportunities for students to learn about visual storytelling.

Of course, in every classroom there is a finite amount of time. You can’t show them everything, so how do you pick? Before the rise of streaming services and the ready availability of media, I might have answered this question differently. However, the question is easy to answer now: show them bad movies. Show them the worst movies you can find.

Before Netflix, it was much less likely that students had a chance to see great movies. They’d probably heard of them, but getting to watch them was a different story. Selling the family on renting a complex, old movie versus the new Adam Sandler film on a Friday night was unlikely. But now there’s no real cost to watching whatever you want. Even if they haven’t seen Citizen Kane yet, they will. Why? Because now that they’re in college, they’re hanging around with other cinephiles and have access to all the greats. So let them do it on their own.

What they might not do on their own is explore the worst cinema has to offer: the strange, poorly conceived, horribly executed, clumsy films that have been lost to time. I mean, come on, they’re in school to learn how to make great visual storytelling media. They aren’t going to watch any Joe Don Baker film (other than, possibly, Walking Tall). And that’s a missed opportunity.

I want to make my classroom a place where students not only learn, but feel inspired and empowered. I’m sure it’s the same for any educator. My concern with showing my students great examples of cinema history is that they’ll be intimidated. Showing students Dr. Caligari implies that that is the benchmark they need to reach in my classroom to be worth anything. Besides, they already have insecurity in spades.

A bad movie not only communicates what not to do clearly and repeatedly, but as students watch it, they’ll inevitably think, “I can do better than Plan 9 From Outer Space!” And yes, yes they can. And if Plan 9 got made, then they have a shot, too.

The Resilient Human Hypothesis


“Isolation” by xkcd

Are we trapped in our technology? Does media change what it means to be human? Is some new trend in media going to alter who we are? Do you ever yearn for the days when people could talk instead of [insert use of media here]? These are common concerns that have echoed through the world since (at least) Socrates. Before I go any further: I don’t mean to imply that these concerns are unfounded or totally incorrect. Clearly, media does change how we communicate. Go back slightly more than two decades and you wouldn’t even be able to be bored by my blog, because the World Wide Web hadn’t been invented yet.


Randolph Scott went one step further and peaced out the day I was born.

The point being, I don’t think I need to convince anyone that media changes our society. What I am arguing, however, is that media doesn’t change what it means to be human.

There’s a fair amount of disagreement on what exactly humans need in our lives to be healthy and happy, but social contact with other humans is generally accepted as a fundamental part of our lives. So what gives? How can everything be different, yet nothing changed?

Here’s an aside that illuminates the idea a bit: remember how good movies used to be in the days before Michael Bay and the junk we have now? I do too. But we’re wrong. It’s just that, over time, the junk gets forgotten and the good stuff is kept. Well, they are forgotten unless you’re a masochist and love bad movies. (Hmm, that might make for a good blog post…)


I do make her watch cheesy movies, the worst I can find. (La la la!)

Right, so bad movies are forgotten. But so are failed attempts to redefine how we communicate. This brings us to the first formal statement of the Resilient Human Hypothesis:

Communication technologies and mediums that fulfill human needs for communication are the ones that permeate society and last a long time.

This is hard to demonstrate thoroughly, since it’s a claim about what didn’t survive, and not a whole lot of people are willing to share their utter failures with the world. But here’s an example: the chat room.

Yes, my friends, there was a time in the early days of the internet when strangers would join a shared text space and type words at each other in a real-time dialog. Chat rooms were popular for a while, but they’ve mostly been relegated to a niche. What do we have instead? Chat rooms with people we know, or at least people accessible to our social circle; these are typically called group messages now. People in group messages are more real to us than strangers.


Ah, the glory days of the internet…(?)

So why the rise and fall of the chat room? I’m sure there is more than one cause, but I’d be willing to bet that talking with strangers via text doesn’t quite scratch the “need to be social” itch. It was the closest thing you could do back then, since group-messaging a bunch of friends wasn’t possible and not everyone was on the internet yet (or at least not as much). So it lasted as long as those circumstances lasted and then left the mainstream consciousness. Sure, we still communicate quasi-anonymously through spaces like reddit or tumblr, but usernames become recognizable as individuals, and it isn’t a real-time conversation like a chat room.

What does it all mean?

What I’m driving at is this: media works for us, not the other way around. We are too complex and too old a species to be fundamentally changed by smartphones in just a few years. We have the same needs and desires as people from hundreds of years ago, so clearly the smartphone is serving us, not changing us on a fundamental level. And it isn’t just serving the individual; it’s serving the collection of individuals in our society.

Yes, the smartphone changes our environment in a litany of ways, but it is succeeding as a communication medium because it is scratching an itch to be social. We are still the same! And I would make an identical argument for any popular medium.

Metaphors, music, and learning from the absurd


It finally happened. I think every graduate student gets one, and I got mine: a reading assigned for class that is completely blowing my mind. Steve Larson’s Musical Forces is provocative, funny, and controversial. Larson argues that, like the physical world, music has forces that govern (or, in the case of music, perhaps “influence”) its motion through time. Music’s forces are similar to physical forces because of the one thing common to every human: the experience of having a body and existing in the physical world. We base all of our knowledge in metaphors for the physical world. (Notice the metaphors even in that sentence: “base,” “in,” and so on.)


Larson even says he can quantify the musical forces. You’ll have to read it yourself to see if you agree. I have yet to make up my mind.

Anyway, time to pivot:


… says the pawn shop, without a hint of irony.

I’m finally starting to gain some perspective on what truly interests me and the conceptual continuity that connects all of my expression. From a personal perspective, I see little distinction between my identities as a scientist and a creative. Research, to me, is a fundamentally creative endeavor and despite the stereotypes about creative types, I think scientists and creatives face very similar problems:


  • What hasn’t been done yet?
  • How can I synthesize things that have been done to produce new things?
  • How do I know if it’s good?
  • When is it done?
  • What do I do with it when it’s done?
  • What value does this create?
  • What else could I have been doing if this fails?

The threads that I see more and more connecting these aspects of my life are all about levels of abstraction. Cast in another light, it might be described as metaphor in the same way that Hofstadter and Larson mean it: cross-domain mapping. (As well as allegory, which is intra-domain mapping). Now, before you recoil in horror at that jargon, let me clarify this idea a bit while also making it more opaque.

Cross-domain mapping is about making an association between two unrelated things. First of all, think of domains as categories. The classic example is “the legs of a chair.” Chairs don’t have legs. Not really. Animals have legs, and a chair is not an animal. We call those sturdy vertical protuberances on the bottom of a chair “legs” because their function and form are evocative of actual legs. An example of intra-domain mapping is something like saying “[song A] starts the same way as [song B].” They don’t literally start the same way, but we choose to relate them. Sure, the notes played, arrangement, tempo, etc. might be highly, highly similar, but they aren’t literally identical. Larson calls this kind of comparison “hearing as.” Going back to the legs of a chair, that would be an example of “seeing as.”

Right about now, if you’re still with me, you might be thinking, “oh, well this isn’t so hard.” But there’s that sense of something lurking in the depths, isn’t there? A sense of unease. An ugly question rears its head: what exactly qualifies as a domain? The short answer is that there is no answer. There are big, obvious distinctions that are easy to treat as separate domains: cars vs. dogs, South Indian cuisine vs. Southern Indiana cuisine, blogs vs. good sources of information, and so on. Got it? Good.

For your consideration, what is this pictured below?


Depending on your individual knowledge, possible answers range from “that Star Wars thing” to “the Enterprise NCC-1701-A, a refit Constitution class cruiser, under command of Admiral James T. Kirk.” Now, given the disparity between those descriptions, and not even considering everything in between, can you see how it would be hard to define universal, concrete domains? Let’s go further. Is the ship below the same or different from the one above?


Very quickly, you’ve probably come to the conclusion that “it depends – it’s complicated.” You’d be right. Domain mapping gets complicated quickly because domains are highly context driven as well as individualized.

There’s good news, though. Metaphors and allegories can organize nicely into hierarchies depending on your level of analysis: human vs animal -> animal kingdom vs plant kingdom -> multicellular life vs single cellular life -> … Whatever the context or individualized knowledge you possess, we all have hierarchies of abstraction.


Inevitably, you end up with this trope.

And at least right now, that’s the thing that interests me: how do we, as humans, manipulate these hierarchies of abstraction to communicate effectively? Music, to me, is a primary example of this. I could orate, paint, or even write all I want to try and have you understand a piece of music and it wouldn’t matter one bit if you haven’t actually heard it. The music-ness of the abstraction of thought is part of communication itself, and it can’t be expressed in any other way. At least, I don’t think so.

Furthermore: when creating music, how do we manipulate levels of abstraction to communicate something? What does it mean to strum a guitar? When I’m working with my bandmates on a new song, what do we talk about and why? How does it influence what we play? And when assembling a song for dissemination as a piece of media, what does it mean to put the guitar in the mix one way or another?

Brian Eno talks at length about some absurdities he uses when working with other musicians to provoke and evoke certain moods, vibes, or styles of play. One of my favorites is Oblique Strategies, which was originally a deck of cards meant to be a guide through abstract ideas and commands when stuck on some sort of creative task. Follow that link, check out a few cards.

You draw a card and read it, then put it back down in a huff. What the hell does “Change nothing and continue with immaculate consistency” mean? Well, it’s up to you whether or not that prompt relates something meaningful to you. It’s a pointedly absurd way to provoke someone into thinking at different levels of abstraction, but nonetheless it’s a tool that people (myself included) swear by.

I don’t think there’s any one answer to any of the questions I’ve raised about manipulating levels of abstraction. But I do think that if I constrain myself to one type of communication (recorded music), there are probably commonalities in what it means to experienced listeners on some basic level, since we have so much more in common than not, grounded as we are in the same physical reality.