Show your students awful movies

Standard

I would rate Tommy Wiseau’s The Room on par with Kubrick’s 2001: A Space Odyssey. And I mean that seriously. Obviously, I need to qualify that a bit: The Room is one of the worst movies ever made and 2001 is one of the best. But not only do I rank my personal enjoyment of them as equal, I think they are equally great opportunities for students to learn about visual storytelling.

Of course, in every classroom there is a finite amount of time. You can’t show them everything, so how do you pick? Before the rise of streaming services and the ready availability of media, I might have answered this question differently. However, the question is easy to answer now: show them bad movies. Show them the worst movies you can find.

Before Netflix, it was much less likely that students had a chance to see great movies. They’d probably heard of them, but getting to watch them was a different story. Selling the family on renting a complex, old movie versus the new Adam Sandler film on a Friday night was unlikely. But now there’s no real cost to watching whatever you want. Even if they haven’t seen Citizen Kane yet, they will. Why? Because now that they’re in college, they’re hanging around with other cinephiles and have access to all the greats. So let them do it on their own.

What they might not do on their own is explore the worst cinema has to offer: the strange, poorly conceived, horribly executed, and clumsy films that are lost to the annals of time. I mean, come on, they’re in school to learn how to make great visual storytelling media. They aren’t going to watch any Joe Don Baker film (other than, possibly, Walking Tall). And that’s a missed opportunity.

I want to make my classroom a place where students not only learn, but feel inspired and empowered. I’m sure it’s the same for any educator. My concern with showing my students great examples of cinema history is that they’ll be intimidated. Showing students Dr. Caligari implies that it’s the benchmark they need to reach in my classroom to be worth anything. Besides, they already have insecurity in spades.

A bad movie not only communicates what not to do clearly and repeatedly, but as students watch it they’ll inevitably think, “I can do better than Plan 9 From Outer Space!” And yes, yes they can. And if Plan 9 got made, then they have a shot too.

The Resilient Human Hypothesis

Standard

“Isolation” by xkcd

Are we trapped in our technology? Does media change what it means to be human? Is some new trend in media going to alter who we are? Do you ever yearn for the days when people could talk instead of [insert use of media here]? These are common concerns that have echoed throughout the world since (at least) Socrates. Before I go any further, I don’t mean to imply that these concerns are unfounded or totally incorrect. Clearly, media does change how we communicate: go back slightly more than two decades and you wouldn’t even be able to be bored by my blog, because the World Wide Web hadn’t been invented yet.


Randolph Scott went one step further and peaced out the day I was born.

The point being, I don’t think I need to convince anyone that media changes our society. What I am arguing, however, is that media doesn’t change what it means to be human.

There’s a fair amount of disagreement on what exactly humans need in our lives to be healthy and happy, but social contact with other humans is generally accepted as a fundamental part of our lives. So what gives? How can everything be different while nothing has changed?

Here’s an aside that illuminates the idea a bit: remember how good movies used to be in the days before Michael Bay and the junk we have now? I do too. But we’re wrong. It’s just that, over time, the junk gets forgotten and the good stuff is kept. Well, the junk is forgotten unless you’re a masochist and love bad movies. (Hmm, that might make for a good blog post…)


I do make her watch cheesy movies, the worst I can find. (La la la!)

Right, so bad movies are forgotten. But so are failed attempts to redefine how we communicate. This brings us to the first formal statement of the Resilient Human Hypothesis:

Communication technologies and mediums that fulfill human needs for communication are the ones that permeate society and last a long time.

This is hard to demonstrate thoroughly, since it amounts to proving a negative and not a whole lot of people are willing to share their utter failures with the world. But here’s an example: the chat room.

Yes, my friends, there was a time in the early days of the internet when strangers would join a shared text space and type words at each other in a real-time dialog. Chat rooms were popular for a while, but now they’re mostly relegated to a niche. What do we have instead? Chat rooms with people we know or who are accessible to our social circle. These are typically called group messages now instead of chat rooms. People in group messages are more real to us than strangers.


Ah, the glory days of the internet…(?)

So why the rise and fall of chat rooms? I’m sure there is more than one cause, but I’d be willing to bet that talking with strangers via text doesn’t quite scratch the “needs to be social” itch. It was the closest thing you could do back then, since group messaging a bunch of friends wasn’t possible and not everyone was on the internet yet (or on it as much). So it lasted as long as those circumstances lasted and then left the mainstream consciousness. Sure, we still communicate quasi-anonymously through spaces like reddit or tumblr. But usernames become recognizable as individuals, and it isn’t a real-time conversation like a chat room.

What does it all mean?

What I’m driving at is this: media works for us and not the other way around. We are too complex and too old of a species to be fundamentally changed by smartphones in just a few years. We have the same needs and desires as people from hundreds of years ago, so clearly the smartphone is serving us and not changing us on a fundamental level. And it isn’t just serving the individual, it’s serving the collection of individuals in our society.

Yes, the smartphone changes our environment in a litany of ways, but it is succeeding as a communication medium because it is scratching an itch to be social. We are still the same! And I would make an identical argument for any popular medium.

Let’s define music!

Standard

Goodness, I have written lots of words about music, but I’m not sure I have ever thoroughly defined what I mean by “music.” In this post you’ll find my definition, of course, but I want to clarify right up front that this may read as slightly antagonistic. In a sense it is meant to be, but ultimately it is about how to define music in the context of communication. I’m trying to push boundaries, not hurt feelings.

I don’t claim all of these thoughts as my own, but this may be a unique synthesis of standing ideas. I’ve also touched on some of these ideas in previous posts, but I wanted to put them all together.

Music describes a way of thinking about sound.

Music is a bit like the infamous Supreme Court ruling on pornography: it’s hard to define, but when you’re presented with an example, you recognize it immediately. Once you leave the very obvious examples behind, it gets hard to find the boundary between music and regular sound. That’s because music describes a way of thinking about sound, not a specific kind of sound.

I think the most famous example of pushing the boundaries of music in the western world might be John Cage’s 4’33”. A pianist sits down, prepares to play, then does nothing for 4 minutes and 33 seconds. Is that music? Well, Cage would certainly say so, but the audience in the music hall is split. Some say yes, some say no. Who is right?

I would argue that 4’33” in that example is definitively music, and here is why: the context. In his autobiography, Frank Zappa argued that context is key. He called it “putting a frame around it.” Let’s explore this a bit. The audience in my example above is at a music hall to hear music. A performer sits at an instrument, prepares to play, then plays silence for 4’33”. While it is certainly up to audience members to decide how much they enjoy the performance, they can’t really argue about whether or not music happened because the context clearly articulated that music happened.

Here’s another example: you’re walking in the woods alone, and you come to a clearing to find a pianist sitting at a piano. As you approach, she hops up and says, “Ah! I just finished my performance of 4’33”! What did you think?” Did you hear music for the last 4 minutes and 33 seconds? I don’t think so. There was no contextual clue to encourage you to think about sounds as music for the previous four and a half minutes. (Unless, of course, you just happened to be doing it of your own free will, but the odds of that are remote.)

Another way to think about it is the old paradox: don’t think about an elephant. It’s impossible to not think about an elephant when you are given this prompt. Similarly, the people in the music hall are thinking about music and thinking about sound as music. Even if they’re thinking “ugh, this is stupid, this isn’t music,” they are still thinking about sound as music.

Music is communication.

When we hear sound as music, we are interpreting and processing it. Music is inherently more vague in its meaning than language, but there is still meaning. Music has emotional impacts, triggers memories, and causes physiological responses. Language does all of these things, too.

I think a lot of people get hung up on the idea of “music is communication” because music isn’t specific or declarative. I agree wholly that music is non-specific and non-declarative. I can’t play you a tune on a recorder to ask you to get me a beer (I would if I could, though!). And if you ask 10 people to listen to the same song, they’ll each tell you something different when asked what it means.

However, language suffers some of the same faults. Has anyone ever misunderstood you? Or have you ever said something that came out wrong? Of course you have. Language is specific, but the interpretation is difficult. I think music suffers a somewhat similar fate: a composer can intend to convey a scene or a feeling, but different audience members will have different responses.

Also, I’m blogging right now. (Duh.) But why? Well, blogging has a certain set of affordances that other kinds of communication lack. I could say this out loud, but only the other people near my desk would hear me. And once I’ve said it, it’s gone forever. I could write a book, but that means people need to buy it to read my thoughts. I could write a poem, but my poetry is terrible. The point is that I’m writing this in blog form because it seems to be the best way for me to share these specific ideas in a way that I want to share them. Music is no different. I can express things that are difficult or impossible to express outside of music.

I think a more complete analysis of the affordances of music would be a swell thing to do, but here’s a short sketch: musical expression has no substitute mode of expression. I can’t accurately tell you about a piece of music, I can only approximate it in words. Information is lost when I talk about it compared to you experiencing it first hand. I think what is lost is the thrill and the emotion. Not only am I sharing words, but I’m sharing my interpretation of it. I’ve taken the experience out of it. It’s like baby food: the nutrition is there, but the experience of texture is lost in the processing.

Music is interesting.

Unlike language, music is inherently interesting. Language is designed to convey specific ideas. The goal is clarity and meeting expectations of normal patterns of communication. Sentences have at least a noun and a verb. Normal communication is utilitarian and functional. Musical communication is impressionistic and fanciful.

Part of the joy of listening to music is the blend of having your expectations met and defied in unexpected but carefully constructed ways. A piece of music establishes or implies a set of rules, but then defies those rules for your enjoyment. For example, a common thing to do in a pop song is to modulate up part of the way through the song. This defies expectations because the song has clearly established itself to exist in a given key, but then everything suddenly shifts upwards. The foundation the song was built on just got pushed upward a little bit. It’s startling, but it can be pleasant when done artfully. Another example is establishing a phrase (a pattern) by repeating the structure, but then unexpectedly stopping the pattern short. Again, this can be quite exhilarating and pleasant when done carefully. Imagine that happening in a conversation, though. Someone is talking to you and they just stop right in the

… Language doesn’t work that way, does it? Language is meant to inform and music is meant to challenge and entertain you, in a broad sense. Attempts to describe music in terms of musical forces (like physical forces) sometimes stumble because music does unexpected things. A thrown ball will always obey physical forces. In that sense, it is uninteresting. Music, however, will only sometimes obey musical forces and that’s part of the point.
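To make the key-change example concrete, here is a minimal sketch in Python. The chord spellings and the specific progression are my own toy choices, not anything from the post; the point is just that a modulation reuses the material the song already established, shifted upward.

    # A toy illustration of the pop-song modulation described above: the same
    # chord progression, shifted up a whole step partway through the song.
    # Chords are spelled as MIDI note numbers (60 = middle C).

    PROGRESSION = {
        "C": [60, 64, 67],
        "F": [65, 69, 72],
        "G": [67, 71, 74],
    }

    def transpose(chord, semitones):
        """Shift every note in a chord by the same number of semitones."""
        return [note + semitones for note in chord]

    # The song establishes its key by cycling through the progression...
    first_pass = [PROGRESSION[name] for name in ("C", "F", "G")]

    # ...then the foundation gets pushed upward: everything moves up two
    # semitones, defying the expectation the first pass established.
    second_pass = [transpose(PROGRESSION[name], 2) for name in ("C", "F", "G")]

    print("Original key:", first_pass)
    print("After the modulation:", second_pass)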

Music is important.

Music is a means of expression for both performers and listeners. It is therapeutic. Music helps build identity both for individuals and groups. These are concrete, real psychological benefits. Music helps us survive, and it helps shape societies.

And now, I think a brief explanation of what music is not would be useful.

Sheet music is a lie.

Sheet music is not music, nor is it an accurate representation of music. It is a shorthand expression and was a necessary means to preserve musical ideas in the era before recording audio was possible. It is a useful guide for memorization and performance. Systems that explicitly or implicitly rely on sheet music as if it were real music are faulty. Sheet music captures onsets and durations in an abstract and imperfect way, and makes little to no attempt to capture feeling.

Schenkerian analysis is a way to analyze music, but it is not the way.

Schenkerian analysis is a useful tool for analyzing music of a certain type when asking certain questions. However, since it is by far the dominant (heh) method of musical analysis, it is often applied to situations where it is not relevant or meaningful. Schenkerian analysis also presumes that sheet music is an accurate representation of music: it is performed on sheet music, not actual music. It also produces a tautological result: each piece of music can be reduced to simpler and simpler versions, eventually ending in a descending pattern of notes. On the surface, this is a stunning revelation about how music works, but the problem is that Schenkerian analysis demands this outcome.

When studying the psychological implications of music, it is important to ask questions about the music that most people actually experience.

Remember, music is a phenomenon that exists in the mind. It then follows that it is important to study the kinds of music found in most minds. And I think it’s safe to say that Schubert isn’t it. It’s time to roll up our sleeves and dig into the music of the now.

Music perception and cognition research largely limits itself to SERIOUS CLASSICAL MUSIC, and maybe jazz when feeling cheeky. This is a problem! And please don’t think I’m knocking serious classical music or jazz, or the study of this music. It’s very important and relevant, and I am grateful that people do it, because both forms of music profoundly influence our current popular music.

What I am advocating is that music be studied in a way that more closely reflects how most people actually experience it. Artificiality is a challenge in any line of research, but this stumbling block seems easy enough to avoid. The barriers to studying popular music are matters of institutional elitism, not practical problems.

Anyway, I hope you enjoyed this or at the very least found it provocative. I know it helped me a lot to codify all of these thoughts in one place, so I thank you for the indulgence.

The crux of the biscuit is the apostrophe: Mindfulness, Flow, and Dimensional Emotion Theory

Standard

Here’s a question that’s been bouncing around in my head for some time now: are mindfulness and flow related?

At first glance, it’s hard to see any relationship. Mindfulness is almost like a meditation exercise in which a person shifts their attention to being in the moment and avoiding distractions. It’s an extremely heightened sense of self. Flow is when you get lost in a task completely and your sense of self dissolves. Mindfulness is explicitly and actively sought; flow is an emergent property.

And yet I can’t shake this idea that they’re very, very similar. Here’s why:

There are several ways to conceptualize emotions, but the two main camps are “discrete emotions” and “dimensional emotion.”  Discrete emotions theory argues that we have some finite set of emotions that are unique from each other. Dimensional emotion theory argues that we label emotions, but in reality all emotions are related and can be described as existing in some kind of dimensional space.


Valence, arousal, and dominance represented by SAM. Also useful for a hyper-niche Halloween costume.

I fall into the dimensional emotion camp, and typically conceptualize emotions as existing on three dimensions: valence, arousal, and dominance. In most cases, dominance is ignored since valence and arousal have such profound explanatory power. This is a bit abstract so let me give some examples:

  • joy would be high valence, high arousal
  • rage would be low valence, high arousal
  • depression would be low valence, low arousal

Not to be confused with a high valance.

With these examples, I think you can see how we move around this dimensional space. “Negative” emotions are given a low valence score, “positive” emotions are given a high valence score. Emotions that are evocative of feeling energetic are given high arousal scores, and emotions that are evocative of a lack of energy are given low arousal scores.
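To make the coordinates concrete, here is a minimal sketch in Python. The numbers are illustrative guesses of my own on a 1-to-9 scale (the range SAM questionnaires typically use), not values from any normed dataset.

    # A toy mapping of the example emotions above onto the valence/arousal
    # plane. Coordinates are illustrative, not normed values.

    EMOTIONS = {
        #              (valence, arousal)
        "joy":         (8, 7),   # high valence, high arousal
        "rage":        (2, 8),   # low valence, high arousal
        "depression":  (2, 2),   # low valence, low arousal
    }

    def describe(emotion):
        valence, arousal = EMOTIONS[emotion]
        tone = "positive" if valence > 5 else "negative"
        energy = "energetic" if arousal > 5 else "low-energy"
        return f"{emotion}: {tone} (valence {valence}/9), {energy} (arousal {arousal}/9)"

    for name in EMOTIONS:
        print(describe(name))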

While this is mere conjecture, I would suggest that flow and mindfulness could both be placed similarly in the dimensional space: above-neutral valence, below-neutral arousal. First off, this is an odd space to be in to begin with: it’s hard to think of words for emotions that would be high valence but low arousal. In fact, a famous database of rigorously tested images used to induce reliable emotional responses (IAPS) doesn’t have anything in that category. Secondly, why would the same or similar emotional space be used to describe such subjectively different emotional experiences?

And thus we reach the crux of the biscuit: the apostrophe. The important part is the part that’s missing: dominance.

Dominance is a way to express who is in control: you or the emotion. Panic is low dominance because the emotion is controlling you, but anger is high dominance because you are cognitively engaged with the objet d’frustration. (These are clumsy definitions, but they’ll suit the purposes of this post. Just know that there’s plenty more to read on the topic.)

Again, conjecture, but it seems to me that a possible key difference between flow and mindfulness is to be found on the dominance dimension. In fact, I would even go so far as to suggest that mindfulness might be the most salient example of a highly dominant emotional experience, given that it’s the active manipulation and engagement with emotion. Flow, on the other hand, might be low on the dominance dimension because of the profound and signature loss of sense of self.

I’d love to test these hypotheses, but I haven’t quite figured out a way to do it yet (or at least, in a way that benefits me as a doctoral student studying media). I’ll keep thinking. If you have any thoughts, please let me know.

If psychology were easy, people wouldn’t write music about it.

“Well I’m not so well acquainted
With the topography of your mind
I need a detailed description
A representation of some kind”

 

Metaphors, music, and learning from the absurd

Standard

It finally happened. I think every graduate student gets one, and I got mine: a reading assigned for class that is completely blowing my mind. Steve Larson’s Musical Forces is provocative, funny, and controversial. Larson argues that, like the physical world, music has forces that govern (or, in the case of music, “influence” might be more appropriate) its motion through time. Music has forces similar to the physical forces because of the one thing common to every human: the experience of having a body and existing in the physical world. We base all of our knowledge in metaphors for the physical world. (Note the metaphors in that very sentence: “base,” “in,” etc.)

 

Larson even says he can quantify the musical forces. You’ll have to read it yourself to see if you agree. I have yet to make up my mind.

Anyway, time to pivot:


… says the pawn shop, without a hint of irony.

I’m finally starting to gain some perspective on what truly interests me and the conceptual continuity that connects all of my expression. From a personal perspective, I see little distinction between my identities as a scientist and a creative. Research, to me, is a fundamentally creative endeavor and despite the stereotypes about creative types, I think scientists and creatives face very similar problems:

 

  • What hasn’t been done yet?
  • How can I synthesize things that have been done to produce new things?
  • How do I know if it’s good?
  • When is it done?
  • What do I do with it when it’s done?
  • What value does this create?
  • What else could I have been doing if this fails?

The threads that I see more and more connecting these aspects of my life are all about levels of abstraction. Cast in another light, it might be described as metaphor in the same way that Hofstadter and Larson mean it: cross-domain mapping. (As well as allegory, which is intra-domain mapping). Now, before you recoil in horror at that jargon, let me clarify this idea a bit while also making it more opaque.

Cross-domain mapping is about making an association between two unrelated things. First of all, think of domains as categories. The classic example is “the legs of a chair.” Chairs don’t have legs. Not really. Animals have legs, and a chair is not an animal. We call those sturdy vertical protuberances on the bottom of a chair “legs” because their function and form are evocative of actual legs. An example of intra-domain mapping is something like saying “[song a] starts the same way as [song b].” They don’t literally start the same way, but we choose to relate them. Surely the notes played, arrangement, tempo, etc. might be highly, highly similar but they aren’t literally identical. Larson calls this kind of comparison “hearing as.” Going back to the legs of a chair, that would be an example of “seeing as.”

Right about now, if you’re still with me, you might be thinking, “Oh, well this isn’t so hard.” But there’s that sense of something lurking in the depths, isn’t there? A sense of unease. An ugly question rears its head: what exactly qualifies as a domain? The short answer is that there is no answer. There are big, obvious pairs that would be hard to argue belong to the same domain: cars vs dogs, South Indian cuisine vs Southern Indiana cuisine, blogs vs good sources of information, and so on. Got it? Good.

For your consideration, what is this pictured below?

[Image: the USS Enterprise, NCC-1701-A]

Depending on your individual knowledge, possible answers range from “that Star Wars thing” to “the Enterprise NCC-1701-A, a refit Constitution class cruiser, under command of Admiral James T. Kirk.” Now, given the disparity between those descriptions, and not even considering everything in between, can you see how it would be hard to define universal, concrete domains? Let’s go further. Is the ship below the same or different from the one above?

[Image: another USS Enterprise]

Very quickly, you’ve probably come to the conclusion that “it depends – it’s complicated.” You’d be right. Domain mapping gets complicated quickly because domains are highly context driven as well as individualized.

There’s good news, though. Metaphors and allegories can organize nicely into hierarchies depending on your level of analysis: human vs animal -> animal kingdom vs plant kingdom -> multicellular life vs single cellular life -> … Whatever the context or individualized knowledge you possess, we all have hierarchies of abstraction.


Inevitably, you end up with this trope.

And at least right now, that’s the thing that interests me: how do we, as humans, manipulate these hierarchies of abstraction to communicate effectively? Music, to me, is a primary example of this. I could orate, paint, or even write all I want to try and have you understand a piece of music and it wouldn’t matter one bit if you haven’t actually heard it. The music-ness of the abstraction of thought is part of communication itself, and it can’t be expressed in any other way. At least, I don’t think so.

Furthermore, when creating music, how do we manipulate levels of abstraction to communicate something? What does it mean to strum a guitar? When I’m working with my bandmates on a new song, what do we talk about and why? How does it influence what we play? And when assembling a song for dissemination as a piece of media, what does it mean to put the guitar in the mix one way or another?

Brian Eno talks at length about some absurdities he uses when working with other musicians to provoke and evoke certain moods, vibes, or styles of play. One of my favorites is Oblique Strategies, which was originally a deck of cards meant to be a guide through abstract ideas and commands when stuck on some sort of creative task. Follow that link, check out a few cards.

You draw a card and read it, then put it back down in a huff. What the hell does “Change nothing and continue with immaculate consistency” mean? Well, it’s up to you whether or not that prompt relates something meaningful to you. It’s a pointedly absurd way to provoke someone into thinking about different levels of abstraction, but nonetheless it’s a tool that people (myself included) swear by.

I don’t think there’s any one answer to any of the questions I’ve raised about manipulating levels of abstraction. I do think that if I constrain myself to one type of communication (recorded music), there are probably commonalities in what it means to experienced listeners on some basic level, since we have far more in common than not, grounded as we all are in the same physical reality.

Beauty and the Beast: is there any difference between listening to MP3 vs CD quality?

Standard

TL;DR: yes. But come on! There’s a bunch of graphs and some lame jokes if you actually read the post.

Preface

As I sit here at my desk, I am surrounded by audio equipment and CDs. Spotify is open right now (streaming quality set to “Extreme,” thank you very much). My favorite pair of headphones is within arm’s reach. My studio monitors are effortlessly reproducing a lovely Terry Riley piece. Clearly, I am spoiled. But wait, let’s rewind a moment: I’ve got a stack of CDs next to me, but I’m streaming compressed audio when I could be enjoying clean, uncompressed audio from my CDs? Why would I do that? (I also have a record player and a few choice vinyl records, but it’s an obviously inferior format to CD, so it’s not part of the comparison.)

I do it because it’s convenient. And there’s a massive amount of diversity on Spotify that simply isn’t legally accessible to me given my grad student budget. And I’m not alone: a whole heck of a lot of people in the US use streaming services. But all of them, save one, stream in what are called lossy formats. In fact, other than listening to a CD or vinyl, the music you listen to is probably in a lossy format. That means the previously uncompressed and pristine digital audio of a CD is reduced not just in file size, but in the information it contains. WAVs, by comparison, are lossless. It’s kind of bonkers to think about, but MP3s and other lossy formats throw away a LOT of sound. That’s partially why they’re so small. The goal, of course, is to only throw away things you can’t hear.
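For a sense of scale, here is the back-of-the-envelope arithmetic, assuming a hypothetical four-minute stereo song (my numbers, not from the study):

    # Rough size comparison between uncompressed CD-quality audio and a
    # 128 kbps MP3 for a four-minute stereo song.

    SAMPLE_RATE = 44_100   # samples per second (CD standard)
    BIT_DEPTH = 16         # bits per sample
    CHANNELS = 2           # stereo
    SONG_SECONDS = 4 * 60

    wav_bits_per_second = SAMPLE_RATE * BIT_DEPTH * CHANNELS  # about 1,411 kbps
    mp3_bits_per_second = 128_000                              # 128 kbps

    wav_megabytes = wav_bits_per_second * SONG_SECONDS / 8 / 1_000_000
    mp3_megabytes = mp3_bits_per_second * SONG_SECONDS / 8 / 1_000_000

    print(f"WAV: ~{wav_megabytes:.1f} MB")  # about 42 MB
    print(f"MP3: ~{mp3_megabytes:.1f} MB")  # about 3.8 MB

That factor-of-ten difference is exactly what throwing away the supposedly inaudible information buys you.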

It might sound kind of like science fiction (or the fantasy of scared parents of metal fans): unheard sounds in recordings? It’s true, though. In fact, our cognitive systems are really excellent at filtering out unwanted noise. It’s called the cocktail party effect. So why not automate the process and only save the parts that we hear anyway? It might not be that simple. I, along with a classmate and our advisor, decided to test if there was a difference in the subjective enjoyment of music listening between WAVs and MP3s.

The Experiment

We selected eight songs: four recorded before MP3s were even a glimmer in the Fraunhofer Institute’s eye, and four very recent songs. We did this because there’s an idea floating around in audio engineering and audiophile circles that, for example, the Beatles sound better on vinyl than on CD because the albums were recorded with the idiosyncrasies of vinyl in mind. The easiest way to control for this was to have two “early” songs and two “recent” songs as MP3s and another set of two and two as WAVs.

The Song List

  • Aretha Franklin – RESPECT
  • Michael Jackson – Thriller *
  • The Eagles – Hotel California
  • The Beatles – Help! *
  • Carly Rae Jepsen – Call Me Maybe
  • Sia – Chandelier *
  • Rihanna – We Found Love
  • Daft Punk – Get Lucky *

* = MP3, 128k, LAME encoder

Note: the oldest available CD mastering was used for the pre-MP3 songs to eliminate or reduce the chance that modern mastering techniques had been used to make them more MP3-friendly. For example, “Hotel California” was sourced from the original CD release in 1989.
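The post only specifies “MP3, 128k, LAME encoder,” not the exact invocation, so here is a minimal sketch of how a stimulus like that could be produced by driving the lame command-line encoder from Python. The file names are placeholders, not the actual study materials.

    # Hypothetical reproduction of an MP3 stimulus: encode a WAV file to a
    # 128 kbps MP3 with the LAME command-line encoder.

    import subprocess

    def encode_128k(wav_path: str, mp3_path: str) -> None:
        """Encode wav_path to a 128 kbps MP3 at mp3_path using lame."""
        subprocess.run(["lame", "-b", "128", wav_path, mp3_path], check=True)

    encode_128k("thriller.wav", "thriller_128k.mp3")  # placeholder file names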

We had people come in, put on headphones we provided, and listen to all eight songs, presented in a random order for each person. After each song, they rated how positive it made them feel, how negative it made them feel, and how much they enjoyed it. The reason we asked about positive and negative feelings separately is that we conceptualize them as activations of the appetitive and aversive systems, respectively. Those systems can activate separately or together.

Keep in mind, we told the participants nothing about the sound quality, MP3s or WAVs. As far as they knew, they just had to listen to 8 songs and respond to those 3 questions for each.
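The F statistics below all have 17 denominator degrees of freedom, which suggests roughly 18 listeners in a fully within-subjects design. As a sketch of how ratings like these could be analyzed, here is a repeated-measures ANOVA run on synthetic data with statsmodels; the column names and numbers are my own placeholders, not the real dataset.

    # A sketch of a repeated-measures ANOVA like the ones reported below,
    # run on synthetic ratings (NOT the real data).

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    rows = []
    for participant in range(18):          # F(1, 17) implies about 18 listeners
        for fmt in ("WAV", "MP3"):
            for era in ("early", "recent"):
                for _ in range(2):         # two songs per format-by-era cell
                    rows.append({
                        "participant": participant,
                        "format": fmt,
                        "era": era,
                        "negativity": rng.normal(3.0, 1.0),
                    })
    ratings = pd.DataFrame(rows)

    # Average the two songs in each cell, then test format, era, and their
    # interaction as within-subjects factors.
    result = AnovaRM(ratings, depvar="negativity", subject="participant",
                     within=["format", "era"], aggregate_func="mean").fit()
    print(result)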

Results

I instigated this experiment because I didn’t think there would be a difference. We ended up hypothesizing that there would be a difference between the formats, such that people would like WAVs more. But to be honest I was skeptical, even if I had a theory-driven rationalization as to why I thought it would come out this way. (More on that later.) I thought people might even prefer MP3s since our participants are young and have probably been listening to MP3s their whole lives, give or take.

[Figure: mean positivity ratings by format]

F(1, 17) = 2.162, p = 0.16

The graph above shows the mean positivity results by Format. It’s not statistically significant, but it is in the direction we predicted. Admittedly, this one result alone isn’t convincing. But wait — there’s more!

[Figure: mean negativity ratings by format]

F(1, 17) = 5.224, p < 0.05

And this is a prime example of why we split out positivity and negativity into two measurements: the negative scores are significant, and support our hypothesis that people would like MP3s less.

[Figure: mean enjoyment ratings by format]

F(1, 17) = 1.7, p = 0.21

Again, not statistically significant findings here but the data are trending in the direction we predicted.

[Figure: mean negativity ratings by format and song era]

F(1,17) = 5.285, p < 0.05

And here’s the kicker: people rated early era songs as MP3s more negatively than anything else. And this finding is statistically significant.

Discussion

So what gives? Well, it could be as simple as our participants just hating “Thriller” and “Help!” as songs. But more than they hated the Eagles’ “Hotel California”? I sincerely doubt it. But it is possible, I’ll admit that openly.

Here’s what I think went on, though: remember how I said that MP3s strip out a lot of information, most of which you can’t hear anyway? I bet that process is flawed. It clearly works very well, but I bet that it is imperfect and listening to MP3s is actually MORE work for your brain than uncompressed audio (like WAVs). Our minds are very lazy and, under most circumstances, seek the path of least resistance when hit with a task. If MP3s tax the cognitive systems more than WAVs because we need to actively fill in some of the missing gaps or work harder to do our usual filtering, then it seems logical that we would rate the experience more negatively.

Moving Forward

This study isn’t perfect. I would prefer to have run it with a counterbalanced design where some participants heard Song A as MP3 and others heard Song A as a WAV. That would help remove unwanted effects of the song itself. That, and while I have some ideas as to why these results came about, this experiment doesn’t prove or even directly support my ideas. I need more information before I can put that claim forward more strongly.

The good news is that we have a lot more research in the pipeline regarding audio compression and how it impacts the listening experience.

Huron, Pinker, and Western Lenses on Evolution of Music

Standard

Music psychology is a tricky field to work in. While human cultures the world ’round have what western musicologists would call music, there’s a fair amount of variation in what music “means” to each culture. This has led many astray into the endless void of relativism: if the meaning and role of music differ at least somewhat from culture to culture, then it’s impossible to identify universals in music or even suggest that music is an evolved ability. Not only is this an intellectual dead end, it lacks any explanatory power. (Isn’t the notion that cultures are relative an inherently Western view in and of itself? The mind doth boggle.)

As relativists attempt to peck away at music, so do some empiricists. Steven Pinker gained quite a bit of notoriety when he remarked that music was mere “auditory cheesecake.” Huron puts this a bit more nicely and suggests that music might be a function of NAPS, or NonAdaptive Pleasure Seeking. Roughly speaking, the idea is that music is not an evolved trait, per se, but emerges from other abilities and stimulates existing pleasure systems. Huron cites heroin and alcohol as other examples of NAPS. Goodness! I’m not convinced music is in the same category, but I think I get the idea.

Both Pinker and Huron point out that music doesn’t seem to directly aid in physical survival activities such as eating, sex, or seeking shelter. I think there are plenty of logical arguments to counter such a claim, but the point is that they’re just that: logical arguments, and post hoc reasoning at that. There is a lot of face validity to saying music aids in mate selection or food gathering, but these arguments are not falsifiable in any reasonable way. I will gladly concede this point to them: I’ve never sung a sandwich into existence.

But now I’d like to go back to the relativists that I was kicking around at the beginning of this post. Pinker and Huron seem to be thinking about survival in physical terms only, and this is a very Western view. Humans have physical and emotional needs, and those emotional needs shouldn’t be discounted so quickly.

Here’s a simple example: music can help treat depression. And people who suffer from depression face a much higher risk of death than people without it. This doesn’t require the logical juggling of arguments about mate selection or food gathering. Music directly benefits emotional and mental health, and that helps keep people alive. It’s just harder to see when we use our Western lenses, which devalue mental and emotional health.

Obviously, this is the briefest of overviews, but I hope it illustrates the point that even if music doesn’t help with basic physical survival needs, it can still help keep people alive. And to me, that’s a strong case for music having evolutionary value.