Wrestling with continuous response measures


Something I’ve wrestled with on more than one occasion is a phenomenon known as clipping in the audio world. It’s caused when the signal is too loud for the file. In a slightly more precise sense, there’s more information in the input signal than the medium can store. Check out this graphic:

(I’m not crazy about the source, but here it is)

So in audio, once you try to record something louder than the system can handle, the extreme ends get chopped off, like on the right side of the graphic. That’s gone forever. What gets stored is a very poor representation of the original. And here’s where this connects to continuous response measures: if a subject reaches the top of the scale, that’s clipping, and that’s loss of information.

I’ll use myself as an example here. In the past I’ve been a subject for a pre-study test on some stimuli. We were asked to indicate how much we were enjoying the clip we were watching. The first few clips were things like commercials, Maury, and Big Bang Theory. As you may imagine, they were all rated very low. (Bazinga!)

But next, Louis C.K. appears on the screen. Oh man, major jump up in enjoyment right there. This is my favorite comedian. Now I’m probably at like a 7.5/10 on the enjoyment scale. And now that I’m looking more closely, I notice this is from my favorite stand-up special of his. 9/10. And as luck would have it, he starts my favorite bit. I’ve now moved up to 10/10 on the scale of enjoyment because after suffering through Big Bang Theory and commercials, this is heaven.

But here’s the problem: I’m already at 10/10, and we’re only in the set-up of the joke. As the joke progresses and I’m enjoying it more in anticipation of, and then delivery of, the punchline, I can’t express that. I’m at 10/10 already. I’m clipping, and the researcher looks at my data and thinks I equally enjoyed the set-up and the delivery of the punchline. As you may imagine, I was laughing hysterically at the punchline, but only stifling giggles at the set-up. Why did I do that, as the subject? Why would I go to the maximum before the punchline? Because I can’t predict the future. All I can do is indicate an increase or decrease in enjoyment, and I don’t know what the future holds, so I can’t realistically say “gosh, no, I should hold out for the punchline.”
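The same toy sketch works for the rating dial. The enjoyment trajectory below is hypothetical (the 7.5 and 9 are from my story above; the values past 10 are what I *would* have reported if the dial allowed it):

```python
import numpy as np

# Hypothetical underlying enjoyment: set-up, then punchline.
# The subject's internal scale keeps climbing past the dial's maximum.
true_enjoyment = np.array([7.5, 9.0, 10.0, 12.0, 15.0])

# The dial tops out at 10, so everything above is recorded as 10.
recorded = np.minimum(true_enjoyment, 10.0)

# The last three readings are identical (10.0): the set-up and the
# punchline are indistinguishable in the recorded data.
print(recorded)
```

Two very different moments of the joke produce the same recorded value, which is exactly the loss the audio graphic shows.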

So some of you may be noting that once several subjects are run and the stimulus order is randomized, that’ll come out in the wash. And it sort of does, I’ll give you that. The overall impact on the dataset is minimized. But why not just use a system that can’t clip? Why mix in clipped, distorted data in the first place? Conducting studies is hard enough, so why stack the deck against yourself?

“Oh. So, like subliminal messages?”


That’s the question I often get asked when I begin to explain my research. It’s a fantastic question. I mean, how exciting would it be to actually develop subliminal messages? Ethically dubious (at best), but exciting nonetheless. No, my aims are much less grandiose than world domination.

That being said, audio-in-media research seems to be underrepresented as a whole. There’s probably a litany of reasons, ranging from lack of interest to the difficulty of understanding neurological responses to audio compared to visuals. The former I have in copious amounts; by the latter I am as baffled as anyone. But regardless of the difficulty, as every research area has its own challenges, there is presently a dearth of knowledge and I am working to help fill the gaps.

Generally, my current research projects all keep me in the realm of cognitive processing of media, using psychophysiological data to help express that processing. One project is looking at evolved vs. symbolic visual and aural communication. More on this one later as it develops. Another, which we’ve taken to calling “My Missing Bridge,” is about cognition of familiar songs that have been edited to remove most verses and – you guessed it – the bridge. “My Missing Bridge” has recently been accepted to the 2014 International Communication Association conference. Another study that’s gearing up is about reinterpreting the venerable Fletcher-Munson study through the lens of LC4MP. I’m really excited about this one, as it could lay some important groundwork for future study. Another that’s quickly growing legs is focused on generative music in video games: can a generative soundtrack increase flow? Aside from audio, I’m involved in a study collecting psychophysiology data on people viewing pornography with Dr. Bryant Paul, who has a great nickname.

So that’s what I’m up to right now. I’ll post updates as they become available.