Back in 2015, there was this interesting phenomenon about people seeing a photograph of a dress and coming down sharply on two different sides of what colors the dress was, with some saying it was white and gold and others that it was blue and black.
This short video explains what is going on. The phenomenon shows dramatically that our perception of color is not due exclusively to the spectrum of light wavelengths reflected off the objects in the image and entering our eyes (and thus entirely objective); it also depends on how our brain processes sensory input, which in turn depends on factors such as the context in which the image is embedded, and thus has a subjective element as well.
Not sure if you saw the two-part Nova that this came from, Mano, but it was/is well worth the time spent. One of the better Novas that I have seen in the past few years. Two one-hour shows.
Interestingly, I have a certain amount of experience with this material from the perspective of sound and hearing (the series being biased toward sight). In my Science of Sound course, one of the things we focused on was sound pressure level (typically measured in dB SPL, with variants) versus the perception of said level, which we call loudness (measured in phons). I always had fun demonstrating how a sound at one SPL can be louder than a different sound that’s at a higher SPL (primarily depends on frequency). I was always careful to explain that loudness is correlated to SPL but they’re not the same thing. Loudness is our perception of SPL and is colored by the human auditory system. Pitch is also in this category. Most people think it is just a musical term for frequency, but again, they’re not identical.
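The phon-to-loudness relationship mentioned above can be sketched with Stevens' classic rule of thumb: perceived loudness in sones doubles for every 10-phon increase above the 40-phon reference. This is a minimal illustration, not an implementation of the full equal-loudness standard, and the function name is mine, not from any library.

```python
def sones_from_phons(phons):
    """Stevens' rule of thumb: 40 phons is defined as 1 sone,
    and loudness doubles for each 10-phon increase above that."""
    return 2 ** ((phons - 40) / 10)

# 40 phons is the 1-sone reference point
print(sones_from_phons(40))  # 1.0
# a 10-phon increase sounds roughly twice as loud
print(sones_from_phons(50))  # 2.0
```

Note that converting SPL to phons in the first place requires the frequency-dependent equal-loudness contours, which is exactly why a sound at a lower SPL can be louder than one at a higher SPL.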
For my engineering students, I always enjoyed talking about sight and how the three color receptors work together. Before I studied these things, it never made sense to me that combinations of red, green and blue could make another color. After all, if you pump three different frequencies into a loudspeaker, you don’t hear some new frequency or pitch that is unlike the originals. Instead, you hear all three (granted, they might combine into a chord, but a seasoned musician could tell you what the three pitches are). This brings up an interesting question that I would pose to my students: If an extraterrestrial were to come to earth and watch our TV, would they see what we see? Heck, color is sort of a contrived thing. I mean, there’s nothing inherently “red” or “blue” about a certain wavelength. That’s just how we describe our sensation of it. And that’s not all. What does it mean to say that a strawberry is sweet? It means that certain simple sugar molecules fit receptors in our mouth, which triggers a signal to our brain. The resulting sensation is labelled “sweet”. In simpler (non-human language) terms, that sensation tells us we should eat this thing (because it contains valuable simple carbohydrates that we need). So “sweet” just means that this thing contains simple sugars. That is, the sensation is the human perception that the berry has simple sugars in it. It’s a lot shorter to say that it’s sweet, though.
I will be going to a birthday party shortly. While there, I will have a piece of cake. I will enjoy the sensation that those sugar molecules make when they fit into the receptors in my mouth, triggering nerve impulses to my brain. YUM!
I should also mention that I enjoyed the bit about the brain halves controlling the muscles on the opposite side (e.g., right hemisphere controlling left hand). The “getting both hands to do different things” bit was particularly fun. I say this as a musician, specifically a drummer of some 50+ years. One of the things drummers strive for is “independence”, that is, the ability to get your limbs to do different things at the same time, and to do so smoothly and efficiently. Practice, practice, practice. I would’ve loved to see them do some of their tests on professional musicians to see how/if their brains are wired differently from non-musicians.
Mano Singham says
Thanks for the tip about the NOVA shows. I have queued it up to watch.
jimf: the bit about perception of sound is interesting to me because several years ago I noticed a hearing loss, mostly of higher frequencies, which was making it difficult for me to understand human speech. I could hear just fine, but I had trouble telling one word from another unless the person enunciated clearly.
I got some fairly expensive hearing aids in the hope of boosting the higher frequencies. I can understand human speech better as long as I’m not in a noisy environment; but if there’s, say, an air conditioner running, or even in a meeting with folks having side conversations, my subjective perception is a much lower signal-to-noise ratio for no net gain, or even sometimes a net loss.
Is there something I can say to my audiologist that will help with getting the hearing aids adjusted properly? For example, are you aware of any range of frequencies that critically affect the brain’s ability to decode speech? (I’m guessing not, but it probably doesn’t hurt to ask.)
You’re dealing with a well-known problem that still doesn’t have a perfect solution. It’s not just frequency range loss, although that is a typical outcome for aging ears. In my 20s, I could hear up to about 21.5 kHz (versus the typical 20 kHz). I was one of those people who could walk into a room and know that the TV was on even though I could not see it and the sound was off (I could hear the 15.75 kHz scanning frequency). These days, I doubt I could hear above 16 kHz, but nothing sounds different to me.

As we age, not only does the upper frequency limit drop, but we may also see an overall sensitivity drop and reductions in certain frequency bands. Think in terms of a 1/3-octave EQ where someone has lowered all of the levels by differing amounts. Your hearing is like a fingerprint in that it is unique to you. The better hearing aids can be programmed to smooth out those variations.

But that’s not all there is. Human hearing has an amazing dynamic range. In fact, it often exceeds that of standard lab electrical measurement devices. From the threshold of hearing up to the threshold of feeling/discomfort, you’re looking at a 120 dB range, or a pressure factor of 10^6. On top of this, the sense of loudness is “super logarithmic”: a doubling of power (3 dB) does not sound “twice as loud”. It takes around 8 to 10 dB (a power factor of 10) to do that. Further, that depends on the frequency being tested and just how high the SPL is. It’s very dynamic and not at all linear. I used to tell my students that a decent rule of thumb is that a 1 dB increase is a “just noticeable difference”, but at some levels with some frequencies, it might be 0.5 dB or 2 dB. It’s as if you have a dynamic level compressor after your ears. Those sorts of things are very hard to mimic or correct for if they are failing. In other words, the frequency adjustments are just part of it.
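The decibel arithmetic above can be checked directly: SPL is a pressure-ratio measure (20·log₁₀), while power uses 10·log₁₀, which is why 120 dB corresponds to a million-fold pressure ratio and a power doubling is about 3 dB. A quick sketch (function names are mine, for illustration only):

```python
import math

def db_spl_to_pressure_ratio(db):
    # SPL is defined on pressure: dB = 20 * log10(p / p_ref)
    return 10 ** (db / 20)

def power_ratio_to_db(ratio):
    # power/intensity uses dB = 10 * log10(P / P_ref)
    return 10 * math.log10(ratio)

# the 120 dB hearing range is a million-fold pressure ratio
print(db_spl_to_pressure_ratio(120))  # 1000000.0
# doubling the power adds about 3 dB...
print(round(power_ratio_to_db(2), 2))  # 3.01
# ...while the roughly "twice as loud" step of +10 dB is ten times the power
print(power_ratio_to_db(10))  # 10.0
```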
All I can really say is that the upper midrange, say from 3 kHz to 5 kHz, is known to affect the ALcons (Articulation Loss of consonants). A bad ALcons (for example, being in a highly reverberant space) will make it hard to distinguish a “b” from a “d” from a “t”, and so forth. Boosting this area slightly can improve intelligibility but can also lead to listening fatigue (I’m sure your audiologist knows this). In fact, boosting this region is a trick recording engineers might use to increase the “attack” or “bite” of certain instruments such as the toms on a drum kit. Some loudspeaker manufacturers will boost this region to make their loudspeakers sound as if they have greater clarity (some would describe them as “forward” sounding). Again, there is the issue of listening fatigue and coloration of the instrument’s timbre.
Sorry I don’t have better news for you. At least modern hearing aids are a far cry from the units of the 1960s which were just simple amplifiers with barely adequate in-ear transducers.
For comparison, the human hearing system is leagues beyond our visual system in terms of frequency range and dynamic range. If our ears were like our eyes in that regard, a standard 88-key piano would only need the middle twelve keys, and it could only be played between mezzo-forte and forte. All other notes and levels would either be outside of our ability to hear them or would be dangerously loud (a middle C at triple forte probably would make you deaf).
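The piano analogy above works out numerically: hearing spans roughly ten octaves (20 Hz to 20 kHz), while visible light spans barely one octave of frequency. A back-of-the-envelope check, assuming approximate visible-light endpoints of about 400 to 790 THz:

```python
import math

def octaves(f_low, f_high):
    # number of octaves = log base 2 of the frequency ratio
    return math.log2(f_high / f_low)

hearing = octaves(20, 20_000)      # human hearing, about 10 octaves
vision = octaves(400e12, 790e12)   # visible light, just under 1 octave

print(round(hearing, 2))  # 9.97
print(round(vision, 2))   # 0.98
```

Since one octave on a piano is twelve keys, squeezing an 88-key instrument down to the visual system's single octave leaves roughly the middle twelve keys, as described.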
jimf: thanks for the detailed response. The phrase, “loss of consonants,” rings true. 😎
Mano: sorry to have derailed the thread with talk about sound rather than sight.
John Morales says
You didn’t derail.
Relax. Was informative.
Mano Singham says
It was not a derail. I would call it an interesting detour!
Another interesting thing about vision is that different parts of our retinae will adjust independently of others. I’m sure we’ve all had the experience of staring at some bright light, and when you look away you see a “ghost” afterimage because your retinae have adjusted, making the light parts darker and the darker parts lighter to compensate.
Likewise, if you stare at an orange square, then look at a white wall, there will be an afterimage of a green square.
You can have your own personal vision of Jesus by following these instructions:
Try it. 🙂
The thing I find amazing about our hearing is how we can concentrate on particular sounds when there are a lot of other sounds happening at the same time, say the speech of the person we are chatting with in a crowded, noisy room. Except in my case I can no longer do that particular one easily, though I still can do others. I have definitely lost some of my upper range: I used to be able to hear the pings from the bats that lived in our side passage, but that went maybe a couple of decades ago. I should get my ears tested, as not always being able to hear what people say is a problem now.
In my case, I can hear what people are saying, but I can’t understand them unless they enunciate clearly. I think what jimf described in the second paragraph at #5 is my problem. I hope to get an appointment with my audiologist before I leave on a trip in about a week and a half and see if we can mitigate that.
That’s called “the cocktail party problem”. It’s well known, and illustrates just how complex and adaptable human hearing is. A colleague of mine got his PhD studying that. If I recall correctly, he used Kalman filtering to clean up the signal (i.e., focus on one conversation out of many).
An interesting question is “Where did this ability come from?” Perhaps it came from being able to discern a dangerous sound (e.g., predator sneaking up) out of a background of wind noise, other animals, etc. I don’t know, but it certainly turns out to be useful when you’re around a bunch of chatting people.
Jim Balter says
I see pale blue (or more like bluish white) and gold … no black and no white.
Jim Balter says
You’re misusing the words “objective” and “subjective”.
This is odd. I’m seeing the same picture both ways!? At first it was gold and white, and only for a split second did it flash black and blue. However, when I tried to repeat the flash of black and blue, my perception of the colors stubbornly remained the former. That was at 3 PM.
Now, at 11 PM, it’s obstinately black and blue, from the same image source on the same device!
Holy moly, they reverted!
And yet again!? This is new.
It now seems a toss-up what the color combination will be when I look at the photo, then away for a few minutes, and back. I’ve tried this on three screens already. Additionally, whatever color combination I happen to perceive on one screen, it’s identical on the other two!
John Morales says
I do remember it, and I recall I experienced something akin to what antaresrichard did.