Suppose you are sitting in your living room and suddenly everything in front of you changed to something else, say a view of the ocean. Wouldn’t you be startled? And yet, when we watch films, a cut from one scene to another also changes the entire field of view instantaneously, and it causes us no problems. Reports about the public viewing of the very first films suggest that this new thing did not trouble viewers at all. I wrote a few months ago about research by Jeffrey M. Zacks, a professor of psychology and radiology at Washington University in St. Louis, and others on why our brains are not disoriented when we watch films with even very rapid cuts.
He has now written an article that fills in more detail about why we are not confused by these rapid changes, even though on the surface they resemble nothing we experience in our everyday lives.
Movies are, for the most part, made up of short runs of continuous action, called shots, spliced together with cuts. With a cut, a filmmaker can instantaneously replace most of what is available in your visual field with completely different stuff. This is something that never happened in the 3.5 billion years or so that it took our visual systems to develop. You might think, then, that cuts would cause something of a disturbance when they first appeared. And yet nothing in contemporary reports suggests that they did.
What is going on here? Consider that our visual systems evolved over hundreds of millions of years, while film editing has been around only for a little more than 100 years. Despite this, new audiences appear to be able to assimilate splices on more or less the first try. I think the explanation is that, although we don’t think of our visual experience as being chopped up like a Paul Greengrass fight sequence, actually it is.
Simply put, visual perception is much jerkier than we realise. First, we blink. Blinks happen every couple of seconds, and when they do we are blind for a couple of tenths of a second. Second, we move our eyes. Want to have a little fun? Take a close-up selfie video of your eyeball while you watch a minute’s worth of a movie on your computer or TV. You’ll see your eyeball jerking around two or three times every second. It turns out that most of the eye movements we make are these jerky, ballistic movements called saccades. They take a little less than a tenth of a second and, while the eye is moving, the information that it is sending to your brain is pretty much garbage. Your brain has a nifty control mechanism that turns down the gain during these saccades so that you ignore the bad information. Between blinks and saccades, we are functionally blind about a third of our waking life.
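The "third of our waking life" figure follows from a little arithmetic on the numbers above. Here is a back-of-the-envelope sketch; the rates and durations are the rough figures from the text, not measured values, and they vary from person to person:

```python
# Rough figures from the text above (illustrative assumptions, not data):
BLINK_INTERVAL_S = 2.0      # a blink roughly every couple of seconds
BLINK_DURATION_S = 0.2      # blind for a couple of tenths of a second
SACCADES_PER_S = 3.0        # two or three saccades per second
SACCADE_DURATION_S = 0.08   # a little under a tenth of a second each

# Fraction of time lost to each kind of interruption
blink_fraction = BLINK_DURATION_S / BLINK_INTERVAL_S     # 0.10
saccade_fraction = SACCADES_PER_S * SACCADE_DURATION_S   # 0.24

print(f"functionally blind ≈ {blink_fraction + saccade_fraction:.0%}")
# prints: functionally blind ≈ 34%
```

With these assumed numbers the two effects add up to about a third, which is where the figure in the text comes from.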
So, the signal that our brains are getting about the visual world is not like a smooth camera-pan around the environment. It’s more like a jittery music video: a sequence of brief shots of little patches of the world, stitched together. We feel like we have a detailed, continuous, permanent representation of the visual details of our world, but what our visual system really delivers is a sequence of patchy pictures. Our brains do a lot of work to fill in the gaps, which can produce some pretty striking – and entertaining – errors of perception and memory.
So now I think we have a story about why our heads don’t explode when we watch movies. It’s not that we have learned how to deal with cuts. It’s certainly not that our brains have evolved biologically to deal with film – the timescale is way too short. Instead, film cuts work because they exploit the ways in which our visual systems evolved to work in the real world.
This model of how the brain operates explains why most of us are oblivious to continuity errors in films, which occur in large numbers despite the best efforts of the people whose job it is to catch them.
The technical term for the representation our brains stitch together is an event model, and a good event model captures the information about the scene that is important for guiding your behaviour and making predictions about what might happen next.
Our models are optimised to represent the information that is important for our comprehension of the activity. If the current shot has stuff that is inconsistent with what was in the last shot, we tend to go with what we currently see.
That makes good evolutionary sense, doesn’t it? If your memory conflicts with what is in front of your eyeballs, the chances are it is your memory that is at fault. So, most of the time your brain is stitching together a succession of views into a coherent event model, and it handles cuts the same way it handles disruptions such as blinks and saccades in the real world.
To me, the way our brains process information is one of the most fascinating areas of study.