This week we worked out our teaching schedules for next year, and it has been determined that next Fall I will teach cell biology and a section of our writing course, and in the Spring I will teach… evolution (a new course for me) and neurobiology (a course I haven’t taught in over five years). That is going to be painfully intense, possibly worse than this semester. I think the anticipation of stress is contributing to my insomnia.
It will be an interesting time, at any rate. I have some of the same complaints about the current status of neuroscience that Ed Yong describes.
But you would never have been able to predict the latter from the former. No matter how thoroughly you understood the physics of feathers, you could never have predicted a murmuration of starlings without first seeing it happen. So it is with the brain. As British neuroscientist David Marr wrote in 1982, “trying to understand perception by understanding neurons is like trying to understand a bird’s flight by studying only feathers. It just cannot be done.”
Oh, man, Marr was amazing. I could just spend the whole semester trying to puzzle out his work on color perception, which is a perfect example of complex processing emerging out of simple subunits, all figured out with elegant experiments. I went through his vision book years ago; it was bewilderingly complex.
A landmark study, published last year, beautifully illustrated his point using, of all things, retro video games. Eric Jonas and Konrad Kording examined the MOS 6502 microchip, which ran classics like Donkey Kong and Space Invaders, in the style of neuroscientists. Using the approaches that are common to brain science, they wondered if they could rediscover what they already knew about the chip—how its transistors and logic gates process information, and how they run simple games. And they utterly failed.
Wait! That’s perfect! I once knew the 6502 inside and out, writing code in assembler and even eventually being able to read machine code directly. I still have some old manuals from the 1970s stashed away somewhere. I wonder if the students would appreciate signing up for a course on how brains work and then spending the semester trying to figure out how an antique 8-bit chip works by attaching an oscilloscope to pin leads?
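If you want the flavor of what Jonas and Kording actually did, it’s easy to sketch. Here’s a toy circuit in Python (completely made up, three gates, nothing like a real 6502) where you can “lesion” individual gates and watch which behaviors break:

```python
# A toy "lesion study" in the spirit of Jonas & Kording; this is a
# made-up three-gate circuit, not a real 6502. The point is what
# knocking out single "transistors" does and doesn't tell you.
from itertools import product

def chip(a, b, c, lesion=None):
    """Three named gates; `lesion` forces one of them to output 0."""
    g1 = (a & b) if lesion != "g1" else 0      # an AND gate
    g2 = (b | c) if lesion != "g2" else 0      # an OR gate
    out = (g1 ^ g2) if lesion != "out" else 0  # XOR of the two
    return out

def behavior(lesion=None):
    """The chip's 'behavior': its full truth table over every input."""
    return tuple(chip(a, b, c, lesion) for a, b, c in product((0, 1), repeat=3))

healthy = behavior()
for gate in ("g1", "g2", "out"):
    changed = behavior(lesion=gate) != healthy
    print(f"lesion {gate}: {'behavior changes' if changed else 'no visible effect'}")
```

Every gate shows up as “necessary,” and none of the printouts tell you that g1 is an AND gate. That’s roughly the paper’s complaint in miniature: a lesion tells you a part matters for a behavior, not what the part computes.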
Even when I last taught it, that was the struggle. It was easy to give them the basics of membrane biophysics — it’s all math and chemistry — but the step from that to behavior was huge. If I just teach it from top down, beginning with behavior, it’s a psychology course, which is a subject so vast that we’d never get down to the cellular level. There is no in-between yet.
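The bottom level really is that compact, which is part of the trap. A minimal sketch of the Nernst equation, using rough textbook ion concentrations (the values are approximate):

```python
# Equilibrium (Nernst) potentials from rough textbook concentrations.
from math import log

R, F = 8.314, 96485.0   # gas constant (J/(mol*K)) and Faraday constant (C/mol)
T = 310.0               # about 37 degrees C, in kelvin

def nernst(z, out_mM, in_mM):
    """Equilibrium potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * log(out_mM / in_mM)

# Approximate mammalian concentrations, in mM.
print(f"E_K  = {nernst(+1, 5.0, 140.0) * 1000:+.0f} mV")   # about -89 mV
print(f"E_Na = {nernst(+1, 145.0, 12.0) * 1000:+.0f} mV")  # about +66 mV
```

A few lines like that get you resting potentials; they get you nowhere near a behaving animal.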
I have a year to fret about it. Who needs sleep anyway?
Giliell, professional cynic -Ilk- says
IIRC this is also one of the criticisms of neuroscience in gender studies: people measure something and declare it to explain a behaviour (and ground it in cold, hard biology, because apparently brains aren’t things that are formed during our lifetimes)
chrislawson says
Actually, there is an in-between. Neuroanatomy/medical neuroscience deals with the larger structures in the nervous system and it reaches both ways, down to the cellular level (the physiology of neurons affects the patterns of neural structures) and up to the behaviour level (different structures are associated with different functions, including behavioural) … but just because we have a field of science that looks at the layer between cells and behaviours doesn’t mean we can construct a complete model of human behaviour from the neuronal level up.
It’s the same issue with physicists who claim that all science rests on QM/particle theory. (Most of them are half-joking, fortunately.) They’re correct in the broad sense. And certainly any understanding of chemistry depends on understanding electron shells. But that doesn’t resolve every problem, even in principle. You can’t look at an enzyme’s protein sequence and know what it does, and we can’t solve this problem just by learning more about QM or protein folding. The behaviour of an enzyme depends on the other molecules in an organism. Which is why we can swallow penicillin with reasonable safety while it’s a deadly poison to many bacteria. And why cytochrome c works as an electron carrier when it sits in the mitochondrial intermembrane space but then acts as an apoptosis trigger if it moves into the cytosol.
komarov says
Oh absolutely, some would love that! They probably wouldn’t be biology students, though.
dancaban says
Your brain was a 6502 disassembler? Outstanding! Loved the 6502, less so the Z80 and 68000, but adored the ARM.
rietpluim says
Oh good old times! I learned to program the 6510, which was basically the same as the 6502, on the Commodore 64. Sometimes I regret having sold the thing.
davem says
We must be twins, forced apart at birth…
dhabecker says
PZ,
Please accept a few words of wisdom from an old-timer.
Take care of your health and don’t fret. You’ll live longer and get more done.
I used to enjoy the opinions of Ed Quillen in the Denver Post. He didn’t take care of himself, and now I read you.
Most Sincerely
blf says
Pedantically, of course, there are manuals, existing code, boards and board designs, and so on (presumably including simulators & emulators), with / using the 6502, et al. With the possible exception of the boards themselves, examining those would be “cheating”. Examining the boards is perhaps analogous to examining the creature containing the “brain”; and if the board is working, studying its behaviour at various levels of detail is also a fair analogy.
Actually putting instruments to not-completely-known chips is a surprisingly common thing to do. So is dismantling (“shaving”) a chip to take a look at just what is inside. Reasons for doing so range from fairly benign (companies looking for patent violations, and just to study the competitor’s designs) to hostile (criminal mobs devising attacks on financial terminals (e.g., ATMs & smartcard systems)). The techniques and tools are quite advanced, and are both passive and active. Some active actions include “out-of-spec” stimulations (e.g., over / under -voltages, -timings, -temperatures, unexpected signals, and so on). It is possible to find videos showing some of the techniques being used in home / small-scale laboratories.
The flip-side is there is an entire industry devoted to designing, testing, and manufacturing chips resistant to such attacks. Such equipment is used in modern financial terminals. (The objective here is not so much to obfuscate what the chip does, or even how it does it, but to make the cost / difficulty of successful attack so high even the mafias fail to find and / or use an exploit.) Such chips often contain some form of self-destruct, automatically triggered (without software intervention) when an apparent attack is detected.
I have no idea to what extent a non-trivial chip has been fully reverse-engineered. Chips have been reverse-manufactured (shaving them and copying each layer to make your own); the Soviets were masters at that.
jaybee says
PZ, you have committed a sin you have railed against, but in reverse. You often talk about how computer geeks think we are just about able to “upload” the state of a brain, virtualize it, and then have immortality. “But brains and computers are nothing alike!” you rightfully complain.
And here you are saying how trying to look at the transistors of a 6502 and understand a video game is a good metaphor. When I read this metaphor a few days ago I shook my head — the transistors of a CPU are time-multiplexed — the same transistor and wires are carrying unrelated information every cycle, whereas the vast majority of neurons are indicating if some condition is met or not — e.g., there is a 45-ish degree line in this part of my visual field.
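To make that contrast concrete, here is a toy sketch (not a model of any real V1 cell; the Gaussian tuning width is invented for illustration):

```python
import math

# A toy orientation-tuned "neuron": it responds most to lines near 45
# degrees, and its output always means the same thing (a labelled line).
def tuned_neuron(stim_deg, preferred=45.0, width=20.0):
    d = stim_deg - preferred
    return math.exp(-(d * d) / (2.0 * width * width))

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} deg -> firing rate {tuned_neuron(angle):.2f}")

# A CPU bus line, by contrast, is time-multiplexed: the same wire can
# carry an opcode bit on one cycle and an address bit on the next, so
# one voltage sample tells you nothing stable about what it "means".
```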
I’m not saying that studying thought by looking at individual neurons isn’t difficult, or that it’s the best approach; I am saying that comparing it to how a CPU operates is silly.
unclefrogy says
what I hear is your concern for the material and the students in the face of great complexity and imperfect knowledge.
it is only one class and as such is limited in time. Your concern is that of a good teacher.
You know that things will change with time; if the basics are solid, the students will be able to understand the new knowledge as it comes along
uncle frogy
Richard Smith says
I was inordinately pleased to discover, years ago, that my initials are also a 6502 assembly instruction (return from subroutine).
Czech American says
Don’t worry. I’m sure by next year teaching evolution will be illegal and you will be off the hook for that one.
All other biology teaching will be easier too, because then you can just make shit up as long as you name-check god and jesus enough. So much saved time and effort.
colonelzen says
No, you can’t just get it from the hardware. The hardware, both a 6502 and very, very small sections of brains (quite possibly within single cells), is Turing complete. That means that from any possible combination of inputs it can (within resource limitations) generate any possible output.
Granted, the “program” can theoretically be deciphered, as it (like all information) exists in a material medium. So theoretically you could say: well, there’s this RNA strand over here, and there’s that above-normal concentration of this enzyme in this dendrite, and less of that one all through that axon, and this here neuron is connected to these by these synapses… And when you’ve gotten ALL of that information, and how all the neurons and synapses will fire under any particular combination of inputs, you can say you know ALL of the complete program… (Er, a hundred billion or so neurons, averaging hundreds to thousands of synapses each, with extraordinary complexity and partitioning of chemistry in each neuron… I hope PZ will give a week or so to complete the homework.)
But you still ain’t home free. Turing is famous for proving *mathematically* that for programs with more than minimal reentrancy (and brains are loaded with it) you cannot predict their indefinite future state (aka “the Halting Problem”). And that’s for a completely deterministic mechanism: with full, true determinism, if you run the exact same program with exactly the same inputs, you will be at the exact same state as in a prior run at the same number of steps. You can predict from the past, a practice much like empiricism, but you can’t predict the future beyond the level where the possibilities fold together (and no, not with infinite resources either, at least not a countable infinity of resources). Not to mention the fact that brains as computers are incredibly sloppy and “noisy”… fabulously different from an ideally deterministic machine.
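For anyone who hasn’t seen Turing’s trick, the contradiction fits in a few lines of Python-flavoured pseudocode. It cannot actually run as advertised; that is the whole point:

```python
# The classic sketch of Turing's argument. The oracle below is
# hypothetical; the argument shows it cannot be implemented.
def halts(program, argument):
    """Pretend oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no such oracle can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on itself.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop, so halt immediately

# troublemaker(troublemaker) halts if and only if it doesn't, a
# contradiction, so `halts` cannot exist for arbitrary programs.
```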
So PZ and his cite are completely correct.
Terska says
Coincidentally, I heard a song on my way home titled “A Feather’s Not a Bird” by Rosanne Cash.
DanDare says
My daughter did dual psychology and neuroanatomy degrees and is now doing an honors year working on a neuroscience project at the Queensland Brain Institute. I have been thoroughly primed to understand this post, PZ.
And yeah, I miss the 6502. These days for me it’s all Spring and Hibernate, with TypeScript for the UI.
mikehuben says
> It was easy to give them the basics of membrane biophysics — it’s all math and chemistry — but the step from that to behavior was huge.
Some behavior of the nematode Caenorhabditis elegans has been determined from its 302 neurons. See: Caenorhabditis elegans: Research use.
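As a cartoon of the kind of circuit-to-behavior mapping that works at that scale (the weights and threshold below are invented for illustration, not taken from the real 302-neuron connectome):

```python
# A cartoon of a C. elegans-style touch-withdrawal reflex arc.
# All numbers are hypothetical, chosen only to show the idea of
# tracing a behavior through a tiny, fully mapped circuit.
def reflex(touch, w_sensory=0.9, w_motor=0.8, threshold=0.5):
    interneuron = w_sensory * touch       # sensory neuron -> interneuron
    command = w_motor * interneuron       # interneuron -> motor neuron
    return "reverse" if command > threshold else "keep crawling"

for stim in (0.2, 0.6, 1.0):
    print(f"touch strength {stim}: {reflex(stim)}")
```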