Get your geek on for Thursday

I’m going to be opening my mouth again on Thursday in Minneapolis — I’ll be giving a talk in MCB 3-120 on the Minneapolis campus at 7:30 on Thursday, 3 December. This will be open to the public, and it will also be an all-science talk, geared for a general audience. I’d say they were going to check your nerd credentials at the door, but just showing up means you’re already fully qualified.

The subject of the talk is my 3 big interests: a) evolution, or how we got here over multiple generations, b) development, or how we got here in a single generation, and c) the nervous system, the most complicated tissue we have. I intend to give a rough outline of how nervous tissue works, how it is assembled into a working brain, and how something so elaborate could have evolved. All in one hour. Wheee!

Afterwards, we’ll be joining the CASH gang for refreshments, somewhere. They haven’t told me yet where, but I know they’re fond of pizza.

Martin Chalfie: GFP and After

Chalfie is interested in sensory mechanotransduction—how mechanical deformations of cells are converted into chemical and electrical signals. Examples are touch, hearing, balance, and proprioception, and (hooray!) he referenced development: sidedness in mammals is defined by mechanical forces in early development. He studies this problem in C. elegans, in which 6 of the 302 nerve cells detect touch. It’s easy to screen for mutants in touch pathways just by tickling animals and seeing whether they move away. They’ve identified various genes, in particular one encoding a protein that’s involved in transducing touch into a cellular signal.

They’ve localized where this gene is expressed. Most of these techniques involved killing, fixing, and staining the animals. He was inspired by Shimomura’s work, as described by Paul Brehm, showing that aequorin + Ca++ + GFP produces light; he got in touch with Douglas Prasher, who was cloning GFP, and got to work making a probe that would allow him to visualize the expression of interesting genes. It was a gamble — no one knew whether additional proteins were required to turn the sequence into a glowing final product…but they discovered that they could get functional product in bacteria within a month.

They published a paper describing GFP as a new marker for gene expression, which Science disliked because of the simple title, and so they had to give it a cumbersome title for the reviewers, which got changed back for publication. They had a beautiful cover photo of a glowing neuron in the living animal.

Advantages of GFP: heritable, relatively non-invasive, small and monomeric, and visible in living tissues. Roger Tsien worked to improve the protein and produce variants that fluoresced at different wavelengths. There are currently at least 30,000 papers published that use fluorescent proteins, in all kinds of organisms, from bunnies to tobacco plants.

He showed some spectacular movies from Silverman-Gavrila of dividing cells with tubulin/GFP, and another of GFP fused to a nuclear localization signal, in which nuclei glowed as they condensed after division and then disappeared during mitosis. Sanes and Lichtman’s brainbow work was shown. Also cute: he showed the opening sequence of the Hulk movie, which is illustrated with jellyfish fluorescence (he does not think the Hulk is a legitimate example of a human transgenic).

Finally, he returned to his mechanoreceptor work and showed the transducing cells in the worm. One of the possibilities this opened up was visual screening for new mutants: either looking for missing or morphologically aberrant cells, or even more subtle things, like tagging expression of synaptic proteins so you can visually scan for changes in synaptic function or organization.

He had a number of questions he could address: how are mechanotransducers generated, how is touch transduced, what is the role of membrane lipids, can they identify other genes important in touch, and what turns off these genes?

They traced the genes involved in turning on the mec-3 gene; the pathway, it turned out, was also expressed in other cells, but they thought they identified other genes involved in selectively regulating touch sensitivity. One curious thing: the mec genes are transcribed in other cells that aren’t sensitive, but somehow are not translated.

They are searching for other touch genes. The touch screen misses some relevant genes because they have redundant alternatives, or are pleiotropic, so other phenotypes (like lethality) obscure the effect. One technique is RNAi, and they made an interesting observation: trying about 17,000 RNAi constructs, they discovered that 600 had interesting and specific effects, 1,100 were lethal, and about 15,000 had no effect at all. The majority of genes are complete mysteries to us. They’ve developed some techniques to get selective incorporation of RNAi constructs into just the neurons of C. elegans, so they’re hoping to uncover more specific neural effects. One focus is the integrin signaling pathway in the nervous system, which they’ve knocked out and found that doing so demolishes touch sensitivity — a new target!

They are now using a short-lived form of GFP that shuts down quickly, so they’ve got a sharper picture of temporal patterns of gene activity.

Chalfie’s summary:

  • Scientific progress is cumulative.

  • Students and post-docs are the lab innovators.

  • Basic research is essential. Who would have thought working on jellyfish would lead to such powerful tools?

  • All life should be studied, not just model organisms.

Chalfie is an excellent speaker and combined a lot of data with an engaging presentation.

Erwin Neher: Chemistry helps neuroscience: the use of caged compounds and indicator dyes for the study of neurotransmitter release

Ah, a solid science talk. It wasn’t bad, except that it was very basic—maybe if I were a real journalist instead of a fake journalist I would have appreciated it more, but as it was, it was a nice overview of some common ideas in neuroscience, with some discussion of pretty new tools on top.

He started with a little history to outline what we know, with Ramón y Cajal showing that the brain is made up of a network of neurons (which we now know to be approximately 10¹² neurons large). He also predicted the direction of signal propagation, and was mostly right. Each neuron sends signals outwards through an axon, and receives input from thousands of other cells on its cell body and dendrites.

Signals move between neurons mostly by synaptic transmission, or the exocytosis of transmitter-loaded vesicles induced by changes in calcium concentration. That makes calcium a very interesting ion, and makes calcium concentration an extremely important parameter affecting physiological function, so we want to know more about it. Furthermore, it’s a parameter that is in constant flux, changing second by second in the cell. So how do we see an ion in real time or near real time?

The answer is to use fluorescent indicator dyes which are sensitive to changes in calcium concentration — these molecules fluoresce at different wavelengths or absorb light at different wavelengths depending on whether they are bound or not bound to calcium, making the concentration visible as changes in either the absorbed or emitted wavelength of light. There is a small battery of fluorescent compounds — Fura-2, Fluo-3, Indo-1 — that allow imaging of localized increases in calcium.
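For ratiometric dyes like Fura-2, the standard way to turn a measured fluorescence ratio into an actual calcium concentration is the Grynkiewicz equation, which uses calibration measurements at zero and saturating calcium. Here is a minimal sketch; the numbers in the example call are invented for illustration, not real calibration values:

```python
def free_calcium_nM(R, R_min, R_max, Kd_nM, beta):
    """Estimate free [Ca2+] from a ratiometric dye measurement.

    R      -- measured fluorescence ratio (e.g. 340/380 nm for Fura-2)
    R_min  -- ratio at zero calcium (from calibration)
    R_max  -- ratio at saturating calcium (from calibration)
    Kd_nM  -- effective dissociation constant of the dye, in nM
    beta   -- ratio of free-dye to bound-dye fluorescence at the
              denominator wavelength (Sf2/Sb2 in Grynkiewicz's notation)
    """
    return Kd_nM * ((R - R_min) / (R_max - R)) * beta

# Illustrative numbers only: a ratio partway between the calibration limits
print(free_calcium_nM(R=1.0, R_min=0.2, R_max=5.0, Kd_nM=224, beta=10))
```

The ratio trick is what makes dyes like Fura-2 so useful: dividing two wavelengths cancels out dye concentration and path length, so only the bound/unbound balance (and hence calcium) determines the result.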

There’s another problem: resolution. Where the concentration of calcium matters most is in a tiny microdomain, a thin rind of the cytoplasm near the cell membrane called the cortex, which is where vesicles are lined up, ready to be triggered to fuse with the cell membrane by calcium, leading to the expulsion of their contents to the exterior. This microdomain is tiny, only 10-50nm thick, and is below the limit of resolution of your typical light microscope. If you’re interested in the calcium concentration at one thin, tiny spot, you’ve got a problem.

Most presynaptic terminals are very small and difficult to study; they can be visualized optically, but it’s hard to do simultaneous electrophysiology. One way Neher gets around this problem is to use an unusually large synapse, the calyx of Held, which is part of an auditory brainstem pathway. It’s an important pathway in sound localization, and the signals must be very precise. These synapses have a special structure, a cup-like terminal that envelops the post-synaptic cell body — they’re spectacularly large, so large that one can insert recording electrodes both pre- and post-synaptically, and both compartments can be loaded with indicator dyes and caged compounds.

The question being addressed is the concentration of Ca²⁺ at the microdomain of the cytoplasmic cortex, where vesicle fusion occurs. This is below the level of resolution of the light microscope, so just imaging a calcium indicator dye won’t work — they need an alternative solution. The one they came up with was to use caged molecules, in particular a reagent called Ca-DMN.

Caged molecules are cool, with one special property: when you flash UV light of just the right wavelength at them, they fall apart into a collection of inert (you hope) photoproducts, releasing the caged molecule, which is calcium in this case. So you can load up a cell with Ca-DMN, and then with one simple signal, you can trigger it to release all of its calcium, generating a uniform concentration at whatever level you desire across the entire cell. So instead of triggering an electrical potential in the synaptic terminal and asking what concentration of calcium appears at the vesicle fusion zone, they reversed the approach, generating a uniform calcium level and then asking how much transmitter was released, measured electrophysiologically at the post-synaptic cell. When they got a calcium level that produced an electrical signal mimicking the natural degree of transmitter release, they knew they’d found the right concentration.
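The logic of that reversed experiment can be sketched numerically: assume transmitter release follows a steep dose-response curve of calcium concentration (a Hill function here, with parameters invented purely for illustration), then search for the uniform calcium level whose predicted release matches the physiologically measured response:

```python
def release_rate(ca_uM, rate_max=100.0, half_ca_uM=10.0, hill_n=4.0):
    """Toy Hill-type dose-response: release rate as a function of [Ca2+].
    All parameters are invented for illustration, not measured values."""
    return rate_max * ca_uM**hill_n / (ca_uM**hill_n + half_ca_uM**hill_n)

def matching_calcium(target_rate, lo=0.01, hi=1000.0, tol=1e-9):
    """Bisect for the uniform [Ca2+] (in uM) whose predicted release
    matches target_rate; works because release_rate is monotonic."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if release_rate(mid) < target_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# If the physiological response corresponds to half-maximal release,
# the search recovers the half-activation concentration of the toy curve.
print(matching_calcium(50.0))
```

This inversion is the point of the uncaging trick: instead of guessing what calcium concentration an action potential produces at the fusion zone, you set the concentration uniformly and read the answer off the dose-response curve.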

Caged compounds don’t have to be just calcium ions: other useful probes are caged ATP, caged glutamate (a neurotransmitter), and even caged RNA. The power of the technique is that you can use light to manipulate the chemical composition of the cell at will, and observe how it responds. These are tools that can be used to modify cell states, to characterize secretory properties, or to generate extracellular signals, all with the relatively noninvasive probe of a brief focused light flash.

Reading this will affect your brain

Baroness Susan Greenfield has been spouting off some bad neuroscience, I’m afraid. She’s on an anti-social-networking-software, anti-computer-games, anti-computer crusade that sounds a bit familiar — it’s just like the anti-TV tirades I’ve heard for 40-some years — and a little bit new — computers are bad because they are “changing the workings of the brain”. Ooooh.

But to put that in perspective, the brain is a plastic organ that is supposed to rewire itself in response to experience. That’s what brains do. The alternative would be a fixed reaction pattern that never improves itself, which would be far worse. Greenfield is throwing around neuroscientific jargon to scare people.

So yes, using computers all the time and chatting in the comments sections of weird web sites will modify the circuitry of the brain and have consequences that will affect the way you think. Maybe I should put a disclaimer on the text boxes on this site. However, there are events that will scramble your brains even more: for example, falling in love. I don’t want to imagine the frantic rewiring that has to go on inside your head in response to that, or the way it can change the way you see the entire rest of the world, for good or bad, for the whole of your life.

Or, for an even more sweeping event that had distinct evolutionary consequences, look at the effect of changing from a hunter-gatherer mode of existence to an agrarian/urban and modern way of life. We get less exercise because of that, suffer more near-sightedness, endure a higher incidence of infectious disease, and have warped our whole pattern of activity in radical ways. Not only do neural pathways have to develop in different ways to cope with different environments, but there was almost certainly selection for urban-compatible brains—people have died of the effects of that shift. Will Baroness Greenfield give up her book-writin’, lecturin’ ways to fire-harden a pointy stick, don a burlap bag, and dedicate her life to hunting rabbits?

Embryonic similarities in the structure of vertebrate brains


I’ve been doing it wrong. I was looking over creationist responses to my arguments that Haeckel’s embryos are being misused by the ID cretins, and I realized something: they don’t give a damn about Haeckel. They don’t know a thing about the history of embryology. They are utterly ignorant of modern developmental biology. Let me reduce it for you, showing you the logic of science and creationism in the order each developed.

Here’s how scientific and creationist thinking about the embryological evidence develops:

Scientific thinking

An observation: vertebrate embryos show striking resemblances to one another.

An explanation: the similarities are a consequence of shared ancestry.

Ongoing confirmation: Examine more embryos and look more deeply at the molecules involved.


Creationist thinking

A premise: all life was created by a designer.

An implication: vertebrate embryos do not share a common ancestor.

A conclusion: therefore, vertebrate embryos do not show striking resemblances to one another.


Soon, we’ll be reading your minds!


No, not really, but this is still a cool result: investigators have used an MRI to read images off the visual cortex. They presented subjects with some simple symbols and letters, scanned their brains, and read off the image from the data — and it was even legible! Here are some examples of, first, the images presented to the subjects, then a set of individual patterns from the cortex read in single measurements, and then, finally, the average of the single scans. I think you can all read the word “neuron” in there.

Reconstructed visual images. The reconstruction results of all trials for two subjects are shown with the presented images from the figure image session. The reconstructed images are sorted in ascending order of the mean square error. For the purpose of illustration, each patch is depicted by a homogeneous square whose intensity represents the contrast of the checkerboard pattern. Each reconstructed image was produced from the data of a single trial, and no postprocessing was applied. The mean images of the reconstructed images are presented in the bottom row. The same images of the alphabet letter “n” are displayed in the rightmost and leftmost columns.

Before you get all panicky and worry that now the CIA will be able to extract all of those sexy librarian fantasies out of your brain by aiming a gadget at your head, relax. This is an interesting piece of work, but it has some serious limitations.

  • This only works because they are scanning the part of the visual cortex that exhibits retinotopy — a direct mapping of the spatial arrangement of the retina (and thus, of any images falling on it) onto a patch of the brain at the back of your head. This won’t work for most other modalities, except probably touch, and I doubt it will work for visualization/cognition/memory, which are all much more derived and much more complexly stored. Although I’d really like to know whether, if someone closed their eyes and merely imagined a letter “E”, there wouldn’t be some activation of the visual cortex.

  • The process was time consuming. Subjects were first recorded while staring at random noise for 6 seconds in each of 22 trials. This was necessary to get an image of the background noise of the brain, which was subtracted from subsequent image measurements. The brain is a noisy place, and the letter pattern is superimposed on a lot of background variation. Then, finally, the subject has to fixate on the test image for 12 seconds.

  • Lastly, a fair amount of math has to be flung at the scan to extract the contrast information. This is probably the least of the obstacles, since computational power seems to increase fairly rapidly.
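The preprocessing described above, estimating a per-voxel baseline from the noise-only trials, subtracting it, and then averaging the single-trial reconstructions, can be sketched like this (plain Python, with made-up toy numbers standing in for scanner data):

```python
def average_reconstruction(noise_trials, image_trials):
    """Subtract the mean baseline (per pixel) estimated from noise-only
    trials, then average the baseline-corrected single-trial images.

    Each trial is a flat list of pixel intensities; all trials must
    have the same length.
    """
    n_pix = len(noise_trials[0])
    # Per-pixel baseline: mean across the noise-only trials
    baseline = [sum(t[i] for t in noise_trials) / len(noise_trials)
                for i in range(n_pix)]
    # Subtract the baseline from every stimulus trial
    corrected = [[t[i] - baseline[i] for i in range(n_pix)]
                 for t in image_trials]
    # Average the corrected trials to beat down the remaining noise
    return [sum(t[i] for t in corrected) / len(corrected)
            for i in range(n_pix)]

# Toy example: 2-pixel "images" with a baseline level of ~1.0 everywhere
noise = [[1.0, 1.0], [1.0, 1.0]]
trials = [[2.0, 1.0], [2.2, 0.8], [1.8, 1.2]]  # real signal on pixel 0 only
print(average_reconstruction(noise, trials))
```

Averaging works here for the usual reason: the stimulus-driven contrast is the same on every trial while the background fluctuations are not, so the signal survives the mean and much of the noise cancels.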

Give this research some more time, though, and I can imagine some uses for being able to record specific aspects of brain states. I’d be more interested in a device that can read pre-motor cortex though — I’d like to get rid of this clumsy keyboard someday.

Miyawaki Y, Uchida H, Yamashita O, Sato M-a, Morito Y, Tanabe HC, Sadato N, Kamitani Y (2008) Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoder. Neuron 60(5):915-929.

A Natural History of Seeing: The Art and Science of Vision

Simon Ings has written a wonderful survey of the eye, called A Natural History of Seeing: The Art and Science of Vision, and it’s another of those books you ought to be sticking on your Christmas lists right now. The title gives you an idea of its content. It’s a “natural history”, so don’t expect some dry exposition on deep details, but instead look forward to a light and readable exploration of the many facets of vision.

There is a discussion of the evolution of eyes, of course, but the topics are wide-ranging — Ings covers optics, chemistry, physiology, optical illusions, decapitated heads, Edgar Rice Burroughs’ many-legged, compound-eyed apts, pointillisme, cephalopods (how could he not?), scurvy, phacopids, Purkinje shifts…you get the idea. It’s a hodge-podge, a little bit of everything, a fascinating cabinet of curiosities where every door opened reveals some peculiar variant of an eye.

Don’t think it’s lacking in science, though, or that it’s entirely superficial. This is a book that asks the good questions: how do we know what we know? Each topic is addressed by digging deep to see how scientists came to their conclusions, and often that means we get an entertaining story from history or philosophy or the lab. Explaining the evolution of our theories of vision, for example, leads to the story of Abu ‘Ali al-Hasan ibn al-Hasan ibn al-Haytham, who pretended to be mad to avoid the cruelty of a despotic Caliph, and who spent 12 years in a darkened house doing experiments in optics (perhaps calling him “mad” really wasn’t much of a stretch), and emerged at the death of the tyrant with an understanding of refraction and a good theory of optics that involved light, instead of mysterious vision rays emerging from the eye. Ings is also a novelist, and it shows — these are stories that inform and lead to a deeper understanding.

If the book has any shortcoming, though, it is that some subjects are barely touched upon. Signal transduction and molecular evolution are given short shrift, for example, but then, if every sub-discipline were given the depth given to basic optics, this book would be unmanageably immense. Enjoy it for what it is: a literate exploration of the major questions people have asked about eyes and vision for the last few thousand years.

Usher syndrome part IV: Clinical management and research directions

Guest Blogger Danio, one last time:

Part I
Part II
Part III

The current standard of pediatric care mandating that all newborns undergo hearing screenings has been applied successfully throughout much of the industrialized world. Early identification of hearing impairments gives valuable lead-time to parents and health care providers during which they can plan medical and educational interventions to improve the child’s development, acquisition of language skills, and general quality of life.

Up to 12% of children born with hearing loss have Usher syndrome. However, diagnosing Usher syndrome as distinct from various forms of congenital hearing impairment is often impossible until the onset of retinal degeneration years later. The considerable number and size of the genes involved makes genetic screening impractical with the current methods, unless there is a family or community history that can shorten the list of targets by implicating a particular Usher gene or subtype.

The educational and medical interventions undertaken to improve a deaf or hearing-impaired child’s cognitive and social development can vary extensively, based in part on whether the child in question is expected to lose his or her vision later in life. Thus an earlier diagnosis of Usher syndrome is an immediate and critical research goal. The most promising hope for such a diagnostic advance lies in gene chip screening. With this technology, the patient’s DNA can be screened against a microarray of human genes known to cause deafness (and/or Usher syndrome) when mutated, and variances in the DNA sequence of any screened gene would be detected and analyzed. One such chip is already available for commercial use, and another appears to be approaching clinical availability. The rapid and affordable analysis these microarrays offer will be of tremendous benefit in the early diagnosis and management of Usher syndrome.

Usher syndrome, part III: the plot thickens

Guest Blogger Danio:

The time has come to delve into the retinal component of Usher syndrome. In Part II, I briefly described the results of protein localization studies, in which most members of the Usher cohort were found at the connecting cilium of the photoreceptor and at the photoreceptor synapse. The following diagram summarizes these findings:

Usher protein localization in photoreceptor cells. From Reiners, et al. 2006

So, as we saw in the ear, proteins with the equipment for physically interacting with one another are gathering in specific places, and thus multi-protein complexes are likely being formed at these locations. The cluster of Usher proteins around the connecting cilium has been the focus of most of the current retinal studies, and to understand the potential importance of an Usher complex at that subcellular location we must address the importance of the connecting cilium itself.