Erwin Neher: Chemistry helps neuroscience: the use of caged compounds and indicator dyes for the study of neurotransmitter release

Ah, a solid science talk. It wasn’t bad, except that it was very basic—maybe if I were a real journalist instead of a fake journalist I would have appreciated it more, but as it was, it was a nice overview of some common ideas in neuroscience, with some discussion of pretty new tools on top.

He started with a little history to outline what we know, with Ramón y Cajal showing that the brain is made up of a network of neurons (which we now know to comprise approximately 10¹² neurons). He also predicted the direction of signal propagation, and was mostly right. Each neuron sends signals outward through an axon, and receives input from thousands of other cells on its cell body and dendrites.

Signals move between neurons mostly by synaptic transmission, or the exocytosis of transmitter-loaded vesicles induced by changes in calcium concentration. That makes calcium a very interesting ion, and makes calcium concentration an extremely important parameter affecting physiological function, so we want to know more about it. Furthermore, it’s a parameter that is in constant flux, changing second by second in the cell. So how do we see an ion in real time or near real time?

The answer is to use fluorescent indicator dyes that are sensitive to changes in calcium concentration: these molecules fluoresce or absorb light at different wavelengths depending on whether or not they are bound to calcium, making the concentration visible as a shift in the absorbed or emitted wavelength of light. There is a small battery of such fluorescent compounds (Fura-2, Fluo-3, Indo-1) that allow imaging of localized increases in calcium.
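To make the ratiometric trick concrete, here's a minimal sketch of how a dye like Fura-2 turns two fluorescence measurements into a concentration estimate, using the standard Grynkiewicz calibration equation. The calibration constants below are illustrative placeholders, not measured values from any particular experiment.

```python
# Estimate [Ca2+] from Fura-2 ratiometric imaging with the
# Grynkiewicz calibration equation:
#   [Ca2+] = Kd * ((R - Rmin) / (Rmax - R)) * (Sf2 / Sb2)
# where R is the ratio of emission under 340 nm vs 380 nm excitation.
# All constants here are illustrative, not real calibration data.

KD = 224.0      # nM, a commonly quoted in vitro Kd for Fura-2
R_MIN = 0.2     # ratio measured at zero Ca2+ (calibration)
R_MAX = 8.0     # ratio measured at saturating Ca2+ (calibration)
SF2_SB2 = 6.0   # 380 nm signal of free dye / Ca-bound dye (calibration)

def calcium_nM(f340: float, f380: float) -> float:
    """Convert a pair of fluorescence intensities to [Ca2+] in nM."""
    r = f340 / f380
    if not (R_MIN < r < R_MAX):
        raise ValueError("ratio outside calibrated range")
    return KD * (r - R_MIN) / (R_MAX - r) * SF2_SB2

# A resting cell gives a low 340/380 ratio, a stimulated cell a higher one:
print(calcium_nM(100.0, 200.0))  # resting-like ratio
print(calcium_nM(400.0, 200.0))  # stimulated-like ratio
```

The point of taking a ratio rather than a single intensity is that dye concentration, path length, and illumination strength cancel out, which is what makes these dyes quantitative rather than merely pretty.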

There’s another problem: resolution. Where the concentration of calcium matters most is in a tiny microdomain, a thin rind of the cytoplasm near the cell membrane called the cortex, which is where vesicles are lined up, ready to be triggered to fuse with the cell membrane by calcium, leading to the expulsion of their contents to the exterior. This microdomain is tiny, only 10-50 nm thick, and is below the limit of resolution of your typical light microscope. If you’re interested in the calcium concentration at one thin, tiny spot, you’ve got a problem.

Most presynaptic terminals are very small and difficult to study; they can be visualized optically, but it’s hard to do simultaneous electrophysiology. One way Neher gets around this problem is to use an unusually large synapse, the calyx of Held, which is part of an auditory brainstem pathway. It’s an important pathway in sound localization, where signals must be very precise. These synapses have a special structure, a cup-like terminal that envelops the post-synaptic cell body. They’re spectacularly large, so large that one can insert recording electrodes both pre- and post-synaptically, and both compartments can be loaded with indicator dyes and caged compounds.

The question being addressed is the concentration of Ca2+ at the microdomain of the cytoplasmic cortex, where vesicle fusion occurs. This is below the resolution limit of the light microscope, so just imaging a calcium indicator dye won’t work; they needed an alternative solution. The one they came up with was to use caged molecules, in particular a calcium-loaded reagent called DM-nitrophen (Ca-DMN).

Caged molecules are cool, with one special property: when you flash UV light of just the right wavelength at them, they fall apart into a collection of inert (you hope) photoproducts, releasing the caged molecule, which is calcium in this case. So you can load up a cell with Ca-DMN, and then with one simple signal, you can trigger it to release all of its calcium, generating a uniform concentration at whatever level you desire across the entire cell. So instead of triggering an electrical potential in the synaptic terminal and asking what concentration of calcium appears at the vesicle fusion zone, they reversed the approach, generating a uniform calcium level and then asking how much transmitter was released, measured electrophysiologically at the post-synaptic cell. When they got a calcium level that produced an electrical signal mimicking the natural degree of transmitter release, they knew they’d found the right concentration.
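The payoff of this reversed approach is a dose-response curve: a uniform post-flash calcium level in, a transmitter release rate out. At the calyx of Held that relationship turns out to be steeply cooperative, and a simple Hill-type function is the usual way to capture the idea. The parameters below are illustrative placeholders, not the published fit.

```python
# Sketch of the calcium dose-response relationship probed by the
# uncaging experiments: a uniform [Ca2+] step maps to a vesicle
# release rate, modeled here as a Hill-type function.
# rate_max, half_ca, and n are illustrative values, not real data.

def release_rate(ca_uM: float, rate_max: float = 1000.0,
                 half_ca: float = 10.0, n: float = 4.0) -> float:
    """Release rate (arbitrary units) for a given [Ca2+] in micromolar."""
    return rate_max * ca_uM**n / (ca_uM**n + half_ca**n)

# Steep cooperativity: at low concentrations, doubling calcium
# raises the release rate far more than twofold.
for ca in (5.0, 10.0, 20.0):
    print(ca, release_rate(ca))
```

A high Hill coefficient like this is why a tiny, local calcium microdomain can act as such a sharp trigger: small changes in concentration swing release between nearly off and nearly maximal.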

Caged compounds don’t have to be just calcium ions: other useful probes are caged ATP, caged glutamate (a neurotransmitter), and even caged RNA. The power of the technique is that you can use light to manipulate the chemical composition of the cell at will, and observe how it responds. These are tools that can be used to modify cell states, to characterize secretory properties, or to generate extracellular signals, all with the relatively noninvasive probe of a brief focused light flash.

Mario Molina: Energy and climate change: is there a solution?

There are a few people who will now appear on the blog who will be extremely peevish about Molina’s talk, because he simply and clearly stated the scientific consensus. We are now living in the Anthropocene, when so many people exist that we are affecting the planet’s functions. CO2 and CH4 concentrations have been changing rapidly in recent decades, along with changes in temperature, and the fact of the matter is that the changes in the chemical composition of the atmosphere are causally connected to the changes in temperature.

He showed long term records of 450,000 years of temperature and chemistry, which show regular changes in temperature and chemical composition, even regular cycles of change. But recent changes are much larger, and the changes in the last century were not expected from known natural causes; they don’t fit the prior pattern. Only pseudoscientific papers currently question the causal relationship of human activities to climate change (he was not at all mealy-mouthed: yes, he called the people who question anthropogenic change pseudoscientists).

There are some events that should give us pause. The glaciers feeding China’s rivers are shrinking, and the Tibetan plateau plays an important role in the climate of China: what happens when China’s huge population faces major droughts? He mentioned specific extreme weather events like Katrina. We can’t tell for certain that an individual event is related to climate change, but statistics show a pattern of increasing events, such as wildfires and droughts. 400 million people are living under extreme drought conditions, and the fraction of land classified as very dry has doubled in a short period: 15% in 1970, up to 30% by 2002.

Trends show that greenhouse gases are increasing. What needs to be done? We need a revolution in the way society functions to prevent CO2 from rising above 350-450 ppm. Can it be done?

Molina is generally optimistic. He thinks that we can limit CO2 with existing technologies. His recipe is improved fuel economy, more efficient buildings, improved power plant efficiency, substituting natural gas for coal, using carbon capture and storage, developing alternative power sources (nuclear, wind, solar, biofuels), and forest management. We need to do ALL of these; there will not be a single magic bullet that solves the problem.

He argues that we are not running out of fossil fuels (there is lots of coal), but we are running out of oil. However, we will run out of atmospheric capacity to cope with emissions before we run out of oil.

We are playing a game, like roulette. We are gambling: to win, a policy should result in a temperature increase of less than 2°C. What policy does is shift the probabilities of winning; we are paying to move from one roulette wheel with bad odds to another with lower risk. We want to buy stabilization of probabilities and reduce uncertainty, and it’s not that expensive: an investment of a few percent of GDP produces a big improvement in our odds. He compared it to a hypothetical airplane trip. If you were told you could board a plane right now that has a 10% chance of both engines failing, or you could wait a few hours to take a different plane that cost 10% more but had a negligible chance of engine failure, which would you do? For most of us the choice is simple, since the first plane has a good chance of catastrophic failure, and we’d rather avoid that sort of thing.

Less optimistically, he brought up the possibility of tipping points and the instability of the system. It is a big worry that we have a risk of entering practically irreversible modes: he gave the example of melting of arctic summer ice, since once the ice cap is gone, it is not trivial to restore it. Some tipping points may occur relatively soon. We are at risk of catastrophic climate change.

He ended with simple actions we should take now:

  • Put a price on carbon emissions.

  • Increase investment in energy tech research

  • Expand international cooperation

  • Emphasize win-win solutions

The big problem is that right now 3/4ths of the planet is striving to reach the economic standards of the developed countries. They should, and they have every right to aspire to it, but it is physically impossible for them to do it with the same wasteful strategies of the developed nations.

Aaron Ciechanover: Drug discovery and biomedical research in the 21st century: the third revolution

The first few talks this morning focused primarily on policy as illuminated by science; only the third talk was pure science.

Ciechanover’s talk was on both the history and future of drug research, which he characterized in terms of three major revolutions in the last century.

The first revolution was a period of accidental discoveries in the 1930s-1960s, where the discovery of a useful drug came first, by observation of therapeutic effects, followed by chemical isolation, and only at the end (if at all) was the mechanism of action worked out. He gave the example of aspirin. Willow tree bark has been used for pain relief since at least Aristotle, but the active agent (salicin) was only isolated in the 19th century by Buchner, and it was initially useless medically: it was water-insoluble and extremely bitter. Gerhardt acetylated it to make it soluble near the end of the 19th century (but didn’t take advantage of it as a clinical tool), and Hoffmann (who made Bayer rich) repeated the acetylation and turned it into a useful drug, using it to treat his father, who was sick with arthritis. Its mechanism, as an inhibitor of prostaglandin synthesis, was not discovered until the late 20th century. The drug also prevents platelet aggregation, so it is also used in heart disease prevention, and its anti-inflammatory action may also make it a preventative for some cancers. However, it is a story of complete serendipity.

Another example of fortuitous discovery was Fleming’s penicillin, which was a major factor in nearly doubling human lifespan in about a century, and antibiotics in general opened up the potential for all kinds of life-saving procedures, such as surgery.

The second revolution occurred in the 1970s-2000s, and was planned. The key innovation here was high-throughput, brute-force screening of large libraries of chemical compounds, which he compared to “fishing in a swamp”. We have no idea what we’ll find, but there is the expectation that some compound will be found that has a useful effect. It is a procedure that still relies on serendipity; we’ve merely elevated the chances of finding something, and of exploiting it rapidly.

The example given was the work of Akira Endo, who knew that fungi resist parasitic microbes, presumably because they contain agents that suppress the invaders, in this case by inhibiting the synthesis of the sterols the parasites need. His work led to the discovery of statins, which have become a $20 billion/year industry for reducing cholesterol levels in patients with heart disease. They have also been found to reduce the probability of heart attacks in patients who only have a susceptibility to heart disease, and are now being used as a preventative in healthy patients (which is always a great way to vastly increase profits). They may also help with Alzheimer’s and malignancies, by mechanisms not currently known.

The third revolution is ongoing. The new strategy is understanding the mechanism first, followed by targeted design. He illustrated the problem with current pharmaceuticals by pointing out that men with prostate cancer and women with breast cancer are treated with the same tools: imaging technology, histology, and chemotherapy. These are different diseases! At the same time, two women may be diagnosed with breast cancer, but one will be estrogen sensitive and the other will be estrogen insensitive, which means that an effective treatment for one may be a lethal waste of time for the other. We aren’t treating the disease specifically, but are using a one-treatment-fits-all formula for a general disease. What we need is a molecular diagnosis of tumors to fit treatment plans individually.

He thinks we are entering an era of personalized medicine. One example he gave was Herceptin, an antibody targeted against HER2, a receptor in the EGF-receptor family. People whose tumors carry a mutant, constitutively active form of the receptor are susceptible to certain kinds of cancers, so this is a very useful drug for down-regulating its activity. But for people with wild-type receptors, it’s completely useless. The utility of this drug relies on diagnosis by PCR of specific alleles to find candidates for drug use.

He sees great promise in cheaper whole genome sequencing as an important tool for personalized medicine, and is looking forward to the days of the thousand-dollar genome. He also advocates a systems approach: interdisciplinary action will be needed to put together useful solutions.

The problems he foresees with personalized, targeted therapies are:

  • Multigenic diseases. Most of these diseases and susceptibilities aren’t going to be the product of single alleles, but of multiple, interacting genes. That means answers won’t be simple, but will require an understanding of combinatorial effects.

  • Malignancies are typically the product of genomic instability. They are moving targets.

  • Complications of human experimentation. We can’t just pin down patients and run them through a series of carefully controlled trials, so working out the effective details of personalized medicine is going to be hard.

  • Lack of good animal models. A mouse is not a human. We can’t do the necessary experiments on people, but at the same time we can’t entirely trust the results of animal experiments.

  • Costs and legal liabilities. Medicine is done for profit. How do you pay for tools that work on tiny percentages of the population?

  • Bioethical problems. Information has repercussions. How do individuals cope with the knowledge that they might have, for instance, elevated susceptibility to breast cancer? How will it affect their relationships with family and spouses (or prospective mates)?

This was a good talk, but very, very general. I’m hoping we get some more scientific meat in other talks.

By the way, the way the meetings are run at Lindau is a little different than I’m used to — there is no Q&A afterwards! However, what they do instead is schedule small group meetings with the Nobelist speakers and the “Young Investigator” group of attendees, to which people here as press (like me!) are not invited, which is too bad from my point of view, but is probably a very good way to give people with more direct interests good access to the speaker.

Clock reset in progress

It is time for the first big challenge of the week: getting my circadian rhythms straightened around. It feels like about 11:30 in the evening, my biological time, but it’s actually 6:30am Lindau time. Today is actually tomorrow.

My strategy was actually planned. Yesterday was mostly travel through the night, and I got no sleep. I noticed as I was flying east and getting very, very tired that, as the sun came up at what was an unnatural time for me and started zapping my photoreceptors, I woke up very nicely…and then kept going through the agonizing day of missed flights and boring waits. Then last night (local time) I finally got to Lindau in time for some of the social events and just kept my brain going through the evening. I finally went to bed about 10:30 Lindau time, and slept wonderfully.

I had set an alarm, but didn’t need it — again, the first rosy light of dawn came in through my window, and I was up and feeling pretty good. I think I’ll take the next step in completing my adjustment to this 7 hour shift in time, and I suspect it’s one the circadian researchers haven’t contemplated (I shall have to ask Bora): a fine breakfast of German pastries. That should keep me going through the morning talks.

By the way, this is the Lindau Nobel conference, and the topic this year is chemistry. I am not a chemist, which means I may not have a clue what anyone is talking about. However, I’ve got one angle that may help: I am a microscopist. This morning, I’m looking forward to Neher talking about caged compounds — there’s also some juicy stuff on protein degradation and renewable energy. Later this week, of course, it’s Shimomura, Chalfie, and Tsien on bioluminescence, which is all grist for the microscopist mill. I might just come out of this with some new understanding.

Und Bier! Tonight I must find a good dunkel — last night the conference only provided a so-so bottled Pils. Then this evening I shall try to post some summaries of the day’s science while under the influence. It should be entertaining!