Richest man in the world is a talking ass

I read that Elon Musk is currently the richest man in the world, which is as complete an indictment of capitalism as one could imagine. Whenever he opens his mouth he exposes his ignorance, yet people still think he’s saying something worthwhile. He’s just a guy with an undeserved mountain of money that he uses to buy stuff, nothing more, and it’s a shame that his follies haven’t caught up with him yet, and maybe never will; money seems to be a pretty effective cushion against the consequences of your actions.

A few years ago, he bought a company called Neuralink, which was supposedly working on brain-machine interfaces, but is actually 99% hype, and put an unqualified tech bro in charge. The real neuroscientists could tell the whole idea was a load of overstuffed bullshit, but the media stumbled all over itself in a rush to give him more press.

You would think that someone — the investors, the journalists, the people hoping for medical breakthroughs — would notice the similarities between Elon Musk and Elizabeth Holmes, especially since the Holmes trial just wrapped up with a conviction, but noooo.

Now Neuralink is looking for human guinea pigs for its overhyped technology.

The Silicon Valley company, which has already successfully implanted artificial intelligence microchips in the brains of a macaque monkey named Pager and a pig named Gertrude, is now recruiting for a “clinical trial director” to run tests of the technology in humans.

“As the clinical trial director, you’ll work closely with some of the most innovative doctors and top engineers, as well as working with Neuralink’s first clinical trial participants,” the advert for the role in Fremont, California, says. “You will lead and help build the team responsible for enabling Neuralink’s clinical research activities and developing the regulatory interactions that come with a fast-paced and ever-evolving environment.”

Musk, the world’s richest person with an estimated $256bn fortune, said last month he was cautiously optimistic that the implants could allow tetraplegic people to walk.

“We hope to have this in our first humans, which will be people that have severe spinal cord injuries like tetraplegics, quadriplegics, next year, pending FDA [Food and Drug Administration] approval,” he told the Wall Street Journal’s CEO Council summit.

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury. Neuralink’s working well in monkeys, and we’re actually doing just a lot of testing and just confirming that it’s very safe and reliable and the Neuralink device can be removed safely.”

He is nowhere near that capability. He is lying. I know that here in America no one is going to care about the victims of irresponsible experimentation, but I would hope that someone would give a loving, concerned thought to the investors who are being taken to the cleaners by this fraud. Hope is not data. Appealing to the suffering victims of tragedy is one of the oldest tricks in the con artist’s toolbox.

That’s not all. Someone has confused science-fiction novels with reality.

Musk said that he thinks in the future we will be able to save and replay memories. “I mean, this is obviously sounding increasingly like a Black Mirror episode, but yeah, essentially, if you have a whole brain interface, everything that’s encoded in memory you could upload. You could basically store your memories as a backup and restore the memories. Then ultimately you could potentially download them into a new body or into a robot body. The future is going to be a weird one!” said Musk.

A “whole brain interface” — what’s that? How are you going to do that? You think you can skim off the totality of a person’s experience with a little chip you can pop on and off? Where are memories encoded so you can extract them? How are you planning to overwrite a brain’s structure and chemistry and connectivity so you can replace one brain’s identity with another?

I don’t think he’s going to have much luck hiring someone to oversee human experimentation in these areas, since Josef Mengele is dead. He’ll probably just hire some tech-dude with no qualifications and no sense of ethics — I hear those guys are sprouting up all over Silicon Valley.

Psychologists don’t really believe that, do they?

According to Salon, some psychologists still think the triune brain model is valid. They really need to get out more, dissect a frog brain or something, if they’re still clinging to that nonsense. The author summarizes an article that…

…addresses (and debunks) one of the most commonly-used metaphors in evolutionary psychology, the idea that the human brain evolved from lower life forms and hence has evolutionary remnants from those animals — akin to an onion with layers.

If you’ve ever heard someone speak of you possessing a “lizard brain” or “fish brain” that operates on some subconscious, primal level, you’ve heard this metaphor in action. This is called the triune-brain theory; as the authors write, the basic crux of it is that “as new vertebrate species arose, evolutionarily newer complex brain structures were laid on top of evolutionarily older simpler structures; that is, that an older core dealing with emotions and instinctive behaviors (the ‘reptilian brain’ consisting of the basal ganglia and limbic system) lies within a newer brain capable of language, action planning, and so on.”

Whoa. That’s silly. Of course, I have an edge: my early career in graduate school was spent studying the neuroanatomy and physiology of fish, and yes, they have a hindbrain, midbrain, and forebrain. All the pieces are there, they develop to different degrees in different lineages, and there are no linear ‘steps’ in evolution where whole new brain architectures suddenly appear. Even before that, as an undergraduate taking neuroscience from Johnny Palka, I recall how insistent he was that we had to regard the brain of Drosophila as both existing and capable of sophisticated processing. (It’s true, some people think insects don’t have brains. They’re wrong.)

I wonder if this is another consequence of the lingering belief in Haeckel’s erroneous ideas. I’ve skimmed through Dr. Spock’s baby book, and was surprised to see how much Rekapitulationstheorie saturates it. The creationists love to claim that introductory biology texts teach recapitulation as fact, when my experience is that they explain how it’s wrong; the creationists should look into the child psychology texts if they want better examples of a bad idea being promoted today.

So I had to look into the paper described in the Salon article. It’s titled “Your Brain Is Not an Onion With a Tiny Reptile Inside”, which is excellent. It gets right down to addressing the misconception from the very first words. The abstract is also succinct and clear.

A widespread misconception in much of psychology is that (a) as vertebrate animals evolved, “newer” brain structures were added over existing “older” brain structures, and (b) these newer, more complex structures endowed animals with newer and more complex psychological functions, behavioral flexibility, and language. This belief, although widely shared in introductory psychology textbooks, has long been discredited among neurobiologists and stands in contrast to the clear and unanimous agreement on these issues among those studying nervous-system evolution. We bring psychologists up to date on this issue by describing the more accurate model of neural evolution, and we provide examples of how this inaccurate view may have impeded progress in psychology. We urge psychologists to abandon this mistaken view of human brains.

Then Cesario, Johnson, and Eisthen name names. They show that this misbegotten misconception is a real issue by going through the literature and introductory textbooks.

Within psychology, a broad understanding of the mind contrasts emotional, animalistic drives located in older anatomical structures with rational, more complex psychological processes located in newer anatomical structures. The most widely used introductory textbook in psychology states that

in primitive animals, such as sharks, a not-so-complex brain primarily regulates basic survival functions. . . . In lower mammals, such as rodents, a more complex brain enables emotion and greater memory. . . . In advanced mammals, such as humans, a brain that processes more information enables increased foresight as well. . . . The brain’s increasing complexity arises from new brain systems built on top of the old, much as the Earth’s landscape covers the old with the new. Digging down, one discovers the fossil remnants of the past. (Myers & Dewall, 2018, p. 68) [no relation –pzm]

To investigate the scope of the problem, we sampled 20 introductory psychology textbooks published between 2009 and 2017. Of the 14 that mention brain evolution, 86% contained at least one inaccuracy along the lines described above. Said differently, only 2 of the field’s current introductory textbooks describe brain evolution in a way that represents the consensus shared among comparative neurobiologists. (See https://osf.io/r6jw4/ for details.)
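The survey’s arithmetic is internally consistent, which is worth a quick sanity check; a minimal sketch using only the figures quoted above:

```python
# Figures as quoted from the Cesario, Johnson, and Eisthen survey.
texts_sampled = 20
mention_brain_evolution = 14
accurate_texts = 2  # the only two matching the comparative-neurobiology consensus

inaccurate_texts = mention_brain_evolution - accurate_texts
pct_inaccurate = round(100 * inaccurate_texts / mention_brain_evolution)
print(inaccurate_texts, pct_inaccurate)  # 12 inaccurate texts, i.e. 86%
```

Twelve of the fourteen evolution-mentioning textbooks got it wrong, which rounds to the 86% the authors report.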

Not to blame only psychologists, they also point out that Carl Sagan popularized the idea further in The Dragons of Eden. I hate to puncture the warm happy glow Sagan’s name brings to many of us, me included, but that was a bad book. Don’t ask an astronomer to explain brain evolution and consciousness, ever. I’m looking at you, Neil.

The authors illustrate the misconception well. It’s a combination of errors: the idea that evolution is linear rather than branching, that humans are the pinnacle of a long process of perfecting the brain, and that we possess unique cerebral substrates to produce human capabilities. It isn’t, we aren’t, we don’t.

Incorrect views (a, b) and correct views (c, d) of human evolution. Incorrect views are based on the belief that earlier species lacked outer, more recent brain structures. Just as species did not evolve linearly (a), neither did neural structures (b). Although psychologists understand that the view shown in (a) is incorrect, the corresponding neural view (b) is still widely endorsed. The evolutionary tree (c) illustrates the correct view that animals do not linearly increase in complexity but evolve from common ancestors. The corresponding view of brain evolution (d) illustrates that all vertebrates possess the same basic brain regions, here divided into the forebrain, midbrain, and hindbrain. Coloring is arbitrary but illustrates that the same brain regions evolve in form; large divisions have not been added over the course of vertebrate evolution.

I’m kind of disappointed that this obviously flawed thinking has to be pointed out, but I’m glad someone is explaining it clearly to psychologists. Can we get this garbage removed from the textbooks soon? Or at least relegated to a historical note in a sidebar, where the error is explained?

Mystical Experiences @ Death!

That was the title of the lecture I attended last night, by our distinguished visiting professor, Allan Kellehear of the University of Bradford. It was … frustrating. Kellehear does have an excellent background in caring for the dying, and I would have enjoyed (if that’s the word) a discussion of the material and emotional needs of the dying, or hospice policy, or something along those lines, but instead it was an hour of Near Death Experiences (NDEs). I did agree with his conclusion, though: these phenomena are a complex outcome of cultural expectations, and we actually don’t know much about the biology. It’s just that the journey there was a catalog of unlikely interpretations of mundane events.

He began with the facts and figures, and told us that, for example, 20% of resuscitated individuals report having an NDE, and 30% of people report having a visitation from the dead. My question is: how are these numbers at all meaningful? There is a huge amount of selection bias here (which he admitted to), because my story of losing consciousness and later waking up is not going to draw any attention at all, while Eben Alexander’s fabulous story of going to heaven and meeting an all-powerful, awesome lord of creation gets on the New York Times bestseller list. It’s nice to have statistics, but I want to know how they were collected and interpreted, and without that, they’re meaningless.

I was also confused because later he mentioned that these NDE-like experiences were also reported by people in many stressful situations, like trapped miners. So once again, 20% of what? Shouldn’t the fact that I lost consciousness when I went to bed last night, as I’ve done every night for six decades, and did not have an other-worldly, out-of-body experience be counted among the negatives?
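The selection-bias worry is easy to make concrete with a toy simulation. All of the numbers below are invented for illustration: suppose only a small fraction of unremarkable “I blacked out and woke up” episodes ever get told, while dramatic stories almost always do. The rate you then measure among the stories that circulate can be wildly inflated over the true base rate.

```python
import random

random.seed(1)

TRUE_NDE_RATE = 0.05        # invented: 5% of episodes involve an NDE-like experience
P_REPORT_IF_NDE = 0.9       # invented: dramatic stories almost always get told
P_REPORT_IF_NOTHING = 0.05  # invented: mundane blackouts rarely get told

reports = []
for _ in range(100_000):
    had_nde = random.random() < TRUE_NDE_RATE
    p_report = P_REPORT_IF_NDE if had_nde else P_REPORT_IF_NOTHING
    if random.random() < p_report:
        reports.append(had_nde)

observed_rate = sum(reports) / len(reports)
print(f"true rate: {TRUE_NDE_RATE:.0%}, rate among reports: {observed_rate:.0%}")
```

With these invented numbers, a 5% true rate shows up as roughly half of the collected reports. The headline figure tells you more about the reporting filter than about the experience, which is exactly why the survey methodology matters.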

He also gave us a list of the canonical events during an NDE: the dark tunnel, the Being of Light, the visiting of dead relatives, etc. I felt like pointing out that he, an authority on this subject, had just primed a large audience on exactly what they’re supposed to experience if they ever have an NDE. Not that that’s his fault: there are movies and books and stories told on daytime television that reinforce these perceptions, and there’s a widespread cultural idea about them that we’re already soaking in.

I also wondered…if I were in a coma, and woke up and reported that my consciousness spent that time wandering in a cosmic darkness, or that I remembered visiting the shores of an alien sea and meeting Space Squid, would that even count as an NDE? He’s got a checklist, you know, and if I were asked if I saw the Being of Light, and I said “No,” would that mean I didn’t have an NDE?

Most annoying of all, though, was all the neuroscience bashing. He really is not impressed with the neuroscientific explanations of the phenomenon, and neither am I, because he gave us a long list of scientific explanations that did not include the dominant hypothesis. He talked about scientists sticking electrodes on the heads of unconscious patients to record EEGs during their NDE, or drawing blood to measure blood gases, and hypotheses about anoxia, or endorphins, or ocular pressure increases, or similar attempts to explain NDEs as events that occurred during the trauma or the coma, and the one time he named one of these neuroscientists, it was Michael Persinger. We’re talking fringe of the fringe. The neuroscientists I know would just roll their eyes at these accounts, in the same way we’d dismiss those weird experiments with putting dying people on precision balances to measure the weight of the soul at the moment it left the body. It’s missing the whole point.

But he didn’t even mention how most neuroscientists would explain NDEs. They don’t occur during the event, because the brain is not functioning at all well during that time. They are confabulations assembled by the brain once its function is restored.

Minds abhor gaps. Our consciousness works hard to maintain the illusion of continuity, and we even invent stories to explain where our consciousness “went” during its absence. We do this all the time without even thinking about it.

A mundane example: have you ever lost your keys, or your glasses? It happens all the time. We’re often not thinking about routine events, and we don’t bother to store them in our memories, so I get up in the morning, stumble about in a fog while doing the things I do almost every day, and I don’t have to pay conscious attention to them. But maybe later I wonder where I put my glasses, and my wife tells me, “They’re here on the kitchen counter,” and my brain instantly generates a plausible explanation. “I must have put them there when I was making the coffee,” I think. If I were asked at that moment, I would even put together a fairly detailed narrative about walking into the kitchen and taking them off as I was filling the pot with water — but the thing is, I didn’t know this. I don’t actually remember it. If I had, I wouldn’t have been wondering where I’d put them.

We do this constantly. Memories aren’t detailed recordings of everything you’ve done or experienced, they’re a scattered set of anchoring specifics with a vast amount of narrative filler generated as necessary by your brain, based upon a plausible model of how the world works. So I don’t remember taking my glasses off, but I do have a model of the world that includes me taking them off while doing kitchen tasks, so voila, a story is easily assembled. If I had a world model that included elves, I might have built a story that said, “Those pesky elves must have put them there!”, and then the fun begins, because the observation that my glasses were where I hadn’t remembered putting them becomes confirmation of my model of the world that includes elves.

We really don’t like the idea that our consciousness isn’t always present in our heads, that it’s an epiphenomenon of constant invention, so we have explanations for where it goes when it isn’t particularly active. I intentionally put my glasses on the counter, I just forgot. Most interestingly, we go through a period of unconsciousness every day, and we don’t freak out about where our minds went. We were “sleeping”, we say, our minds were still there, busily doing nothing, and this word “sleep” consoles us that our consciousness did not stop existing for hours and hours.

Similarly, NDEs are a conscious narrative we build to explain what happened to ourselves during radical, traumatic events. We blanked out, our minds stopped humming along, where did our self go? It had to have gone somewhere, it can’t just stop, so our brains build a story from conventional expectations to prevent an existential crisis. It’s what we do. And if it’s near-death, how convenient that we throw in Dead Uncle Bob, who we know is dead, but we have these niggling questions about where Uncle Bob went, so clearly we must have both gone to the same place. The idea that a consciousness ceased to exist is inconceivable, after all.

If Kellehear had actually discussed what neuroscientists believe, it would have been something along those lines, on the ephemeral and contingent nature of consciousness, and he wouldn’t have brought up silly ol’ crackpot Persinger as representative. It would have also revealed that neuroscientists are actually in alignment with his ideas about the importance of history and culture and religion and emotion in shaping human responses to death, that it’s not really a hard-wired part of our neural circuitry. So that was a little unsatisfying.

There was also a bit near the end where he got into a bit of Dawkins bashing — but for all the wrong reasons. He railed against the arrogance of a scientist claiming to know that there is no god. I felt like saying that that arrogance pales in comparison with the ubiquitous, overbearing hubris of claiming to not only know that there is a god, but that one knows exactly what kinds of sexual behaviors that god enjoys, and that one has this certainty in spite of the fact that there is no independent evidence of any kind that this supreme being even exists. But I was being nice. It was also an event packed full of community members — “townies” — who were there to listen to an academic reinforce their model of the world, and they weren’t going to appreciate someone telling them that elves aren’t real.

Teams of Memes, bursting from the seams


Daniel Dennett’s From Bacteria to Bach and Back is a lengthy and winding journey. It is characterized (including by its publisher) as a general explanation of the evolution of minds and various peculiar mental functions, consciousness and language being the two most hotly discussed by philosophers, but there’s a better way to read it. At its best, the book is a tour of Dennett’s personal philosophical repertoire, illustrating how ideas from his books and papers fit together.

Dennett’s general theory of the development of minds stems from his broad theory of memes, where a meme is any informational entity that can be transmitted and replicated. The rough idea is that minds are meme-machines in the way that organisms are gene-machines (in Dawkins’ gene’s-eye view analogy). This is a fruitful analogy, in some respects, though I think it can and should draw some skepticism from readers. I’ll return to those worries later.

The basic building blocks of Dennett’s view are indicated by gestures and short explanations, which is a challenge for the reader, since he’s spent so much time discussing and arguing for them elsewhere in his work. In any case, there are really two that are important to understand.


I was compelled to post this

I said I didn’t want to say anything about free will, and I still don’t, but Massimo Pigliucci weighed in, and Jerry Coyne responded, and so did Sean Carroll, and of course I created a free will thread for everyone else to talk about it, so I guess there’s a fair bit of momentum behind it all.

I don’t understand why free will was getting all tangled up in indeterminacy vs. determinism, since that seems to be a completely independent issue. I’ll sum up my opinion by agreeing with Jerry Coyne:

Of course, whether the laws of physics are deterministic or probabilistic is, to me, irrelevant to whether there’s free will, which in my take means that we can override the laws of physics with some intangible “will” that allows us to make different decisions given identical configurations of the molecules of the universe. That kind of dualism is palpable nonsense, of course, which is why I think the commonsense notion of free will is wrong.

My mind is a product of the physical properties of my brain; it is not above them or beyond them or somehow independent of them. It doesn’t even make sense to talk about “me”, which is ultimately just another emergent property of the substrate of the brain, as something modifying how the brain acts. It is how the brain acts.

I think consciousness is a product of self-referential modeling of how decisions are made in the brain, in the absence of any specific information about the mechanisms of decision-making. It’s an illusion generated by a high-level ‘theory of mind’ module, one that produces highly simplified, highly derived models of how brains work and that also happens to be applied to our own brain.

(Also on FtB)

What have the students been up to this week?

It’s another update on the bloggin’ students in my Neuroscience course, and what they’ve been thinking about.

They all welcome visits and comments!


What have my students been thinking about lately?

I gave them an exam, that’s what. That and long boring lectures at 8am on pattern formation in the nervous system. But otherwise, I’ve had them blogging, so we can take a peek into the brain of a typical college student and see what actually engages them.

I understand these are the things all college students everywhere are contemplating.


What have my students been thinking about this week?

I’ve got my neurobiology students blogging — all I ask is that they write something relevant to understanding how brains work. Let’s see where their minds are at this week, shall we?


Wiring the brain

This story is some kind of awesome:

For those who don’t want to watch the whole thing, the observation in brief is that color perception is affected by color language. The investigators compare Westerners, with our familiar language categories for color (red, blue, green, yellow, etc.), to the Himba people of northern Namibia, who have very different categories: they use “zoozu”, for instance, for dark colors, which includes reds, greens, blues, and purples; “vapa” for white and some yellows; and “borou” for specific shades of green and blue. Linguistically, they lump together some colors for which we have distinct names, and they also discriminate other colors that we lump together as one.

The cool thing about it all is that when they give adults a color discrimination test, there are differences in how readily we process and recognize different colors that correspond well to our language categories. Perception in the brain is colored (see what I did there?) by our experiences while growing up.
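The lumping-and-splitting point can be sketched as a toy model. The hue values and category boundaries below are invented for illustration, not measured English or Himba boundaries: the prediction is that two hues falling in the same named category are harder to tell apart than two hues straddling a boundary, even when the physical distance between them is the same.

```python
# Toy hue categories on a 0-360 color wheel. All boundaries are invented
# for illustration; real English and Himba categories differ from these.
ENGLISH = {"green": (90, 180), "blue": (180, 270)}
HIMBA_LIKE = {"borou": (140, 220), "zoozu": (220, 300)}  # lumps some greens and blues

def category(hue, system):
    """Return the category name whose range contains this hue."""
    for name, (lo, hi) in system.items():
        if lo <= hue < hi:
            return name
    return "other"

def same_category(hue_a, hue_b, system):
    return category(hue_a, system) == category(hue_b, system)

# Two hues 30 degrees apart, straddling the toy English green/blue boundary:
print(same_category(170, 200, ENGLISH))     # False: "green" vs "blue"
print(same_category(170, 200, HIMBA_LIKE))  # True: both "borou"
```

The same physical pair crosses a category boundary in one system but not the other, which is the kind of asymmetry the discrimination tests pick up.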

The study is still missing one part, though. It’s presented as an example of plasticity in wiring the brain, where language modulates color perception…but we don’t know whether the Himba might also have subtle genetic differences that affect color processing. The next cool experiment would be to raise a European/American child in a Himba home, or a Himba child in a Western home (the latter is more likely to occur than the former, admittedly) and see whether the differences are due entirely to language, or whether there are some actual inherited differences. It would also be interesting to see whether adults who become bilingual late in life experience any shifts in color perception.
