A patchwork dodo is not a dodo

Somebody has been watching too much Jurassic Park. They should read the original novel, which was a badly written Luddite pot-boiler whose take on genetic technology emphasized the horrible ways the technology would inevitably go wrong (a tiresome theme in practically all Michael Crichton novels), while the movies just highlighted the glorious resurrection of really cool animals. I guess the latest movie has hordes of perfectly healthy, vigorous dinosaurs swarming across the American West, as if that could happen.

In yet another George Church production, his company, Colossal Biosciences, proposes to resurrect the dodo, just as he said he was going to bring back the mammoth and thylacine. He hasn’t accomplished any of it. I’ll go out on a very thick limb and say he’s never going to succeed. The procedure, using CRISPR to incrementally patch dodo genes into an extant bird species, is fundamentally flawed.

To create a dodo from such genetic information, the company plans to try to modify the bird’s closest living relative, the brightly colored Nicobar pigeon, turning it step by step into a dodo and possibly “re-wilding” the animal in its native habitat.

Colossal has not yet created any kind of animal. It’s still working on developing the necessary processes. And making a dodo might not even be possible. That’s because it is hard to predict how many DNA changes will be needed to transform the Nicobar pigeon into a big-beaked, three-foot-tall dodo.

The dodo had a full, functioning, integrated genome that evolved gradually under a regime of continual selection — every intermediate was viable. Colossal’s approach is to splice a few dodo genes into a pigeon, raise it up, splice in a few more genes, etc. Those dodo genes evolved in a dodo genome. Gene A was in a cooperative relationship with gene B in the dodo, but you’ve just popped gene A into a genome that has a very different version of pigeon gene B. The gene you want to insert might be seriously deleterious in a pigeon context, and you don’t know what the relationship is. The dodo genes might also be optimized for a completely different environment, yet you’re trying to make them viable in lab-bred animals.

It’s insane. They’re going to plunk a few ancient genes into some poor pigeon and declare victory, but all they’re going to do is produce a sad, fat, flightless bird that is totally maladapted to every environment: not a dodo at all, but a weirdly warped mutant pigeon. Good luck getting Chris Pratt to herd the flock around the landscape.

At least the dodo is only three feet tall…I can’t imagine what kind of botchwork monstrosity they’re going to build out of elephant stock. And they’re talking about “rewilding” these animals! The world they were adapted to no longer exists, and these mutant freaks will not be able to thrive anywhere. It’s pure fantasy to imagine they can turn some loose in an environment that doesn’t want them, where the forces that drove the original extinction still exist, and get a self-sustaining natural population. These are not serious ideas.

But they’ve got serious money.

The two-year-old startup also said today that it had raised a further $150 million in funding (bringing the total it’s raised to $225 million)—some of which will go to a new effort around bird genomics.

How do they do that? Easy. It’s all hype. They’re building on the flashy, fictional pseudoscience plotted by Jurassic Park, with an audience of stupid rich people who are impressed by CGI and confuse it with reality. Hey, if you can sell Bitcoin, you can sell fantasy animals that don’t exist to people with too much money. They even admit it.

Colossal’s investors include the billionaire Thomas Tull, the CIA’s venture capital arm, and the prominent biotech venture capitalist Robert Nelsen. Nelsen invested in the company because de-extinction “is just really cool,” he said in an email. “Mammoths and direwolves are cool.”

Oh god. Billionaires are so fucking stupid. All this money, pouring into an absurd project, and what are they going to do with it? It’s all about profit in the minds of the people throwing cash at it.

Because there isn’t much money to be made in conservation, how Colossal will ever turn a profit is another evolving question. One Colossal executive told MIT Technology Review that the company could sell tickets to see its animals, and Lamm believes the technologies needed to create the mammoth or the dodo will have other commercial uses.

Conservation isn’t profitable, but you know what is? A $225 million freak show, with dismal mutant animals in cages. Pleistocene Park! Yeah, that’s the ticket! The concept made money in that movie and book written by a guy who hated science, so let’s try that!

I knew that venture capitalists were evil and stupid, but it’s disappointing that so many highly trained molecular biologists are being sucked into this futile endeavor by all the hype-train money flowing into it. And George Church — he used to be a well-regarded Smart Guy, but now his reputation is going to be as an ethically challenged P.T. Barnum.

Spiders make it look easy

What’s the difference between engineering and hacking? I think this video is a good illustration. Some guys decided to try to make a giant rideable mechanical hexapod, and documented on video how the whole project floundered and ultimately failed.

It’s infuriating how half-assed they were about the work. They start by welding together big chunks of steel. No model, no prototyping, no estimating of forces, negligible planning. They get something that sort of crudely moves individual components, and then slap together a rough controller (“each of the legs makes the same movement, with different timing,” ha ha), and try to get it to just stand up, and then take a few steps. It’s constantly failing, and then they rush in and replace another component with one that’s more powerful, not worrying about all the cascading consequences of such an action. Eventually they get it to clumsily walk a few steps and tear its own frame apart.

It’s the power of brute stupidity in action. I’m appalled that they got so much money and invested so much time in such a poorly thought-out project.

I half expected them to come to the revelation that it was a hexapod, not a spider at all, and try to weld on two more legs to make it work. That kind of ad hoc make-it-up-as-you-go-along approach characterizes the whole thing. The video is a kind of anti-advertisement for ever hiring these clowns to do a serious project.

The eugenicists are always oozing out of the woodwork

What do you mean, “enhancement”? Who are you to decide what’s better?

Émile Torres explores the relationship between longtermism/effective altruism and scientific racism.

longtermism, which emerged out of the effective altruism (EA) movement over the past few years, is eugenics on steroids. On the one hand, many of the same racist, xenophobic, classist and ableist attitudes that animated 20th-century eugenics are found all over the longtermist literature and community. On the other hand, there’s good reason to believe that if the longtermist program were actually implemented by powerful actors in high-income countries, the result would be more or less indistinguishable from what the eugenicists of old hoped to bring about. Societies would homogenize, liberty would be seriously undermined, global inequality would worsen and white supremacy — famously described by Charles Mills as the “unnamed political system that has made the modern world what it is today” — would become even more entrenched than it currently is.

I would have predicted the connection long ago. EA trips a whole bunch of red flags in my head.

  • The incessant chatter about IQ. We don’t know what IQ is, other than a number generated by an IQ test, so making the concept central to your philosophy is a bit like building your reason for living on phrenology. Sure, you can actually measure the bumps on your skull and use scientific-looking tools like calipers and quantitatively calculate their dimensions, but does it mean anything about how your brain works? No, it does not. At the first mention of IQ, run away.
  • The lack of relevant qualifications. Look at the big guns of EA: Bostrom, MacAskill, Yudkowsky, Alexander, Hanson (I’ll even toss in Sam Harris, although he doesn’t seem to be deeply involved in EA). Do any of them have any background in genetics at all? They do not. Yet they go on and on about dysgenics and eugenics and trends in populations that have to be countered, or they defend Charles Murray’s (also not a geneticist) racist interpretations of traits of whole populations. This problem goes all the way back to the founders of the eugenics movement, who, like Francis Galton, weren’t geneticists at all, or who, like Davenport, immediately used the crudest, most primitive forms of Mendelism to justify bad science.

  • Transhumanism as a tool for improving humanity. I have some sympathy for the idea of modifying genes and bodies by individuals; that’s a fine idea, I wish we had greater capabilities for that. Where I have problems is when it’s seen as a method of social engineering. Underlying it all is a set of value judgments defining how we should regard diversity in our fellow human beings. If you’re arguing we ought to use gene therapy or drugs to eradicate obesity, or autism, or color-blindness, or whatever, you’ve already decided that a whole lot of existing attributes of the human population are dysgenic or undesirable, yet you don’t know what all the correlates of those traits might be. You’re also viewing those people through a lens that highlights everything about them that you personally consider bad.

    The thing is, we’re all born with a range of traits that are basically random, within certain limits. Everything about you, all 20,000 genes, is a roll of the dice. A philosophy that does not insist that every combination deserves equal respect, equal justice, and equal compassion is an anti-human philosophy, because it denies a fundamental property of our biology.

Those are just the general red flags that can be thrown by a whole suite of common ideas. EA throws one that I would never have imagined anyone would take seriously, this bizarre idea that we ought to consider the happiness of hypothetical, imaginary human beings far more important than the happiness of real individuals in the here and now. I can’t even…this is crazy cultist bullshit. I do not understand how anyone can fall for it. Except…yeah, they’re using the universal excuses of the modern Enlightenment.

And no one should be surprised that all of this is wrapped up in the same language of “science,” “evidence,” “reason” and “rationality” that pervades the eugenics literature of the last century. Throughout history, white men in power have used “science,” “evidence,” “reason” and “rationality” as deadly bludgeons to beat down marginalized peoples. Effective altruism, according to the movement’s official website, “is the use of evidence and reason in search of the best ways of doing good.” But we’ve heard this story before: the 20th-century eugenicists were also interested in doing the most good. They wanted to improve the overall health of society, to eliminate disease and promote the best qualities of humanity, all for the greater social good. Indeed, many couched their aims in explicitly utilitarian terms, and utilitarianism is, according to Toby Ord, one of the three main inspirations behind EA. Yet scratch the surface, or take a look around the community with unbiased glasses, and suddenly the same prejudices show up everywhere.

“Science,” “evidence,” “reason” and “rationality” are supposed to be tools to help lead you to the truth, but it’s all too easy to decide you already possess the truth, and then they transform into tools for rationalization. That’s not a good thing. You can try to rationalize any damn fool nonsense, and that’s the antithesis of the scientific approach.

It’ll be great when they work out the quirks in AI

For instance, the Arabian Journal of Geosciences has a problem: its articles are too obviously fake.

Some titles of the farkakte research: “Simulation of sea surface temperature based on non-sampling error and psychological intervention of music education”; “Distribution of earthquake activity in mountain area based on embedded system and physical fitness detection of basketball”; “The stability of rainfall conditions based on sensor networks and the effect of psychological intervention for patients with urban anxiety disorder.” A complete list of the retracted papers can be found here.

They read a bit like a college student throwing around big words to cover up a lack of understanding. Though purportedly written by humans, the content of each paper definitely reads as if it were put together by a computer that doesn’t quite grasp speech patterns or grammar. The papers are filled with redundancies and generally lack logic.

Right away, I noticed a problem: they should have used the more formal German “verkakte” rather than the alternative Yiddish spelling. Oh, right, and the paper titles are absurd, too.

I see two sources of problems: institutions that demand frequent publications, even where it isn’t warranted, and extremely lazy journal editors who rubberstamp everything.

As amusing (or alarming) as the idea of earthquakes being connected to basketball might be, the screwup highlights issues in science publishing that let farcical research slip into the realm of real work. As highlighted by the Chronicle of Higher Education in August, when the 400-odd papers in the geosciences journal got expressions of concern attached to them, many suspicious papers appear to have been written by scholars affiliated with Chinese institutions, where researchers are incentivized (sometimes financially) to publish in notable journals and where many doctoral students must publish a paper before graduation. The founder and editor-in-chief of the Arabian Journal of Geosciences told the Chronicle at the time that he reads every paper published in the journal each month (which would mean about 10 papers per day, including weekends), and that he thinks the fabricated research got into the journal through hacking.

Sure. Abdullah M. Al-Amri, editor-in-chief of the journal, reads every submission, and all those ridiculous papers must have been hacked into place. He reads every article, except he never takes a look at the journal once it’s been published. And he never reads the correspondence from legit researchers who point out the kind of crap getting splattered all over the pages. He just failed to notice that “Structure of plain granular rock mass based on motion sensor and movement evaluation of dancers” got published.

Just wait until Chinese researchers discover ChatGPT. We desperately need our pseudoscientific garbage to be more readable.

It’s a brain! Don’t trust it

Well, ain’t this a kick in the pants. Here’s a compilation of failed concepts in psychology: the oft-mentioned Stanford Prison Experiment was a badly botched study, the Pygmalion effect is small and inconsistent, the Milgram experiment is full of experimental errors, etc., etc., etc. It’s rather depressing.

As someone who spends a lot of time online, though, I was relieved to learn that screen time isn’t responsible for feeling low.

Lots of screen-time is not strongly associated with low wellbeing; it explains about as much of teen sadness as eating potatoes, 0.35%.

So you’re saying I should cut potatoes out of my diet, then?
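To put that quoted number in perspective, a quick back-of-the-envelope conversion (this is my arithmetic, not the original study’s) turns “explains 0.35% of the variance” into a correlation coefficient, since r² is the fraction of variance explained:

```python
# Convert the quoted "0.35% of variance explained" into a Pearson correlation.
# r-squared is the fraction of variance explained, so r = sqrt(0.0035).
import math

variance_explained = 0.0035          # 0.35%, as quoted above
r = math.sqrt(variance_explained)    # the implied correlation coefficient
print(round(r, 3))                   # 0.059
```

A correlation of about 0.06 is, for any practical purpose, indistinguishable from noise, which is the potato joke in numbers.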

The impression I get is that a lot of the popular ideas that have emerged out of psychology arise not because the experimenter is rigorous and cautious, but because they either conform to conventional wisdom or are surprisingly contrary. There’s also something analogous to the TED Talk effect, where people are convinced more by the certainty of the presentation of the story than by the data. I’m beginning to develop my own rubric for assessing psychological claims: if it’s so simple that it gets condensed down to just the investigator’s name, it’s probably shoddy work with questionable validity. I’m calling it the Myers Rule.

The author of the list says something I think is worth keeping in mind, though. They’re talking about the concept of Ego Depletion, which has a substantial wiki page.

It’s 3500 words, not including the Criticism section. It is rich with talk of moderators, physiological mechanisms, and practical upshots for the layman. And it is quite possible that the whole lot of it is a phantom, a giant mistake. For small effect sizes, we can’t tell the difference. Even people quite a bit smarter than us can’t.

If I wander around an old bookshop, I can run my fingers over sophisticated theories of ectoplasm, kundalini, past lives, numerology, clairvoyance, alchemy. Some were written by brilliant people who also discovered real things, whose minds worked, damnit.

We are so good at explaining that we can explain things which aren’t there. We have made many whole libraries and entire fields without the slightest correspondence to anything. Except our deadly ingenuity.

Human brains are so easily diddled by grand simplifications (religion, for instance) that they’ll then turn phantasms into sweeping, detailed rules for existence. It’s all superstitious behavior in the psychological sense — we’re all searching for patterns so obsessively that if they aren’t there, our minds start imposing them on the world.

I’m so glad I’m not working in psychology. Evolution and developmental biology would never cultivate popular errors. Wait — but those sciences are studied by human minds, which are clearly kind of squirrely.

Well, well, well…look what just got published in Nature

Nature’s Journal of Human Genetics, that is. It’s a little piece titled “The collective effects of genetic variants and complex traits” by Mingrui Wang & Shi Huang, and the abstract is a bit odd.

Traditional approaches in studying the genetics of complex traits have focused on identifying specific genetic variants. However, the collective effects of variants have remained largely unexplored. Here, we evaluated whether traits could be influenced by the collective effects of variants across the entire protein coding-region of the genome or the entire genome. We studied the UK Biobank exome sequencing data of 167,246 individuals as well as the genome-wide SNP array data of 408,868 individuals. We calculated for each individual four different measures of genetic variation such as heterozygosity and number of variants and two different measures of the overall deleteriousness of all variants, and performed correlations with 17 representative traits that have been studied previously. Linear regression analysis was performed with adjustment for age, sex, and genetic principal components. The results showed a high correlation among the six different measures and an inverse association of two well-correlated traits (educational attainment and height) with the total number of all variants as well as the overall deleteriousness of all variants. We have also categorized the genes based on whether they are expressed in the brain and found that the association with educational attainment only held for the brain-expressed genes. No other traits examined showed a significant correlation with the brain-expressed genes. The study demonstrates that common traits could be studied by analyzing the overall genetic variation and suggests that educational attainment is inversely related to genetic variation.
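For readers who want to see what “linear regression analysis … with adjustment for age, sex, and genetic principal components” actually means mechanically, here is a minimal, purely illustrative sketch on simulated data. Nothing below comes from the paper — the data, the effect size, and the hand-rolled least-squares solver are all my own stand-ins:

```python
# Illustrative sketch: regress a trait on a per-individual "diversity" measure
# while adjusting for covariates (age, sex). Ordinary least squares solved
# from the normal equations (X'X) b = X'y with Gaussian elimination.
# All numbers are simulated; this is NOT the paper's pipeline.
import random

def ols(X, y):
    """Least-squares coefficients for y ~ X (X includes an intercept column)."""
    k = len(X[0])
    # Normal equations A b = c, where A = X'X and c = X'y.
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(1)
rows, trait = [], []
for _ in range(5000):
    diversity = random.gauss(0, 1)      # stand-in for a heterozygosity measure
    age = random.gauss(50, 10)
    sex = random.choice([0, 1])
    # Simulated trait: a small negative "effect" of diversity plus noise.
    t = -0.05 * diversity + 0.02 * age + 0.1 * sex + random.gauss(0, 1)
    rows.append([1.0, diversity, age, sex])  # leading 1.0 is the intercept
    trait.append(t)

beta = ols(rows, trait)
print(f"adjusted coefficient on diversity: {beta[1]:.3f}")  # true value is -0.05
```

The point of the adjustment is that the coefficient on diversity is estimated while the covariates soak up their own share of the variance; it does nothing to rule out confounders that were never measured, which is exactly the problem with the paper’s interpretation.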

Basically, the authors ran some correlations on genomic data in a database, and they think they’ve found an inverse association: high genetic diversity in a population is coupled with low educational attainment. That is, coming from a region with high genetic diversity, like, say, Africa, is correlated with lower educational attainment. To which I would suggest that maybe that’s not surprising: a continent that has been exploited and colonized for centuries might have historical reasons for its people not having the advantages of the colonizer countries. But this paper wants to imply that that educational handicap is genetic.

Who reviewed this thing, anyway?

The authors use cautious wording in the abstract. The corresponding author, Shi Huang, is letting his racist freak flag fly on Twitter, though. He’s explaining how we’re supposed to interpret it.

Our latest paper. We show that genetic diversity is a new genetic factor in cognition, challenging the Out of Africa model. GD in human is under selection and has little to do with long evolutionary time or being human ancestors.

Excuse me? We’ve already gone from “educational attainment” to “cognition,” which is enough of a leap, but he’s somehow using these correlations to claim that modern humans did not evolve from African ancestors? Data not shown. Then, remarkably, he claims that genetic diversity is somehow selected for, that it has nothing to do with history or ancestry. Fucking data not fucking shown. This makes no sense. How does he get to this conclusion?

The San have the highest GD and the lowest cognition/civilization in humans and are believed to be the ancestors, i.e., Out of Africa. The only alternative is the maximum GD theory that GD has an upper limit, which is inversely proportional to brain function or complexity.

That this African group has high genetic diversity compared to other populations has been noted before, but “lowest cognition”? That’s absurd. This smacks of the discredited pseudoscientific racism of Shockley and Lynn. “Lowest civilization”…again, how do you measure that? It’s more Western bias.

Then he completely demolishes his credibility. No one believes the San are the ancestors of other groups of people; they can’t be. They’re a modern human culture. They’re as derived as any other population on the planet, equally divergent from our shared distant ancestors. This guy is a professor of genetics? And there he goes again, blithely transforming “educational attainment” into “brain function or complexity.” That’s not valid.

The thread just goes haring after all kinds of absurdities.

This inverse relationship holds well among species (the higher the complexity, the lower the GD), which means that the highest GD of the San may be the reason for their cognition level being the lowest. The origin of humans may actually not be in Africa but in E Asia.

Wait wait wait. So he’s arguing that highly inbred species with high degrees of homozygosity, like many lab animals, are going to have a higher “cognition level” than wild, genetically diverse animals? How did he measure “complexity”? He’s claiming a correlation throughout the animal kingdom; did he really measure the genetic diversity of a large number of species and show that small brains are a causal consequence of greater genetic diversity? Alternative hypothesis: the larger your brain, the more specialized you have to be, and the smaller your population size can be, thereby limiting the number of variants that can exist in the population. You aren’t large-brained because your population has limited diversity. And because all Shi Huang has done here is a correlational analysis, he can’t claim causality anyway.
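The confounding problem is easy to demonstrate. In this toy simulation (entirely made-up numbers, nothing to do with real genomic data), a hidden third variable drives both “diversity” and the “trait”, and a strong correlation appears even though the causal effect of diversity on the trait is exactly zero:

```python
# Toy demonstration that a confounder can manufacture a strong correlation
# between two variables that have zero causal connection to each other.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand from definitions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
# "history" is the confounder: it drives both variables below.
history = [random.gauss(0, 1) for _ in range(10_000)]
diversity = [h + random.gauss(0, 0.5) for h in history]  # depends on history only
trait = [-h + random.gauss(0, 0.5) for h in history]     # depends on history only

print(f"r = {pearson(diversity, trait):.2f}")  # strongly negative, zero causation
```

Swap in “centuries of exploitation and colonization” for the confounder and you have a perfectly mundane explanation for the paper’s headline association that requires no genetic story at all.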

He really has a bug up his butt about the out-of-Africa model. I suspect it’s more about not wanting to have African ancestors, and is fundamentally a racist bias.

For completeness’ sake, here are the rest of his claims. I don’t care anymore. He’s a fool.

Our study analyzed genotype/phenotype from more >400,000 people in the UK, calculated multiple measures of GD for each individual, and examined which traits these measures were associated with using linear regression analysis that has controlled for confounding factors.
Among 17 traits examined, only education attainment, a proxy of IQ, has the best association (inverse) with GD. Only brain-expressed genes, but not brain-non-expressed, showed an association. The association of non-syn variants is higher than that of syn or intronic variants.
Low cognition is subject to natural selection, and so the underlying GD must be also rather than being time-related and not subject to natural selection as assumed by the molecular clock and neutral theory. The finding challenges the assumption of the OOA model.

I tried to dig deeper into who this guy is, but was repelled because he seems to be beloved by the scientific racists. For instance, I got a little of his background from Fria Tider (“Free Times”), a radical right-wing newspaper in Sweden.

Shi Huang received his doctorate from the Univ. of California… and then worked… for a couple of decades, including as an associate professor at The Sanford-Burnham Institute. In 2009 he moved back to China and has since been a professor at Central South University in Hunan. Today he has a professorship in genetics, epigenetics and evolution…

Unfortunately, I got there from a horrible racist blog called “subspecieist”, which, I’m sorry to say, I won’t link to because it is so deeply despicable, but I will mention a previous “discovery” by Shi Huang that got them extremely excited.

Geneticist Dr. Shi Huang: Shocking evidence, Africans closer genetically to Chimpanzees than Eurasians

Jesus. What an ignorant crock of shit. No. That makes no sense at all. Both modern Africans and modern Eurasians are equally distant from our last common ancestor with chimpanzees. Shi Huang really desperately wants to argue that he didn’t have any black ancestors, I guess, and he’ll make all kinds of illogical leaps to demonstrate that.

And this crank still gets published by Nature.

Does he run all of his companies this way?

It shouldn’t be smiling, I don’t think.

More information has been released about how Neuralink is run. Hoo boy. As with Twitter, the problems with this company can be traced right back to the boob who owns the company. Let’s start with the nice bits.

In some ways, Neuralink treats animals quite well compared to other research facilities, employees said in interviews, echoing public statements by Musk and other executives. Company leaders have boasted internally of building a “Monkey Disneyland” in the company’s Austin, Texas facility where lab animals can roam, a former employee said. In the company’s early years, Musk told employees he wanted the monkeys at his San Francisco Bay Area operation to live in a “monkey Taj Mahal,” said a former employee who heard the comment. Another former employee recalled Musk saying he disliked using animals for research but wanted to make sure they were “the happiest animals” while alive.

That’s nice. We just have to picture a Disneyland with visitors in cages, subject to botched experimentation and death. Happiest place on Earth!

Then we get the raw numbers.

In all, the company has killed about 1,500 animals, including more than 280 sheep, pigs and monkeys, following experiments since 2018, according to records reviewed by Reuters and sources with direct knowledge of the company’s animal-testing operations. The sources characterized that figure as a rough estimate because the company does not keep precise records on the number of animals tested and killed. Neuralink has also conducted research using rats and mice.

Killing 1,500 animals in and of itself is not horrible — any pharmaceutical company or stockyard is going to have bigger numbers than that. But that one comment, “the company does not keep precise records on the number of animals tested and killed”, is damning. How do you not keep precise records? You’ve got lab notebooks, you’ve got invoices from purchasing, you’ve got an animal care facility that has to be tracking food, medication, and housing for all of their animals. Presumably every experiment and every outcome is documented. I keep better track of my spiders, which as invertebrates are not regulated at all the way vertebrates are, than Musk’s company kept of its pigs.

It boils down to a top-down corporate culture that was all about cracking the whip and driving employees to work faster, faster, faster. This is bad policy.

But current and former Neuralink employees say the number of animal deaths is higher than it needs to be for reasons related to Musk’s demands to speed research. Through company discussions and documents spanning several years, along with employee interviews, Reuters identified four experiments involving 86 pigs and two monkeys that were marred in recent years by human errors. The mistakes weakened the experiments’ research value and required the tests to be repeated, leading to more animals being killed, three of the current and former staffers said. The three people attributed the mistakes to a lack of preparation by a testing staff working in a pressure-cooker environment.

One employee, in a message seen by Reuters, wrote an angry missive earlier this year to colleagues about the need to overhaul how the company organizes animal surgeries to prevent “hack jobs.” The rushed schedule, the employee wrote, resulted in under-prepared and over-stressed staffers scrambling to meet deadlines and making last-minute changes before surgeries, raising risks to the animals.

Musk has pushed hard to accelerate Neuralink’s progress, which depends heavily on animal testing, current and former employees said. Earlier this year, the chief executive sent staffers a news article about Swiss researchers who developed an electrical implant that helped a paralyzed man to walk again. “We could enable people to use their hands and walk again in daily life!” he wrote to staff at 6:37 a.m. Pacific Time on Feb. 8. Ten minutes later, he followed up: “In general, we are simply not moving fast enough. It is driving me nuts!”

On several occasions over the years, Musk has told employees to imagine they had a bomb strapped to their heads in an effort to get them to move faster, according to three sources who repeatedly heard the comment. On one occasion a few years ago, Musk told employees he would trigger a “market failure” at Neuralink unless they made more progress, a comment perceived by some employees as a threat to shut down operations, according to a former staffer who heard his comment.

Then we discover how Neuralink ran afoul of UC Davis’s animal care guidelines. They were sloppy fuck-ups.

The first complaints about the company’s testing involved its initial partnership with University of California, Davis, to conduct the experiments. In February, an animal rights group, the Physicians Committee for Responsible Medicine, filed a complaint with the USDA accusing the Neuralink-UC Davis project of botching surgeries that killed monkeys and publicly released its findings. The group alleged that surgeons used the wrong surgical glue twice, which led to two monkeys suffering and ultimately dying, while other monkeys had different complications from the implants.

The company has acknowledged it killed six monkeys, on the advice of UC Davis veterinary staff, because of health problems caused by experiments. It called the issue with the glue a “complication” from the use of an “FDA-approved product.” In response to a Reuters inquiry, a UC Davis spokesperson shared a previous public statement defending its research with Neuralink and saying it followed all laws and regulations.

There is no excuse for that. Neurosurgery on animals is a long-established practice. There are clear-cut protocols that you follow — you don’t jump into the knife work without all your materials lined up and ready, sterilized and double-checked. A lot of it is meticulous routine that all of the participants should have familiarity with. The “wrong glue”, and screwing up at least twice…I don’t know how that happens, unless you’re in an inexcusable rush.

On another occasion, staff accidentally implanted Neuralink’s device on the wrong vertebra of two different pigs during two separate surgeries, according to two sources with knowledge of the matter and documents reviewed by Reuters. The incident frustrated several employees who said the mistakes – on two separate occasions – could have easily been avoided by carefully counting the vertebrae before inserting the device.

What? You count. You study up on the morphology of the relevant vertebrae before you dive in with your rongeurs. With adequate preparation, that should never happen, which suggests that they did not prepare adequately. This is a theme in the report — they kept fucking up on simple things that could have been avoided if there were less pressure to rush.

The mistakes leading to unnecessary animal deaths included one instance in 2021, when 25 out of 60 pigs in a study had devices that were the wrong size implanted in their heads, an error that could have been avoided with more preparation, according to a person with knowledge of the situation and company documents and communications reviewed by Reuters.

AAAaaaaarrrrgh. How do you do that? You’ve just bought tens of thousands of dollars worth of a defined genetic line of experimental animals, and 40% of your experimental subjects are promptly trashed by hasty, incorrect use of your materials. How good can the data coming out of this facility be? Can we trust that 35 implants were done correctly, when 25 are trashed by such an egregious error?

But that’s not going to stop Musk. He wants to barrel ahead with testing on humans in six months.

The problems with Neuralink’s testing have raised questions internally about the quality of the resulting data, three current or former employees said. Such problems could potentially delay the company’s bid to start human trials, which Musk has said the company wants to do within the next six months. They also add to a growing list of headaches for Musk, who is facing criticism of his management of Twitter, which he recently acquired for $44 billion. Musk also continues to run electric carmaker Tesla Inc and rocket company SpaceX.

I don’t think he’s going to get permission with these kinds of revelations about the work at Neuralink. If he does, that would warrant an investigation of corruption at the regulatory agency.

But imagine being a prospective candidate for this kind of experimental surgery. Would you trust these hacks to get the correct device into your head, and use the right glue to hold it in place? Don’t worry, though: if you die on the operating table, they’ll just lose track of any records of the surgery, and no one will ever know what happened to you.


Given his recent demonstrations of incompetence, would you allow Elon Musk to perform brain surgery on you? Not that he, personally, would wield the knife (probably…although he also has had a machine built to do the surgery, and he might want to push buttons), but one of the companies he owns and mismanages would be in charge. That’s what he wants to do with Neuralink.

It’s been six years since Tesla, SpaceX (and now Twitter) CEO Elon Musk co-founded the brain-computer interface (BCI) startup Neuralink. It’s been three years since the company first demonstrated its “sewing machine-like” implantation robot, two years since the company stuck its technology into the heads of pigs — and just over 19 months since they did the same to primates, an effort that allegedly killed 15 out of 23 test subjects. After a month-long delay in October, Neuralink held its third “show and tell” event on Wednesday where CEO Elon Musk announced, “we think probably in about six months, we should be able to have a Neuralink installed in a human.”

Let’s also mention the self-driving software for his cars, which is going to be killing people soon. Would you want Musk software in your head? Or consider the Boring Company fiasco, which has failed to produce any useful transportation solutions.

And, don’t forget, it’s been about a year since he successfully impregnated a Neuralink executive. That’s the one thing I’d trust Elon Musk to do — fucking someone up.

I skimmed through his 2 3/4 hour tech demo. I was unimpressed. He showed off the pong-playing monkey again, with Elon providing narration to reassure us that all of his monkeys are happy. He had a dummy on an operating table, its head encased in a machine. They had the machine poking needles into an imitation brain. They did nothing to reassure us that long-term implantation was safe. They had no new, concrete, specific results.

One thing I noticed is that all of the engineers who were trotted out were well-spoken and well-rehearsed, kind of the minimum I’d expect…which made Musk’s off-the-cuff, clumsy speaking style more prominent. It was a lot of halting “uhh”s and “umm”s. He’s terrible. A charisma-vacuum. Fortunately for him, he had a paid claque on hand to whoop and holler at his every pronouncement, which just made the whole presentation even more annoying.

That also exposed how bad the content of what he was saying was. Here’s a medical device that he claims will help the blind see and the paralyzed walk again (not that that was demonstrated), and what does he think is important? Defeating the long-term risk of AI.

Musk, however, also tends to emphasize non-medical uses, such as using brain implants to even the playing field, if digital artificial intelligence becomes smarter than any human.

“How do we mitigate that risk? At a species level?” Musk asked Wednesday. “Even in a benign scenario, where the AI is very, very benevolent — then how do we go along for the ride?”

This is not a real thing. We are not threatened by AI, and the kinds of clumsy tech Musk is playing with won’t mitigate his imagined existential future danger. He has no grasp of what his hired engineers are doing — he lives in a sci-fi fantasy world in his head.

The worst, though, is his stated purpose for the demo.

Musk noted during the “show and tell” event that the primary goal of the evening was to recruit talent to Neuralink.

“A lot of the time people think that they couldn’t really work at Neuralink because they don’t know anything about biology or how the brain works,” Musk said. “The thing we really want to emphasize here is that you don’t need to because when you break down the skills that are needed to make Neuralink work, it’s actually many of the same skills that are required to make a smartwatch or modern phone work.”

NO. NO NO NO NO NOOOOOOOOOOO. That is all wrong. It’s what a stupid pseudo-engineer would say. The first priority has to be safety, and long-term stability, and building a functional interface with an immensely complex biological organ. These are all medical and biological problems. The engineering…jesus, his major accomplishment has been training a monkey to play Pong. Pong is not difficult. It is not an engineering triumph. The tricky part is the biology. Any ethical review board ought to read that quote and immediately reject his proposal for human trials in six months.

He thinks it’s a gadget problem rather than a medical problem.

“In many ways it’s like a Fitbit in your skull, with tiny wires,” Musk said of Neuralink’s device during the 2021 livestream event.

The Fitbit part is relatively trivial, the tiny wires are easy, it’s using them to muck around in a person’s brain that is hard. I don’t think Musk appreciates the difficulty at all.

There are people who are desperate for something to treat the catastrophic medical problems of ALS or spinal cord injuries, and that’s Musk’s market. He’s going to gouge them for everything they’re worth and provide dangerous and minimal solutions, all while he’s dreaming of someday uploading his brain to a computer. Don’t fall for it. Don’t let a bumbling narcissistic billionaire get in your skull, especially since his efforts so far have a 65% mortality rate.

Talking about evidence

Have you ever noticed that Christians and creationists have a weird obsession with something they clearly don’t understand? Josh McDowell, J. Warner Wallace, Lee Strobel…they’ve built careers around writing books that purport to provide “evidence” for Jesus, yet when you look at the cases they make, they fall apart pathetically. Let’s talk about what good evidence is on Thursday.