Does he run all of his companies this way?

It shouldn’t be smiling, I don’t think.

More information has been released about how Neuralink is run. Hoo boy. As with Twitter, the problems with this company can be traced right back to the boob who owns the company. Let’s start with the nice bits.

In some ways, Neuralink treats animals quite well compared to other research facilities, employees said in interviews, echoing public statements by Musk and other executives. Company leaders have boasted internally of building a “Monkey Disneyland” in the company’s Austin, Texas facility where lab animals can roam, a former employee said. In the company’s early years, Musk told employees he wanted the monkeys at his San Francisco Bay Area operation to live in a “monkey Taj Mahal,” said a former employee who heard the comment. Another former employee recalled Musk saying he disliked using animals for research but wanted to make sure they were “the happiest animals” while alive.

That’s nice. We just have to picture a Disneyland with visitors in cages, subject to botched experimentation and death. Happiest place on Earth!

Then we get the raw numbers.

In all, the company has killed about 1,500 animals, including more than 280 sheep, pigs and monkeys, following experiments since 2018, according to records reviewed by Reuters and sources with direct knowledge of the company’s animal-testing operations. The sources characterized that figure as a rough estimate because the company does not keep precise records on the number of animals tested and killed. Neuralink has also conducted research using rats and mice.

Killing 1,500 animals in and of itself is not horrible — any pharmaceutical company or stockyard is going to have bigger numbers than that. But that one comment, “the company does not keep precise records on the number of animals tested and killed”, is damning. How do you not keep precise records? You’ve got lab notebooks, you’ve got invoices from purchasing, you’ve got an animal care facility that has to be tracking food, medication, and housing for all of their animals. Presumably every experiment and every outcome is documented. I keep better track of my spiders, which as invertebrates are not as closely regulated (at all!) as vertebrates, than Musk’s company kept of its pigs.

It boils down to a top-down corporate culture that was all about cracking the whip and driving employees to work faster, faster, faster. This is bad policy.

But current and former Neuralink employees say the number of animal deaths is higher than it needs to be for reasons related to Musk’s demands to speed research. Through company discussions and documents spanning several years, along with employee interviews, Reuters identified four experiments involving 86 pigs and two monkeys that were marred in recent years by human errors. The mistakes weakened the experiments’ research value and required the tests to be repeated, leading to more animals being killed, three of the current and former staffers said. The three people attributed the mistakes to a lack of preparation by a testing staff working in a pressure-cooker environment.

One employee, in a message seen by Reuters, wrote an angry missive earlier this year to colleagues about the need to overhaul how the company organizes animal surgeries to prevent “hack jobs.” The rushed schedule, the employee wrote, resulted in under-prepared and over-stressed staffers scrambling to meet deadlines and making last-minute changes before surgeries, raising risks to the animals.

Musk has pushed hard to accelerate Neuralink’s progress, which depends heavily on animal testing, current and former employees said. Earlier this year, the chief executive sent staffers a news article about Swiss researchers who developed an electrical implant that helped a paralyzed man to walk again. “We could enable people to use their hands and walk again in daily life!” he wrote to staff at 6:37 a.m. Pacific Time on Feb. 8. Ten minutes later, he followed up: “In general, we are simply not moving fast enough. It is driving me nuts!”

On several occasions over the years, Musk has told employees to imagine they had a bomb strapped to their heads in an effort to get them to move faster, according to three sources who repeatedly heard the comment. On one occasion a few years ago, Musk told employees he would trigger a “market failure” at Neuralink unless they made more progress, a comment perceived by some employees as a threat to shut down operations, according to a former staffer who heard his comment.

Then we discover how Neuralink ran afoul of UC Davis’s animal care guidelines. They were sloppy fuck-ups.

The first complaints about the company’s testing involved its initial partnership with University of California, Davis, to conduct the experiments. In February, an animal rights group, the Physicians Committee for Responsible Medicine, filed a complaint with the USDA accusing the Neuralink-UC Davis project of botching surgeries that killed monkeys and publicly released its findings. The group alleged that surgeons used the wrong surgical glue twice, which led to two monkeys suffering and ultimately dying, while other monkeys had different complications from the implants.

The company has acknowledged it killed six monkeys, on the advice of UC Davis veterinary staff, because of health problems caused by experiments. It called the issue with the glue a “complication” from the use of an “FDA-approved product.” In response to a Reuters inquiry, a UC Davis spokesperson shared a previous public statement defending its research with Neuralink and saying it followed all laws and regulations.

There is no excuse for that. Neurosurgery on animals is a long-established practice. There are clear-cut protocols that you follow — you don’t jump into the knife work without all your materials lined up and ready, sterilized and double-checked. A lot of it is meticulous routine that all of the participants should have familiarity with. The “wrong glue”, and screwing up at least twice…I don’t know how that happens, unless you’re in an inexcusable rush.

On another occasion, staff accidentally implanted Neuralink’s device on the wrong vertebra of two different pigs during two separate surgeries, according to two sources with knowledge of the matter and documents reviewed by Reuters. The incident frustrated several employees who said the mistakes – on two separate occasions – could have easily been avoided by carefully counting the vertebrae before inserting the device.

What? You count. You study up on the morphology of the relevant vertebrae before you dive in with your rongeurs. With adequate preparation, that should never happen, which suggests that they did not prepare adequately. This is a theme in the report — they kept fucking up on simple things that could have been avoided if there were less pressure to rush.

The mistakes leading to unnecessary animal deaths included one instance in 2021, when 25 out of 60 pigs in a study had devices that were the wrong size implanted in their heads, an error that could have been avoided with more preparation, according to a person with knowledge of the situation and company documents and communications reviewed by Reuters.

AAAaaaaarrrrgh. How do you do that? You’ve just bought tens of thousands of dollars worth of a defined genetic line of experimental animals, and 40% of your experimental subjects are promptly trashed by hasty, incorrect use of your materials. How good can the data coming out of this facility be? Can we trust that 35 implants were done correctly, when 25 are trashed by such an egregious error?

But that’s not going to stop Musk. He wants to barrel ahead with testing on humans in six months.

The problems with Neuralink’s testing have raised questions internally about the quality of the resulting data, three current or former employees said. Such problems could potentially delay the company’s bid to start human trials, which Musk has said the company wants to do within the next six months. They also add to a growing list of headaches for Musk, who is facing criticism of his management of Twitter, which he recently acquired for $44 billion. Musk also continues to run electric carmaker Tesla Inc and rocket company SpaceX.

I don’t think he’s going to get permission with these kinds of revelations about the work at Neuralink. If he does, that would warrant an investigation of corruption at the regulatory agency.

But imagine being a prospective candidate for this kind of experimental surgery. Would you trust these hacks to get the correct device into your head, and use the right glue to hold it in place? Don’t worry, though, if you die on the operating table, they’ll just lose track of any records of the surgery, and no one will ever know what happened to you.

OH NO ELON NO

Given his recent demonstrations of incompetence, would you allow Elon Musk to perform brain surgery on you? Not that he, personally, would wield the knife (probably…although he also has had a machine built to do the surgery, and he might want to push buttons), but one of the companies he owns and mismanages would be in charge. That’s what he wants to do with Neuralink.

It’s been six years since Tesla, SpaceX (and now Twitter) CEO Elon Musk co-founded brain-control interfaces (BCI) startup, Neuralink. It’s been three years since the company first demonstrated its “sewing machine-like” implantation robot, two years since the company stuck its technology into the heads of pigs — and just over 19 months since they did the same to primates, an effort that allegedly killed 15 out of 23 test subjects. After a month-long delay in October, Neuralink held its third “show and tell” event on Wednesday where CEO Elon Musk announced, “we think probably in about six months, we should be able to have a Neuralink installed in a human.”

Let’s also mention the self-driving software for his cars, which is going to be killing people soon. Would you want Musk software in your head? Or consider the Boring Company fiasco, which has failed to produce any useful transportation solutions.

And, don’t forget, it’s been about a year since he successfully impregnated a Neuralink executive. That’s the one thing I’d trust Elon Musk to do — fucking someone up.

I skimmed through his 2 3/4 hour tech demo. I was unimpressed. He showed off the Pong-playing monkey again, with Elon providing narration to reassure us that all of his monkeys are happy. He had a dummy on an operating table, its head encased in a machine. They had the machine poking needles into an imitation brain. They did nothing to reassure us that long-term implantation was safe. They had no new, concrete, specific results.

One thing I noticed is that all of the engineers who were trotted out were well-spoken and well-rehearsed, kind of the minimum I’d expect…which made Musk’s off-the-cuff, clumsy speaking style more prominent. It was a lot of halting “uhh”s and “umm”s. He’s terrible. A charisma-vacuum. Fortunately for him, he had a paid claque on hand to whoop and holler at his every pronouncement, which just made the whole presentation even more annoying.

That also exposed how bad the content of what he was saying was. Here’s a medical device that he claims will help the blind see and the paralyzed walk again (not that that was demonstrated), and what does he think is important? Defeating the long-term risk of AI.

Musk, however, also tends to emphasize non-medical uses, such as using brain implants to even the playing field, if digital artificial intelligence becomes smarter than any human.

“How do we mitigate that risk? At a species level?” Musk asked Wednesday. “Even in a benign scenario, where the AI is very, very benevolent — then how do we go along for the ride?”

This is not a real thing. We are not threatened by AI, and the kinds of clumsy tech Musk is playing with won’t mitigate his imagined existential future danger. He has no grasp of what his hired engineers are doing — he lives in a sci-fi fantasy world in his head.

The worst, though, is his stated purpose for the demo.

Musk noted during the “show and tell” event that the primary goal of the evening was to recruit talent to Neuralink.

“A lot of the time people think that they couldn’t really work at Neuralink because they don’t know anything about biology or how the brain works,” Musk said. “The thing we really want to emphasize here is that you don’t need to because when you break down the skills that are needed to make Neuralink work, it’s actually many of the same skills that are required to make a smartwatch or modern phone work.”

NO. NO NO NO NO NOOOOOOOOOOO. That is all wrong. It’s what a stupid pseudo-engineer would say. The first priority has to be safety, and long-term stability, and building a functional interface with an immensely complex biological organ. These are all medical and biological problems. The engineering…jesus, his major accomplishment has been training a monkey to play Pong. Pong is not difficult. It is not an engineering triumph. The tricky part is the biology. Any ethical review board ought to read that quote and immediately reject his proposal for human trials in 6 months.

He thinks it’s a gadget problem rather than a medical problem.

“In many ways it’s like a Fitbit in your skull, with tiny wires,” Musk said of Neuralink’s device during the 2021 livestream event.

The Fitbit part is relatively trivial, the tiny wires are easy, it’s using them to muck around in a person’s brain that is hard. I don’t think Musk appreciates the difficulty at all.

There are people who are desperate for something to treat the catastrophic medical problems of ALS or spinal cord injury, and that’s Musk’s market. He’s going to gouge them for everything they’re worth and provide dangerous and minimal solutions, all while he’s dreaming of someday uploading his brain to a computer. Don’t fall for it. Don’t let a bumbling narcissistic billionaire get in your skull, especially since his efforts so far have a 65% mortality rate.

Talking about evidence

Have you ever noticed that Christians and creationists have a weird obsession with something they clearly don’t understand? Josh McDowell, J. Warner Wallace, Lee Strobel…they’ve built careers around writing books that purport to provide “evidence” for Jesus, yet when you look at the cases they make, they fall apart pathetically. Let’s talk about what good evidence is on Thursday.

But what if all of my students are hot?

They are, every single one of them. Even the ones I don’t see because they’re just a black rectangle on Zoom. Apparently, though, attractive girls’ grades suffered when we moved to online courses because they couldn’t appeal to professors’ biases.

It’s a garbage study, though, as Rebecca Watson explains. The paper claims that…

As education moved online following the onset of the pandemic, the grades of attractive female students deteriorated. This finding implies that the female beauty premium observed when education is in-person is likely to be chiefly a consequence of discrimination. On the contrary, for male students, there was still a significant beauty premium even after the introduction of online teaching. The latter finding suggests that for males in particular, beauty can be a productivity-enhancing attribute.

I don’t understand the mechanism behind that — so we have some kind of radar that senses hot men even over wi-fi, but that fails when we try to detect hot women? How is “beauty” a productivity-enhancing attribute?

Did the author consider the possibility that all of our students and professors have been experiencing great strains over the last few years? Deciding that the one decisive parameter was what they look like seems exceptionally reductive.

Then I had to wonder how they scored “beauty”, and it turns out the author just scavenged up photos on social media, had a couple of students look at them, and rate them. This seems rather arbitrary, and dependent on biases by the judges, as well as accidents of photography. I know I hate it when people take candid shots of my face before I’ve put my makeup on, and also, I don’t know about you, but I automatically deduct 2 points from any photo in which the subject is making pouty duck lips. Sorry.

Final gross error: he included enough detail about the subjects that they could in some cases tell what their score was…and their grades. Oh, and big problem, there was no informed consent, none of the students knew they had been incorporated into this “study”.

The author, Adrian Mehic, is an economist, so I’d already be suspicious of his psychological/sociological study, but the ethics violations and the ridiculous conclusion he draws (“attractive women get better grades because they’re being unfairly advantaged”) confirms that this is a dumpster fire of a paper, constructed out of a thoroughly p-hacked grab bag of fuzzy data.
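Speaking of p-hacking: Mehic’s actual analysis isn’t public, but the general failure mode is easy to demonstrate. If you quietly try enough comparisons on pure noise, “significant” results fall out for free. A minimal sketch of the arithmetic (the choice of 20 comparisons is my own illustrative assumption, not a number from the paper):

```python
# Under the null hypothesis a p-value is uniform on [0, 1], so screening
# k independent tests at alpha = 0.05 yields at least one "significant"
# result with probability 1 - 0.95**k — no real effect required.
import random

random.seed(1)

ALPHA = 0.05
K = 20            # number of comparisons quietly tried (illustrative)
TRIALS = 100_000  # Monte Carlo repetitions of the whole "study"

# Count how many simulated studies find at least one p < alpha by chance.
false_positive_runs = sum(
    any(random.random() < ALPHA for _ in range(K))
    for _ in range(TRIALS)
)

analytic = 1 - (1 - ALPHA) ** K
print(f"analytic family-wise error rate: {analytic:.3f}")  # ~0.642
print(f"simulated: {false_positive_runs / TRIALS:.3f}")
```

With 20 uncorrected comparisons, a “discovery” pops out of random noise about two-thirds of the time, which is why uncorrected fishing expeditions on fuzzy data are worthless.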

Future! Evolution! Predicted!

Back in the day when I was a naive young man, the news would occasionally run stories about the Future of Humanity, and predict where evolution was going to take us. It was usually a destiny of feeble, shriveled bodies and gigantic domed heads, but there were recent variants. Wall-E instead suggested humans were all going to become obese, trapped in motorized wheelchairs. Or look to Idiocracy, which instead predicts we’re going to be selecting for crass stupidity. You should realize that these are not scientific predictions at all, they are merely cautionary tales conjured up by creators who are telling us about themselves — that they find eggheads and fat people and stupid people repellent, for instance. It’s both ugly and unscientific, because no, evolutionary biologists are not going to make long-term predictions about the trajectory of evolution, because selection is a short-term and local process.

Are you ready for the next generation of inane predictions? Oh boy, they are at least going in a different direction. Behold Mindy, our destiny, if we continue to do things the artist doesn’t like very much.

You might be wondering how they came up with this remarkable portrait. “Researchers” were commissioned to create a model.

Researchers worked with a 3D designer to create images of a “future human” that accounts for all of the problems long-term tech use may cause.

That does not explain who these “researchers” are. I read through the original source, and it seems to have been Professor Google. They rummaged around through various sites that complain about modern ills — I found some New Age sources, some crank medical sites, and some legit medical sources that talk about the perils of poor posture — and stitched them all together into a rationale for what evolution would favor, an invalid line of reasoning. It’s all entirely driven by contempt for people who use cell phones and do office work. As usual, it’s all about airing the creator’s ill-informed biases. So, if you look at your phone, you are warping the morphology of any progeny you might have! Stop it!

Researchers predict that office work and craning the neck to look at smartphones will lead to humans having a hunched back in the future. Currently, many people consistently adjust their position to look down at their phones, or to look up at their office screens.

If you sit in front of your computer, your grandchildren will grow up to be hunchbacks! At least, according to a holistic medicine guy.

“Sitting in front of the computer at the office for hours on end also means that your torso is pulled out in front of your hips rather than being stacked straight and aligned,” says Caleb Backe, a health and wellness expert at Maple Holistics.

Using this logic, I guess if they’d hired “researchers” a hundred and fifty years ago to draw what humans would look like if we continued to employ chimney sweeps, they’d have predicted small thin people with armored skin and flexible joints and extra eyelids…oh, look, these guys also predicted extra eyelids to filter out blue light that disrupts sleep patterns, because of course evolution is all about conveniently delivering exactly what you need.

Are you ready for the disclosure of who actually commissioned this work? It’s a company called “Toll Free Forwarding” which is using it to promote their cell phone service with sensationalist bullshit.

Technology has revolutionized the way we do business. Whether it’s the instant access to infinite knowledge through a device in our pocket, or the ability for businesses to expand into new markets all over the world (like Canada, Australia, and Ireland) with a virtual phone number, the scope of technology’s impact is limitless, and this trend shows no sign of letting up.

I’m not linking to them, because that’s what they want, but you can probably find them if you want to. I don’t know why you would, because the people running this company are dishonest idiots.

A flimsy excuse to do nothing

When science is being faked in the published literature, that is a big problem. You’d think the gatekeepers would want to do something about it.

In a new report now being made public by Retraction Watch, Artemisia draws attention to four main groups centered on Ali Nazari, Mostafa Jalal, a postdoc at Texas A&M University, Ehsan Mohseni of the University of Newcastle in Australia, and Alireza Najigivi of Sharif University of Technology in Tehran. The whistleblower lists a total of 287 potentially compromised papers in the 42-page report.

Two hundred eighty-seven papers with dodgy data, all churned out by these interlinked lab groups! That’s nuts. It’s practically a criminal enterprise. Part of the problem is at the university level, where the number of papers, without concern for their content, is the metric for promotion. But it’s also a problem with the proliferation of poorly managed journals, where the sole concern is volume and collecting those publication fees.

The guilty person at the journal level here is Guido Schmitz, at the University of Stuttgart, who is the editor of the International Journal of Materials Research, where this crap is published. He had an astonishing response when the bad papers in his journal were reported to him.

I can assure that I do not like fraud in scientific results and I will do my best to prevent them. But on the other hand, I hate anonymous accusations. So it would be my pleasure to follow up this matter after you have discovered your personality to me and send contact data under which I can reach you.

Wait wait wait. Because the person reporting the problem to him, “Artemisia”, is an anonymous whistleblower, he refuses to do anything? That makes no sense. If they had a Nobel prize attached to their name, would he jump up and take care of the problem immediately? Not knowing who they are is reason enough to ignore a credible complaint? This is not how it’s supposed to work. Any action taken would not be on the basis of the say-so of the person reporting it — his fucking job is to evaluate the evidence given and act appropriately.

He has been handed documentation that shows these papers contain falsified images, and he chooses to sit on his hands and not do anything because he doesn’t see a named authority behind the complaint. It’s rank credentialism. He’s also snooty and dismissive.

I will not take any action based on an anonymous accusation. As soon as you discover your clear name, contact address and your personal motivation in this issue, I will consider the appropriate and required means.

That doesn’t matter. If Bozo the Clown hands you evidence that figures were faked and data manipulated, you do due diligence and look at the work and verify the concerns, and then you take action, based on the evidence, not your perception of the authority of the complainant. If you don’t, why were you appointed to be editor of this journal?

Who needs religion when you’ve got these clowns promoting bad ideas?

That’s an unholy trinity if ever I saw one: Bostrom, Musk, Galton. They’re all united by terrible, simplistic understanding of genetics and a self-serving philosophy that reinforces their confidence in bad ideas. They are longtermists. Émile Torres explains what that is and why it is bad…although you already knew it had to be bad because of its proponents.

As I have previously written, longtermism is arguably the most influential ideology that few members of the general public have ever heard about. Longtermists have directly influenced reports from the secretary-general of the United Nations; a longtermist is currently running the RAND Corporation; they have the ears of billionaires like Musk; and the so-called Effective Altruism community, which gave rise to the longtermist ideology, has a mind-boggling $46.1 billion in committed funding. Longtermism is everywhere behind the scenes — it has a huge following in the tech sector — and champions of this view are increasingly pulling the strings of both major world governments and the business elite.

But what is longtermism? I have tried to answer that in other articles, and will continue to do so in future ones. A brief description here will have to suffice: Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.

In practical terms, that means we must do whatever it takes to survive long enough to colonize space, convert planets into giant computer simulations and create unfathomable numbers of simulated beings. How many simulated beings could there be? According to Nick Bostrom — the Father of longtermism and director of the Future of Humanity Institute — there could be at least 10^58 digital people in the future, or a 1 followed by 58 zeros. Others have put forward similar estimates, although as Bostrom wrote in 2003, “what matters … is not the exact numbers but the fact that they are huge.”

They are masters of the silly hypothetical — these are the kind of people who spawned the concept of Roko’s Basilisk, “that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence”. It’s “the needs of the many outweigh the needs of the few”, where the “many” are padded with 10^58 hypothetical, imaginary people, and you are expected to serve them (or rather, the technocrat billionaire priests who represent them) because they outvote you now.

The longtermists are terrified of something called existential risk, which is anything that they fear would interfere with that progression towards 10^58 hardworking capitalist lackeys working for their vision of a Randian paradise. It’s their boogeyman, and it doesn’t have to actually exist. It’s sufficient that they can imagine it and are therefore justified in taking actions here and now, in the real world, to stop their hypothetical obstacle. Anything fits in this paradigm, it doesn’t matter how absurd.

For longtermists, there is nothing worse than succumbing to an existential risk: That would be the ultimate tragedy, since it would keep us from plundering our “cosmic endowment” — resources like stars, planets, asteroids and energy — which many longtermists see as integral to fulfilling our “longterm potential” in the universe.

What sorts of catastrophes would instantiate an existential risk? The obvious ones are nuclear war, global pandemics and runaway climate change. But Bostrom also takes seriously the idea that we already live in a giant computer simulation that could get shut down at any moment (yet another idea that Musk seems to have gotten from Bostrom). Bostrom further lists “dysgenic pressures” as an existential risk, whereby less “intellectually talented” people (those with “lower IQs”) outbreed people with superior intellects.

Dysgenic pressures, the low IQ rabble outbreeding the superior stock…where have I heard this before? Oh, yeah:

This is, of course, straight out of the handbook of eugenics, which should be unsurprising: the term “transhumanism” was popularized in the 20th century by Julian Huxley, who from 1959 to 1962 was the president of the British Eugenics Society. In other words, transhumanism is the child of eugenics, an updated version of the belief that we should use science and technology to improve the “human stock.”

I like the idea of transhumanism, and I think it’s almost inevitable. Of course humanity will change! We are changing! What I don’t like is the idea that we can force that change into a direction of our choosing, or that certain individuals know what direction is best for all of us.

Among the other proponents of this nightmare vision of the future is Robin Hanson, who takes his colonizer status seriously: “Hanson’s plan is to take some contemporary hunter-gatherers — whose populations have been decimated by industrial civilization — and stuff them into bunkers with instructions to rebuild industrial civilization in the event that ours collapses”. Nick Beckstead is another, who argues that “saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries, … it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Or William MacAskill, who thinks that “[i]f scientists with Einstein-level research abilities were cloned and trained from an early age, or if human beings were genetically engineered to have greater research abilities, this could compensate for having fewer people overall and thereby sustain technological progress.”

Just clone Einstein! Why didn’t anyone else think of that?

Maybe because it is naive, stupid, and ignorant.

MacAskill has been the recipient of a totally uncritical review of his latest book in the Guardian. He’s a philosopher, but you’ll be relieved to know he has come up with a way to end the pandemic.

The good news is that with the threat of an engineered pandemic, which he says is rapidly increasing, he believes there are specific steps that can be taken to avoid a breakout.

“One partial solution I’m excited about is called far ultraviolet C radiation,” he says. “We know that ultraviolet light sterilises the surfaces it hits, but most ultraviolet light harms humans as well. However, there’s a narrow-spectrum far UVC specific type that seems to be safe for humans while still having sterilising properties.”

The cost for a far UVC lightbulb at the moment is about $1,000 (£820) per bulb. But he suggests that with research and development and philanthropic funding, it could come down to $10 or even $1 and could then be made part of building codes. He runs through the scenario with a breezy kind of optimism, but one founded on science-based pragmatism.

You know, UVC, at 200-280nm, is the most energetic form of UV radiation — we don’t get much of it here on planet Earth because it is quickly absorbed by any molecule it touches. It’s busy converting oxygen to ozone as it enters the atmosphere. So sure, yeah, it’s germicidal, and maybe it’s relatively safe for humans because it cooks the outer, dead layers of your epidermis and is absorbed before it can zap living tissue layers, but I don’t think it’s practical (so much for “science-based pragmatism”) in a classroom, for instance. We’re just going to let our kiddos bask in UV radiation for 6 hours a day? How do you know that’s going to be safe in the long term, longtermist?
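The energy claim is just photon physics: E = hc/λ, so shorter wavelengths mean more energetic photons, which is exactly why UVC gets absorbed by the first molecule it meets. A quick sketch of the arithmetic (the 222 nm figure for “far-UVC” is the wavelength commonly cited in the literature — my assumption, since it isn’t in the quoted article):

```python
# Photon energy E = h*c / wavelength. In convenient units,
# E(eV) ≈ 1239.84 / wavelength(nm), using hc ≈ 1239.84 eV·nm.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in electron-volts."""
    return HC_EV_NM / wavelength_nm

bands = {
    "far-UVC (222 nm)": 222,        # assumed value for the 'safe' narrow band
    "germicidal UVC (254 nm)": 254,  # classic mercury-lamp germicidal line
    "UVA (365 nm)": 365,             # ordinary blacklight, for comparison
}

for label, nm in bands.items():
    print(f"{label}: {photon_energy_ev(nm):.2f} eV")
```

A far-UVC photon carries roughly 5.6 eV, well above typical molecular bond energies of 3-5 eV, which is why it shreds DNA in microbes and, the hope goes, gets stopped in the dead outer layer of human skin before reaching living cells. That last part is the long-term safety question that a “breezy kind of optimism” doesn’t answer.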

Quacks have a “breezy kind of optimism”, too, but it’s not a selling point for their nostrums.

If you aren’t convinced yet that longtermism/effective altruism is a poisoned chalice of horrific consequences, look who else likes this idea:

One can begin to see why Elon Musk is a fan of longtermism, or why leading “new atheist” Sam Harris contributed an enthusiastic blurb for MacAskill’s book. As noted elsewhere, Harris is a staunch defender of “Western civilization,” believes that “We are at war with Islam,” has promoted the race science of Charles Murray — including the argument that Black people are less intelligent than white people because of genetic evolution — and has buddied up with far-right figures like Douglas Murray, whose books include “The Strange Death of Europe: Immigration, Identity, Islam.”

Yeah, NO.

Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should

As usual, First Dog on the Moon scores.

Oh yeah. This again. Some molecular biologists with no training in population genetics or ethics think they can go into a lab and resurrect an extinct species.

Almost 100 years after its extinction, the Tasmanian tiger may live once again. Scientists want to resurrect the striped carnivorous marsupial, officially known as a thylacine, which used to roam the Australian bush.

The ambitious project will harness advances in genetics, ancient DNA retrieval and artificial reproduction to bring back the animal.

They won’t succeed. At best, they’ll assemble a maladapted hybrid something or other to be exhibited in some freak show of a zoo. It won’t be a thylacine, it’ll be a Frankenstein’s monster of an extant marsupial with no home environment and no prospects for the future and no population of conspecifics with which to live and no history. So much bugs me about this story.

They talk about “the thylacine genome”. There’s no such thing. A living population has many genomes. How many individuals are they sampling? How many individuals will they generate? Where will they live? These are carnivores — what will they feed on? Or are they just planning on conjuring up a technology demonstration that they’ll put in a cage and then move on to some other “project”?

They make a token nod towards the problem of extinctions, but aren’t very convincing.

“We would strongly advocate that first and foremost we need to protect our biodiversity from further extinctions, but unfortunately we are not seeing a slowing down in species loss,” said Andrew Pask, a professor at the University of Melbourne and head of its Thylacine Integrated Genetic Restoration Research Lab, who is leading the initiative.
“This technology offers a chance to correct this and could be applied in exceptional circumstances where cornerstone species have been lost,” he added.

No, it won’t accomplish any of that. The species is extinct because their habitat is destroyed and people killed them. That’s where you start, by rebuilding their environment, not with PCR machines and microinjection apparatus and flasks in incubators. It’s no surprise who is behind this: a guy with impressive credentials in molecular biology who thinks every problem is a lab exercise.

The project is a collaboration with Colossal Biosciences, founded by tech entrepreneur Ben Lamm and Harvard Medical School geneticist George Church, who are working on an equally ambitious, if not bolder, $15 million project to bring back the woolly mammoth in an altered form.

Yeah, right. He was claiming that he’d be bringing back the mammoth within two years…five years ago. He was also working on a dating app to eliminate genetic diseases (I guess he never heard of eugenics?).

Church has also speculated about resurrecting Neandertals. Nope. Not going to happen. If his thoughts on these matters were more than a millimeter deep, he wouldn’t be jumping onto high profile media to promote these sci-fi fantasies. It’s bad science.