This is some real super-villain shit, you know

Neuralink has begun human trials, we think. The problem is that all we know about it is an announcement made by head jackass Musk on Twitter, which isn’t exactly a reputable source. That doesn’t stop Nature from commenting on it. I’m not used to seeing rumors published in that journal, and if you think about it, this is basically a condemnation of the experiment.

…there is frustration about a lack of detailed information. There has been no confirmation that the trial has begun, beyond Musk’s tweet. The main source of public information on the trial is a study brochure inviting people to participate in it. But that lacks details such as where implantations are being done and the exact outcomes that the trial will assess, says Tim Denison, a neuroengineer at the University of Oxford, UK.

The trial is not registered at ClinicalTrials.gov, an online repository curated by the US National Institutes of Health. Many universities require that researchers register a trial and its protocol in a public repository of this type before study participants are enrolled. Additionally, many medical journals make such registration a condition of publication of results, in line with ethical principles designed to protect people who volunteer for clinical trials. Neuralink, which is headquartered in Fremont, California, did not respond to Nature’s request for comment on why it has not registered the trial with the site.

So…no transparency, no summary of the goals or methods of the experiment, and no ethical oversight. All anyone knows is that Elon Musk’s team sawed open someone’s skull and stuck some wires and electronics directly into their brain, for purposes unknown, and with little hope of seeing the outcome published in a reputable journal. OK.

Besides the science shenanigans, I’m also curious to know about what kind of NDAs and agreements to never ever sue Neuralink the patients/victims had to sign. There has got to be some wild legal gyrations going on, too.

First charge my phone, next step…the Moon!

That thing to the right is a USB-C charger, a totally mundane device that we tend to take for granted. You’ve probably got one, or something similar. I have a similar device from a different company; it’s a boring utilitarian widget you need to keep a device powered up.

It contains a 48 MHz CPU, 8 KB of RAM, and 128 KB of flash memory. I’m torn between sneering at those pathetic stats and being impressed that a mere charger has that much computing power — my first home computer had a 1 MHz CPU and 16 KB of RAM, and heck, now mere cables contain complex circuitry.

But now compare that USB charger to the Apollo 11 guidance computer.

Compared with the Apollo 11 Guidance Computer it runs at ~48 times the clock speed with 1.8x the program space.

They compared several USB chargers to the Apollo guidance computer.
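The numbers in that comparison check out as rough arithmetic. Here’s a back-of-the-envelope sketch; the AGC figures are commonly cited approximations (an effective clock around 1.024 MHz and roughly 36K words of 15-bit rope memory), not official specs:

```python
# Charger-vs-Apollo Guidance Computer comparison, using approximate figures.
charger_clock_hz = 48_000_000   # 48 MHz microcontroller in the charger
charger_flash_b  = 128 * 1024   # 128 KB of flash

agc_clock_hz = 1_024_000        # AGC effective clock, ~1.024 MHz
agc_rom_b    = 36_864 * 2       # ~36K words of rope memory, ~72 KB

print(charger_clock_hz / agc_clock_hz)  # roughly 47x the clock speed
print(charger_flash_b / agc_rom_b)      # roughly 1.8x the program space
```

Which lands right about where the article’s “~48 times the clock speed with 1.8x the program space” does.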

I guess once I dismantle this here stupid $30 wall charger, all I need to do is assemble a few million kilograms of simple stuff like rocket engines and fuel and a life support system, and I’ll be taking off for the moon. See you all later!

An exercise in geography

It’s a journey of increasing uneasiness. A guy in an office is browsing Google Earth, and sees a dark circle in the middle of nowhere in Madagascar, and gets curious: he sees buildings inside of an old crater. He traces satellite photos back in time, and learns that the buildings weren’t there before 2008. He digs deeper, never leaving his office, but looking for photos and people on the internet who might help him figure out what’s going on in this remote place, which has virtually no footprint on the web.

So far, I’m with him. This is interesting! How much can you figure out about an isolated spot on the globe without lifting your butt from a chair? He’s calling up people in Madagascar, scientists and college professors, and asking them what’s going on in that crater. It’s purely academic, he just seems to be getting a bit obsessively invested in this random question, and now he’s pestering people on the island.

Then he hires a half dozen people to make the trek from the nearest city to this isolated place. He has the money, he can draft a few locals to do some leg work for him, all to satisfy his curiosity. Unfortunately, it’s the rainy season, the roads are terrible, they can’t get there.

So he waits for the dry season, hires another bunch of locals to make a second trip, and they get there at last. It’s a village of about 300 people, and they’re tense and suspicious, wondering why these strangers have suddenly shown up on their doorstep.

They don’t tell them that this well-off white man a few thousand miles away had seen their homes from outer space and had spent a lot of time and money to invade their privacy and make a video telling the whole world about them. As a reward for exposing these people, the video creator got a million views on YouTube and thousands of comments telling him how great he was. But what does the village get? Did they even want this kind of attention?

Here’s the video. It’s professionally done. A lot of people spent a lot of money satisfying idle curiosity, which you’d think I’d appreciate, but I don’t know…how would I feel if a film crew showed up at my house, and they announced that this wealthy Malagasy guy was kind of curious about what I was doing and had commissioned them to come report on how I spent my time? He was too busy to make the trip himself, but he’d definitely make sure everyone knew my business.

Maybe I’d feel less queasy about it all if the narrator had cared enough to make the trip himself, and wasn’t parading the people around on the internet like some kind of exhibit.

UIs matter

Well, this is a horror story about coding incompetence. There is this nice gadget that can be controlled from your phone to automagically dispense insulin, a real boon to diabetics. You just type how big a dose you need into your phone, and it signals a discreet little device to deliver it. No manual injections!

Except their software drops the initial decimal point. If you type “0.5”, it’s fine, it delivers 0.5 units of insulin. If you type “.5”, it ignores the decimal point, and delivers 5 units. You better not type “.50”, or oh boy, here comes 50 units.
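A minimal sketch of the kind of input-handling bug being described — a dose-entry routine that only registers a decimal point after at least one digit has been typed, so a leading “.” is silently dropped. The function and its logic are my invention for illustration, not Omnipod’s actual code:

```python
def buggy_parse_dose(keys: str) -> float:
    """Simulate a bolus-entry field that ignores a decimal point
    typed as the FIRST character (hypothetical reconstruction)."""
    entered = ""
    for ch in keys:
        if ch == ".":
            if entered:          # a leading "." is silently discarded
                entered += ch
        elif ch.isdigit():
            entered += ch
    return float(entered) if entered else 0.0

# "0.5" -> 0.5 as intended, but ".5" -> 5.0 and ".50" -> 50.0:
# a tenfold or hundredfold overdose from one missing character.
```

The fix is trivial (treat a leading “.” as “0.”), which is what makes shipping the bug in a medical device so alarming.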

This must have been a fun letter for Omnipod to send out.

Dear Valued Customer,

You are receiving this letter as our records indicate that you are a user of the Omnipod® 5 App from Google Play on your compatible Android smartphone. The app is part of the Omnipod 5 Automated Insulin Delivery System. This notice is a voluntary Medical Device Correction related to an issue with the Omnipod 5 App bolus calculator. Insulet has received 2 reports of adverse events related to this issue.

We have received reports from Omnipod 5 smartphone app users where the bolus calculator is not recording the decimal point if it is the first value entered when changing a bolus dose. If the user does not recognize the issue, this may lead to delivery of more insulin than intended, which can lead to severe hypoglycemia.

I’m imagining corporate lawyers having heart attacks when this bug was discovered.

Hey, computer science instructors, this’ll be a good example to use if your students complain about mundane data entry tasks!

What is going on with OpenAI?

It’s mystifying. I’m not a fan of the company, OpenAI — they’re the ones hyping up ChatGPT, they’re 49% owned by Microsoft, which, as usual, wants to take over everything, and their once and future CEO Sam Altman seems like a sleazy piece of work. But he has his fans. He was abruptly fired this past week (and what’s up with that?) and there was some kind of internal revolt and now he’s being rehired? Appointed to a new position? Confusion and chaos! It’s a hell of a way to run a company.

Here, though, is a hint of illumination.

Sam Altman, the CEO of OpenAI, was unexpectedly fired by the board on Friday afternoon. CTO Mira Murati is filling in as interim CEO.

OpenAI is a nonprofit with a commercial arm. (This is a common arrangement when a nonprofit finds it’s making too much money. Mozilla is set up similarly.) The nonprofit controls the commercial company — and they just exercised that control.

Microsoft invested $13 billion to take ownership of 49% of the OpenAI for-profit — but not of the OpenAI nonprofit. Microsoft found out Altman was being fired one minute before the board put out its press release, half an hour before the stock market closed on Friday. MSFT stock dropped 2% immediately.

Oh. So this is a schism between the controlling non-profit side of the company, and the money-making for-profit side. It’s an ideological split! But what are their differences?

The world is presuming that there’s something absolutely awful about Altman just waiting to come out. But we suspect the reason for the firing is much simpler: the AI doom cultists kicked Altman out for not being enough of a cultist.

There were prior hints that the split was coming, from back in March.

In the last few years, Silicon Valley’s obsession with the astronomical stakes of future AI has curdled into a bitter feud. And right now, that schism is playing out online between two people: AI theorist Eliezer Yudkowsky and OpenAI Chief Executive Officer Sam Altman. Since the early 2000s, Yudkowsky has been sounding the alarm that artificial general intelligence is likely to be “unaligned” with human values and could decide to wipe us out. He worked aggressively to get others to adopt the prevention of AI apocalypse as a priority — enough that he helped convince Musk to take the risk seriously. Musk co-founded OpenAI as a nonprofit with Altman in 2015, with the goal of creating safer AI.

In the last few years, OpenAI has adopted a for-profit model and churned out bigger, faster, and more advanced AI technology. The company has raised billions in investment, and Altman has cheered on the progress toward artificial general intelligence, or AGI. “There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he tweeted in December.

Yudkowsky, meanwhile, has lost nearly all hope that humanity will handle AI responsibly, he said on a podcast last month. After the creation of OpenAI, with its commitment to advancing AI development, he said he cried by himself late at night and thought, “Oh, so this is what humanity will elect to do. We will not rise above. We will not have more grace, not even here at the very end.”

Given that background, it certainly seemed like rubbing salt in a wound when Altman tweeted recently that Yudkowsky had “done more to accelerate AGI than anyone else” and might someday “deserve the Nobel Peace Prize” for his work. Read a certain way, he was trolling Yudkowsky, saying the AI theorist had, in trying to prevent his most catastrophic fear, significantly hastened its arrival. (Yudkowsky said he could not know if Altman was trolling him; Altman declined to comment.)

Yudkowsky is a kook. What is he doing having any say at all in the operation of any company? Why would anyone sane let the LessWrong cultists anywhere near their business? It does explain what’s going on with all this chaos — it’s a squabble within a cult. You can’t expect it to make sense.

This assessment, though, helps me understand a little bit about what’s going on.

Sam Altman was an AI doomer — just not as much as the others. The real problem was that he was making promises that OpenAI could not deliver on. The GPT series was running out of steam. Altman was out and about in the quest for yet more funding for the OpenAI company in ways that upset the true believers.

A boardroom coup by the rationalist cultists is quite plausible, as well as being very funny. Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar. It’s standard for rationalists to call people who don’t buy their pitch liars.

So what from normal people would be an accusation of corporate war crimes is, from rationalists, just how they talk about the outgroup of non-rationalists. They assume non-believers are evil.

It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.

There are many things to loathe Sam Altman for — but not being enough of a cultist probably isn’t one of them.

We think more comedy gold will be falling out over the next week.

Should I look forward to that? Or dread it?


It’s already getting worse. Altman is back at the helm, there’s been an almost complete turnover of the board, and they’ve brought in…Larry Summers? Why? It’s a regular auto-da-fé, with the small grace that we don’t literally torture and burn people at the stake when the heretics are dethroned.

Sympathy for Linda Yaccarino?

But not much sympathy. She eagerly jumped at her job at Xitter, and is being paid handsomely…it’s just that she’s trying to do something impossible.

Linda Yaccarino, the CEO of X/Twitter, appeared to be trying some damage control this afternoon, as she posted, “X’s point of view has always been very clear that discrimination by everyone should STOP across the board — I think that’s something we can and should all agree on. When it comes to this platform — X has also been extremely clear about our efforts to combat antisemitism and discrimination. There’s no place for it anywhere in the world — it’s ugly and wrong. Full stop.”

Yaccarino has been trying to convince advertisers that X/Twitter is a safe place for them to place their spots. She wrote on Tuesday, “We’re always working to protect the public conversation.”

And then Elon Musk started typing and said all the things Yaccarino claims to be opposing. It must be hard to work in a company run by a modern-day Nazi, while trying to pretend not to be a bunch of rich fascists.

At the rate things are going right now, though, she may not have the job for long. Either she’ll wise up and quit, or the company will melt into toxic sludge right there under her desk. It was already unprofitable, and now a lot of big companies are yanking their ads away.

Major blue-chip companies are announcing they will suspend all advertising on X (formerly known as Twitter) after owner Elon Musk endorsed an antisemitic conspiracy theory and Media Matters reported that X was placing ads alongside white nationalist and pro-Nazi content.

So far, they’ve lost IBM, Disney, Lions Gate Entertainment, Warner Brothers, Paramount, Comcast, and Apple. Apple was their biggest advertiser (no wonder cables cost so much), shelling out $100 million per year on Twitter ads — that revenue has just evaporated.

In addition, the White House has condemned him, although that doesn’t immediately scorch his pocketbook.

Joe Biden has excoriated Elon Musk’s “abhorrent” tweets two days after the X owner posted his full-throated agreement with an antisemitic post.

A statement from the White House issued on Friday said: “We condemn this abhorrent promotion of antisemitic and racist hate in the strongest terms, which runs against our core values as Americans.”

Musk’s response? He’s suing Media Matters for posting news about the loss of advertisers.

Worst. Businessman. Ever. Get out while you can, Linda! Unless you’re also a closet Nazi.


It’s not a good day for the social media Nazi. Another rocket blew up, and you should read his whining lawsuit in which he complains that the media are attacking free speech.

A cable problem explained

In the past, I have complained bitterly about the difficulty of getting reliable cables for Apple computers, and also about how ridiculously overpriced they all are. I may have been wrong. There’s a good reason for that.

Adam Savage takes apart several USB-C cables, from a $3 cheapie to a $10 Amazon to a $130 (!!!!!) official Apple cable, and I learned something new.

Whoa…those expensive cables are packed full of complex circuitry, which I did not expect. Back in the day, I used to make custom RS-232 cables all the time — cut open the cable, splay out all the little wires, solder them to the appropriate pin in the connector, and you were done. That’s all it was: a wire-to-wire connection between two connectors. I don’t think I could make a USB cable, and it’s not only the teeny-tiny wires that would exceed my soldering ability, but I wouldn’t be able to cope with all the miniature ICs in there.

Dramatic Toaster

We’ve owned a very bad toaster for years. It was slow, cheap, and it had this ambiguous dial with no labeling that you ‘adjusted’ to set how dark your bread would be toasted — but without any markings, it was basically random. You diddled the dial and hoped it wouldn’t burn your toast, or you hovered over the toaster watching it feebly glow, and then if you started to see smoke you’d push a cancel button. It was terrible, but serviceable.

My daughter came to visit and was shocked at how us poors live. We don’t even have a decent toaster! When we visited her a few weeks ago, she was in the process of moving to Madison, and she gave us her old toaster. It was a Cuisinart. It looked like a polished slab of steel, with an LED display to let you know the settings, and most amazingly, you pushed a button after you added bread, and a motor gracefully lowered it into the device. When it was done, it didn’t pop up; the motor raised the toast dramatically and presented it to you.

That is how, in the morning, I have Dramatic Toast with my breakfast.

Why do I tell you this story? Because this morning I saw a photo of the back end of a Tesla Cybertruck.

That’s my new toaster! Very nice.

Pharma is wobbling between useless and lethal

On the one hand, you’ve got powerful chemicals that can be used to make deadly addictive drugs like methamphetamine, stuff made in bulk to be used as precursors to other, legitimate organic chemistry products, so valuable that they get stolen in industrial quantities by criminals (remember the Dead Freight episode of Breaking Bad, in which they rob a train to make drugs?). On the other hand, you’ve got big pharma peddling pills that do absolutely nothing, stuff like Oscillococcinum, a homeopathic remedy that is sold over the counter at my local grocery store. Add another useless drug, phenylephrine, which is in just about every cold remedy available, partly because the effective medicine, pseudoephedrine, has been displaced by the garbage, since pseudoephedrine was actually desired by meth heads who wanted to cook up meth at home.

Pharmaceutical companies are all about making money, not helping people’s health problems. Take a look at this exposé by Skepchick and Ars Technica — Big Pharma is not your friend. It’s not just the Sacklers and OxyContin, they’re all rotten to the core.

OK, it’s not just Big Pharma. Blame Big Capitalism. The lack of regulation and the ability of the rich to just buy the legislation they want is what’s killing us.

Fortunately, better hygiene and the use of masks has meant I’ve avoided the usual fall/winter colds for a while now.

Don’t trust self-driving cars

I think I can scratch a self-driving car off my Christmas list for this year…and for every year. I can always use more socks, anyway. The Washington Post (owned by another tech billionaire) has a detailed exposé of the catastrophic history of so-called autonomous vehicles.

Teslas guided by Autopilot have slammed on the brakes at high speeds without clear cause, accelerated or lurched from the road without warning and crashed into parked emergency vehicles displaying flashing lights, according to investigation and police reports obtained by The Post.

In February, a Tesla on Autopilot smashed into a firetruck in Walnut Creek, Calif., killing the driver. The Tesla driver was under the influence of alcohol during the crash, according to the police report.

In July, a Tesla rammed into a Subaru Impreza in South Lake Tahoe, Calif. “It was, like, head on,” according to a 911 call from the incident obtained by The Post. “Someone is definitely hurt.” The Subaru driver later died of his injuries, as did a baby in the back seat of the Tesla, according to the California Highway Patrol.

Tesla did not respond to multiple requests for comment. In its response to the Banner family’s complaint, Tesla said, “The record does not reveal anything that went awry with Mr. Banner’s vehicle, except that it, like all other automotive vehicles, was susceptible to crashing into another vehicle when that other vehicle suddenly drives directly across its path.”

Right. Like that ever happens. So all we have to do is clear the roads of all those other surprising vehicles, and these self-driving cars might be usable. That’s probably Elon Musk’s end goal, to commandeer the entirety of the world’s network of roads so that he can drive alone.

Speaking of Musk, he has a long history of lying about the capabilities of his autopilot system.

Tesla CEO Elon Musk has painted a different reality, arguing that his technology is making the roads safer: “It’s probably better than a person right now,” Musk said of Autopilot during a 2016 conference call with reporters.

Musk made a similar assertion about a more sophisticated form of Autopilot called Full Self-Driving on an earnings call in July. “Now, I know I’m the boy who cried FSD,” he said. “But man, I think we’ll be better than human by the end of this year.”

Lies. Lies, lies, lies, that’s all that comes out of that freak’s mouth. If you want more, Cody has a new video that explains all the problems with this technology. I know, it’s over an hour long, but the first couple of minutes contains a delightful montage of Musk making promises over the years, all of which have totally failed.

Can we just stop this nonsense and appreciate that human brains are pretty darned complex and there isn’t any AI that is anywhere near having the flexibility of a person? Right now we’re subject to the whims of non-scientist billionaires who are drunk on the science-fantasies they read as teenagers.