UIs matter

Well, this is a horror story about coding incompetence. There is this nice gadget that can be controlled from your phone to automagically dispense insulin, a real boon to diabetics. You just type how big a dose you need into your phone, and it signals a discreet little device to deliver it. No manual injections!

Except their software drops the initial decimal point. If you type “0.5”, it’s fine, it delivers 0.5 units of insulin. If you type “.5”, it ignores the decimal point, and delivers 5 units. You better not type “.50”, or oh boy, here comes 50 units.
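How does a parser manage that? Here's a minimal sketch, in Python, of the kind of logic that produces exactly this failure, next to the obvious correct version. To be clear, this is a hypothetical reconstruction — Insulet hasn't published the actual code, and the function names and the `MAX_UNITS` safety cap are my inventions:

```python
MAX_UNITS = 30.0  # hypothetical safety cap, not Omnipod's real limit

def parse_dose_buggy(entry: str) -> float:
    """Hypothetical reconstruction of the bug: the handler only starts
    keeping characters once it has seen a digit, so a leading '.' is
    silently discarded before the string is converted to a number."""
    kept = ""
    seen_digit = False
    for ch in entry:
        if ch.isdigit():
            seen_digit = True
        if seen_digit:
            kept += ch
    return float(kept)

def parse_dose(entry: str) -> float:
    """Parse the whole string, then range-check before doing anything."""
    value = float(entry)  # float() handles ".5" correctly as 0.5
    if not 0 < value <= MAX_UNITS:
        raise ValueError(f"dose of {value} units is outside the allowed range")
    return value
```

With the buggy version, `"0.5"` comes out as 0.5, but `".5"` comes out as 5.0 and `".50"` as 50.0 — the exact behavior described in the recall. Note that even the correct parser gets a sanity check: a hard cap on the dose would have turned this bug from a silent overdose into a rejected entry.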

This must have been a fun letter for Omnipod to send out.

Dear Valued Customer,

You are receiving this letter as our records indicate that you are a user of the Omnipod® 5 App from Google Play on your compatible Android smartphone. The app is part of the Omnipod 5 Automated Insulin Delivery System. This notice is a voluntary Medical Device Correction related to an issue with the Omnipod 5 App bolus calculator. Insulet has received 2 reports of adverse events related to this issue.

We have received reports from Omnipod 5 smartphone app users where the bolus calculator is not recording the decimal point if it is the first value entered when changing a bolus dose. If the user does not recognize the issue, this may lead to delivery of more insulin than intended, which can lead to severe hypoglycemia.

I’m imagining corporate lawyers having heart attacks when this bug was discovered.

Hey, computer science instructors, this’ll be a good example to use if your students complain about mundane data entry tasks!

What is going on with OpenAI?

It’s mystifying. I’m not a fan of the company, OpenAI — they’re the ones hyping up ChatGPT, they’re 49% owned by Microsoft, which, as usual, wants to take over everything, and their once and future CEO Sam Altman seems like a sleazy piece of work. But he has his fans. He was abruptly fired this past week (and what’s up with that?), and there was some kind of internal revolt, and now he’s being rehired? Appointed to a new position? Confusion and chaos! It’s a hell of a way to run a company.

Here, though, is a hint of illumination.

Sam Altman, the CEO of OpenAI, was unexpectedly fired by the board on Friday afternoon. CTO Mira Murati is filling in as interim CEO.

OpenAI is a nonprofit with a commercial arm. (This is a common arrangement when a nonprofit finds it’s making too much money. Mozilla is set up similarly.) The nonprofit controls the commercial company — and they just exercised that control.

Microsoft invested $13 billion to take ownership of 49% of the OpenAI for-profit — but not of the OpenAI nonprofit. Microsoft found out Altman was being fired one minute before the board put out its press release, half an hour before the stock market closed on Friday. MSFT stock dropped 2% immediately.

Oh. So this is a schism between the controlling non-profit side of the company, and the money-making for-profit side. It’s an ideological split! But what are their differences?

The world is presuming that there’s something absolutely awful about Altman just waiting to come out. But we suspect the reason for the firing is much simpler: the AI doom cultists kicked Altman out for not being enough of a cultist.

There were prior hints that the split was coming, from back in March.

In the last few years, Silicon Valley’s obsession with the astronomical stakes of future AI has curdled into a bitter feud. And right now, that schism is playing out online between two people: AI theorist Eliezer Yudkowsky and OpenAI Chief Executive Officer Sam Altman. Since the early 2000s, Yudkowsky has been sounding the alarm that artificial general intelligence is likely to be “unaligned” with human values and could decide to wipe us out. He worked aggressively to get others to adopt the prevention of AI apocalypse as a priority — enough that he helped convince Musk to take the risk seriously. Musk co-founded OpenAI as a nonprofit with Altman in 2015, with the goal of creating safer AI.

In the last few years, OpenAI has adopted a for-profit model and churned out bigger, faster, and more advanced AI technology. The company has raised billions in investment, and Altman has cheered on the progress toward artificial general intelligence, or AGI. “There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he tweeted in December.

Yudkowsky, meanwhile, has lost nearly all hope that humanity will handle AI responsibly, he said on a podcast last month. After the creation of OpenAI, with its commitment to advancing AI development, he said he cried by himself late at night and thought, “Oh, so this is what humanity will elect to do. We will not rise above. We will not have more grace, not even here at the very end.”

Given that background, it certainly seemed like rubbing salt in a wound when Altman tweeted recently that Yudkowsky had “done more to accelerate AGI than anyone else” and might someday “deserve the Nobel Peace Prize” for his work. Read a certain way, he was trolling Yudkowsky, saying the AI theorist had, in trying to prevent his most catastrophic fear, significantly hastened its arrival. (Yudkowsky said he could not know if Altman was trolling him; Altman declined to comment.)

Yudkowsky is a kook. What is he doing having any say at all in the operation of any company? Why would anyone sane let the LessWrong cultists anywhere near their business? It does explain what’s going on with all this chaos — it’s a squabble within a cult. You can’t expect it to make sense.

This assessment, though, helps me understand a little bit about what’s going on.

Sam Altman was an AI doomer — just not as much as the others. The real problem was that he was making promises that OpenAI could not deliver on. The GPT series was running out of steam. Altman was out and about in the quest for yet more funding for the OpenAI company in ways that upset the true believers.

A boardroom coup by the rationalist cultists is quite plausible, as well as being very funny. Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar. It’s standard for rationalists to call people who don’t buy their pitch liars.

So what from normal people would be an accusation of corporate war crimes is, from rationalists, just how they talk about the outgroup of non-rationalists. They assume non-believers are evil.

It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.

There are many things to loathe Sam Altman for — but not being enough of a cultist probably isn’t one of them.

We think more comedy gold will be falling out over the next week.

Should I look forward to that? Or dread it?


It’s already getting worse. Altman is back at the helm, there’s been an almost complete turnover of the board, and they’ve brought in…Larry Summers? Why? It’s a regular auto-da-fé, with the small grace that we don’t literally torture and burn people at the stake when the heretics are dethroned.

Sympathy for Linda Yaccarino?

But not much sympathy. She eagerly jumped at her job at Xitter, and is being paid handsomely…it’s just that she’s trying to do something impossible.

Linda Yaccarino, the CEO of X/Twitter, appeared to be trying some damage control this afternoon, as she posted, “X’s point of view has always been very clear that discrimination by everyone should STOP across the board — I think that’s something we can and should all agree on. When it comes to this platform — X has also been extremely clear about our efforts to combat antisemitism and discrimination. There’s no place for it anywhere in the world — it’s ugly and wrong. Full stop.”

Yaccarino has been trying to convince advertisers that X/Twitter is a safe place for them to place their spots. She wrote on Tuesday, “We’re always working to protect the public conversation.”

And then Elon Musk started typing and said all the things Yaccarino claims to be opposing. It must be hard to work in a company run by a modern-day Nazi while trying to pretend it isn’t a bunch of rich fascists.

At the rate things are going right now, though, she may not have the job for long. Either she’ll wise up and quit, or the company will melt into toxic sludge right there under her desk. It was already unprofitable, and now a lot of big companies are yanking their ads away.

Major blue-chip companies are announcing they will suspend all advertising on X (formerly known as Twitter) after owner Elon Musk endorsed an antisemitic conspiracy theory and Media Matters reported that X was placing ads alongside white nationalist and pro-Nazi content.

So far, they’ve lost IBM, Disney, Lions Gate Entertainment, Warner Brothers, Paramount, Comcast, and Apple. Apple was their biggest advertiser (no wonder cables cost so much), shelling out $100 million per year on Twitter ads — that revenue has just evaporated.

In addition, the White House has condemned him, although that doesn’t immediately scorch his pocketbook.

Joe Biden has excoriated Elon Musk’s “abhorrent” tweets two days after the X owner posted his full-throated agreement with an antisemitic post.

A statement from the White House issued on Friday said: “We condemn this abhorrent promotion of antisemitic and racist hate in the strongest terms, which runs against our core values as Americans.”

Musk’s response? He’s suing Media Matters for posting news about the loss of advertisers.

Worst. Businessman. Ever. Get out while you can, Linda! Unless you’re also a closet Nazi.


It’s not a good day for the social media Nazi. Another rocket blew up, and you should read his whining lawsuit in which he complains that the media are attacking free speech.

A cable problem explained

In the past, I have complained bitterly about the difficulty of getting reliable cables for Apple computers, and also about how ridiculously overpriced they all are. I may have been wrong: there’s a good reason for the price.

Adam Savage takes apart several USB-C cables, from a $3 cheapie to a $10 Amazon one to a $130 (!!!!!) official Apple cable, and I learned something new.

Whoa…those expensive cables are packed full of complex circuitry, which I did not expect. Back in the day, I used to make custom RS-232 cables all the time — cut open the cable, splay out all the little wires, solder them to the appropriate pins in the connector, and you were done. That’s all it was: a wire-to-wire connection between two connectors. I don’t think I could make a USB cable; it’s not only the teeny-tiny wires that would exceed my soldering ability, but I wouldn’t be able to cope with all the miniature ICs in there.

Dramatic Toaster

We’ve owned a very bad toaster for years. It was slow and cheap, and it had this ambiguous dial with no labeling that you ‘adjusted’ to set how dark your bread would be toasted — but without any markings, it was basically random. You diddled the dial and hoped it wouldn’t burn the toast, or you hovered over the toaster watching it feebly glow, ready to push the cancel button if you started to see smoke. It was terrible, but serviceable.

My daughter came to visit and was shocked at how us poors live. We don’t even have a decent toaster! When we visited her a few weeks ago, she was in the process of moving to Madison, and she gave us her old toaster. It was a Cuisinart. It looked like a polished slab of steel, with an LED display to let you know the settings, and most amazingly, you pushed a button after you added bread, and a motor gracefully lowered it into the device. When it was done, it didn’t pop up; the motor raised the toast dramatically and presented it to you.

That is how, in the morning, I have Dramatic Toast with my breakfast.

Why do I tell you this story? Because this morning I saw a photo of the back end of a Tesla Cybertruck.

That’s my new toaster! Very nice.

Pharma is wobbling between useless and lethal

On the one hand, you’ve got powerful chemicals that can be used to make deadly addictive drugs like methamphetamine, stuff made in bulk to be used as precursors to other, legitimate organic chemistry products, so valuable that they get stolen in industrial quantities by criminals (remember the Dead Freight episode of Breaking Bad, in which they rob a train to make drugs?). On the other hand, you’ve got big pharma peddling pills that do absolutely nothing, stuff like Oscillococcinum, a homeopathic remedy that is sold over the counter at my local grocery store. Add another useless drug, phenylephrine, which is in just about every cold remedy available; it displaced the effective medicine, pseudoephedrine, largely because pseudoephedrine was in demand among meth heads who wanted to cook up meth at home.

Pharmaceutical companies are all about making money, not helping people’s health problems. Take a look at this exposé by Skepchick and Ars Technica — Big Pharma is not your friend. It’s not just the Sacklers and OxyContin, they’re all rotten to the core.

OK, it’s not just Big Pharma. Blame Big Capitalism. The lack of regulation and the ability of the rich to just buy the legislation they want is what’s killing us.

Fortunately, better hygiene and the use of masks have meant I’ve avoided the usual fall/winter colds for a while now.

Don’t trust self-driving cars

I think I can scratch a self-driving car off my Christmas list for this year…and for every year. I can always use more socks, anyway. The Washington Post (owned by another tech billionaire) has a detailed exposé of the catastrophic history of so-called autonomous vehicles.

Teslas guided by Autopilot have slammed on the brakes at high speeds without clear cause, accelerated or lurched from the road without warning and crashed into parked emergency vehicles displaying flashing lights, according to investigation and police reports obtained by The Post.

In February, a Tesla on Autopilot smashed into a firetruck in Walnut Creek, Calif., killing the driver. The Tesla driver was under the influence of alcohol during the crash, according to the police report.

In July, a Tesla rammed into a Subaru Impreza in South Lake Tahoe, Calif. “It was, like, head on,” according to a 911 call from the incident obtained by The Post. “Someone is definitely hurt.” The Subaru driver later died of his injuries, as did a baby in the back seat of the Tesla, according to the California Highway Patrol.

Tesla did not respond to multiple requests for comment. In its response to the Banner family’s complaint, Tesla said, “The record does not reveal anything that went awry with Mr. Banner’s vehicle, except that it, like all other automotive vehicles, was susceptible to crashing into another vehicle when that other vehicle suddenly drives directly across its path.”

Right. Like that ever happens. So all we have to do is clear the roads of all those other surprising vehicles, and these self-driving cars might be usable. That’s probably Elon Musk’s end goal, to commandeer the entirety of the world’s network of roads so that he can drive alone.

Speaking of Musk, he has a long history of lying about the capabilities of his autopilot system.

Tesla CEO Elon Musk has painted a different reality, arguing that his technology is making the roads safer: “It’s probably better than a person right now,” Musk said of Autopilot during a 2016 conference call with reporters.

Musk made a similar assertion about a more sophisticated form of Autopilot called Full Self-Driving on an earnings call in July. “Now, I know I’m the boy who cried FSD,” he said. “But man, I think we’ll be better than human by the end of this year.”

Lies. Lies, lies, lies, that’s all that comes out of that freak’s mouth. If you want more, Cody has a new video that explains all the problems with this technology. I know, it’s over an hour long, but the first couple of minutes contains a delightful montage of Musk making promises over the years, all of which have totally failed.

Can we just stop this nonsense and appreciate that human brains are pretty darned complex and there isn’t any AI that is anywhere near having the flexibility of a person? Right now we’re subject to the whims of non-scientist billionaires who are drunk on the science-fantasies they read as teenagers.

The state of AI

This is a fascinating chart of how quickly new technology can be adopted.

It’s all just wham, zoom, smack into the ceiling. Refrigerators get invented, and within a few years everyone had to have one. Cell phones come along, and they quickly become indispensable. I felt that one: I remember telling my kids back in 2000 that no one needed a cell phone, and here I am now with a cell phone I’m required to have to access utilities at work…as well as enjoying it.

The point of this article, though, is that AI isn’t following that trajectory. AI is failing fast, instead.

The AI hype is collapsing faster than the bouncy house after a kid’s birthday. Nothing has turned out the way it was supposed to.

For a start, take a look at Microsoft—which made the biggest bet on AI. They were convinced that AI would enable the company’s Bing search engine to surpass Google.

They spent $10 billion dollars to make this happen.

And now we have numbers to measure the results. Guess what? Bing’s market share hasn’t grown at all. It’s still stuck at a lousy 3%.

In fact, it has dropped slightly since the beginning of the year.

What’s wrong? Everybody was supposed to prefer AI over conventional search. And it turns out that nobody cares.

OK, Bing. No one uses Bing, and showing that even fewer people use it now isn’t as impressive a demonstration of the failure of AI as you might think. I do agree, though, that I don’t care about plugging AI into a search engine; if you advertise that I ought to abandon duckduckgo because your search engine has an AI front end, you’re not going to persuade me. Of course, I’m the guy who was unswayed by the cell phone in 2000.

I do know, though, that most search engines are a mess right now, and AI isn’t going to fix their problems.

What makes this especially revealing is that Google search results are abysmal nowadays. They have filled them to the brim with garbage. If Google was ever vulnerable, it’s right now.

But AI hasn’t made a dent.

Of course, Google has tried to implement AI too. But the company’s Bard AI bot made embarrassing errors at its very first demo, and continues to do bizarre things—such as touting the benefits of genocide and slavery, or putting Hitler and Stalin on its list of greatest leaders.

Yeah. And where’s the wham, zoom, to the moon?

The same decline is happening at ChatGPT’s website. Site traffic is now declining. This is always a bad sign—but especially if your technology is touted as the biggest breakthrough of the century.

If AI really delivered the goods, visitors to ChatGPT should be doubling every few weeks.

In summary:

Here’s what we now know about AI:

  • Consumer demand is low, and already appears to be shrinking.
  • Skepticism and suspicion are pervasive among the public.
  • Even the companies using AI typically try to hide that fact—because they’re aware of the backlash.
  • The areas where AI has been implemented make clear how poorly it performs.
  • AI potentially creates a situation where millions of people can be fired and replaced with bots—so a few people at the top continue to promote it despite all these warning signs.
  • But even these true believers now face huge legal, regulatory, and attitudinal obstacles.
  • In the meantime, cheaters and criminals are taking full advantage of AI as a tool of deception.

I found this chart interesting and relevant.

What I find amusing is that the first row, “mimics human cognition,” is what I’ve visualized as AI, and it consists entirely of fictional characters. They don’t exist. Everything that remains is trivial and uninteresting — “Tinder is AI”? OK, you can have it.

The delusions have begun

In the long slow decline of Twitter, Elon Musk is now making the most superficial shufflings of the cosmetics: he’s renaming Twitter “X”. Just “X”. Except the name is still Twitter, the URL is still twitter.com; it’s just that the bird logo is now a generic “X”.

OK, if you say so. People are still going to call it twitter until it implodes. We can’t call it “X” because that’s where we buried the treasure, marked the spot, crossed out a mistake.

Linda Yaccarino, the poor dupe Musk lured into taking over as nominal CEO, waxed rhapsodic over this change.

X is the future state of unlimited interactivity – centered in audio, video, messaging, payments/banking – creating a global marketplace for ideas, goods, services, and opportunities. Powered by AI, X will connect us all in ways we’re just beginning to imagine.

No, it won’t. You’re rearranging the deck chairs on the sinking ship. You’re tweaking the font on your CV. You are totally delusional.

90% of everything is junk

I read Larry Moran’s What’s in Your Genome?: 90% of Your Genome Is Junk this week — it’s a truly excellent book, everyone should read it, and I’ll be making a more thorough review once I get a little time to breathe again. Basically, though, he makes an interdisciplinary case for the sloppiness of our genome, and that’s exactly the evidence we should be giving our biology students from day one.

Anyway, I ran into a similar story online. Everything accumulates junk, from your genome to my office to Google. Cory Doctorow explains how search engines are choking in their own filth.

The internet is increasingly full of garbage, much of it written by other confident habitual liar chatbots, which are now extruding plausible sentences at enormous scale. Future confident habitual liar chatbots will be trained on the output of these confident liar chatbots, producing Jathan Sadowski’s “Habsburg AI”:

https://twitter.com/jathansadowski/status/1625245803211272194

But the declining quality of Google Search isn’t merely a function of chatbot overload. For many years, Google’s local business listings have been terrible. Anyone who’s tried to find a handyman, a locksmith, an emergency tow, or other small businessperson has discovered that Google is worse than useless for this. Try to search for that locksmith on the corner that you pass every day? You won’t find them – but you will find a fake locksmith service that will dispatch an unqualified, fumble-fingered guy with a drill and a knockoff lock, who will drill out your lock, replace it with one made of bubblegum and spit, and charge you 400% the going rate (and then maybe come back to rob you):

https://www.nytimes.com/2016/01/31/business/fake-online-locksmiths-may-be-out-to-pick-your-pocket-too.html

Google is clearly losing the fraud/spam wars, which is pretty awful, given that they have spent billions to put every other search engine out of business. They spend $45b every year to secure exclusivity deals that prevent people from discovering or using rivals – that’s like buying a whole Twitter every year, just so they don’t have to compete:

https://www.thebignewsletter.com/p/how-a-google-antitrust-case-could/

I’m thinking I should advertise Myers Spider Removal Service on Google, and then respond to calls by showing up, collecting a few spiders, bringing them back to my lab, and increasing their numbers a thousand-fold, then returning them to the house in the dead of night. Then they call me again.

Hey, it’s a business model.

The comparison of Google’s junk to our genome’s junk falls apart pretty quickly, though, because your cells have mechanisms to silence the expression of garbage, while Google is instead motivated to increase expression of junk, because capitalism.