The state of AI

This is a fascinating chart of how quickly new technology can be adopted.

It’s all just wham, zoom, smack into the ceiling. Refrigerators got invented, and within a few years everyone had to have one. Cell phones came along, and they quickly became indispensable. I felt that one: I remember telling my kids back in 2000 that no one needed a cell phone, and here I am now with a cell phone I’m required to have to access utilities at work…and which I enjoy, too.

The point of this article, though, is that AI isn’t following that trajectory. AI is failing fast, instead.

The AI hype is collapsing faster than the bouncy house after a kid’s birthday. Nothing has turned out the way it was supposed to.

For a start, take a look at Microsoft—which made the biggest bet on AI. They were convinced that AI would enable the company’s Bing search engine to surpass Google.

They spent $10 billion to make this happen.

And now we have numbers to measure the results. Guess what? Bing’s market share hasn’t grown at all. Its share of search is still stuck at a lousy 3%.

In fact, it has dropped slightly since the beginning of the year.

What’s wrong? Everybody was supposed to prefer AI over conventional search. And it turns out that nobody cares.

OK, Bing. No one uses Bing, and showing that even fewer people use it now isn’t as impressive a demonstration of the failure of AI as you might think. I do agree, though, that I don’t care about plugging AI into a search engine; if you advertise that I ought to abandon duckduckgo because your search engine has an AI front end, you’re not going to persuade me. Of course, I’m the guy who was unswayed by the cell phone in 2000.

I do know, though, that most search engines are a mess right now, and AI isn’t going to fix their problems.

What makes this especially revealing is that Google search results are abysmal nowadays. They have filled them to the brim with garbage. If Google was ever vulnerable, it’s right now.

But AI hasn’t made a dent.

Of course, Google has tried to implement AI too. But the company’s Bard AI bot made embarrassing errors at its very first demo, and continues to do bizarre things—such as touting the benefits of genocide and slavery, or putting Hitler and Stalin on its list of greatest leaders.

Yeah. And where’s the wham, zoom, to the moon?

The same decline is happening at ChatGPT’s website. Site traffic is now declining. This is always a bad sign—but especially if your technology is touted as the biggest breakthrough of the century.

If AI really delivered the goods, visitors to ChatGPT should be doubling every few weeks.

In summary:

Here’s what we now know about AI:

  • Consumer demand is low, and already appears to be shrinking.
  • Skepticism and suspicion are pervasive among the public.
  • Even the companies using AI typically try to hide that fact—because they’re aware of the backlash.
  • The areas where AI has been implemented make clear how poorly it performs.
  • AI potentially creates a situation where millions of people can be fired and replaced with bots—so a few people at the top continue to promote it despite all these warning signs.
  • But even these true believers now face huge legal, regulatory, and attitudinal obstacles.
  • In the meantime, cheaters and criminals are taking full advantage of AI as a tool of deception.

I found this chart interesting and relevant.

What I find amusing is that the first row, “mimics human cognition,” is what I’ve visualized as AI, and it consists entirely of fictional characters. They don’t exist. Everything that remains is trivial and uninteresting — “Tinder is AI”? OK, you can have it.

The delusions have begun

In the long slow decline of Twitter, Elon Musk is now making the most superficial shufflings of the cosmetics: he’s renaming Twitter “X”. Just “X”. Except the name is still twitter, the URL is still twitter.com; it’s just that the bird logo is now a generic “X”.

OK, if you say so. People are still going to call it twitter until it implodes. We can’t call it “X” because that’s where we buried the treasure, marked the spot, crossed out a mistake.

Linda Yaccarino, the poor dupe Musk lured into taking over as nominal CEO, waxed rhapsodic over this change.

X is the future state of unlimited interactivity – centered in audio, video, messaging, payments/banking – creating a global marketplace for ideas, goods, services, and opportunities. Powered by AI, X will connect us all in ways we’re just beginning to imagine.

No, it won’t. You’re rearranging the deck chairs on the sinking ship. You’re tweaking the font on your CV. You are totally delusional.

90% of everything is junk

I read Larry Moran’s What’s in Your Genome?: 90% of Your Genome Is Junk this week — it’s a truly excellent book, everyone should read it, and I’ll write a more thorough review once I get a little time to breathe again. Basically, though, he makes an interdisciplinary case for the sloppiness of our genome, and that’s the kind of evidence we should be giving our biology students from day one.

Anyway, I ran into a similar story online. Everything accumulates junk, from your genome to my office to Google. Cory Doctorow explains how search engines are choking in their own filth.

The internet is increasingly full of garbage, much of it written by other confident habitual liar chatbots, which are now extruding plausible sentences at enormous scale. Future confident habitual liar chatbots will be trained on the output of these confident liar chatbots, producing Jathan Sadowski’s “Habsburg AI”:

https://twitter.com/jathansadowski/status/1625245803211272194

But the declining quality of Google Search isn’t merely a function of chatbot overload. For many years, Google’s local business listings have been terrible. Anyone who’s tried to find a handyman, a locksmith, an emergency tow, or other small businessperson has discovered that Google is worse than useless for this. Try to search for that locksmith on the corner that you pass every day? You won’t find them – but you will find a fake locksmith service that will dispatch an unqualified, fumble-fingered guy with a drill and a knockoff lock, who will drill out your lock, replace it with one made of bubblegum and spit, and charge you 400% the going rate (and then maybe come back to rob you):

https://www.nytimes.com/2016/01/31/business/fake-online-locksmiths-may-be-out-to-pick-your-pocket-too.html

Google is clearly losing the fraud/spam wars, which is pretty awful, given that they have spent billions to put every other search engine out of business. They spend $45b every year to secure exclusivity deals that prevent people from discovering or using rivals – that’s like buying a whole Twitter every year, just so they don’t have to compete:

https://www.thebignewsletter.com/p/how-a-google-antitrust-case-could/

I’m thinking I should advertise Myers Spider Removal Service on Google, and then respond to calls by showing up, collecting a few spiders, bringing them back to my lab, and increasing their numbers a thousand-fold before returning them to the house in the dead of night. Then they call me again.

Hey, it’s a business model.

The comparison of Google’s junk to our genome’s junk falls apart pretty quickly, though, because your cells have mechanisms to silence the expression of garbage, while Google is instead motivated to increase expression of junk, because capitalism.

One of our submarines is missing

It’s a submersible, actually, but that doesn’t fit the song as well.

A small group of wealthy tourists went on an outing to dive down to the wreck of the Titanic, and they haven’t come back up yet.

A search and rescue operation is underway for a missing submersible operated by a company that handles expeditions to the Titanic wreckage off the coast of St John’s, Newfoundland, in Canada.

The vessel has up to 96 hours of life support, officials said Monday.

That 96-hour number is a comforting fiction, so I won’t be waiting on tenterhooks for a happy ending. The Titanic wreck is about 3800 meters down. When things go wrong at that depth, they go catastrophically wrong. To lose both radio contact and the ability to rise to the surface suggests a major disaster.
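
For a rough sense of what that depth means: hydrostatic pressure grows linearly with depth, so a back-of-the-envelope estimate (assuming seawater density of roughly 1025 kg/m³) puts the hull under something like 380 atmospheres down there.

$$
P \approx \rho g h \approx 1025\,\mathrm{kg/m^3} \times 9.8\,\mathrm{m/s^2} \times 3800\,\mathrm{m} \approx 3.8 \times 10^{7}\,\mathrm{Pa} \approx 380\,\mathrm{atm}
$$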

But maybe there will be a rescue in the next day or two, and then I can point and laugh at the 3 tourists who spent a quarter million dollars each to experience what must be the most terrifying thrill ride ever.

The AI hype machine might be in trouble

David Gerard brings up an interesting association: the crypto grifters, as their scam begins to disintegrate, have jumped ship to become AI grifters.

You’ll be delighted to hear that blockchain is out and AI is in:

It’s not clear if the VCs actually buy their own pitch for ChatGPT’s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.

Huh. Interesting. I never trusted crypto, because everyone behind it was so slimy, but now they’re going to slime the AI industry.

Also interesting, though, is who isn’t falling for it. Apple had a recent shindig in which they announced all the cool shiny new toys for the next year, and they are actively incorporating machine learning into them, but they are definitely not calling it AI.

If you had watched Apple’s WWDC keynote, you might have realized the lack of mention of the term “AI”. This is in complete contrast to what happened recently at events of other Big Tech companies, such as Google I/O.

It turns out that there wasn’t even a single mention of the term “AI”. No, not even once.

The technology was referred to, of course, but always in the form of “machine learning” — a more sedate and technically accurate description.

Apple took a different route and instead of highlighting AI as the omnipotent force, they pointed to the features that they’ve developed using the technology. Here’s a list of the ML/AI features that Apple unveiled:

  • Improved Autocorrect on iOS 17: Apple introduced an enhanced autocorrect feature, powered by a transformer language model. This on-device machine learning model improves autocorrection and sentence completion as users type.
  • Personalized Volume Feature for AirPods: Apple announced this feature that uses machine learning to adapt to environmental conditions and user listening preferences.
  • Enhanced Smart Stack on watchOS: Apple upgraded its Smart Stack feature to use machine learning to display relevant information to users.
  • Journal App: Apple unveiled this new app that employs on-device machine learning to intelligently curate prompts for users.
  • 3D Avatars for Video Calls on Vision Pro: Apple showcased advanced ML techniques for generating 3D avatars for video calls on the newly launched Vision Pro.
  • Transformer-Based Speech Recognition: Apple announced a new transformer-based speech recognition model that improves dictation accuracy using the Neural Engine.
  • Apple M2 Ultra Chip: Apple unveiled this chip with a 32-core Neural Engine, which is capable of performing 31.6 trillion operations per second and supports up to 192GB of unified memory. This chip can train large transformer models, demonstrating a significant leap in AI applications.

Unlike its rivals, who are building bigger models with server farms, supercomputers, and terabytes of data, Apple wants AI models on its devices. On-device AI bypasses a lot of the data privacy issues that cloud-based AI faces. When the model can be run on the phone itself, Apple needs to collect less data in order to run it.
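
Here’s a toy sketch in Python (with entirely made-up names; it’s not how Apple’s stack actually works) of why that matters for privacy: a cloud service has to receive your raw input before it can do anything with it, while an on-device model never ships it anywhere.

```python
# Toy contrast between cloud-based and on-device inference.
# "FakeCloudService" is a stand-in for any server-side ML endpoint;
# nothing here reflects Apple's real APIs.
from dataclasses import dataclass, field

@dataclass
class FakeCloudService:
    """Pretend server: it must receive your text, and it keeps a copy."""
    received: list = field(default_factory=list)

    def autocorrect(self, text: str) -> str:
        self.received.append(text)          # the provider now holds your raw input
        return text.replace("teh", "the")

def on_device_autocorrect(text: str) -> str:
    """Same (toy) correction, but the text never leaves this process."""
    return text.replace("teh", "the")

cloud = FakeCloudService()
print(cloud.autocorrect("I typed teh wrong word"))      # works, but...
print(cloud.received)                                   # ...the service has your text
print(on_device_autocorrect("I typed teh wrong word"))  # works, and no copy exists anywhere else
```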

It also ties in closely with Apple’s control of its hardware stack, down to its own silicon chips. Apple packs new AI circuits and GPUs into its chips every year, and its control of the overall architecture allows it to adapt to changes and new techniques.

Say what you will about Apple as a company, but one thing they know how to do is make money. Lots of money. They also have first-rate engineers. Apparently they’re smart enough not to fall for the hype.

Only Apple could pull this off

Apple unveiled a shiny new gadget today: Apple Vision Pro.

This looks really good! I want one. But as the summary of the glorious widget went on, it was clear I was not in their market. It’s a complete wearable computer, with a whole new interface — it’s everything Microsoft and all those cyberpunk authors dreamed of, integrating the real world (it’s transparent) with virtual reality. As I listened to the WWDC presentation, though, every glowing adjective and every new tech toy built into it made me cringe. The price was climbing by the second. Then at the end, they broke the news: $3500. Nope, not for me. It’s about what we ought to expect from something so shiny and new, stuffed with every bit of advanced technology they could pack into an extremely small space, though.

That price is not going to stop Apple, I’m sure. This is going to be the new must-have technological marvel that every techbro and marketingbro and rich person with ludicrous amounts of surplus wealth is going to want. Apple is going to clean up, I predict.

The good little robot

Look at that thing. It’s beautiful.

That’s Ingenuity, the drone that was sent to Mars on the Perseverance mission. It was intended to be a proof-of-concept test, expected to fly for only a couple of excursions, and then fail under the hellish Martian conditions. Instead, it has survived for two years.

Ingenuity defied the odds the day it first lifted off from Martian soil. The four-pound aircraft stands about 19 inches tall and is little more than a box of avionics with four spindly legs on one end and two rotor blades and a solar panel on the other. But it performed the first powered flight by an aircraft on another planet — what NASA billed a “Wright brothers moment” — after arriving on Mars in April 2021.

It’s made over 50 flights. Apparently it’s a bit wonky: it loses its radio connection to the rover when it flies out of line of sight, or when the cold shuts it down, but when it warms up or the rover drives closer, it gets right up again.

NASA has still got good engineering. It might be because of all the redundancy they build into every gadget — this little drone cost $80 million! — but I have a hypothesis that the real secret to its success is what they left out. There’s no narcissistic and incompetent billionaire attached to the project, just a lot of engineers who take pride in their work.

The problem isn’t artificial intelligence, it’s natural stupidity

A Texas A&M professor flunked all of his students because ChatGPT told him to.

Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes,

He legitimately wrote a PhD thesis on pig farming, but really — a “rodeo instructor”? I guess that’s like the coaches we have working in athletic programs at non-Ag colleges.

sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an “X” in the course, Mumm explained, because he had used “Chat GTP” (the OpenAI chatbot is actually called “ChatGPT”) to test whether they’d used the software to write the papers — and the bot claimed to have authored every single one.

“I copy and paste your responses in [ChatGPT] and [it] will tell me if the program generated the content,” he wrote, saying he had tested each paper twice. He offered the class a makeup assignment to avoid the failing grade — which could otherwise, in theory, threaten their graduation status.

Wow. He doesn’t know what he’s doing at all. ChatGPT is an artificial expert at confabulation — it will assemble a plausible-sounding mess of words that looks like the other collections of words in its training data, and that’s about it. It’s not TurnItIn, a service professors have been using for at least a decade that compares submitted text to other texts in its database and reports similarities. ChatGPT will happily make stuff up. You can’t use it the way he thinks.
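
To make the difference concrete, here’s a toy sketch in Python of the kind of thing a plagiarism checker does (this is not TurnItIn’s actual algorithm, just the general shape of the idea): you need a corpus of known texts to compare against, and you report measurable overlap. ChatGPT has no such corpus of submissions and no memory of what it has generated, so asking it “did you write this?” gets you a confident guess, not a lookup.

```python
# Toy similarity check: flag a submission if it overlaps heavily with any
# known source text. Real plagiarism detectors are far more sophisticated,
# but the point stands: this is comparison against stored documents,
# which is nothing like asking a generative chatbot for its opinion.

def jaccard_similarity(a: str, b: str) -> float:
    """Fraction of shared words between two texts (crude, but illustrative)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a and not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def flag_overlaps(submission: str, known_sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return every known source whose overlap with the submission exceeds the threshold."""
    return [src for src in known_sources
            if jaccard_similarity(submission, src) >= threshold]

# Made-up example corpus and essay:
corpus = ["the mitochondria is the powerhouse of the cell",
          "commercial pig farming practices vary widely across texas"]
essay = "the mitochondria is the powerhouse of the cell and always has been"
print(flag_overlaps(essay, corpus))   # flags only the first source
```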

Mumm was unwarrantedly aggressive in his ignorance.

Students claim they supplied him with proof they hadn’t used ChatGPT — exonerating timestamps on the Google Documents they used to complete the homework — but that he initially ignored this, commenting in the school’s grading software system, “I don’t grade AI bullshit.” (Mumm did not return Rolling Stone‘s request for comment.)

Unfortunately for him, Mumm was cursed with smarter spectators to his AI bullshit. One of them ran Mumm’s PhD thesis through ChatGPT in the same inappropriate, invalid way.

In an amusing wrinkle, Mumm’s claims appear to be undercut by a simple experiment using ChatGPT. On Tuesday, redditor Delicious_Village112 found an abstract of Mumm’s doctoral dissertation on pig farming and submitted a section of that paper to the bot, asking if it might have written the paragraph. “Yes, the passage you shared could indeed have been generated by a language model like ChatGPT, given the right prompt,” the program answered. “The text contains several characteristics that are consistent with AI-generated content.” At the request of other redditors, Delicious_Village112 also submitted Mumm’s email to students about their presumed AI deception, asking the same question. “Yes, I wrote the content you’ve shared,” ChatGPT replied. Yet the bot also clarified: “If someone used my abilities to help draft an email, I wouldn’t have a record of it.”

On the one hand, I am relieved to see that ChatGPT can’t replace me. On the other hand, there’s an example of someone who thinks it can, to disastrous effect. Maybe it could at least replace the Jared Mumms of the world, except I bet it sucks at bronco bustin’ and lassoing calves.

The triumph of form over content

That’s all ChatGPT is. Emily Bender explains.

When you read the output of ChatGPT, it’s important to remember that despite its apparent fluency and despite its ability to create confident sounding strings that are on topic and seem like answers to your questions, it’s only manipulating linguistic form. It’s not understanding what you asked nor what it’s answering, let alone “reasoning” from your question + its “knowledge” to come up with the answer. The only knowledge it has is knowledge of distribution of linguistic form.

It doesn’t matter how “intelligent” it is — it can’t get to meaning if all it has access to is form. But also: it’s not “intelligent”. Our only evidence for its “intelligence” is the apparent coherence of its output. But we’re the ones doing all the meaning making there, as we make sense of it.

I think we know this from how we learn language ourselves. Babies don’t lie there with their eyes closed processing sounds without context — they are associating and integrating sounds with a complex environment, and also with internal states that are responsive to external cues. Clearly what we need to do is imbed ChatGPT in a device that gets hungry and craps itself and needs constant attention from a human.

Oh no…someone, somewhere is about to wrap a diaper around a server.
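
If you want to see what “knowledge of distribution of linguistic form” looks like stripped down to almost nothing, here’s a toy word-bigram generator in Python. It learns only which word tends to follow which, then samples from those counts; the output can read as superficially fluent, but there is no meaning anywhere in the machinery. ChatGPT is astronomically bigger and cleverer about it, but it lives in the same category: statistics over form, with the reader supplying the sense.

```python
# Toy bigram babbler: learns which word follows which, then samples.
# There is no understanding here -- only the distribution of linguistic form.
import random
from collections import defaultdict

training_text = (
    "the chatbot produces confident sentences and the chatbot produces "
    "plausible sentences and the reader supplies the meaning"
)

# Record, for each word, every word observed to follow it.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def babble(start: str, length: int = 12) -> str:
    """Generate text by repeatedly picking a plausible next word."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:            # no observed continuation: stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble("the"))   # e.g. "the chatbot produces plausible sentences and the reader supplies the meaning"
```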

Another reason I won’t get Neuralink

I was wondering what Neuralink is good for — it must be for treating some serious medical condition, since it involves serious surgery. But no! It’s just techdude fantasies.

Neuralink’s BCI will require patients to undergo invasive brain surgery. Its system centers around the Link, a small circular implant that processes and translates neural signals. The Link is connected to a series of thin, flexible threads inserted directly into the brain tissue where they detect neural signals.

Patients with Neuralink devices will learn to control it using the Neuralink app. Patients will then be able to control external mice and keyboards through a Bluetooth connection, according to the company’s website.

An app. Bluetooth. Controlling computer mice.

It absolutely did not help that I am currently using a cheap wired optical mouse with an intermittent fault. Every once in a while (not often enough to motivate me to get a replacement), the LED cuts out and the buttons stop responding. The fix is to jiggle the cable or unplug and re-insert the USB plug. It’s a bit annoying; I really should just get a new mouse, since they’re only about $7.

But now imagine that your Neuralink device has a less-than-perfect connection: scar tissue builds up, or an electrode gets jostled out of position. Every once in a while, the app drops the Bluetooth connection. The artificial limb you’re controlling becomes unresponsive, or, even worse, you miss a kill shot in Call of Duty (worse, because I’ve seen how gamers can explode in fury at the most trivial stuff). There’s no easy cable-jiggling you can do; you’re going in for major brain surgery.

Or, more likely, you’ll make do as I am with my mouse…you let it slide, because 99% function is good enough. The only thing is, your brain doesn’t like wires stuck in it — there will be a gradual accumulation of scar tissue and localized damage, the performance of the device will slowly but inevitably deteriorate, and Neuralink doesn’t have a good replacement strategy.

“Right to repair” acquires a new urgency when the gadget is imbedded in your brain. Musk doesn’t seem the type to let anyone but his own company service his profitable toy, and he’s probably anticipating making lots of money from obsolescence.