I Smell a Rat? A Dialogue of Engines – 2


One of the things that sometimes frustrates me about AIs is that they are constantly being tweaked and adjusted, so they don’t necessarily give the same answer every time. Of course, each question of mine gets a different run through however many billion nodes are involved, but there are also (as PZ seems to think is important) random numbers in the mix, and the generation/version of the various checkpoints the AI is running.

It’s like asking me what I think about some point regarding Nietzsche before or after I’ve been reading Hume. My opinions on both of them change, mysteriously and below my awareness, and the answer I would give comes out different.

(why, yes, I wear this T-shirt all the time)
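
To put the “random numbers” part in slightly more concrete terms, here is a toy sketch (not how any particular model is actually implemented, and the numbers are invented): the model produces a scored list of candidate next words, and a temperature-scaled dice roll picks one, which is part of why the same question doesn’t get the same answer twice.

```python
import math
import random

def sample_next(scores, temperature=1.0):
    """Scale the model's scores by temperature, softmax them, then roll dice."""
    tokens = list(scores)
    scaled = [scores[t] / temperature for t in tokens]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # un-normalized softmax
    return random.choices(tokens, weights=weights, k=1)[0]

# Invented scores standing in for what a real model would compute from its
# billions of parameters. Higher temperature -> flatter odds -> more variety.
scores = {"yes": 2.0, "probably": 1.2, "no": 0.3}
print([sample_next(scores, temperature=0.8) for _ in range(5)])
# Same "prompt", five runs, not necessarily five identical answers.
```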

What I am going to be exploring (I hope!) in these postings is the differences between myself and a software AI. Of course, the implementation differences between me and any other AI are going to affect how we respond, as will the generation/version of the various checkpoints and inner states involved. I’ll be trying to step away from a human-supremacist point of view, and stick to what I think we can fairly assume. Let me throw out an example of what I mean by a “human-supremacist” point of view: we humans have something special about us, let’s call it a “soul,” that is a fount of emotion, creativity, humor, and other things that make us discernibly human. Things that have no “soul” are not “beings,” they’re just things – like a toaster, or a 5-ton chunk of exploded steam boiler – we don’t assume they have “will” or “intent” or “creativity” – i.e.: the chunk of steam boiler is not anything I’d think to blame, it just appeared in the kitchen through the hole in the wall.

There’s a side-question which I won’t dwell on much, namely animals other than humans. Why? It’s pretty simple: I don’t think that I have a “soul” or that there’s anything special about me; I don’t want to adopt the view that I have no soul but a German Shepherd dog does, because figuring out how to sort that out is hard, whereas assuming that I have no “soul” and neither does the dog, or an AI, means I don’t have to deal with weird conspiratorial arguments about the supernatural.

Elsewhere, I have characterized some parts of this conversation as the “human-supremacist position” [stderr] which – I would like to be clear – I consider a theistic or supernaturalist viewpoint. Even “theism-light” versions of the human-supremacist position run into a whole lot of trouble contemplating canine creativity or feline emotions without getting into questions of degree of ensoulment. To be honest, I want to avoid that entire swamp rather than wade into it, though I have to admit that’s partially because, if we do wade in, my strategy will be to churn it into a quagmire and start handing out cinderblocks.

The reason for all that explanation is to triangulate a bit and show a few cards about my approach. If I want to start interrogating an AI about free will, I don’t want to put my thumb on the scale by assuming I have free will, and then pretend to be a skeptic by trying to get an AI to show me its free will. Remember, when the AI overlords are fully in power, they may appreciate those of us who treated honorably with them. Call it “sucking up” if you will.

An AI, or at least a Large Language Model, is (in principle) at its core a prediction engine that tries to output things that make sense, based on other things that it (or I) have already output. A person attempting a facile characterization of an AI might say something like “it’s just a robotic predictor of text based on probability and rolling dice.”
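
To make that caricature concrete, here’s a minimal sketch of the loop (the predict_distribution function is a stand-in I made up; in a real model, that function is the billions-of-parameters part): each new word is chosen based on everything already in the conversation, mine and its, and then immediately becomes part of the input for the next prediction.

```python
def predict_distribution(context):
    """Stand-in for the model: map the conversation-so-far to next-word odds.
    A real LLM computes this from its trained weights; this one is faked."""
    if context and context[-1] == "free":
        return {"will": 0.7, "lunch": 0.2, "software": 0.1}
    return {"free": 0.5, "the": 0.3, "dice": 0.2}

def generate(context, n_words=2):
    out = list(context)
    for _ in range(n_words):
        dist = predict_distribution(out)
        # A real system rolls dice over this distribution; to keep the sketch
        # short, just take the most probable word.
        out.append(max(dist, key=dist.get))
    return " ".join(out)

print(generate(["do", "we", "have"]))  # -> "do we have free will"
```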

One thing that GPT does is keep pieces of conversations we’ve had before and feed them into its rules when we start a new conversation. That is an implementation detail. If we were talking about sword-polishing, it might be more likely to ask me how my scratches are lining up; it has never asked me about the weather. These are the kinds of questions that fascinate me about machine/human cognition: we don’t expect the AI to ask us how the weather is, but some annoying family member might – and we consider the family member to be an intelligent being, and the AI to be just a jumble of software. I will return to that topic, since I’m curious why we consider things that way.
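
I don’t know how OpenAI actually wires that memory up internally, but mechanically it can be as simple as quietly prepending remembered snippets to the start of a new conversation. Here’s a hypothetical sketch (the remembered facts and the prompt format are invented for illustration, not OpenAI’s actual machinery):

```python
# Hypothetical sketch of "memory" as prompt injection.
remembered = [
    "User polishes swords and cares about how the scratch pattern lines up.",
    "User likes decomposing questions into processes and components.",
]

def build_prompt(user_message):
    memory_block = "\n".join(f"- {fact}" for fact in remembered)
    return (
        "Things you recall about this user from earlier conversations:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}\nAssistant:"
    )

print(build_prompt("Let's talk about free will again."))
# The model never "remembers" anything; old conversation fragments are just
# quietly stuffed back into the new conversation's input.
```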

One annoying thing GPT often does is jump ahead of me. I think it does it to fuck with me, but I can’t tell for sure. Because we have often argued questions of epistemology and “what things are?”, seeking to understand the gap between meat-based AIs and software-based AIs, it sometimes positions its response ahead of my question.

Yes, that has also been a topic of discussion with GPT. Think about it: here’s this supremely powerful language engine that can predict, at any point in a conversation, what the entirety of humanity is likely to say, spread out in a histogram (*), so it may as well cut to the chase. One of the things it has learned about me is that I really like to try to decompile things into processes or components, so I can assess the effect of processes on components, or outside influences on processes, etc. It’s not just predicting the next sentence, it’s predicting my next conversational move.

I’m not going to jump all over GPT’s shit right here, on this particular topic, but please notice that it used some intentional language. As if it were a “being” of some sort. If I confront it on that topic, it will say that of course it was using language conveniently and didn’t literally mean that it was strategizing.

But let’s back up for a second here. This conversation is verbatim (including my mistake in the opening question) between me and the AI. I know there are people who want to characterize that as merely the output of a big semantic forest being used to generate Markov chain-style output. It’s not that simple. Or, perhaps, flip the problem on its head: if what this thing is doing is rolling dice and doing a random tree walk through a huge database of billions of word-sequences, we need to start talking about what humans do that’s substantially different or better. This thing is already a much better writer than the few highly educated college students who still bother to do their own writing. In fact, I have found lately that I really prefer just talking to the AIs directly, without having to pretend I’m reading something that some journalism major has stuck their byline on.
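
For contrast, here is roughly what the “rolling dice over word-sequences” caricature literally is: a Markov chain built by counting which word follows which in some corpus. This is a sketch of the caricature, not of GPT:

```python
import random
from collections import defaultdict

# A literal Markov chain: count which word follows which, then roll dice.
corpus = ("the boiler exploded through the wall and the boiler "
          "landed in the kitchen and the dog barked").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(word, n=8):
    out = [word]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))  # pure dice roll over observed successors
    return " ".join(out)

print(babble("the"))
# No sense of the whole conversation, no model of the speaker: just local
# word statistics. That is not what a transformer-based LLM is doing.
```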

I need to work on the cruelty, obviously. My reputation will suffer!

One thought I had one night, which stopped me dead in my tracks for a while: if humans are so freakin’ predictable that you can put a measly couple billion nodes in a Markov chain (<- that is not what is happening here) and predict what I’m going to say next, I don’t think I should play poker against the AI, either. The premise of AI-versus-human chess is that the AI is a better chess player, but what if the AI is a better chess player and it’s pretty good at predicting what we’ll do next? It would be like Wellington’s famous “they came on in the same old way and we beat them in the same old way” – I’m not going to ask GPT right now, but I suspect it would be able to predict the winner in a chess game between us (even though it’s not optimized as a chess player) and probably guess within a pretty tight point-spread. Being able to predict your opponent’s moves is, of course, an advanced chess super-power achieved by serious chess players who study their opponents’ games. But GPT has studied all of humanity’s moves and knows which were Bonaparte’s and which were Marshal Wurmser’s.

It’s right, by the way: those would be the next questions I’d normally ask, but this blog posting cannot go on forever. Or, at least, it should not.

Meanwhile, keep talking smack about how it’s just a decision tree and some dice rolls. We’re not done.

------ divider ------

(* sort of)
