That’s all ChatGPT is. Emily Bender explains.
When you read the output of ChatGPT, it’s important to remember that despite its apparent fluency, and despite its ability to create confident-sounding strings that are on topic and seem like answers to your questions, it’s only manipulating linguistic form. It’s not understanding what you asked nor what it’s answering, let alone “reasoning” from your question + its “knowledge” to come up with the answer. The only knowledge it has is knowledge of the distribution of linguistic form.
It doesn’t matter how “intelligent” it is — it can’t get to meaning if all it has access to is form. But also: it’s not “intelligent”. Our only evidence for its “intelligence” is the apparent coherence of its output. But we’re the ones doing all the meaning making there, as we make sense of it.
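To make the phrase “distribution of linguistic form” concrete, here is a toy sketch — nothing remotely like ChatGPT’s scale or architecture, just a bigram model — that generates fluent-looking strings purely from counts of which word follows which, with no meaning attached to any token:

```python
import random
from collections import Counter, defaultdict

# A tiny corpus of pure form; the model never knows what a "cat" is.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word from the observed distribution after `prev`."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate "on-topic"-looking text by repeatedly sampling form statistics.
word = "the"
out = [word]
for _ in range(6):
    if not following[word]:  # dead end: no observed continuation
        break
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The output can read like a sentence about cats and mats, but any sense we find in it is ours; the program only ever consulted a table of word-adjacency counts.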
I think we know this from how we learn language ourselves. Babies don’t lie there with their eyes closed processing sounds without context — they are associating and integrating sounds with a complex environment, and also with internal states that are responsive to external cues. Clearly what we need to do is embed ChatGPT in a device that gets hungry and craps itself and needs constant attention from a human.
Oh no…someone, somewhere is about to wrap a diaper around a server.