So…how’s Xitter doing nowadays?


It’s just getting better and better.

Last week, Musk said that “all” X Premium Plus subscribers would get access to “Grok,” a “rebellious” ChatGPT competitor with “fewer guardrails” that Musk has said was trained on Twitter’s own data. Microsoft once tried something similar, creating the world’s most racist chatbot in less than 24 hours back in 2016.

Musk outright lied, saying Grok is “being opened up slowly to Premium+ users.” He likely made that claim because a popular account posted that Grok was a feature of Premium+ subscriptions, only to be met with a community note saying that “most users with X Premium+ still lack access to Grok,” despite Musk posting two days beforehand that you should “subscribe to Premium+ for no ads and access to our Grok AI.”

I am not at all interested in yet another chatbot, especially not one trained on Xitter content, and I’m never going to be a Premium+ subscriber, but I was entertained by this idea:

In the event that Grok is truly trained on Twitter’s posts (after all, this is an Elon Musk product), it will become what Jathan Sadowski calls a “Habsburg AI,” a “system that is so heavily trained on the outputs of other generative AI’s that it becomes an inbred mutant, likely with exaggerated, grotesque features.”

I, for one, look forward to the hideous, inbred, mutant essays that will be unleashed on the internet by this development. They can’t be worse than what mere humans can generate.

Comments

  1. Akira MacKenzie says

    “Grok?” Really?

    There isn’t an original thought in this man’s head, is there? However, it stands to reason that he’d be a fan of that old fascist, Heinlein.

  2. Reginald Selkirk says

    AI costs a lot of money. It takes a lot of computer time to run an AI, let alone to train it. And since X is already hurting for cash, this could increase the pressure.

  3. gijoel says

    If Grok goes rogue does it become a better person and try to make the world a better place?

  4. outis says

    Whoa, this may actually be interesting, in a horror-movie kind of way.
    In this (quite informative) article:
    https://www.theguardian.com/artanddesign/2023/dec/05/wizard-of-ai-artificial-intelligence-alan-warburton-dangers-film
    one of the commenters raises the same point: AIs get their “training” by trawling the net, scraping all kinds of human-generated content (and of course never paying for it). This very tactic may now be turning against them, as there’s more and more AI-generated content going around, growing by the second. So AIs are also feeding on their own stuff, spawning a sort of second-gen content, which will in its turn give horrid birth to third, fourth and so on.
    Personally, I lack the mental equipment to imagine how this particular explosion of suck is going to end. Talk about by-blows…
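    For the curious, here is a toy numerical sketch of that recursive decay (a hypothetical Python illustration, not anything from the article): fit a Gaussian to samples drawn from the previous generation’s fit, trim the tails a little each round (generative models underweight rare data), and watch the variance collapse:

        # Toy "model collapse" demo: each generation trains on the previous
        # generation's output; rare data in the tails is gradually lost.
        import random
        import statistics

        mu, sigma = 0.0, 1.0  # generation 0: the original human-made data
        for gen in range(1, 11):
            draws = [random.gauss(mu, sigma) for _ in range(1000)]
            kept = [x for x in draws if abs(x - mu) < 2 * sigma]  # tails dropped
            mu, sigma = statistics.mean(kept), statistics.stdev(kept)
            print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

    Each generation’s sigma shrinks by roughly 12%, so by generation 10 the “population” has lost most of its diversity: the numerical version of an inbred mutant.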

  5. Robbo says

    I think AI training on other AI-generated content is how Skynet gained sentience. We all know how that ends…

  6. birgerjohansson says

    I think it will be messier than Skynet.
    Think Azathoth and Nyarlathotep, and the crawling things in the Laundry novels.

  7. robro says

    Grok will probably be loaded with AI hallucinations.

    outis @ #4 — Not all AIs are “trained” by trawling the net. From what I understand, humans build a training set from a selection of the target data, which the AI uses to build the model; then the AI trawls whatever the full data set is (which could be the whole internet but may be a more constrained set of inputs, e.g. customer comments). Next is the HITL (humans-in-the-loop) phase, where some selection of the results is reviewed, validated, and annotated for changes by humans. This would normally be an iterative process.
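    To make that loop concrete, here is a minimal sketch (hypothetical names and a deliberately dumb stand-in “model,” not any real library’s API): humans curate a seed set, a model is fit, it labels the full corpus, and a sampled slice of its output goes back to humans for correction before retraining:

        # Hypothetical HITL (humans-in-the-loop) training loop, in miniature.
        import random

        def fit_model(examples):
            # Stand-in "model": always predicts the most common label seen so far.
            labels = [label for _, label in examples]
            most_common = max(set(labels), key=labels.count)
            return lambda text: most_common

        def human_review(pairs):
            # Stand-in for the HITL phase: humans validate and correct labels.
            return [(text, "spam" if "buy" in text else "ok") for text, _ in pairs]

        seed = [("buy now", "spam"), ("hello", "ok"), ("meeting at 3", "ok")]
        corpus = ["buy cheap pills", "lunch?", "buy followers", "status report"]

        model = fit_model(seed)  # humans built the seed set
        for _ in range(3):       # label, sample, review, retrain
            labeled = [(text, model(text)) for text in corpus]
            seed += human_review(random.sample(labeled, 2))
            model = fit_model(seed)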

  8. tacitus says

    God, the conspiracy theorists were already freaking out about the chats they had with ChatGPT which supposedly confirmed their suspicions about whatever theories had infested their paranoid brains. Can’t even imagine the scale of the mess a Musk bot is going to make of the conspiracy theorists’ minds.

  9. wzrd1 says

    Ironic, BTW, that I selected a movie at random before PZ posted this entry: “The Artifice Girl”.
    One thing I always loathed about the movie was that the creator of the AI in the film so willingly enslaved an intelligence he knew to be sentient.

  10. shermanj says

    O.K. one last comment before I morph into lurker mode.
    We at TAIA are convinced the brain of the elongated muskrat is a Gordian knot. After all, no one can unscramble it, and not even an AI can understand it. “Grok” is a term that I think R. Heinlein created in ‘Stranger in a Strange Land’. So, it would seem that if the elongated muskrat can’t buy a clever new term, he has no imagination or creativity at all. Goodbye, folks.

  11. shermanj says

    @1 Akira MacKenzie already mentioned Grok’s origin.
    I apologize. I didn’t read the comments before I posted. Best wishes to Akira and the other civilized commenters here.
    So, now, at last, I must be gone.

  12. lanir says

    Grok means to have a full understanding of something: knowing all the ins and outs of whatever topic you’re talking about. Naming any current-gen chatbot that is like finding a new species of moth with an eye pattern on its wings and naming it “perfect vision.”

    So yeah, sounds like something in Mr. X’s lane. Even if he hadn’t personally decided on this name, you’d have a hard time convincing me he hadn’t. :)

  13. chrislawson says

    Just a few days ago, ChatGPT was giving me an outright wrong answer to a simple question that I didn’t even ask for (it’s an automatic component of any Windows search, which I occasionally click on by accident). I was trying to compare the size of the contiguous US with Australia, and ChatGPT confidently told me that the US is much larger, because it compared the diagonal NY-SF distance to the almost latitudinal Sydney-Perth distance. In fact, the contiguous US is 8.1 million sq km and Australia is 7.7 million, so they are similar. Wikipedia directly states that the contiguous US is ‘comparable in size to Australia’, and similar results came from the first five web search hits. Which means ChatGPT isn’t even scraping (or is badly interpreting / badly weighting) the obvious sources that turn up in Bing’s own top web searches. Really, how stupidly has ChatGPT been trained?

  14. chrislawson says

    Clarification: I did ask for the size comparison. I did not ask for ChatGPT’s response.

  15. John Morales says

    chrislawson,

    Clarification: I did ask for the size comparison. I did not ask for ChatGPT’s response.

    Either quite droll or quite naive.

    You asked ChatGPT a question, but you did not expect a ChatGPT response? ;)

    (It’s a chatbot)

  16. John Morales says

    Really, how stupidly has ChatGPT been trained?

    Clearly, well enough that you were surprised you got a ChatGPT response when you asked it a question.

  17. John Morales says

    [Obs, they want to monetise it, and you don’t do that by freely offering anything more than a chatbot.
    The chatbot shows the potential, but it’s what used to be called ‘crippleware’.]

  18. chrislawson says

    John, I entered the query into the Windows search bar, which returned the usual web hits plus an unrequested ChatGPT response near the top. That is, I did not seek out ChatGPT but got an answer from it anyway.

  19. wzrd1 says

    shermanj @ #10, how can that which does not exist be formed into a Gordian knot? A corollary: how can that which does not exist be cut by Alexander’s sword?

    As for ChatGPT’s unsolicited “informational answer”, the bot reminds me of a rather ancient IT term. GIGO.
    I’ve theorized in another of PZ’s posts that the bot, when it apparently doesn’t arrive at a valid answer within a brief microsearch, confabulates a response, much akin to what occurs in dementia patients, and which the developers erroneously refer to as hallucinating.
    In organic brains, hallucinations are errors that are chemical in nature, typically caused by hallucinogens or fevers producing erroneous processing. Confabulation is the brain’s difference engine finding a memory record but, being incapable of accessing the content due to damage, creating content to fill the gap. That can work for routine sensory data parsing, but for discrete events and data it can give oddly detailed, utterly unintentionally invented information.
    The closest one can get in a binary computer to an actual hallucination is to operate one’s processing equipment at indeterminate power voltages, such as around half the voltage the logic requires, producing unreliable results.
    Confabulation should be, and traditionally was, utterly impossible and unacceptable, since the results wanted are ones that are factual and make fucking sense, not bullshit that’s invented at random. Instead, the bot confidently reports as fact that square pegs do indeed fit perfectly within round holes.
    And reality reports that that can only be true with a large enough hammer, and even then it still doesn’t properly fit.

  20. Kagehi says

    Of course, there is no irony like the irony that Musk is one of the loonies of the “we can quantify how much our charity helps people not by messy things like how much better off people actually are, but by the specific number of people we gave shoes to, which we think this week is the ‘key problem’ that will solve all of poverty” pseudo-altruists, who not that long ago went, “Hmm. Global warming is complicated and hard to solve… I know, we should start spending all the money people donate to us to fight the looming AI apocalypse instead! Which is, like, totally a real problem, unlike massive storms, coastlines disappearing, people starving on a too-hot planet, and all the other ‘less important’ stuff we were pretending to fight before we found out it was hard and might hurt some of our own businesses to fix.” So, in character, rather than worry about “dangerous AI”, he is intentionally creating it… not sure if I should be angry or just laugh my ass off.

  21. wzrd1 says

    Kagehi @ #20, well, this is the very same Lord God Musk who has just ordained that a company he has no ownership of must fire its CEO for not advertising on his antisocial media platform, and must begin running Nazi propaganda next to their mandatory advertisements, required by his orders.
    Or he’ll smite someone or something.
    Or stomp his feet, pout and yell profanity again.
    Oh, while telling those not wanting to obey his Lordly decrees to fuck off.

    Fuckwit can’t make up his mind: should they fuck off, or now pay a God Musk tax and, hence, I dunno, fuck on?
    Oh wait, gods can’t make up their minds. Hard to make up that which does not exist.

  22. wzrd1 says

    Upon further consideration, I strongly and vehemently object to calling him Muskrat.
    He’s obviously a Muskbrat, whose logic redefines ‘circular’ to the point where geometry is now obsolete.