You should really check this out


A new commenter, acsglster, had a wonderful idea. In response to my earlier post about ‘stupid Muck tricks’, they submitted the following prompt to Grok 3 (Musk’s chatbot):

Elon Musk sent an email to around 3 million federal government employees asking them to respond with 5 bullet points of things they did last week. He proposed to feed the responses to an LLM (probably you) with a view to some kind of activity-based analysis of who to retain and who to fire. What is the feasibility of such an idea?

The response Grok 3 came back with is something to behold. Check it out.

Comments

  1. Dunc says

    I agree that it’s a good response, on the face of it, but like so much of the discussion around this, I think it’s based on a fundamental error in assuming good faith. The question is not “can this reasonably be made to work”, since it’s fairly trivially obvious that the answer to that is “no”. The real question is “given that this obviously can’t work as advertised, why are they doing it anyway?” I think Brian Merchant has a pretty good handle on that here: What’s really behind Elon Musk and DOGE’s AI schemes.

    There is no AI system in existence that can analyze a single email and credibly determine whether someone’s job is necessary or not. It’s a fiction, and one that, given just a little bit of scrutiny, collapses under the weight of reality.

    […]

    But, stupid or not, it’s a powerful fiction. It joins the echelon of other AI projects helmed by Musk and his cohort, like the “AI-first strategy” DOGE is implementing, the government chatbots they’re building, and the systems designed to automatically remove pronouns and DEI verbiage from government websites. The very idea that DOGE’s AI can streamline and automate the government is already being used to justify the hollowing out and the reshaping of the federal workforce.

    […]

    “There’s been a long historical pattern in many industries of using new technologies not just to automate tasks but to simultaneously insulate management from having to take responsibility for their decisions, particularly their anti-labor ones,” Mar Hicks, a historian of technology at the School of Data Science at the University of Virginia, tells me.

    Hicks continues:

    “In fact, there are examples where automation has been brought in specifically for this purpose even when it’s clear that automation isn’t working as expected or intended, or isn’t fit for the purpose. But by the time workers and ordinary people have fought it out and gotten the faulty systems out of the way, it’s too late: the damage has been done—or perhaps more accurately the systems have provided cover for management in exactly the way intended.”

    [Italics original, bold mine]

    I’d recommend reading the whole piece; it’s not that long.

    The important things to grasp here are that (a) these people are acting in bad faith, and (b) they’re using that bad faith to provide cover for a radical reshaping of the apparatus of the state -- and one which will be more-or-less impossible to reverse, even if the courts do eventually rule against them.

  2. birgerjohansson says

    I know we cannot fire Musk or Trump, but -borrowing an expression from Trump- can we semi-fire them? After all, laws and rules are of academic interest these days.

  3. OverlappingMagisteria says

    Huh… sounds like AI is better at being in charge of DOGE than Musk is. I think we’ve found a position that can be eliminated.
