A new commenter, acsglster, had a wonderful idea. In response to my earlier post about ‘stupid Muck tricks’, they submitted the following prompt to Grok 3 (Musk’s chatbot):
Elon Musk sent an email to around 3 million federal government employees asking them to respond with 5 bullet points of things they did last week. He proposed to feed the responses to a LLM (probably you) with a view to some kind of activity-based analysis of who to retain and who to fire. What is the feasibility of such an idea?
The response Grok 3 came back with is something to behold. Check it out.
I agree that it’s a good response, on the face of it, but like so much of the discussion around this, I think it’s based on a fundamental error in assuming good faith. The question is not “can this reasonably be made to work”, since it’s fairly trivially obvious that the answer to that is “no”. The real question is “given that this obviously can’t work as advertised, why are they doing it anyway?” I think Brian Merchant has a pretty good handle on that here: What’s really behind Elon Musk and DOGE’s AI schemes.
I’d recommend reading the whole piece; it’s not that long.
The important things to grasp here are that (a) these people are acting in bad faith, and (b) they’re using that bad faith as cover for a radical reshaping of the apparatus of the state -- one which will be more or less impossible to reverse, even if the courts do eventually rule against them.
I know we cannot fire Musk or Trump, but, borrowing an expression from Trump, can we semi-fire them? After all, laws and rules are of academic interest these days.
Huh… sounds like AI is better at being in charge of DOGE than Musk is. I think we’ve found a position that can be eliminated.