Ars Technica has a list of the worst features of the internet. It’s depressing how much of the stuff mentioned is just growing and taking over everything. Sadly, Google gets mentioned three times, for their voice assistant, search, and the incorporation of AI.
I encountered a terrible example of AI assistance. Here’s some AI advice on hygiene.
Does the AI not understand the words “front” and “back”, or is it very confused about the location of the urethra and anus?
Or try this one.
Hasn’t improved since ratdickgate, I see.
That is shitty advice.
AI does not understand the meaning of words. Period. It manipulates tokens according to a statistical model.
It also does not understand locations, anatomy, or the concept of 3 dimensional space. Or indeed anything else that cannot be reduced to a statistical model of how language appears in its training data. It does not know what a urethra is, or an anus, or anything at all about how these things might be related to one another, either conceptually or spatially.
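For anyone who wants to see what “manipulates tokens according to a statistical model” means when stripped to the bone, here is a toy sketch of my own (not how any production LLM actually works; real systems use enormous neural networks, but the generation step is still “pick the next token by probability,” with no representation of anatomy, space, or truth anywhere in it):

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in some training text.
# Real LLMs replace these counts with a huge neural network, but the output
# step is the same idea: sample the next token from a probability
# distribution. Nothing in here knows what a urethra or an anus is.
training_text = "wipe front to back wipe back to front never wipe front to front".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Sample the next token purely by how often it followed `prev` in training."""
    tokens, weights = zip(*counts[prev].items())
    return random.choices(tokens, weights=weights)[0]

token, output = "wipe", ["wipe"]
for _ in range(5):
    token = next_token(token)
    output.append(token)

# Prints something that *looks* like advice; whether it is good advice is luck.
print(" ".join(output))
```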
It appears that somebody clicked the negative feedback, filed a report, and it’s fixed now.
So I can’t see what website was pushing the bad information that got pulled into the search result.
The source links for each snippet let you know which high ranked info source is pushing the bad information so they can be called out and removed from future search results.
I like google assistant; I can say “turn off the lights” when I go to bed.
But they track everywhere your phone goes.
They see what you’re reading online. When PZ mentions someone I often google that person, and google fills in the rest after three letters.
Search history: if I type “USS V” it knows I’m looking for Vulcan or Vincennes. Just now I tried “Iran Air” and Flight 655 came up first. Flight 655 is why I’ve searched for Vincennes.
I like google assistant, too, and I also use it to control lights. Unfortunately, it’s gotten a bit flaky. Sometimes it turns on the wrong lights, and while I used to rely on the timer function for a gentle alarm — lights ramping up in the morning rather than a loud noise — sometimes it forgets. Not what you want in your alarm clock.
I almost copied and pasted the bit of the quran on this but decided that someone might get upset.
When has that stopped anyone?
Sorry PZ, but I’ve come too near to being clobbered in the past.
https://www.iium.edu.my/deed/lawbase/risalah_maliki/book04.html#:~:text=4.1d.&text=Then%20he%20wipes%20any%20impurity,hand%20will%20not%20be%20impure.
If you like
And now Elon Musk and his minions are purportedly using Musk’s version of AI to reconfigure parts of The Payment Automation Manager and Secure Payment System at the Bureau of the Fiscal Service.
https://www.wonkette.com/p/hey-anybody-going-to-stop-elon-musk
As for who is paying …
https://rollcall.com/2025/02/04/white-house-opens-funding-spigot-for-doge-expenses/
Quoted text above is cross-posted from The Infinite Thread. Thanks to Sky Captain for some of the references above.
LLMs have no concept of truth, or of anything, really. They’re plausibility engines. They can generate plausible responses to input.
Yep. Or, as I put it: LLMs are a two-year-old with unlimited access to nearly all human data, no concept of right or wrong, truth and lies, or anything else, who can’t be held accountable for making a mistake, because they don’t understand what being mistaken is. Ask one something that isn’t so statistically cut-and-dried that it has a clear answer, such as, “What is dark matter?”, and it is just as likely to give the vague, “No one really knows, but it does these things” (possibly tacking on that it tastes like spearmint, if its probability engine doesn’t think the existing answer is enough for some reason) as it is to claim that “it is made up of plastic from Zerg dolls, and influences gravity via reverse leptons, which are produced in a blender while beating eggs.” You really have no idea what sort of insane crap it will come up with when its “plausibilities” are insufficient to give out a solid answer. At least the two-year-old KNOWS they are making it up, because they have no idea what the answer actually is.
I mean, I suppose, if you had a “front end” for these things that did either comparison with actual factual data (probably not feasible) or additional probability checks, to verify whether the crazy it was outputting conformed to its own data set’s statistical framework in anything like a sensible way… maybe? But, to put it in terms of our own cognition, we “likely” run thousands of these sorts of “plausibility”-generating processes, then filter them through a mess of additional stages that ask, “Do these results actually conform to each other?” Even in the worst cases we might end up with, “Well, it could be either A or B, but my long-held understanding of how the world works means A is more likely.” In the case of, say, evolution, almost all of us will run through this and come up with A, but a creationist will end up with B. We might think, “What if the world actually worked like that? Then B might make sense,” but even then we would figuratively smack ourselves in the head, go, “Nope, that is just stupid,” and say A anyway.
Now, take someone on drugs, or with certain kinds of brain damage, and give them the same situation, and… what you get is a dozen LLMs all screaming in that person’s head, “This is true!”, “No, this is!”, “No, it’s really this…!”, and, if they are lucky, some subset of this madness still tells them, “Uh, well, the version that most conforms to reality is choice 5, so we should do that.” Except when it doesn’t, and such people do or say something that conforms to one of the other “ideas” they had about what should be the correct thing.
So, from that perspective, it’s not a two-year-old but a full-on, “lives in their own world” schizophrenic, who has lost nearly all grasp of what the real world is and what is purely the fantasies running through their perception of that reality at any given moment. I.e., the sort, kind of like a family friend, who in their few “in this reality” moments would be an utter genius, but the rest of the time thinks they are a house plant, or Ming the Merciless, or a field mouse, or who the F knows at any given moment. You can’t have a grasp of reality if a) you don’t “exist” in reality, and thus have no constraints working to limit what your brain “imagines” is going on based on known data, and b) everything you think is a stream of statistical gibberish, with no filters, checks and balances, or capacity to double-check any of it against the results of said output. Maybe… if an LLM could know the outcome of its result, and had some sort of feedback system by which, when it turned out to be wrong, there was an actual cost to the LLM for doing so… But since no such feedback exists, and it thus can’t comprehend consequences, even if it had the capacity to “learn” how not to make such mistakes, which is dubious at best, there is no way to give it feedback that will actually do so.
Now, LLM-like AIs that are specifically designed to do narrow tasks, such as image generation, sort of do, in that they get “tuned” to remove behavior that produces failed outputs, and thus improve over time, even while they still don’t comprehend what they are doing or why. But those are “SFAI”, specific-function AI, not “GAI”, general AI. And all these nutbars pushing the use of LLMs have convinced themselves, in utter ignorance of how and why the human brain self-checks itself and filters garbage, that even if LLMs have only a barely workable “framework” for the most baseline function, just throwing more and more data at them, and manually tuning them, will eventually produce something that knows it is thinking, knows why it is doing so, and thus knows when it is a) being asked to make things up vs. give accurate info, or even what the F either of those things is, or why they matter.
In the meantime we will see increased use of LLMs to help cops write police reports, where the notes are, “Went to blah, saw blah, gave a warning,” and randomly, for no reason, this will get turned into, “Went to ‘wrong address’, saw ‘a suspect who has been mentioned in a few recent reports, but not the person in the notes’, ‘chased them’, ‘got shot at’, and warned ‘the neighbors should the suspect come back’.” Because… heh, all that is stuff other people have been putting into a lot of the reports it was trained on, and it got the basics right: they went some place, saw something, and gave a warning. It’s not like the exact details matter, right? And it’s not at all going to cause issues for anyone involved, since no one is likely to read it and act on it, right? Sigh…
Something more or less like this already happened. The cops got suspended, I think, probably not fired, and the “use of the tools” was going to be “further studied”. But of course, less than a month ago, and weeks after that prior screw-up, the “local” PD for my city told the local paper, “We are looking into using AI to help us (mis-)write reports.” :head-desk:
If you want to turn off the AI assistant in Google, just add the word fuck to your question. Works like a charm.
Interestingly, I first tested this with PZ’s search phrase “how do you even wipe front to back”, and the response you get for that is “An AI overview is not available for this search”.
Oh noes. Even the AI is being censored…
I think the appropriate word here is “unconstitutional” bureaucracy. All bureaucracies are unelected; not all of them are arrogating powers that the constitution duly grants to Congress.
As for “DOGE”‘s putative budget, I’m not sure it even matters. Musk has thrust both arms down to the elbows into the Treasury’s cookie jar with no evident checks or balances or oversight! He’s gone from a quarter-trillionaire to having de facto control over at least ten times that much money, pretty much overnight. It’s starting to look like he might not be satisfied until he owns the planet. Everyone thought he was ambitious to own Mars … turns out that was misdirection on his part as to which planet he actually schemed to conquer.
Also, calling it a “bureaucracy” is more than a stretch. A bureaucracy is a bunch of people in suits and ties in a cubicle farm signing forms in triplicate, with a formal organizational structure and all the usual institutional structures both inside and supporting them. This is a few weirdos in trench coats barging into places and grabbing fistfuls of whatever’s not nailed down. Where I come from, the term for such a group is not “bureaucracy”, it is “heist crew”, with Trump being the “inside man” who disabled the building’s alarms and opened the loading dock door for them to slip in.
AI=artificial idiocy
Not that generative AI “knows” anything…it doesn’t…but there are ways to encode relationships which a gen AI can…maybe…use to produce more accurate and refined results.
One technique for doing this is called “graph RAG” where RAG is Retrieval-Augmented Generation (waves hand), and “graph” is a knowledge graph of structured information about some domain where relationships between entities are represented as edges which have a type. The graph contributes to the “augmented retrieval” process…somehow or another.
The fundamental graph is a taxonomy, which some of you are familiar with. In a taxonomy, the hierarchical structure represents a type of relationship between things, sometimes referred to as “parent-child”. It’s a one-to-many relationship whose type is determined by the kind of taxonomy it is. Any entity in the graph can have multiple relationships, not just those defined by a taxonomy.
At least that’s the theory. We’ll see if it really matters… and works… in the next year or so. There are some information domains where hallucinations and toxic content that might randomly appear in the generated output are severely frowned upon by execs, lawyers and marketing people, as you can well imagine. Yet gen AI promises to automate delivering information to people (aka customers) that is more specific to a person’s context, so there’s a lot of interest in using it while eliminating hallucinations and toxic content.
Depending on your gen AI and its access to the structured data in the graph, you could relate things like the urethra, the anus, what they do, and the social customs associated with them. It’s just a lot of work. I’m sure better toilet training is easier.
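To make that concrete, here is a minimal sketch of the “graph” half of graph RAG, using a hand-built toy graph (a real system would sit on a graph database with a retrieval pipeline on top; the entities and relation types here are made-up examples, not any product’s actual schema):

```python
# Toy knowledge graph with typed edges, including "is_a" parent-child
# taxonomy edges. In graph RAG, facts retrieved from such a graph get
# injected into the prompt so the generator is (hopefully) constrained
# by them instead of free-associating.
from collections import defaultdict

# Each edge: (subject, relation_type, object)
edges = [
    ("urethra", "is_a", "body part"),                  # taxonomy edge
    ("anus", "is_a", "body part"),                     # taxonomy edge
    ("urethra", "located", "front of perineum"),
    ("anus", "located", "back of perineum"),
    ("wiping front to back", "moves_away_from", "urethra"),
    ("wiping front to back", "reduces_risk_of", "urinary tract infection"),
]

# Index edges by subject for quick lookup.
by_subject = defaultdict(list)
for subj, rel, obj in edges:
    by_subject[subj].append((rel, obj))

def retrieve_facts(entity):
    """Return the typed relationships for an entity as plain sentences."""
    return [f"{entity} {rel.replace('_', ' ')} {obj}" for rel, obj in by_subject[entity]]

question = "How should you wipe, and why?"
context = retrieve_facts("urethra") + retrieve_facts("wiping front to back")
prompt = "Answer using only these facts:\n- " + "\n- ".join(context) + f"\n\nQuestion: {question}"
print(prompt)
```

The “is_a” edges are the parent-child taxonomy part; “located” and the rest are the additional relationship types an entity can carry beyond the taxonomy.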
And yeah, I’m over my head and way more involved in this than I could ever imagine. I would rather be playing guitar and singing.
I asked my trusty search engine (not Google) if water will freeze at 27 degrees. The first thing I noticed is that before I finished typing, I got a suggested query completion of “will water freeze at 27 degrees.” The completion suggestion is some form of ML/AI, and it’s surprisingly specific, which might be because a lot of people have asked that exact question.
Second, the answer is quite different from the one in the screen grab:
JimB @13: or just add -ai at the end of your search term/phrase
With the freezing of water, the AI has not associated freezing with colder, smaller numbers with colder temperatures, or the direction of the freezing transition.
That demonstrates the flaw of a language model without a conceptualisation or fact-relationship model.