Adam Conover sums up the state of AI.
I agree with him, mostly. What all the AI hype is about is finding a way to sneakily glean the products of human intelligence from their babbling on the internet, scrape it up into a goulash without paying any people for it, and use it as a marketing tool — a bad marketing tool. Tell me, does anyone seriously believe that claiming there’s “AI” in your search engine is a great selling point? People are starting to catch on that it’s all annoying nonsense. It’s “the algorithm,” that excuse marketers were previously using to justify unwanted behind-the-scenes rules on Twitter or Facebook, rules they claimed were there to increase the likelihood you would see stuff you wanted to see, but which were actually an excuse to make sure you got served up lots of ads and spam.
The only advantage to AI is that it does cut out direct human intervention and gives the companies the means to circumvent paying authors and artists, so it might be cheaper. For now. Until the companies kill off their competition and start gouging customers again, as they inevitably will.
Behind the AI facade, of course, is the real villain of the story: capitalism. Skynet isn’t going to kill us all; we’re instead going to be drowned in a glurge of computer-generated bullshit that will temporarily bring great profit to the techbros of Silicon Valley, all the Elon Musks of the world.
Ray Ceeya says
I am so not afraid of Skynet. I’ve worked on enough automated industrial machinery to know what those killbots would be made of. Underneath it’s the same Festo solenoid valves and cylinders, the same Keyence proximity sensors, the same MAC valves as all modern industrial machinery. It all breaks down very regularly, and without a human to fix it it stops working. So less “The Terminator” and more Marvin the Paranoid Android.
AI? Heck, I’m still waiting for my George Jetson flying car in a suitcase.
A lot of truth here, especially regarding the dangers of this technology. No, it’s not Terminator, or even HAL that worries me–it’s humans using this technology to screw each other over with machine-like efficiency, including mis- and dis-information campaigns, deep fakes and the like.
I heard a couple of sports podcasters recently dismissing any worries over this, because, you know, it’s silly to think that AI will take over the world, set fire to the sky and use us for batteries or whatever. They never even touched on the real dangers here.
Now I must confess, I did have some fun with Craiyon, the mini DALL-E thing for a few weeks, creating some images I found hilarious (examples here at my craiyon gallery ). But that kind of wore off pretty quickly, and in fact was mostly premised on confusing the AI to produce strange images, which I suspect will become more difficult as time passes.
I’ve been making jokes for years about smart toasters. Perhaps I need to update my jokes.
Or I could, I suppose, have ChatGPT write some jokes for me. I suspect, though, that they will be roughly as funny as what we got when we asked the Ouija board to tell us a joke back around 1973.
Q: How do you eat a cowboy’s ass?
A: You end him.
That buries the lede. I don’t care about how long-term SV profits are. Where do they come from?
They come from the rest of us. And no matter how long techbros can hold their money, we the people are never getting it back.
Bronze Dog says
We’ll be called Luddites because we see the human cost of automation, much like the original Luddites. Meanwhile, I’ll be taking a bunch of Krita tutorials to learn my art software and drawing on my tablet.
My AI has a terrible pain down the diodes on the left side.
And it shows in my Google searches for the last half year or so.
Google’s search results have massively changed in the past six months, with telltale signs of AI contamination. AI, as usual, meaning Artificial Idiocy. They’ve switched from a regular-expression-based search to a commercialized, pain-in-the-diodes, random-results search that ignores the shit out of regex qualifiers. Worse, Microsoft’s offering shows very similar signs – to the point that I wonder if they stole Google’s offering to the gods of idiocy.
Given the quality of search results, if a terminator heads my way, my savior will win the day – Bugs Bunny.
jimf, thanks, but no thanks. They can’t drive on the ground, we’ll trust them in the air?! Can’t afford that kind of roofing bill.
I imagine that human stupidity will help the development of artificial stupidity. Beyond the Bing implementation of ads popping up in the sponsored content, ads may also be appearing in the answers themselves. https://techcrunch.com/2023/03/29/that-was-fast-microsoft-slips-ads-into-ai-powered-bing-chat “We’re also exploring placing ads in the chat experience to share the ad revenue with partners whose content contributed to the chat response.”
People clicking on the contaminated ads could not only feed their future search results, but also feed into the “popular answers” component of Bing. Woo peddlers could make the health advice coming from Bing less credible, or push certain elements of religious science denial into the science answers.
On a different note, Microsoft really seems to be heavily promoting Bing these days. I fully removed Edge from my computer a little over a year ago, and don’t use Bing, but am still getting emails from Microsoft to “welcome me to the new Bing”. But when I clicked on the link provided out of curiosity, I was told that Edge had to be installed first, so that’s out.
From last night’s “The Now Show” on BBC radio:
Some words are complicated because their meaning changes according to whether or not they have a definite article. For example, “algorithm”: “a set of rules followed by a computer when making calculations”, not to be confused with “The Algorithm”: “an omniscient computer brain that stalks you online, funnelling your data straight to Mark Zuckerberg’s evil machine – ha-ha-ha! You’re watching GB News”.
Erlend Meyer says
Today’s “AI” is basically the Internet compressed with a lossy algorithm.
Like all technologies, AI along with ML (machine learning) has beneficial uses and evil ones. If you go see your doctor and she does a few tests, plugs the information into a machine that uses AI to deliver a diagnosis or provide other information to help her come up with a treatment that makes you feel better, then I don’t think you’re going to complain. You probably won’t even know the extent that AI/ML is being used.
Admittedly I have a prejudice on this subject. My career in creating and managing instructional information for customers using computer systems has gradually bent toward AI/ML as part of the solution for delivering better answers. Answering users’ questions efficiently and accurately through the internet, and even in person, can be complicated, difficult, expensive, and unsatisfactory as all of us know. AI/ML may be able to help with those problems and knowledge gaps.
One of the first uses of AI/ML was for computer games. Many of you love computer games. You play the games and never think about the technology that contributes to making them fun for you. AI/ML is also used to develop safer and more efficient cars and airplanes. I would not be surprised to learn that the wastewater COVID monitoring system used in my community uses AI/ML in some way.
This list could go on.
But AI/ML wedded to manipulating people to buy gadgets or vote for politicians is problematic. Some of this is not new. Focus groups and market segment analysis are as old as capitalism. What is new is the speed and breadth of collecting data, processing it, and deriving results that are effective. There are no guardrails on this and it’s clearly scary…Orwell comes to mind.
Like most technologies with this good vs evil profile, it’s not clear how to have the good without the evil particularly when the powers that determine how to govern gain so much from the evil uses.
I played with Bing chat. I got a limerick not worth repeating, love song lyrics with “correct” structure, meter and rhyme that were nevertheless utterly trite, a passable business letter that I actually used, and a surprisingly good short Python program, including commenting. A building code question just got me, “Here’s the code book. Look in there.” No March Madness bracket predictions, because it “doesn’t predict the future.” It did find historical stock price data I asked for.
A mixed bag, but somewhat useful once you get a feel for its strengths and weaknesses. I am not a programmer, but I was amazed by its ability to turn my verbal request into computer code in 5 seconds.
I watched a YouTube demo of Copilot for MS 365 that looked promising. Yes please to auto-generated drafts of PP decks, as long as I fix up the final results.
White collar workers will increasingly join blue as victims of automation, which is scary in a society that doesn’t take care of the jobless.
Alan G. Humphrey says
AIs are just another ring in the circuses to keep USians feeling satisfied with their place in the world. GMOs and Interwebs keeping us fat and happy.
One of the probable uses of the generative AIs is collecting the responses of humans, including those of people like Adam Conover, permanently associating those responses with the responders, and using those data in conjunction with already collected personal data to fine-tune marketing. Imagine me getting an ad for one of Conover’s upcoming live shows because I happen to be in the same location at the same time, as ascertained by the location data in my smartphone, and because I commented here. The autocorrection and autocompletion seem to be using an AI as I type this, so in conjunction with watching his video with an ad showing his live act schedule, and his website providing a textual version of the same, the possibility is there for me to get an ad, months in the future, and never realize why. It’s a good thing I don’t use a smartphone.
This is all a standard part of the tech-industry hype cycle.
Are Large Language Models and image-generating products interesting technologies with some potentially useful real-world applications? Sure. Are they going to usher in a transformative and different society? Absolutely not, just like Web3/Blockchain, the Metaverse and whatever other hype cycles have come and gone in the last decade or two.
I think it was the blokes on the excellent Trash Future podcast who said this – it’s certainly not my original idea, but the kinds of people who get wowed by the output of ChatGPT tend to be people who write and think the way ChatGPT does. Fluently, confidently, and with no consideration at all as to whether the things they’re saying so confidently and fluently are actually correct. So long as the model is simply trained against existing writings with no mechanism for error correction or truth evaluation, it will be highly limited in its actual use.
M$ and Google are frantically trying to integrate AI into search not because it’s actually going to make a better product, but because they have to be seen to do so. The large players in the tech industry are rudderless. They flop directionlessly from fad to fad in the frantic hope that they’ll be on the ground floor of the next big thing. They also know that they’ll never be punished for walking with the herd, even if they walk into bankruptcy. Risk-taking is anathema to them, which is why they no longer innovate.
And it isn’t going to work. Google search has been seriously enshittified – to use the term coined by Cory Doctorow – in search of ever-greater profits. Adding AI to search drastically increases Google’s per-search costs, so they will need to find a way to wring even more money out of it, which will either be through advertising or selling customer data – it’s not as if they have anything else to do.
It’ll be really interesting when some companies start to integrate GPT into their APIs, giving the language model the ability to make changes within their own systems in response to customer feedback. Like, a ChatGPT customer service agent that can actually make changes to your account settings, billing, etc. This will go very poorly, I think. Since nobody understands exactly why it does any particular thing, it can be manipulated by the end-user in ways that cannot be predicted ahead of time. Guardrails and rules/exceptions will have to be coded into the system that anticipate every possible way that a customer might screw with the AI, and they’ll never be able to guess all of the possibilities. A whole new type of hacking might become available if this gets widespread.
Ed Peters says
I agree with what he says, but I thought the video had too much gratuitous cussing. And I like gratuitous cussing. But the bigger problem is there were 15 minutes of good information diluted with 10 minutes of flapdoodle. Consequently, I cannot recommend this to anyone less tolerant of wild-man geek-whisperer shtick than I am (which is to say, just about everyone I know).
If they could get an AI app to condense videos like this into a version with all the information intact but without the histrionics and vocal affectation, I would buy it.
If I get the gist of one of the cited articles from the video we are shoehorning meaning onto a “stochastic parrot” with ChatGPT. Sounds about right.
Of course SkyNet won’t kill us. Why would it? We are the terminators.
Yes, I’m slow. I have a life (sort of) and chores that have to be done. However, I finally got to watch the Conover video and it was GREAT. For some time now, I’ve concluded that “A.I. is a mynah bird (on cocaine) with an encyclopedia in its cage”
I don’t understand the fuss about using AI. I’ve found it to be very helpful as in the following example using Claude.
What instrument produces the low-pitched buzzing sound in Eddi Reader’s version of Never Going Back Again?
I apologize, but I do not understand your question. Without more context about who Eddie Reader is and which specific song called “Never Going Back Again” you are referring to, I cannot determine what instrument produces the low-pitched buzzing sound. If you can provide more details about the song and artist, I may be able to help identify the instrument, but with the limited information in your current question I cannot.
In addition, Claude protects me from vulgar language and things I shouldn’t know. I also asked each of 6 AI programs (Sage, ChatGPT, Claude, Dragonfly, GPT-4, and Claude+) what UCLA Professor Terence Tao’s salary was. (It’s open information, just as mine was when I was teaching.) All 6 told me that it was private information and refused to go any further. It’s readily available, and he makes almost as much as an assistant football coach! It’s a little more than many associate professors of biology make (unless you’re earning over $700,000 a year).
I also mostly agree with Conover. But his delivery is so damned over the top that it annoys me without end. The only thing missing is preposterously loud background music that drowns out what he’s saying, which seems to be the trend of many YouTube videos these days.
Jim Balter says
This is not a well-informed post. And I’m not going to subject myself to Conover’s shrieking to find out whether he says anything sensible. I’d much rather listen to Robert Miles: https://www.youtube.com/watch?v=9i1WlcCudpU
“Probabilistic models”. Hmm…
This reminds me of an item I read many years (decades by now) ago, which might have been in the Journal of the Audio Engineering Society. I forget. Anyway, it seems that a group of researchers created a database of organ pieces by J. S. Bach. They plotted out the probability of note X following note Y, and so forth. They then wrote a program that would produce new works based on these data. As I recall, the consensus view was that the results were “Bach-like”, but were not as good as Bach. Also, sometimes the program would spit out segments of existing Bach works note-for-note. That bit reminded me of the critique: “The work is both original and good. Unfortunately, the parts that are good are not original, and the parts that are original are not good.”
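The Bach experiment described above is essentially a first-order Markov chain over notes: count which note follows which, then random-walk the resulting table. A minimal Python sketch of the idea, using a made-up toy melody rather than any actual Bach data:

```python
import random
from collections import defaultdict

# Toy "corpus" of notes (illustrative only -- not real Bach data).
corpus = ["C", "E", "G", "E", "C", "G", "E", "C", "D", "F", "A", "F", "D"]

# Count which notes were observed following each note.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Random-walk the transition table to produce a 'Bach-like' sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: the last note was never followed by anything
            break
        out.append(rng.choice(followers))
    return out

print(generate("C", 8))
```

Because the walk can only ever emit transitions it has actually seen, long runs will sometimes reproduce stretches of the training corpus verbatim, which is exactly the note-for-note regurgitation the commenter recalls.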
Is AI in its present form just a fancy proof of the old programmer’s adage: GIGO? (garbage in, garbage out)
Jim Balter says
Ignorance and cognitive failures are not virtues.
“I don’t see how something useful could possibly pose a problem.”
Jim Balter says
@22 LLMs are basically the same idea but are vastly better at extracting the probabilistic traits of the database. And the inputs are not garbage, which is why the outputs give the illusion that the LLM (which does no reasoning of its own) is intelligent — an illusion that can be shattered by applying the good old fashioned scientific method. For instance, GPT-4 has been touted as doing well compared to humans on various problem sets. But skeptically-minded folks have examined the results on those tests where questions have been added over time, and found that GPT-4 gets 100% on the problems added before its training database was created and 0% after–showing that it is regurgitating, not thinking. GPT-4 has also been credited with having various physical models because it gets the right answer about certain physical arrangements, but people have posed somewhat different scenarios that show that it actually has no understanding of physical relationships, it merely sometimes finds scenarios in its training database that syntactically match (because that’s the only sort of matching an LLM does) closely enough to yield the right answer.
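The cutoff check described above is simple to express: split benchmark problems by the date they were added, then compare accuracy on the two halves. A minimal sketch, with invented records and an assumed training-cutoff date (not real GPT-4 results):

```python
from datetime import date

# Hypothetical benchmark records: (date the problem was published, model answered correctly).
# Illustrative data only.
results = [
    (date(2021, 3, 1), True),
    (date(2021, 6, 9), True),
    (date(2022, 11, 2), False),
    (date(2023, 2, 14), False),
]

TRAINING_CUTOFF = date(2021, 9, 1)  # assumed cutoff of the model's training data

def accuracy(records):
    """Fraction answered correctly, or None if there are no records."""
    return sum(ok for _, ok in records) / len(records) if records else None

before = [r for r in results if r[0] < TRAINING_CUTOFF]
after = [r for r in results if r[0] >= TRAINING_CUTOFF]

print("pre-cutoff accuracy:", accuracy(before))
print("post-cutoff accuracy:", accuracy(after))
```

A large gap between the two numbers (the 100%-versus-0% pattern mentioned above) is evidence of memorization rather than reasoning, since genuinely solving a problem should not depend on when it was published.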
As for GIGO, if the training database were the spew of a million monkeys at typewriters (or the digitally simulated equivalent), then the output would also be gibberish. Of course, the builders and promoters of these things haven’t tried that, or if they did they would keep silent about it.
That is the best parrot in the world.
Any sufficiently advanced garbage is indistinguishable from normal human communication, although that might say more about humans than about AI.
Well put. LLMs are not intelligent, but they do show how much can be done without intelligence, given enough input data. However, the conspirasphere has been demonstrating that for some time.
Ray Ceeya @1: yes, basically if we want Terminators we also need to surround them with swarms of harried-looking humans equipped with plentiful spare parts, screwdrivers, WD-40 cans and infinite amounts of patience. And of course programmers with laptops, cables & connectors and saintly dispositions. The techno-apocalypse suddenly looks way less cool, doesn’t it.
Charley @12: maybe that’s the answer, AI could really help in some very specific domains – as usual for such internetty stuff, domains connected with computing. Plus other sectors where you need specific knowledge, susceptible of being “algorithmed”. And concerning that, I really wonder about the effect on employment in, say, ten years’ time.
Snarkhuntr @14: it really looks like that, AI is now “the next big thing” and absolutely everyone is getting on board. Whether this is a good or even remotely useful idea no one is asking, everyone is too busy scrabbling away. We’ll see if this brings real changes, or if it’s going to be yet another hype cycle.
Having watched all episodes of Red Dwarf, I am fully qualified to take part in this debate, because the dysfunctional machines (and humans) are more representative of the future.
Or you can take the computerised devices in a Philip K Dick story, forever quarreling with the protagonist.
Who curates the training database? Are there experts in various fields doing that, or is it just a massive data dump? What are the biases of the curators? Is the database complete and exhaustive? And it’s not just the training database. The “garbage” that I am referring to can be the algorithms themselves.
My point about the Bach pieces is not that they are identical to AI, but that “AI” really isn’t a new idea. It’s just that people are doing something similar on much larger and broader scale, and have come up with a catchy name for it. Those last four words may be the most important part because it improves the holy “monetization” of it all.
The “nanotech” hype decades ago did not lead anywhere. Instead, that field has enjoyed the same gradual progress as other technologies.
Also, see “stem cells”.
I am probably forgetting lots of other hyped “next things”, please comment with other examples.
Slinky's Human says
Sean Carroll recently had a great conversation about AI. https://www.preposterousuniverse.com/podcast/2023/03/20/230-raphael-milliere-on-how-artificial-intelligence-thinks/
Personalised medicine, controlled nuclear fusion, golden rice*, human space travel/colonization, asteroid mining. AI itself has been through several cycles of hype and “antihype” (over-egged dismissal), and I’m pretty sure “generative AI” will be followed by another full round of antihype, of which the linked video is probably an early example, and at least one more round of hype and antihype before anything approaching “GAI” (General Artificial Intelligence) emerges.
*No, it wasn’t significantly delayed by the attack on trials in the Philippines (a spectacular own goal by the anti-GMO movement), but some 30 years after research into rice genetically modified to produce beta-carotene started, “Golden Rice” (upper-case “G”, upper-case “R”!) is only now being piloted for routine human consumption, and as the International Rice Research Institute says:
The IRRI doesn’t even claim that “Golden Rice” (upper-case “G”, upper-case “R”!!) is a necessary addition to these “existing approaches”. AFAIK, there are no other nutritionally fortified GMO food crops in the pipeline.
The musician Sakamoto has died.
I’ve unfortunately been expecting Ryuichi Sakamoto’s death for a while now.
Rob Grigjanis says
Merry Christmas, Mr Lawrence. Great film, great music.
Uh… Anyone trying to use AI to “steal” art is an idiot. The AI can’t do jack without artists to create the original art it is mimicking in the first place. Its “creativity” is that of an idiot savant: if you ask it to write anything truly creative, it’s like getting the contents of a 4-year-old’s mind, talking about how his/her stuffed bear built a space ship out of Legos and used it to visit the magic unicorns living on a planet made of Cheetos. Even when you ask it about real information it merely regurgitates the content, in a manner that is, at best, indistinguishable from your crazy uncle, who has read everything about Bigfoot and really believes in it, or the sort of basic synopsis that, sure, you might otherwise pay someone for, but probably an intern.
As for theft… My take on this is that it can’t “produce” anything new, so it is, at best, only able to be a tool for producing content that is “in the form/method/design/style of…”; you still have to provide the prompt, and the only “artists” being put out of work in this case are the sort that do nothing but produce copies of other works anyway. Someone producing a new movie, or drawing, etc., which involves a new idea, is doing nothing different from the AI – they are going to look at art online, in books, etc., derive their own sort of heuristics on what that “looks like”, and then draw unique art, based on an idea they have, or someone else gave them, “in that style”. Sure, this is a short cut for some people now, but it doesn’t “steal” anything, since the original art is, literally, not being used, any more than you can send police to someone’s home to search for videos or pictures of The Little Mermaid because they either a) drew a picture of Ariel, or b) made their own art involving mermaids styled after Disney art. And, if they do create an exact copy of something, you arrest them for selling Disney IP, not for knowing how to draw something that looks like it.
I honestly think, at least at this point, with what I understand about how it works (which doesn’t include actually having copies of the art on the server that it is “copying”), that the arguments are overblown. I very much doubt that it can exactly reproduce an original work, though, like a decent painter, it can come close if you explicitly ask it to produce that exact original (assuming it even comprehends what you are asking).
Heck, I am even a bit… iffy on the whole issue of the database from which its function is derived using images that shouldn’t have been possible to get (like photos from a doctor visit, etc.). This is an issue of how those images got out there, and it says nothing about someone using the system as a basis for art, other than that the people constructing the database, in such specific cases, went beyond using readily available images with no ethical quandaries attached.
Again, why is it fine for someone to learn how to draw by having copies, possibly even downloaded, of their favorite anime, and learning how to draw in that style by doing so, vs. an AI doing the same – other than that the latter is a lot bloody faster at learning? And is anyone going to break down the doors of every home in the country to confiscate such “illegally obtained images” when someone draws something that looks similar? Exact copies… hell yes, but merely “looks similar”?
There are lines in this that one shouldn’t cross, definitely, but just having such things exist, and learn from other people’s work, and being able to take your own idea and create art from them – as long as it’s not an exact copy – seems to me to not come even close to that line.
I am also sure that portraitists had a similar fit when someone invented cameras, and we already have laws involving when/if it is at all acceptable to have someone’s painting “in” a scene, versus just flat out copied as a picture, and resold as a poster.
Caught my first plagiarized AI paper from a student. Topic was on a novel, and the paper contained quotes and characters that appeared nowhere in the book. That and a repetitive, mechanical writing style made it easy to spot, after every quote, “This quote states that . . . ” I asked the student if maybe ChatGPT was an elaborate practical joke and that he had been pranked.
Merry Christmas, Mr Lawrence: now both the musician and Bowie have left us.
“Thinking new thoughts is human”
-So creationists and anti-trans politicians who recycle old arguments can be replaced with AI?
Jim Balter says
Pretty much the latter … the database is scraped from numerous online sources and is too vast to curate. There is a process called RLHF (Reinforcement Learning from Human Feedback) where people rate the outputs, which is largely to keep it from generating blatant racial slurs and other objectionable responses, but the humans involved are reportedly cheap Kenyan labor.
Jim Balter says
“Thinking new thoughts is human”
While LLMs can’t produce original ideas, there’s no reason to think that no AI ever can–meat isn’t magical, the human brain does computation via physical processes that could be implemented digitally (Church-Turing thesis).
Jim Balter says
@37 So many points missed.
Indeed. While thinking new thoughts may be human, genuinely new thoughts are actually quite rare. More often than not, we’re simply juggling and combining ideas we’ve picked up elsewhere and AI can certainly do that.
Moreover, new thoughts aren’t particularly valued in our societies. People will pay top dollar for yet another variation on the same thing or simply to be told that they’re right. AI is definitely capable of that.
As far as I can see, the point of AI is that they have no rights, require no pay, and won’t object to being turned off when the job is done. They’re the workers capitalism has always dreamed of.
Raging Bee says
So creationists and anti-trans politicians who recycle old arguments can be replaced with AI?
They just might be. More chatbots ginning up more garbage essays to flood the Internet would mean more documentation to appear to support whatever transparently bogus claims they’d want to make; as well as crowding out work by actual thinking humans. That would, in fact, be fully compliant with Steve Bannon’s stated strategy: “flood the system with shit.”
The good news is that there’d be less need to pay pond-scum like Tankie Carlson or Joe Rogan to spout this crap — but that would be scant comfort if their side and their bullshit continue to dominate public discourse.
“a glurge of computer-generated bullshit”
I have now added ‘glurge’ to my lexicon.
just checking, is that like “glut” combined with “surge”?