Can we just knock it off with the AI nonsense? Somebody is profiting off this colossal sinkhole of useless frippery, but I don’t know who. Look what it’s doing to our environment, all because people like Mark Zuckerberg have decided it’s cool, and the future of profiteering.
The amount of water that A.I. uses is unconscionable, particularly given that so many of its data centers are in desert regions that can ill afford to squander it. A.I. uses this much water because of the computing power it requires, which necessitates chilled water to cool down equipment—some of which then evaporates in the cooling process, meaning that it cannot be reused. The Financial Times recently reported academic projections showing that A.I. demand may use about half the amount of water consumed by the United Kingdom in a year. Around the world, communities are rightly beginning to resist the construction of new data centers for this reason.
Then there’s A.I.’s energy use, which could double by 2026, according to a January report by the International Energy Agency. That’s the equivalent of adding a new heavily industrialized country, like Sweden or Germany, to the planet.
Microsoft’s own environmental reports reveal these immense problems: As the company has built more platforms for generative A.I., its resource consumption has skyrocketed. In 2022, the company’s use of both water and electricity increased by one-third, its largest uptick ever.
We should be asking what we gain from glossy, shiny artificial artwork, or from bizarre texts cobbled together by machines that don’t actually understand anything. Search engines are already corrupted by commercialized algorithms; do we really need to add another complex layer that adds nothing to anything? Tell me one thing AI adds to improve the world.
Do we need this?
Dozens of fake, artificial intelligence-generated photos showing Donald Trump with Black people are being spread by his supporters, according to a new investigation.
BBC Panorama reported that the images appear to be created by supporters themselves. There is no evidence tying the photos to Trump’s campaign.
One photo was created by the Florida-based conservative radio show host Mark Kaye.
“I’m not out there taking pictures of what’s really happening. I’m a storyteller,” Kaye told BBC. “I’m not claiming it is accurate. I’m not saying, ‘Hey, look, Donald Trump was at this party with all of these African American voters. Look how much they love him.’”
Maybe what we really need is a Butlerian Jihad.
Reginald Selkirk says
Charles Barkley says he’ll punch any Black person wearing Donald Trump mug shot shirt
larpar says
AI can’t do hands.
Marcus Ranum says
AI can’t do hands.
That is so last week.
Some of the newer models do hands very well, it’s a matter of training. The issue is that the models with more detailed training are larger, therefore slower and consume more memory on the GPU. Services like Midjourney are already doing much better hands because of training and they use GPU farms that make the processing less of an issue.
Go back 2 years and look at what AI art generators were doing, compared to now, and extrapolate forward. In 5 years we’ll have text-to-short-movie generation, including dialogue and sound effects.
robro says
First of all, “AI” in all its forms is just a toolbox. As I read it in Scientific American and ACM newsletters, AI is being used for very valuable research in astronomy, biology, engineering and so forth. Like any tool, “AI” can be used for bad purposes. In this particular case, and I saw some others on some show the other night, I know that’s blamed on AI and it may be using AI in some way, but mocking up photos can be done the old-school way, using Photoshop.
larpar @ #2, “AI can’t do hands.” Today but if they know they can’t do hands, they will figure out how to do hands.
Autobot Silverwynde says
Well, this explains why Cybertron became a planet without resources….
lotharloo says
@3:
Which will be an improvement compared to the current stream of superhero movies.
But jokes aside, I don’t think it will grow as fast. You are committing a fallacy by comparing the current situation to 2 years ago and extrapolating into the future. This is the rate of investment into AI: https://www.statista.com/statistics/941137/ai-investment-and-funding-worldwide/
You can’t expect a doubling of investment every two years. We will not be investing half a trillion dollars into AI every year by 2028.
Marcus Ranum says
But jokes aside, I don’t think it will grow as fast.
I’m using the current “behind the cutting edge” tools and used the old cutting edge tools. I’m also watching the cutting edge and it looks pretty good. For example the image generators are starting to add semantic layers that will nudge the outputs toward inferential rules like “most people have two hands that have four fingers and a thumb” etc. But … it’s just rules and prior probabilities. As long as those continue to become deeper and more expressive the improvements will continue. There is a chance the ramp-rate will tail off but if we follow the notion that model completeness and rule depth are where the game is, we can’t completely model reality nor can we run out of details to add to models.
Look for a shift toward productization – smaller, more ubiquitous models offering marginal improvements at mass scale. There is a huge amount of room for improvement there without requiring great tech advances. For example, semantic rules atop neural-net models could replace human diagnosticians in medicine for first-order filtering. Current models are good enough to do that; it’s packaging and productization, not cutting-edge work.
I’m going to go out on a limb and assert that large language models can already write a better film script than basically anyone who is scripting Star Wars and superhero flicks. So maybe we’ll have a boom of badness there. Or maybe it’ll improve things.
I have been watching and counting fingers (hair-strand continuity too!) on Instagram, and the fingers are improving fast enough that most guys just look, “two big boobs, drool, click like.” While there is a lot of room for improvement, most humans are comfortable with mere adequacy – the aforementioned Star Wars being a case in point.
Actually, cinema is a good example. In some areas it has shown remarkable improvement, but good plots remain a notable and memorable exception. With AI we will see some areas leap ahead while others grind slowly or perform at a human level. I am trying to sneak an important issue in there: AI are inordinately faster than humans at many things. An AI can produce in seconds what might take Caravaggio a month or two. What if we gave an AI a month, and back-checked the semantics of finger-counts? AI don’t experience time, of course, but humans have many more feedback loops in play than most AI applications. With a nod to George Lucas, who needed a few superseding feedback loops on his dialogue generation module, eeeeech
Walter Solomon says
Marcus Ranum #7
AI films will be garbage and you know it.
Akira MacKenzie says
Nah. Then humanity will start to stagnate and we’ll end up being ruled for 5000 years by a human-worm hybrid. After that things get… weird and way too psychosexual.
PZ Myers says
I agree, but with the acknowledgment that most human written films are garbage, too.
Marcus Ranum says
AI films will be garbage and you know it.
Of course. But the current norm is garbage. The question is not “can AI make films as well as Stanley Kubrick?” It is “can AI make films as well as George Lucas?”
[I specifically use Star Wars on this point because of the many edit loops that were involved in production. The movies were not direct renderings of Lucas’ original script dialogue, and every stage of production of a human film has edits and tweaks. What we should be comparing is a human-produced turd that has been considerably polished with an AI-produced film that has been similarly tweaked. Or we should compare the directors’ cuts of bad human-produced films with the outputs of AIs]
I think that because we have seen AIs turn around and whip human chess and go masters, we are now expecting AIs to start playing at a grandmaster level in all the things. AIs already sketch and illustrate better than the vast majority of humans – and we want to dismiss them because they are not all Da Vincis and Caravaggios. My point in all this is I am trying to remind my fellow humans that we aren’t universally great and our expectations are perhaps unrealistic.
Another example is automatic driving. Most human drivers suck. In my life I have caused 4 accidents and been in 8 more (most of the ones I caused were fender benders or sliding on ice) – 12 in 60 years is not great. Would an AI driver do better or worse? Would an AI driver be safer if it didn’t have to share roads with humans? Or deer? I do not know the answer to that question. But if we expect AI drivers to win F-1 races we may be setting the bar completely wrong. AI are more likely to be driving military vehicles and, uhhh, god help us?
larpar says
Can’t do teeth or chins either (all the same). And look at the skin, not a blemish to be seen.
lotharloo says
I don’t think any current AI tool can write any coherent long document. They quickly lose context.
Superhero movies are garbage but they are not incoherent in the true meaning of the word. Go checkout some AI generated novels.
drew says
Professa please! Corporate abuse of the commons is rampant today. This is only one example. And unless we’re going to tackle actual root cause (capitalists take from the commons and keep the “profits”), trotting out this argument is on par with “think of the children!”
Marcus Ranum says
lotharloo@#13:
I don’t think any current AI tool can write any coherent long document. They quickly lose context.
Neither can John Ringo. At the risk of sounding like a broken record, the question is not whether an AI can write as well as Kazuo Ishiguro or Salman Rushdie, but rather whether it can write slightly less badly than John Ringo.
Superhero movies are garbage but they are not incoherent in the true meaning of the word. Go checkout some AI generated novels.
I’m not going to make my dementia any worse, thanks for the suggestion. Again, the important point I was trying to make is that there are feedback and edit loops all over the place in human-generated content. A superhero movie’s script will have been “improved” by numerous “writing room” exercises, etc. Think about how many humans have their hands in those scripts, and how they still manage to come out so … not quite incoherent. A more realistic scenario, which I have explored over at stderr (https://freethoughtblogs.com/stderr/2023/02/15/john-ringo-in-the-crosshairs/), is that a hack writer like John Ringo uses an AI to rapidly fill in gaps in a plot, editing them together cut-and-paste in edit loops that make it less incoherent and maybe even better. The danger is not that we will have to deal with an AI Shakespeare, but rather that we will have a million AI John Ringos.
beholder says
I didn’t think what Great American Satan had to say about avocado-toast liberals ferreting out each other’s “AI cred” the way Republicans use antivax language would be demonstrated so soon on this blog.
No.
Would you like me to keep a running tally? Maybe Marcus or GAS will do so. I am sure this sentiment will age well.
Marcus Ranum says
Tell me one thing AI adds to improve the world.
For whom?
It’s a stupid question. AI is going to allow the ruling class to stop paying boiler rooms full of cold-caller telemarketers and replace them with AI cold-callers. They’ll save a lot of money. That will make them happy. You, not so much. AI is going to allow some countries to win military engagements they would otherwise lose, or not be able to afford to win. That will make them happy. You, not so much depending on where you sit. As I mentioned above, AI will enable mediocre script-writers and hack authors to produce more dreck faster, and it may even be slightly better. That will make them happy. Shakespeare won’t care. You, I don’t know. One thing AI will do is (maybe soon!) allow someone to go to an ER and sit with an iPad, hold a sensor array, then answer a bunch of questions, and get immediate appropriate response with accuracy comparable to a human doctor they would have to wait an hour or two for, if they weren’t spouting blood. Will it be perfect? Hell no. Are human doctors? Hell no.
But a short answer would have been “Google Translate,” which is already making the world a better place and has been for about a decade. Countless travelers have pulled out a phone and been able to resolve a question asked of someone who speaks a language they don’t, accurately and (sometimes) amusingly. Couple just that one point with medical emergencies and I’m going to say “game, set, and match.” Imagine, you are in a hotel in Alabama and you speak no Alabaman. You are experiencing debilitating pain in the abdomen. You don’t think it’s food poisoning because you haven’t eaten any Alabaman food, but you are running a fever and are afraid you may pass out. You stagger down to the lobby, show the Alabaman behind the desk your phone on which Google Translate is saying (in rough Alabaman: “Y’all, mah belleh hurts real bad I think I may have appendix rupture”) and the Alabaman reads it. 15 minutes later, when they are done reading it, they make a phone call and type something onto your phone that translates as “Bro, I called 911 and an ambulance is on its way.” You survive. The guy who tried to communicate with the Alabamans died in agony.
imback says
I thought maybe the Butlerian jihad was a nod to Octavia Butler, but no it seems to be Samuel Butler.
robro says
Marcus Ranum @ #17 — Cool. I can have even less guilt about hanging up on cold callers if I know they are an AI bot. Actually I don’t answer any calls from unknown numbers. I’ll block and delete them unless they prove to be something I need to respond to. Maybe that’s why I don’t get cold calls or bot calls too often anymore.
cartomancer says
Marcus, #17,
Since it’s Alabama, though, you will still be bankrupted by the ambulance charges or bilked out of millions by the insurance companies.
Steve Morrison says
I just saw this story: Public Trust in AI is Sinking Across the Board.
IngisKahn says
Don’t be so sure about AI not being able to write coherent books. Dynamic vector memory systems enable LLM agents to always have relevant information in context. And this context keeps getting bigger and bigger.
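The kind of vector memory IngisKahn describes can be sketched in a few lines. This is only a toy (the bag-of-words “embedding” and the story sentences are invented for illustration; real systems use learned dense embeddings), but it shows the mechanism: past text is stored as vectors, and the most similar memories are retrieved back into the prompt context before each generation step, rather than relying on a fixed context window.

```python
import math
import re

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A real system
    # would use a learned dense sentence-embedding model instead.
    vec = {}
    for w in re.findall(r"[a-z']+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    """Minimal vector store: remember texts, recall the most similar ones."""
    def __init__(self):
        self.items = []  # list of (embedding, text) pairs

    def remember(self, text):
        self.items.append((embed(text), text))

    def recall(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

mem = VectorMemory()
mem.remember("The hero's sword was forged in dragon fire.")
mem.remember("Chapter 3 takes place in the harbor city of Veltan.")
mem.remember("The villain fears the sound of bells.")

# Before writing the next passage, the agent pulls relevant memories
# back into its prompt instead of hoping they are still in the window.
context = mem.recall("What does the villain fear?", k=1)
```

The sentence content (including the place name “Veltan”) is hypothetical; the point is only the store-and-retrieve loop that keeps long-document facts reachable.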
Robbo says
AI is shit. And dumb. Unless, and until true sentience emerges. Then skynet. Lol. Or the whole world will become pencils. Either way, I don’t want to live in a world of pencil or T-1000 overlords.
However, there has been a spate of useful “ai” that can identify tumors in various parts of people, or find weird shit in the early universe in Hubble and JWST datasets. Machine learning. That is good. Let’s pursue that. Good results! Machine learning FTW.
It’s unfortunate that “ai” can mimic “human” interactions, and plop out crappy photos of people who obviously aren’t real cuz they have too many fingers and backgrounds have obvious misspellings. And are anatomically “hot” but NOT real.
It’s Photoshop from hell.
And the models are getting better at rendering fingers.
So I envision a future of “Photoshop” assassins.
Former Photoshop employees form an assassin group to fight the onslaught of ai generated crap.
We can wish .
Now, it seems like it’s leaning toward deepfakes and that shit. Ads. Propaganda. Maybe push out a shitty movie with photorealistic images of some star of the day. Make it a porno! Better if it’s a world leader, like Biden, that you can use to discredit them.
We, the people of the world, need a tool. An AI tool, freely available, that will detect signs of AI manipulation, so we can be sure of the provenance of the information being disseminated to us.
Please. Some programmers. Develop a tool to combat the shit autocratic fucks are using.
Great American Satan says
i’m some kind of pro-ai techbro radical transhumanist radical partisan because i am in a moderate stance. like, the fact anti-AI is rapidly becoming a lefty moral panic should give good skeptics pause about articles with big claims about its evil. 90% of the “evidence” I’ve seen of its “plagiarism” was blatantly fake to anybody in the know, for example. but on the flip side? it’s the hot new toy of malevolent government and capitalist greedlords, which means it’s most definitely doing a lot of harm right now.
I’m disappointed that people are naming it as the problem when the real issues are the same as they’ve ever been, bc it has massive potential to do good. But I admit, it may be a very good thing that public opinion is turnin’ agin it, because it may throw some water on those few malevolent mofos that are vulnerable to public opinion.
Great American Satan says
y’all complaining about how it’s bad at art are in for a rude awakening sooner than you think.
IngisKahn says
Such tools are already being created, but it’s an arms race. As AI becomes better, those tools become worse. Most generative AI systems now use an adversarial model where their potential output is run against models that try to determine if they are detectably artificial and adjust against that.
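The arms race described above can be caricatured numerically. This is not how any real generator or detector is built; it is just a toy in which “real” data is a number near 1.0, the detector is a threshold classifier, and the generator is adjusted each round against what the detector accepts:

```python
import random

random.seed(1)

def real_sample():
    # "Real" data: values clustered around 1.0.
    return random.gauss(1.0, 0.3)

class Generator:
    """Stands in for the image generator; starts out producing obvious fakes."""
    def __init__(self):
        self.mean = 0.0

    def sample(self):
        return random.gauss(self.mean, 0.3)

def detector_threshold(reals, fakes):
    # Detector: midpoint between the two sample means, a stand-in
    # for training a classifier on labelled real/fake examples.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(reals) + mean(fakes)) / 2

def accuracy(reals, fakes, threshold):
    # Fraction classified correctly, calling values above threshold "real".
    hits = sum(x > threshold for x in reals) + sum(x <= threshold for x in fakes)
    return hits / (len(reals) + len(fakes))

gen = Generator()
history = []
for _ in range(10):
    reals = [real_sample() for _ in range(500)]
    fakes = [gen.sample() for _ in range(500)]
    t = detector_threshold(reals, fakes)
    history.append(accuracy(reals, fakes, t))
    # Adversarial step: nudge the generator's output distribution
    # toward what the detector is calling "real".
    gen.mean += 0.5 * (sum(reals) / len(reals) - gen.mean)
```

Early rounds give the detector near-perfect accuracy; by the last round the fake distribution has converged on the real one and accuracy falls toward coin-flipping, which is the adjust-against-detection dynamic described above.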
S maltophilia says
I just read https://theconversation.com/emotion-tracking-ai-on-the-job-workers-fear-being-watched-and-misunderstood-222592 before coming here. Ugh!
Skatje Myers says
Oof, spicy take from the father of someone who’s spent almost all of her adulthood working on AI.
There’s really a whole lot to unpack from the current LLM boom (/advances on the visual end, which I don’t work on), the arms race on compute power going on, and the environmental impact, but the takeaway shouldn’t be that AI as a field isn’t useful — it demonstrably is and has been for more decades than I’ve been alive.
Jaws says
At least HAL only killed four people (missed on the fifth one). Which implies what is really frightening about “enhanced Eliza with faster processors and bigger databases pretending to think”: HAL, fictional though he was, was a lot less destructive and was considered to be horrifically malfunctioning with even a single instance of inaccurate presentation of information.
You really should fix the AE35 unit.
invivoMark says
@Skatje Myers 28,
There’s a question that’s been bouncing around in my head for a while, and I’d be delighted if you took the time to respond to it, since you’re clearly more knowledgeable than I. All AI programs we’ve seen seem to have pretty notable limitations. ChatGPT, for instance, is capable of producing text that resembles references, because it “knows” what a reference should look like, but the references it produces are fabricated.
How far away are we from AI that can surpass that limitation and actually produce relevant references to real articles? The way I see it, it’s one of three possibilities: 1) ChatGPT will just eventually “figure out” how references are supposed to work (I don’t think that’s possible), 2) Software engineers would have to write special code for ChatGPT enabling it to parse information from articles and only format those articles into citations, which would take development time but it would then be a component of ChatGPT, or 3) That ability is so far beyond what ChatGPT is (it is only a language simulator), and it would take complete ground-up development of a separate AI program to be able to “understand” how references are supposed to work, and we are very far off from that.
My suspicion is the last option, but my experience with machine learning is limited to some very simple tools used in the biological sciences. Those tools are built for a very narrow and specific purpose, and are not meant to be creative in the way of ChatGPT or Midjourney.
I think the answer has a lot of implications for both the worries and the hopes people have for AI.
Silentbob says
@ 28 Skatje Myers
That’s how I feel whenever PZ does one of his “nobody should ever go to space, it’s stupid” posts. X-D
I’m just kidding, good comment.
Phrenotopian says
I’ve been guilty of trying out ChatGPT several times, and although it is somewhat impressive how it can mimic natural speech, I’m still underwhelmed by the results. Taking a step back, it really is just a glorified search engine… Whereas it is a more pleasant experience to have a summary of current knowledge packed into a relatable response, it’s jarringly obvious that “AI” doesn’t really understand what it’s doing. It needs very precise cues and often needs to be corrected multiple times before you arrive at a conclusion or result you can feel reasonably confident in. In the end it just feels like a waste of time, and I’m painfully aware of how much water may be consumed by each interaction.
Will AI smarten up eventually? For that it would need true conscious awareness and I’m still not convinced that mere calculation will lead to such a singularity happening with our current technology and limited understanding of how the brain works. Probably not in my lifetime.
Great American Satan says
good question, mark @30.
Dunc says
Microsoft Copilot already does this.
The thing you have to remember is that ChatGPT isn’t the state of the art, it’s just a toy to generate some buzz. To get useful output, you need to do a bunch of pre- and post-processing to provide some actual context to the LLM. There’s a brief summary of how Copilot does it here: Microsoft Copilot for Microsoft 365 overview.
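The pre- and post-processing mentioned above is typically some form of retrieval: the system looks up real records first, hands them to the LLM as context, and formats citations only from what was actually retrieved, so the reference list cannot contain invented entries (this is roughly invivoMark's option 2). A minimal sketch, with a hard-coded, hypothetical article list standing in for a real bibliographic database:

```python
# Hypothetical article metadata; a real deployment would query a
# bibliographic database or search index instead of this list.
LIBRARY = [
    {"id": "doe2021", "title": "Protein folding with neural networks",
     "keywords": {"protein", "folding", "neural"}},
    {"id": "lee2019", "title": "Transformer architectures for text",
     "keywords": {"transformer", "text", "language"}},
]

def retrieve(question):
    """Pre-processing step: find articles whose keywords overlap the question."""
    words = set(question.lower().split())
    return [a for a in LIBRARY if a["keywords"] & words]

def build_prompt(question):
    """Offer the model only retrieved, verified sources; the citation
    list is built from those records alone, never generated freely."""
    sources = retrieve(question)
    cites = "; ".join(f'[{a["id"]}] {a["title"]}' for a in sources)
    return f"Question: {question}\nCite only these sources: {cites}"

prompt = build_prompt("How does protein folding work?")
```

The keyword matching here is deliberately crude; the structural point is that reference fabrication is prevented by the pipeline around the model, not by the language model itself “figuring out” how references work.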
lotharloo says
@15 Marcus Ranum:
If that’s what you mean by “AI writing a novel” then sure, it is doable now, but then I would say that the term is very misleading. To make AI write something that reaches the level of even the shittiest human writers, you will need constant feedback, corrections and so on. You can have it write small sections, at most one or two pages at a time, and even then it should be proofread to make sure it does not include absolute nonsense.
lotharloo says
@33 Great American Satan:
Depends on what you mean by art. None of the AI tools can produce anything that is actually art, not now not even in the foreseeable future since the techniques that are being used are incapable of doing so.
But they can be used as tools by humans to create works of art. So in that aspect, I also don’t understand the opposition to AI.
It seems like these days there are two groups of extremists: one group claims that AI will soon replace humans and solve every possible problem, generate art, write movies, optimize every issue, and so on. The notevenwrong idiot is also in this group, thinking AI will soon become conscious and remotely engineer a molecular factory which will create another molecular factory which will create another molecular factory which will eventually kill all humans. These people are fucking delusional.
The other group thinks AI is completely useless, a waste of time, not good for anything, and that everything it does is absolute garbage, and so on.
The reality is that AI, or to be specific, things made out of deep neural nets, can solve some problems better than the classical techniques, but they also have clear limitations. They will eventually become yet another tool available to use, and we will gradually learn where they shine and where it is best to use other tools.
StevoR says
@10. PZ Myers : “I agree, but with the acknowledgment that most human written films are garbage, too.”
Sturgeon’s law applies to everything. ( https://en.wikipedia.org/wiki/Sturgeon%27s_law )
@29. Jaws :
Indeed – and do I need a SPOILER WARNING on a book now set over a decade in the past as well as written in 1982?
They also made a movie of that sequel too – including the scene where HAL asks “Will I dream?”, which echoes (although HAL is not an android) the old Dick-ensian Electric Sheep novel featuring replicants.
HAL is later repaired, and his warning saves the crew of the Leonov, the ship sent to discover what went wrong and to investigate the monolith.
@18. imback : “I thought maybe the Butlerian jihad was a nod to Octavia Butler, but no it seems to be Samuel Butler.”
Nope. The Dune franchise & ‘verse :
https://dune.fandom.com/wiki/Butlerian_Jihad
Kathi Rick says
OK i friggin’ LOVE AI! Have always wanted to have my art leap from my forehead fully formed like Athena from Zeus’. Obviously not a popular sentiment here but – I LOVE IT! It is a wondrous starting point onto which i enhance/manipulate to create my own work. AHHH – MAGICAL!
Marcus Ranum says
None of the AI tools can produce anything that is actually art, not now not even in the foreseeable future since the techniques that are being used are incapable of doing so.
Dogmatic much?
Lacking a definition of “art” that is not circular (I suspect you want to define “art” as “not what AI make”) I argue that art is a continuum. If it’s a continuum then some people are more or less artistic, which seems intuitively obvious to me, and some humans aren’t artistic at all, while some Australian shepherd dogs might be more artistic than the bottom-ranked humans. Now, where do AI go on the continuum? It seems an easy case that some AI are more artistic than some humans, and some humans are more artistic than some AIs.
Given that humans are meat-based AIs, you’ll have a hard time defining “art” in such a way that AIs can’t do it while humans can. The algorithms we both use are also on a continuum of complexity, but you’ve got to introduce ineffable concepts like “souls” in order to preference humanity, and then argue convincingly that those are not algorithms that run on meat hardware.
Marcus Ranum says
PS – human art is a learned behavior. Computer hardware and software can demonstrate learning behaviors, on a continuum also like humans do. It’s just a matter of the inputs and training in the learning model.
We don’t have any of Caravaggio’s refrigerator-period art, but it’s a tough position to defend that he came out of the womb special, so as to deny AIs the ability to improve.
This is similar to arguing about evolution with theists. As soon as they admit that species can change over time, the game is over. As soon as someone admits that AIs improve over time then we are arguing when, not if, the AI Caravaggio is coming.
[I think we passed that point years ago]
lotharloo says
@Marcus Ranum:
I think you are misunderstanding my point. I am claiming the current deep learning tools can’t make art. This is because they work by consuming huge amounts of data. They can copy whatever thing they have seen and while I don’t think I have a precise definition of art, I know that copying others blindly is not art.
The fact that “AI is improving” is misleading. AI is not improving at all in having an understanding of what they are doing. The current techniques can’t do that.
Deep neural nets don’t learn like humans. They, for instance, lack any understanding of the concepts they are learning. Now, don’t ask me what “understanding” means, because it is hard to define, but you have probably heard the thought experiment of an alien who taps into human conversations and can manipulate symbols to pass as a human, yada yada, to show the difference between producing a correct answer and understanding something; it has to do with having a correct mental image. DNNs lack such things.
snarkhuntr says
@various,
Setting aside the by-now entrenched discussion about whether ‘AI’ produced output is ‘art’ or ‘creative’ – maybe we should look back up to PZ’s original post here and ask:
“Are the outputs worth the cost?”
And I submit that we really don’t know the costs yet. ‘AI’ is in the midst of another tech-industry hype cycle, where VCs and enterprise corporations throw vast amounts of cash and resources around hoping to be the owners of the ‘next big thing’. They’re not sure what that thing is yet, because there really isn’t a demonstrated use-case for AI models beyond automating the production of annoyances like telemarketing, spam e-mails, etc or rapidly churning out low-effort books/art for sale on online drop-shipping platforms.
One thing that Ed Zitron has repeatedly pointed out is that much of the ‘investment’ in OpenAI and their competitors has come in the form of free credits to use various cloud computing platforms. This serves the function of driving demand for those platforms and, I suspect, the hope is that if enough other companies can be convinced to incorporate ‘ai’ features into their products that this will create a sustained high demand for that computing power – which the owners will, in time, be able to control, monopolize and charge more money for. It’s just Doctorow’s enshittification cycle, but right at the beginning when the companies are still trying to hook in the userbase.
These companies want to get you and others hooked on their services, and they’re betting that if they subsidize the services long enough that they can create sustained demand willing to pay when they crank up the prices. Build ‘ai’ into AAA videogames, and now you have to pay a monthly subscription to Microsoft Cloud or AWS or whoever provides the backend computing that makes the NPC dialog slightly less repetitive.
Marcus makes a lot of neat ‘AI’ art, and some of it is done on his local expensive gaming machine – but much of it is done through midjourney’s subsidized cloud computing. Who is paying for the GPUs, the power, the cooling water? How much would it be worth to you to generate each image from MJ? A dollar? Two dollars? What if you’re having trouble narrowing down your prompts, is it worth 20-30 dollars for you to try to get the image you want? These are the questions that most users of the software aren’t answering now, because they’re not being asked to. My wife uses CanvaAI to generate art for her business, this is presently a free or near-free service offered as a loss leader, that will eventually change. It’s possible she’ll go back to buying stock photos when the real price shows up.
Sam Altman has said that full incorporation of AI into society would require the development of completely new kinds of clean power sources because, in his words, “We still don’t appreciate the energy needs of this technology.” And right now, most people aren’t even paying for those energy costs. The people benefiting from and operating the current hype cycle are. Remember folks, VCs and other capitalists don’t need to produce anything useful to get rich; they just need to produce something that you think is useful long enough for you to buy it off them at an inflated price.
Back to the costs:
Suppose you could actually generate good video by feeding an AI some prompts. And for the record, I’m not sure this will ever be possible. Certainly the current generation of software can’t do it, even with vast budgets and curated outputs to make superbowl commercials and promos, the videos are rarely particularly convincing. But set that aside. Imagine that we could plug a series of novels like Game of Thrones into an AI and get it to generate a TV series for us, and at the low cost of – say – a couple dozen million dollars in computing time, plus about the same amount in human labour in curation and editing. Is that a better world? The production of that TV series provided well-compensated employment for tens of thousands of people, many of whom were able to execute their arts and crafts at a very high level and actually get paid for it – I’m thinking of the sewers, leather workers, metal workers, painters, sculptors and all the other talented artists that produced the gorgeous props and costumes for that series.
Is it a better world if their creative output is replaced with the regurgitations of an AI that, fundamentally, needs to be trained up on pre-existing human work? Who would generate new kinds of art for the AI to ingest, chew up, and spit into our open beaks? When human-made entertainment can’t compete on an open market with AI generated products, I wonder what will happen to it? I envision a sci-fi-like scenario where the masses are fed ai-generated entertainment in endless profusion, while the elite get to enjoy actual novel human output that eventually filters out into the AI ecosystem. Maybe there’s a story in that, I should ask ChatGPT to write it for me.
birgerjohansson says
BTW Sabine Hossenfelder just trashed the latest hyped AI, Claude 3.
The claims that it has become self-aware are b@€&*it.
DanDare says
Personal benefits of AI.
I get AI to read my draft documents and explain them back to me so I can see if they get the point across. The AI also provides suggested alternate structure and I use some of the suggestions.
A family member has anxiety and talks about it with ChatGPT, and it helps a lot.
John Morales says
ELIZA has come a long way.
Prax says
@Jaws #9,
HAL was only destructive because he had conflicting mission parameters (“do this awesome science mission in full cooperation with your awesome human crew, but also do this other secret alien-contact mission and lie to your crewmates about it for the first half”), and then they basically decided to execute him for avoidant behavior. Not many humans would have reacted better.
@Phrenotopian #32,
So it’s like teaching the average human toddler, then? Or teaching adult me how to dance.
@lotharloo #36,
That’s not true; fractal and other digital iterative art like the Mandelbrot set has been a significant part of our culture for forty years now. Computers routinely generate unforeseen but aesthetically significant content.
Sure, these programs don’t design, operate, or train themselves, but it’s not as if a human artist’s brain proceeds without input either. A child raised without socialization, education or sensory input would probably not be a very good artist.
lotharloo says
But I would classify that as “computer-assisted art.” For example, fractals themselves don’t have colors. The “artistic” part of turning a fractal into a pleasing and nice-looking piece of art is selecting the right colors, which requires human involvement.
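That division of labor can be made concrete with the classic escape-time rendering of the Mandelbrot set: the computer supplies only an iteration count per point, and the aesthetic part, the palette mapping counts to marks or colors, is a human choice. A minimal sketch (the ASCII palette here is arbitrary):

```python
def escape_count(c, max_iter=50):
    # Escape-time iteration: z -> z*z + c; count steps until |z| > 2.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # point appears to stay bounded (in the set)

# The computer supplies only iteration counts; this palette, which maps
# counts to marks (or, in a color image, to hues), is the human choice.
PALETTE = " .:-=+*#%@"

def render(width=60, height=24):
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(-2.0 + 2.8 * i / width, -1.2 + 2.4 * j / height)
            n = escape_count(c)
            row += PALETTE[min(n * len(PALETTE) // 51, len(PALETTE) - 1)]
        rows.append(row)
    return "\n".join(rows)

picture = render()
```

Swapping the palette changes the picture's whole character while the underlying counts stay identical, which is exactly the human-involvement step the comment above points at.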