On the issue of AI, FreethoughtBlogs has gone Point, Point, Counterpoint, Another Consideration, and … OK, I’ll give my thoughts on that, briefly. LLMs and AI image generators are fundamentally different, so I’ll take each in turn.
LLMs memorizing and leaking personal info: It’s been demonstrated, it’s a problem that should get sorted ASAP. I’d say if any business or agency was found to have revealed personal info through use of an AI – and at this point, OpenAI surely has – then they should incur the same legal penalties as non-AI leaks. I don’t know enough about LLMs to hazard a guess on the best way to address these issues, but I’ll reiterate a few things I’ve said about them, about which my opinions have not been swayed:
LLMs, like all of this new generation of AI tech, have genuine usefulness, which left discourse completely ignores. Various problems with them need to be addressed, but the usefulness should never be dropped from that conversation, and the idea of going full Ludd on the tech is abominable to me, because what I regard as the most important use of LLMs is not something I’m willing to lose ground on. Also, they will quickly be better at many human jobs than humans are, and that saves money, saves humans the humiliation of working jobs where their intellectual shortcomings are thrown into sharp focus, and can definitely save lives.
Regarding the idea that it will steal somebody’s writing: that’s a risk that human authors take every time they hit a fucking keyboard. Who did JKR rip off the most? That Worst Witch lady? Rapist Neil Gaiman? She did rip both of them off, to an extent.
I’m not saying she did it on purpose. Human minds unknowingly rip off other human minds all the damn time. How closely we want to prosecute these things is a matter for intellectual property law of various flavors, but the more strictly those are interpreted, the worse things will be for the flourishing of art, and especially for independent artists. Be careful what you push for.
I still abso-fucking-lutely am not the slightest bit convinced yet that AI art generators are reproducing images from their training sets to an actionable extent, any worse than human artists do every time they look at reference or aim toward a given style. You got it to reproduce one of the most reproduced images in existence, like the Mona Lisa or the Coke logo? Ooh. You got it to reproduce something at all more obscure? I’m betting you directly fed that image into it as an “image prompt,” ran the prompt a hundred times, and picked the closest result.
This has never happened to me in the entire time I’ve been doing AI art. I’ve asked for certain styles or images in a thousand ways, even fed in images of a particular artist’s style, and it still did not come back with anything like their original images. I get smushy signaturesque things in the corner of pics sometimes. Derivative works may be covered by X and X laws, but if a snippet of a cloud or an eye happens to look 85% like artist Y’s work, on an image that is 95% nothing like that work, it is not fucking derivative. Don’t insult my intelligence. The leftists pushing the case online have demonstrably used bad information and outright fabrications to make their cases, and the Asswipe Corporate Stooges using this as an excuse to push for expansion of copyright law in court? They are enemies of every artistic freedom you can imagine.
Did AI art generators successfully create a file compression system an order of magnitude greater than any that ever existed before, where they can take less than a bit of data and recreate your 1.5 megabyte .png from it? Sounds like the “zoom and enhance” cliché. Sounds like scifi bullshit and magical thinking to me.
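For the record, that compression claim is easy to sanity-check with napkin math. The figures below are ballpark public numbers (a Stable Diffusion 1.x checkpoint is around 4 GB, trained on a LAION-derived set on the order of 2 billion images), not exact specs, so treat this as an illustration rather than a measurement:

```python
# Back-of-envelope: how much model capacity exists per training image?
# Both numbers are rough assumptions, not exact figures.
model_bytes = 4 * 1024**3          # ~4 GB of weights
training_images = 2_000_000_000    # ~2 billion training images

bytes_per_image = model_bytes / training_images
bits_per_image = bytes_per_image * 8

print(f"{bytes_per_image:.2f} bytes (~{bits_per_image:.1f} bits) per training image")
# → 2.15 bytes (~17.2 bits) per training image
```

A couple of bytes of weights per image is not a codec, and nowhere near enough to store a 1.5 megabyte .png per picture. The most it allows is partial memorization of rare, heavily duplicated images, which is consistent with what the extraction research actually demonstrates.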
Have exploit hunters been able to tease personal data out of these programs? Yes. How? It’s literally impossible for it to be in the image information. It’s in another aspect of their architecture, which can absolutely be fixed, and should be.
As to the issue of consent that was brought up in comments here: I think that’s fair and fine. I think it’s based on feelings instead of the tech that is in front of us right now and what it’s actually doing, but feelings are a legit consideration. We should develop a new generation of AI trained with all data from non-consenting artists removed. People on both sides might tell you this cannot be done, but they are wrong. It can be. It might take a while, it will certainly cost a lot more, and it will involve some greenhouse gas excess during the training phase.
But I want this done, more than the anti-AI people do, because I want this part of the conversation to fucking stop. You know what would be a hilaribad way to retrain LLMs without the personal info? Tell LLMs to say everything they know except the personal info, and retrain them on that output. That’s silly, but I hope it shows that there must be a way to do this.
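To be slightly less silly about it: the retrain-on-scrubbed-output idea is basically distillation with a filter in the middle — generate from the old model, drop anything that looks like personal info, and train the successor on what survives. Here’s a toy sketch of the filtering half. Every function and pattern here is a hypothetical stand-in; a real pipeline would use a proper PII detector, not three regexes:

```python
# Toy sketch of "retrain on filtered output": keep only model outputs
# that pass a crude PII filter, and use those as distillation data.
import re

# Crude stand-in patterns for personal info (illustrative only).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-shaped numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email addresses
    re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), # US phone numbers
]

def looks_clean(text: str) -> bool:
    """Return True if no PII pattern matches the text."""
    return not any(p.search(text) for p in PII_PATTERNS)

def build_distillation_set(samples):
    """Keep only outputs that pass the PII filter."""
    return [s for s in samples if looks_clean(s)]

# Pretend these came out of the old model:
outputs = [
    "The capital of France is Paris.",
    "You can reach John at john.doe@example.com.",
    "Call 555-867-5309 for a good time.",
    "Water boils at 100 degrees Celsius at sea level.",
]
clean = build_distillation_set(outputs)
print(clean)  # only the two PII-free sentences survive
```

The hard part isn’t the loop, it’s building a filter you actually trust — but that’s an engineering problem, not an impossibility proof.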
For my part, I hope this is the last time I feel compelled to make a post on the subject. Because I’d like my personal part in the conversation to stop. Yes, I should be able to control myself and just bow out. Maybe I will get the hang of that someday. But for now? The only reason I have a blog is because I don’t have that sense of restraint.
I know you’re bored of this too. I’ll shut up about it as soon as I’m able. I’m workin’ on it, man!
–
What I think happens is that when there’s lots of similar art on the net (e.g. fanart of comic book characters, WarHammer minifigs) then generative AI gravitates towards images of that type. It’s not copying any particular image, but is producing similar images.
I see signature- and watermark-like things every so often; watermarks because lots of images were taken from stock art sites with watermarks. Usually it’s not coherent text or logos, but I did get a convincing (but unwanted) DreamTime logo once.
I think that someone else has already mentioned https://arxiv.org/pdf/2212.03860.
To be completely honest, my whole problem with AI-generated content is that I do not consider it to be art unless there is significantly more input from the operator than just prompts or curating the input pool with someone else’s images.
To me, using AI to create something with prompts is analogous to using a forklift to lift weights. Both power-lifters and forklift drivers are lifting heavy weights, and both need specific skills to do so, but only one does it with the use of their mind and body. And we have a word for someone who uses their mind and body to achieve feats of physical prowess – an athlete. And just as I do not consider forklift drivers to be athletes, I do not consider AI prompters to be artists.
A child-drawn crayon picture is to me more artistically valuable than the most photo-realistic AI-generated picture from given prompts. I can empathize with the child, and imagine their joy in learning and creating. I get a peek into their mind from what they decided to create, and into their progression in the skill from what it looks like, etc. None of that applies when looking at something AI-generated. The only thing that tells me is that someone put prompts into a computer and thought the resulting image was cool enough to click “save as”.
There are of course legitimate uses of AI in technology, sciences, and even in art. It is just a new technology. But that there are legitimate uses does not discount the fact that there are also plenty of illegitimate uses and that it needs to be regulated, like literally any other human activity that has an impact on a societal level.
That is my last word on generative AI. I do not acknowledge it as art and I won’t. I am doing my best to avoid it, when googling something I add “-AI” to searches whenever possible, I do not read your or Marcus’s posts about it because the novelty wore off and now I dislike it and I do not care about the product it offers.
I am sorry if this stance distresses you. I do not like conflict and I do deeply respect both you and Marcus. But this is one issue where we probably will never agree.
that’s not a controversial take charly; marcus and I are the wooly eyed heretics straying from orthodoxy on this one. your opinion on this finds you in good and numerous company. i only feel the need to express my view on this because it is an under-represented viewpoint. i want to show people it’s possible to be a progressive leftist type and not hate AI, specifically to assuage guilt or bad feelings the few people in that position may be suffering.
oh and my posts re: AI are tagged with “domo arigato roboto-san” in japanese characters. not super useful, I know.
I think I just have a few issues with AI.
– As a sci-fi fan… This is not AI. It’s not even close.
– You’re the product. Don’t trust it. Assume anything you send it will be used badly.
– Feeding corporate greed. I think it’ll all go behind a paywall someday but only after the public trains it and possibly becomes dependent on it. I think that at least in part because…
– Theft. They’ve already stolen a bunch of data. There’s no argument around that and the relative utility of the end product is irrelevant. You’re either okay with this aspect of it or you’re not.
I don’t think I’m particularly against this not-actually-AI product, though. Not using something and recognizing some downsides of it does not turn one into a screeching howler monkey. In fact everything I’ve listed has a lot more to do with the corporations and how they’ve acted than any technology they’re putting out.
Bebe Melange, I think if you want to be out of the conversation, maybe you keep feeling like jumping in because there’s something you need from it? It may not be the first thing you think of, such as the companionship aspects of this product you mentioned, but rather something driving that need. I’m not sure if what you’re feeling is the same, but I’ve had friends get into situations where something has taken on an unusual importance to them. Something else in their life wasn’t working out well so they ended up putting too much importance on something else that really wasn’t designed for it. Like when someone gets too invested in an entertainment product for example.
For whatever it’s worth, I very much doubt any of us here are going to affect this situation in anything close to a meaningful way. So be pro, con, or whatever makes you happy. It’s all just a wash in the end.
as the leftosphere is so thoroughly awash in hatred for ai tech and anyone using it, has that basically become invisible to y’all? carving out one tiny sliver of a place for left AI likers to exist without having shit shoved down their throats 24-7, that would be my driving need. anybody who doesn’t see the use in that hasn’t been noticing the vitriol, because they don’t identify with or care about the people on the receiving end of it.
I just don’t see it because I don’t exist in those spaces. When someone catches religion and goes all hellfire and brimstone rant about their pet topic I tend to just leave or kick them out.
There seem to be a lot of people who like this fake* AI stuff. I’ve honestly seen more rabidly pro- than rabidly con- rants on this topic. I would think it wouldn’t be difficult to find other people who like it? So if the primary concern is being around Your People then by all means go find them. This product is a tool and you want to talk about the neat things you can do with it around people who will appreciate it like you do. On the other hand if it’s more about not wanting to hear any critiques or disapproval of it then that’s a bit different. When I’ve seen that it tends to be overinvestment in something that isn’t meant to handle it. Often because other parts of your life aren’t working out well. Kind of like Tom Hanks’ character really investing in Wilson during the movie Cast Away.
Sorry if this is all off the mark. It’s just my small attempt to help out because I don’t think any of us should be feeling bad over this. Like/dislike/love/hate/whatever, we still don’t have a meaningful part in the story of this stuff. So it’s a real waste to let it mess with your head.
* I apologize if that hits you in a bad way. It’s not intended to. Typing my earlier reply reminded me how much disdain I have for the shitty marketing decision to call this stuff AI. Some of the assholes involved even tried to whip up fake fear about the AI taking over. A kitchen blender has a better chance of taking over the world than this stuff does.
ok but srsly where in the living fuck have you seen pro-AI in a space that any leftist would even remotely tolerate? i have seen exactly one teeny tiny clique on tumblr of queer anarchist types that like it, and permanently felt backed up a wall because 99.99% of left or left-adjacent people fucking despise AI, enough to where they are presently taking shits in my comments and asking me to feel cool with that. meanwhile, siggy is neutral and marcus and i are quasi-pro and every other FtBlogger takes giant shits on it whenever the subject comes up, any random discussion on reddit will have anti-AI doomsaying come up in the comments even if it’s about ice cream or kittens, and multiple famous left youtubers have devoted thousands of hours of video to shitting on it while precisely zero have said anything positive about it.
what kind of internet are you on? where could i smoke some? if it’s run by people who like NFTs and crypto, and you imagine i’d feel at all OK with it, maybe you don’t get where I’m coming from or what the fuck my problem is.
it messes with my head because i feel backed into a corner about it. generally people in my comments haven’t felt the need to be shits about it, but that does not seem to be the case today.
You may notice I’m not addressing specifics of what you and the others have said. That’s because I don’t want to get any madder than I am now, but I could address any and all of them point by point. I’ve done that kind of thing before. But my affection for y’all takes a hit for each second I spend thinking about your opinions on the subject.
Bebe:
You seem to be reading more into what I’m saying than I intend to say.
I’m not shitting on your thing, at least not from my viewpoint. I’m not raising bad faith arguments or attacking you for having a different opinion. I’m not defending terrible, unhinged things other people say to you about this topic. Frankly I’m not even aware of what anyone else might have said about it.
And to reply to your question about where else I’ve seen people say supportive statements about this topic, it was in a discord server for an RPG. Some people like to use this type of product for character portraits, and when others and I suggested there might be some issues with how these tools were trained or how they could affect art in the long term, there was a lot of pushback. Some of it was very aggressive and out of proportion to the critiques it was responding to.
I haven’t run across any aggressive screeds against these products. I don’t doubt that you have. We’re both presenting anecdotal evidence here. There’s nothing shocking about us having different experiences.
When you talk about it here, I have read your posts on the topic as more general conversation about it. But your responses make it sound as if you think all other takes are dishonest or just plain false. And you seem to want something more like a talk you’d have within a fan club.
This part of it, the way you talk about it here, is why it looks to me as if you’re overly invested in this. You’re just acting defensive. I’m not pushing that hard and you’re already on the back foot protecting your thing. I’m not even pushing directly at your thing, I’m saying it’s a tool. And like any tool it can be used to forward good ends or bad ones.
I honestly think it’s difficult to argue that the corporations have behaved well and are promoting good ends here. It looks to me like they’ve been unusually selfish even for them. Could a reasonable person disagree? Yes. Could a reasonable person brush the whole question aside as obviously false? No, I don’t think so.
That’s how I look at this and why I think I’m respecting your viewpoint and opinions even if I don’t share them. Do I seem to be doing something else to you?
fan club talk? not really. some posts id like discussion in the comments, some not, and this is bringing some clarity on that. i posted to make a case that i never see made in left spaces, not to engage with people as much as to siwoti with the long form version of a comment that you never come back to read the replies on. knowing what i do about my comrades’ beliefs about ai, i probably should have just disabled comments, because i get nothing out of them except an increasing sense of alienation and anger.
overly invested
not personally. it helps me quickly illustrate the concepts i spit for spooktober, monsterhearts, and yes, rpgs. i’ve seen disabled people who use ai and are more insecure than myself feel really bad about the left freakout on ai, and am extremely angry on their behalf. extremely.
if you look at my op here you’ll see i’ve already made a huge amount of concessions to your side – as many as i’m willing to make. and at this point, i’m way beyond fucking done listening to you.
in choosing not to respond to the substance of that article linked at comment one or any of the things you’ve said, i’ve already given you the last word!
now kindly stop fucking talking about it, please.
i leave the comments open here only as a challenge to see if y’all got the gumption to stop talking at me about it now that i’ve asked, rather than because i closed comments. it is a trap. don’t take it.
EDIT
thanks for not taking the trap! i really appreciate it. closing comments now.