A good and lively conversation about bad, tired AI


Adam Conover talks with Emily Bender and Timnit Gebru about stochastic parrots (an excellent label for the surge of interest in “AI,” which really isn’t intelligent).

What I found most interesting was the discussion of TESCREAL, the blanket term for “transhumanism, extropy, singularitarianism, cosmism, rationalism, effective altruism, and longtermism” — and what a muddled, vague, pretentious mess all those topics are. Skip ahead to the 53-minute mark for some horrifying revelations: they talk about the recent letter from the gung-ho OpenAI people suggesting a “pause” in development, and an opinion piece that starts out by discussing a definition for “intelligence”. Bender & Gebru looked at the footnotes and what they cited for that — it’s an op-ed defending The Bell Curve! It’s written (or signed) by a bunch of researchers at Microsoft who are using this information as the foundation of their understanding of what intelligence is. They cite a paper that says,

IQ is real, the measures of it are good, they are not racist, and yes, there are group level differences in IQ where Jews and Asians are the smartest but we don’t know exactly how much, and then you’ve got the white people centered around 100, and then they say but the black people are centered around 85.

Jesus. The bad ideas of eugenics and scientific racism have sunk very deep roots. Nineteenth-century biology/natural history significantly tainted all of the sciences with its ugly colonialist/imperialist beliefs, and it’s going to take a long time to dig them out.

Comments

  1. JoeBuddha says

    I remember reading somewhere that your IQ measures how good you are at taking IQ tests.

  2. Jazzlet says

    JoeBuddha @1

    Yup. Also, it does not represent a fixed talent: the more you practise IQ tests, the better you get at them, which is why, back in the day when Oxford (in fact the whole of the UK) had grammar schools and you had to pass the eleven-plus (an IQ test) to get into one, primary schools would have pupils practise on old tests.

  3. says

    The thing is, I am an ethical transhumanist. I think it’s great when science can be leveraged to improve the quality and length of life for humans, and there’s no reason there should be an arbitrary limit on that. It’s great that medical science has advanced to a point where someone can have their body reshaped to match their gender identity, or where someone can get a cochlear implant so they can hear, or when someone can get a pacemaker to keep their heart beating on schedule. If someday we were able to build a device that could interface with the brain to save memories for dementia patients? Awesome, let’s do it. The more we can improve the lives of humans and humanity as a whole, the better.

    And if that was what transhumanism meant to most transhumanists I wouldn’t feel the need to draw a distinction between me and them. Alas, that is not what most transhumanists think transhumanism is. Most of them think that the point of transhumanism is to make me, personally, into Iron Man. And maybe some people like me. And all the other non-Iron Man people can suck it.

    It’s just Randian objectivism with additional steps, and just as ridiculous.

  4. says

    And I read somewhere that all IQ tests are written by graduate students from Connecticut, which is why they have questions that begin with “Teddy leaves Sag Harbor on the brunchtime jitney…”

  6. euclide says

    I love that they start by “this is not racist”, then lump together people by arbitrary racial traits, which is kind of the definition of racism.

  7. wzrd1 says

    feralboy12, well they could’ve instead gone with graduate students from Vermont. Then, the questions would start off with, “You can’t get theya from heaya…”
    Of course, the real verbal IQ test is translating South Philly… Yo, you don know dat I know wha djyou know, ya know?

  8. drsteve says

    The legacy of Stanford-Binet and of Bill Shockley lives on! Oh brave new Valley, that has such Silicon in’t!

  9. Corey Fisher says

    I went and hunted down the source, and it is worth noting that a later revision of the paper removed and disavowed that citation – and cited a rebuttal of the original editorial. (See the history of revisions here: https://arxiv.org/abs/2303.12712v5 )

    It might well have still been there when the podcast was recorded – and it might turn out that the bad ideas are endemic to the paper anyway. But it’s worth noting that it’s become more complicated.

  10. birgerjohansson says

    OT – 14 mass graves associated with a starvation sect have been found in eastern Kenya.

    At least 311 people are missing. The cult leader preached that starvation is the way to heaven.

    And in another recent religion-related horror (but not one with fatalities), a debate with Muslim apologist Daniel Haqiqatjou had him state that sexual intercourse with a child wife was permissible under sharia – a stance defending the prophet’s marriage to Aisha and the stated fact that he had sex with her at a very early age.

    Daniel H is like those Republicans that say the quiet part out loud.
    I hope that modern Muslims who are revolted by this will eventually abandon the literal interpretations of the Koran; we have seen where that road goes.

    Now back to alleged AI:
    For those interested in the subject I recommend the humor of Saturday Morning Breakfast Cereal.

  11. birgerjohansson says

    Yawn. Wake me up when we have made serious inroads into how the human brain works, as it is the only current working model we have for sapience and intelligence.
    (Caledonian crows and that weird New Zealand parrot might disagree).

    Still, we have made enormous progress since the early nineties when neural networks were understood to be important. It took until 2014 before we saw commercially useful extended neural networks.
    Understand: neural networks do very useful things but they are dumber than a bucket full of hammers.

  12. wzrd1 says

    The starvation cult news originally broke last week, buried deep down on most sites. :/

    Russian soldiers digging bunkers and fighting positions in Ukraine accidentally dug into a burial site for cattle that had been culled due to anthrax. Multiple soldiers contracted anthrax and were hospitalized; when diagnosed, they were transferred to an unknown location.

    Ukrainian farmers are being forbidden to work their fields in Russian occupied areas, with said fields now being mined.

    By all appearances, Russia has withdrawn from both the Geneva and Hague Conventions, given their behavior in Ukraine.

  13. raven says

    Lynn was hardly unique among leading IQ experts in characterizing the Irish as being low IQ.

    Race/IQ: Irish IQ & Chinese IQ
    Ron Unz Aug 14, 2012

    One of the many surprises I’ve encountered when reading the dozens of web pages and many hundreds of comments attacking my Race/IQ analysis is the overwhelming focus of these critics upon my Irish data. Although I discuss similar ethnic IQ evidence regarding the Greeks, Balkan Slavs, Southern Italians, Dutch, Germans, and various other European peoples, it sometimes seems like the attacks on my Irish analysis are more numerous than those against all these other cases combined, …

    First, Lynn was hardly unique among leading IQ experts in characterizing the Irish as being low IQ. For example, Hans Eysenck, one of the foremost IQ researchers of the 20th century said exactly the same thing in his 1971 book “Race, Intelligence, & Education,” claiming that the Irish IQ was very close to that of American blacks, and that the Irish/English IQ gap was almost exactly the same size as the black/white gap in the U.S., being roughly a full standard deviation.

    Once again, for many decades the poster ethnic group for low IQ was the Irish.
    Multiple studies consistently found that the Irish had low IQs, and the conclusion was that the Irish are dumb.

    Except they aren’t. Not any more, anyway.
    “This rapid convergence between Irish and British IQs should hardly surprise us. According to the GSS, the Wordsum-IQs of (Catholic) Irish-Americans rank among the very highest of any white ethnic group, with a value almost identical to that of their British-American ethnic cousins.”

    The Irish IQs started rising towards the end of the 20th century and are now equal to the British and American IQs.

    Other groups in the past also were considered dumb for scoring low on IQ tests, including the Germans, Slavs, Greeks, and Italians. They also showed rising IQs over time.

    The point here is that a vast number of variables affect IQ scores.

    These include socio-economic status, pre- and postnatal nutrition, the environment children are raised in (especially ages 0 to 2), education, stress levels, and so on.
    The IQ of any given population can change over time as their world changes.

  14. raven says

    How Chinese cheat the I.Q test scores

    As is noticed in IQ testing, average IQ in cities is 15 points higher than in rural areas.
    and
    Anyway, here are the low-scoring IQ samples of China’s population:

    Wang, 2001 (average IQ of 76–81)
    Source: http://www.fluoridealert.org/wp-content/uploads/wang-2001.pdf
    Average IQ: 81 and 76

    Hong, 2001 (average IQ of 65–82)
    Source: http://www.fluoridealert.org/wp-content/uploads/hong-2001.pdf
    Average IQ fluctuates between 65 and 82, depending on the amount of fluoride in the water. Shandong province, China.

    Li, 1995 (average IQ of 79–89)
    Source: http://www.fluoridealert.org/wp-content/uploads/li-1995.pdf
    Average IQ is between 79 and 89. Guizhou province, China.

    Yang, 1994 (average IQ of 76, 81)
    Source: http://www.fluoridealert.org/wp-content/uploads/yang-1994.pdf
    Average IQ is 76 and 81. Jinan, China.

    An, 1992 (average IQ of 76, 84)
    Source: http://www.fluoridealert.org/wp-content/uploads/an-1992.pdf
    Average IQ is 76 and 84. Guyang county, Inner Mongolia.

    Guo, 1991 (average IQ of 76, 81)
    Source: http://www.fluoridealert.org/wp-content/uploads/guo-1991.pdf
    Average IQ is 76 and 81. Hunan province, China.

    Lower results are from mild-fluoride regions and higher results are from optimal conditions.

    The question arises: why did Lynn ignore these samples of China’s population? Well, if you go out with a propaganda goal of proving one nation smarter than another, such result manipulation is a must.

    On top of that, these are samples that were done in very optimal conditions (low fluoride, etc.) and in top-notch parts of China.

    The racists usually claim that Chinese IQs are higher than those of whites.

    That may be so, but they don’t have the data to show it.
    Most of the studies of Chinese population-level IQs use heavily biased samples, sloppy research methods, and carelessness that is close to fraud.

    If you actually look at internal Chinese data, it is all over the place depending on the populations being measured.
    It is highest in the cities.
    In rural regions it is quite often very low.

    With that sort of variability, IQ tests aren’t measuring some intrinsic property that is unchanging and unchangeable.

  16. says

    endlessly pissed AI was politicized into left=anti, libertarian=pro with completely distorted asshole ideas across the spectrum. it is the most interesting, cool, and useful tech I’ve seen in my life, but I’m shitbird-by-association for liking it, and I have to listen to unlimited complete shit takes from my former comrades on it.

    I eagerly await the dust settling on this. appropriate uses of the tech so normalized that knee-jerk objection doesn’t come up from the one side, and magic hoodoo elon musk shit is less common on the other.

  17. chrislawson says

    raven, why are you using data from an anti-fluoride network that posts Google-translated papers selected to give the impression that fluoride reduces IQ?

  18. chrislawson says

    That quote: ‘IQ is real, the measures of it are good, they are not racist, and yes, there are group level differences in IQ where Jews and Asians are the smartest but we don’t know exactly how much…’

    Clearly whoever wrote that is not intelligent enough to see that the ‘we don’t know how much’ part directly contradicts the ‘real’ and ‘measures of it are good’ parts of the sentence. And the pat determination that IQ measures ‘are not racist’ is quite simply a lie. The research base showing the encoded racism in testing is overwhelming. Modern tests have been designed to minimise racial and cultural biases, but it’s impossible to completely strip racism from IQ tests when the culture itself is racist.

  19. drsteve says

    @14-G.A.S. Luigi, is that you?

    Apologies if not, I just used to know someone who was known to use that exact same nom de plume. . .

    In any case, I am both steadily, increasingly leftist as I get older, and also network-adjacent to a lot of AI and general Silicon Valley hoodoo (including having a smaller Zuckerberg-Musk number than, say, Natalie Portman’s Erdos-Bacon number).

    So I’m increasingly sympathetic to the idea that the post-GenX generations really ought to make it a point to develop and adopt constructive policy positions on AI and its use cases, for our own purposes.

    You may find the work of this scholar in that general vein interesting:

    https://blog.cjtrowbridge.com/

  20. wzrd1 says

    @14, the biggest problem is, too many idiots on both sides of the spectrum look upon current AI as some great oracle, which it isn’t. It’s a tool that’s invaluable in certain applications and useless in others, just as a hammer is invaluable in some applications and useless in others.
    If one limits data presented to an AI and asks specific questions, one should reasonably expect correct answers. But, throw the entire internet at the thing as a data source, it’s like hammering on a box of nails and expecting one to nail a couple of boards across the room together.
    Wanting specialist analysis from a general AI is the wrong application of the wrong tool, just as giving a dentist a hammer to perform an appendectomy would be both the wrong application and the wrong tool.
    Meanwhile, a specialist AI was given tons of data sets on a galactic center to analyze an SMBH of certain fame, refining the image tremendously and presenting new data for astrophysicists to analyze. Right application, right tool, wonderful results.
    The Great Oracle died at Delphi, AI ain’t her replacement.

  21. kome says

    Stepping away from the intelligence conversation and focusing on the stochastic parrots/large language models/whatever-the-fuck you want to call them:

    Psychics and mediums claim to be able to talk to your dead relatives. Some of these large language models are being advertised by companies as something you can train using a dead loved one’s text communications so it can learn to create new communications in the voice/style of your dead loved one. Functionally, I don’t see a difference here. And it worries the fuck out of me because we know from a lot of research that psychics and mediums actually cause harm to people by inhibiting the grieving process from resolving in a healthy manner. But the fact that things like Replika are built using code may give it the appearance of more scientific legitimacy. I worry how this may encourage more people to retreat to these technological safe havens rather than endure one of the most tragic but still authentic human experiences, and what that may do to our development.

    I haven’t gotten through the whole Adam Conover video yet, but in all of the other discourse I’ve seen about the real threats posed by these machine learning algorithms (e.g. misinformation spread, toxic parasociality, cheating in education) I’ve yet to see people find it abhorrent that these programs are being advertised as a way to effectively maintain communication with a dead person.

    Anyway, figured I’d throw that out here in case anyone wants to chime in or be equally horrified about this angle.

  22. John Morales says

    kome,

    I’ve yet to see people find it abhorrent that these programs are being advertised as a way to effectively maintain communication with a dead person

    Well, me too, but I’ve also yet to see these programs being advertised as a way to effectively maintain communication with a dead person. Not a coincidence, perhaps.

    Anyway, that conceit is a longstanding trope in SF (and in sci-fi too).

    Anyway, figured I’d throw that out here in case anyone wants to chime in or be equally horrified about this angle.

    Not me. I don’t find it even slightly horrifying.

  23. hemidactylus says

    I usually come across as a splitter, as some have already seen; I’ve gone apoplectic multiple times over the vague catchall term “social darwinism” (barf). This TESCREAL acronym seems a hyperlump of what could be disparate things. I eye-roll at what I’ve read on transhumanism by proponents Kurzweil and Rothblatt, but they seem fairly benign. Of course Kurzweil himself pushes the Singularity extrapolation from Moore’s law and some nanobot goofiness. I didn’t get any inkling of Bell Curve that I recall from them.

    Breeding humans as the classic eugenicists preferred, by positive or negative means, seems different from genetic engineering, not that the latter is free from abuse. Freeing someone from maladies like sickle cell by genetic engineering would be different from sterilization or otherwise preventing procreation by carriers. Such genetically engineered elimination of deleterious traits is also different from attempts at genetically engineered augmentation toward perceived improvements in intelligence or physical strength that would create a larger division between economic haves and have-nots.

    Cyber augmentation or enhancement would also result in a socioeconomic divide. If Google Glass had ever gone further than coinage of the term “glasshole”, that might be one of the more benign augmentations. There could be cyber-based easing of disabilities, fraught with controversy enough among the disabled, but that seems different from uber-rich techbros spending oodles of trust-fund money or ill-gotten financial-market gains on putting themselves far ahead of the hoi polloi. The flight of Icarus, though, may portend ironic epic fail for them.

    I usually like Conover’s take on things but haven’t delved into the video PZ linked. I did mess with ChatGPT not too long ago. Not quite ready for prime time. I was impressed by the speed of response, but put off by its inability to say “I don’t know”. Too cocksure before becoming apologetic when called on errors. I queried it about a book I was reading, The Daughter of Doctor Moreau, and it was answering me based on The Madman’s Daughter, which was a serious howler, as both are based on HG Wells’ original book. I was able to train it toward the right book, but I’m not sure how it could delve into the details of the book itself apart from quick perusals of reviews and blogs. I wonder if books with a longer shelf life and in the public domain would yield better results. Stochastic parrot seems on its face to be an applicable term for my experience with ChatGPT. I might be a bit of a stochastic parrot myself though (imposter syndrome).

  24. John Morales says

    Perhaps some people are horrified that it may become more and more plausible that we ourselves are also fundamentally stochastic parrots. Not me, of course.

  25. hemidactylus says

    @23-John Morales
    I’ve heard Martine Rothblatt’s version of transhumanism described as “digital scrapbooking”: effectively compiling enough about a person’s life to render a simulated representation of them, so that it might communicate with you in that manner after their natural death. Kurzweil does his stuff at least partly because of an understandable, all too human obsession with bringing a semblance of his own dead father back to life.

  26. kome says

    @23
    Eternime and Replika are both platforms that are being billed as ways to achieve digital immortality, although in Replika’s case that’s just one potential use of the platform (and was the origin story behind the app) since it seems far more profitable to provide a service for digital surrogate erotic relationships.

    But, since you don’t find it horrifying, perhaps you don’t see it as akin to mediums and psychics claiming to speak to the dead. If that’s the case, would you mind explaining what you think the difference is?

  27. John Morales says

    hemidactylus, that’s Roko’s territory.

    And, in those sort of discussions, hardly anyone seems to notice the glaringly obvious fact that a simulation of you is not you, any more than a photograph or a doll of you is you.

    (Voodoo!)

  28. says

    how about, instead of literally embracing luddism and blowing shit all out of proportion, people look at the tech and its potential with lucidity and creative thought, as a chance to put ethics and philosophy into practice? think about problems with the tech and what people want to do with it, and come up with practical suggestions for laws, policies, and technological solutions to those issues? too much to expect from the supposedly intelligent side of the aisle? i see u steve @19

  29. John Morales says

    kome,

    But, since you don’t find it horrifying, perhaps you don’t see it as akin to mediums and psychics claiming to speak to the dead. If that’s the case, would you mind explaining what you think the difference is?

    Well, for one thing, mediums and psychics claiming to speak to the dead is their job, and it’s all they can do.

    For another, as long as it’s made clear in any marketing that it’s a program producing output from a collection of recorded data and metadata, it’s in no way a misrepresentation. Cf. my previous.

    I mean, sure, it can be used that way. It might be used that way.
    But then, look on the bright side: anyone foolish enough to believe that using the program is actually communicating with the deceased, and thus uses it, will be one less person seeing a physical practitioner who does that job.

    Forget about banning the tech, it’s already out there and already useful.
    And (dum dum doooom!) it’s only nascent.

    PS “Eternime is an Artificial Intelligence digital replica of you, built from your digital footprint (emails, social media posts, smartphone and wearables data etc.). This digital twin will learn from you, grow with you, help you and, eventually, live on after you die.”

    No pretense there that it’s the person, no foul.

  30. hemidactylus says

    @29- John Morales
    I have some recollection of that pesky “swampman”, but that was a previous iteration of “me” who had thought about that very relevant issue years ago.

    Granted the “you” transported from the Enterprise to some planet may lack continuity, but how much actual continuity has a person across their life?

  31. John Morales says

    hemidactylus,

    Granted the “you” transported from the Enterprise to some planet may lack continuity, but how much actual continuity has a person across their life?

    Constraint: we’re on the subtopic of a very specific use of stochastic parrots/large language models/whatever-the-fuck you want to call them.
    Not about transporters.

    (Not The Prestige, either!
    Though that was already a trope, too, and thus not an original conceit)

  32. kome says

    @31

    Regarding the claim that there’s no pretense there, psychics and mediums have in the fine print somewhere that the services they provide are “for entertainment purposes only” and yet that doesn’t stop people from accepting what they do at face value as contacting the dead. By contrast, Replika actually does instruct users to converse with their generated models as if they were sentient beings.

    Regarding the idea about “anyone foolish enough”, shouldn’t we still care about people – fools or not – who are being taken advantage of by companies, whether they are big tech companies, big scam-artist brands like Sylvia Browne, or individual freelance charlatans claiming to have some means of staying in touch with a dead person? Especially since, in the case of manifesting a message from a dead person, whether through psychic powers or a large language model trained on all of their digital communications, it’s very likely the people engaging with that content are perhaps not in the best psychological state of mind given, you know, grief caused by loss? The cavalier attitude you seem to be showcasing in your responses is sidestepping the fact that people engage with the idea of communicating with the dead or everlasting life because of grief and desperation, anxiety and fear. And I’m not sure if you’re aware that’s how you’re coming across, or if you are aware that’s how you’re coming across, that it can be perceived as kind of callously cruel to not be bothered by vulnerable people being taken advantage of with the intrinsically pretentious promises that are being sold to people regardless of the marketing speak used to legally protect these companies from fraud lawsuits.

  33. hemidactylus says

    @34- John Morales
    I think (or recall of a previous “me” thought IIRC) that with any talk of transhumanism or simulation of “you/me/whomever” the Swampman rears an ugly head. The transporter thing is just a way to address that thorny issue.

    A previous rendition of me may have thought being hosted in silico different from in vivo. And I happen to find rendition in writing different from audio, and both different from video. A splitter. A digital transhuman representation of “me” on a server farm seems not quite the current neuronal “me” nor the “me” I am struggling to convey here. Do “I” know “me”? See the Johari window. There’s so much of “me” beyond “my” grasp to be captured by a “digital scrapbook”. Who else can grasp who I am? So much for digital uploads or scrapbooks.

    But the scrapbooking idea, to capture aspects of a loved one, still seems okay as a lower-resolution capture replete with pixellation and buffering issues. Might be a way to reminisce, with reservations. If swampman is a problem, I’m not buying transhumanist promises. Lacking continuity where I experience in silico “life”? Nope. But that could be a digital hellscape if downloaded me is continuous. What if Muskrat enslaves continuously digital me? Yipes. Nope.

  34. John Morales says

    kome,

    Regarding the idea about “anyone foolish enough”, shouldn’t we still care about people – fools or not – who are being taken advantage of by companies, whether they are big tech companies, big scam artist brands like Sylvia Browne, or individual freelance charlatans claiming to have some means of staying in touch with a dead person?

    Sure, legislate that particular application to be illicit. Fine by me.
    Good idea. Though all it will do is move it underground; one can’t legislate people to be protected from themselves.

    Point being, it’s nothing to do with the underlying tech, is it?
    When I read your first comment it seemed to be about the tech, not specifically about some possible misapplication of it. Many of those, of course.

    I myself am in favour of regulating this emerging technology, starting now.
    I’m not unsympathetic to your general view that anything can be misused, and that this tech enables new forms of misuse, but I’m hardly horrified by it. And if stuff like the war on drugs has shown us anything, it’s that one can’t legislate away things people desire.

    And again, if one of the major worries is automating away stuff that currently only people can do, one of the minor benefits is that it can automate that scamming, by the same token.

    hemidactylus:

    But the scrapbooking idea to capture aspects of a loved one seems ok still as a lower resolution capture replete with pixellation and buffering issues.

    So? You got a scrapbook, fine. The aspects, not-so-much.

    If swampman is a problem I’m not buying transhumanist promises.

    It’s a very silly thing to call a problem.

    (Also, an old idea: https://en.wikipedia.org/wiki/Boltzmann_brain )

    Lacking continuity where I experience in silico “life”? Nope. But that could be a digital hellscape if downloaded me is continuous.

    Might as well worry about Heaven or Hell.

  35. John Morales says

    [OT]

    And I’m not sure if you’re aware that’s how you’re coming across, or if you are aware that’s how you’re coming across, that it can be perceived as kind of callously cruel to not be bothered by vulnerable people being taken advantage of with the intrinsically pretentious promises that are being sold to people regardless of the marketing speak used to legally protect these companies from fraud lawsuits.

    I’ve been commenting here and like places for quite a while, so I am aware of how some perceive me.

    Again: this taking advantage of vulnerable people (as if it were any worse than taking advantage of non-vulnerable people) is a perennial problem.

    I mean, what about catfishing? Very obviously, this tech can automate that, too.

    I shan’t elaborate, but I can offhand think of multiple other avenues of abuse of this new tech.

    But still, I hope for your own sake that you are consistent in your views, and view those outcomes also as horrifying. So very much horror!

    Anyway.
    Be aware that attempting an appeal to emotion or to conformity is not something that sways me.

  36. drsteve says

    Just wanted to co-sign Great American Satan @19 as well as John Morales @26 (and point out that my own comment @8 would seem to already provide a nice piece of supporting data for the latter idea 🦜)

  37. birgerjohansson says

    Re: @40
    Goddammit, I realised those supreme court judges will live forever.
    Not as good news as I thought.

    On the other hand, if the Brits keep Charles going for another 40 years they will be so goddamn tired of the institution they will abolish it when he dies.

  38. kome says

    @38

    It’s not an appeal to emotion to point out that your stance seems to be that vulnerable grieving people deserve to get taken advantage of. Caveat emptor, and all that. It’s merely a description of the moral stance you appear to be taking. And if you’re cool with that stance, which you appear to be, then we have nothing more to discuss, because you do not exist in a space that allows for common ground.

    I’m sorry that you are a person who thinks it’s okay for tech companies to exploit people at their most vulnerable.

  39. wzrd1 says

    The problem is, at what point do we regulate or restrict a technology that can be abused? When it’s being abused, restrict it away from the public, save if licensed? Before it can be abused, despite it already coming into common usage? Before it can be abused, based upon theorized future abuse?
    If the latter, when do we restrict fire and the use of tools?
    Because, at the end of it is the absurd, in the middle is only theory and at some point, we have to balance a modest risk to an entire society vs accepting that fools and their money have tons of friends on payday.

  40. John Morales says

    kome:

    I’m sorry that you are a person who thinks it’s okay for tech companies to exploit people at their most vulnerable.

    Turns out you are a person who claims to think I think it’s okay for tech companies to exploit people at their most vulnerable, despite what I’ve written.
    Very judgemental indeed.
    However, your sadness is inappropriate, because it’s meritless.

  41. jo1storm says

    Turns out you are a person who claims to think I think it’s okay for tech companies to exploit people at their most vulnerable, despite what I’ve written.

    Well, actually you kind of 100% are claiming that with this simple sentence.

    Be aware that attempting an appeal to emotion or to conformity is not something that sways me.

    kome’s argument was that people are creating technology and marketing it as a way to talk to the dead, in the manner of technological psychics and mediums. That’s unethical, concerning and should be banned.

    Your argument was, in this order:
    1) psychics and mediums are already doing it so what’s wrong with letting technology do it as well?
    2) banning the practice and making it illegal just moves it underground
    3) it is a service that people want and they will get their fill even if it is illegal. Everybody knows that psychics and mediums are just putting up a show so they get what they pay for. (“So serves them right they got scammed!” is heavily implied)

    First of all, psychics and mediums should be illegal, even as an “it’s just harmless fun” show. Second argument, good. It should be illegal, underground and hard to come by, and when somebody is caught doing it they should be hit with the full force of the law, with no “it’s just harmless fun” protection. And as for your third argument, that is literally every scammer’s argument I have ever heard.

    “They really want to believe you can buy a Rolex for 50$ but they know it is impossible. It is just a game and I am providing a service of making their make-believe a reality. It’s role-playing and they are getting what they paid for. For a few minutes, I let them live in a magical world where a Rolex costs 50$ and extremely rich people have to sell genuine diamond rings to a stranger for quick cash. I am doing them a favor.”

    As hard as it might be for you to believe, John, some people actually believe in psychics and mediums. And turning that old grift technological, giving it a new pool of potential victims, is not a good thing. Thus I agree with kome that it is a worrying development. Also, saying that (vulnerable) people are getting hurt and will be getting hurt by this technology is not an appeal to emotion or to conformity, it is stating a fact. Ignoring that fact makes you look like a psychopath who doesn’t care that other people are getting hurt. Otherwise, we might as well make all snake-oil sellers legal and remove the FDA completely.

  42. John Morales says

    jo1storm, hey.

    Well, actually you kind of 100% are claiming that with this simple sentence.

    You are about as wrong as one can be.

    (Care to essay the chain of inference you followed to reach that ridiculous conclusion?)

    kome’s argument was people are creating technology and marketing it as a way to talk to dead in a manner of technological psychics and mediums.

    Supposedly doing so, actually. I’ve discussed this matter already.

    As hard as might be for you to believe John, some people actually believe in psychics and mediums.

    Only a dolt could be so feeble at attempting to patronise.

    Also, saying that (vulnerable) people are getting hurt and will be getting hurt by this technology is not an appeal to emotion or to conformity, it is stating a fact. Ignoring that fact makes you look like a psychopath who doesn’t care that other people are getting hurt.

    To such as you, perhaps.

    More to the point, if you’d followed the discussion I had with kome you’d be aware that the issue was the supposed need to be horrified by this technology, later amended (or clarified, to be kind) to be horrified about this specific and particular application of it.

    Bah.

    Calling me a psychopath is of course not an attempted appeal to emotion.

    The accusation doesn’t carry any weight because it lacks merit.

    Otherwise, we might as well make all snake-oil sellers legal and remove FDA completely.

    Nevermind what I wrote, attack some smoke phantom. Not even straw, there.

  43. Silentbob says

    Only a dolt could be so feeble at attempting to patronise.

    And so it was, children, that the singularity was reached. And the evil troll disappeared up his own adumbration.

  44. jo1storm says

    @47 Ever heard of mail and wire fraud? With the development of new technology, the dangers of that technology were recognized and a new sort of crime was invented in the legal system out of thin air. Mail fraud was first defined in 1872 and wire fraud was first defined in 1952. First you get horrified by the potential application of technology, then you ban that application of technology by making it illegal. That’s the normal way.

    @48 Silentbob, that made me chuckle. Thanks for making my day a bit better.

  45. John Morales says

    jo1storm:

    Ever heard of mail and wire fraud?

    Yes. More to the point, fraud is hardly a new technology.

    Hey, ever heard of catfishing?

    If you’d actually followed my comments here, you might have noted I already wrote “I shan’t elaborate, but I can offhand think of multiple other avenues of abuse of this new tech.”

  46. KG says

    A good video (although I do find Conover annoying), which raised a lot of significant issues, told me that the “Sparks” paper (which I thought quite good – it does actually point out serious limitations of GPT-4) was authored by Microsoft drones, and pointed me to the “stochastic parrots” paper. Interestingly, Margaret Mitchell, one of that paper’s authors (under a rather easily decoded pseudonym), interviewed in New Scientist of 2023/04/27, is rather less dismissive of LLMs’ capabilities than Gebru and Bender. In answer to the question:

    Is there anything that has surprised you in what large language models can do?

    she says:

    We have recently seen what people are calling “emergent behaviours” – abilities that go beyond language processing and can seem like human reasoning. You can give an LLM math problems or instructions for writing computer code. You can give them stories and ask them to reason about the characters. And they can do these things. It’s not at all clear how that happens. They give the impression that they’re able to understand the world, in some sense, having just been trained on enormous amounts of human-generated text. The question is, are they doing something like human reasoning? Or are they just using sophisticated statistical associations, which doesn’t seem to be the way we reason.

    I’d say maybe those “enormous amounts of human-generated text” and “sophisticated statistical associations” may contain considerable implicit knowledge of the world. She does later in the interview say:

    I think simply scaling up these models is probably not going to take us to the kind of human-like understanding that we want… to get to that point, I think we will need some different kinds of architectures. For example, language models like GPT-4 have no long-term memory so they have no recollection of past conversations and they don’t care, in some sense, about what they have said in the past… If a system doesn’t have any motivations, or any of its own goals, maybe it can’t achieve the kind of intelligence that we have.

    (She also says that human intelligence is very specific to our evolutionary niche and it might not be as general as we like to think it is. I disagree, because of the way we can develop “cognitive prostheses”, including but not limited to collaboration with others.)
    Finally on a personal note, the central section of my very-unlikely-ever-to-be-written-SF-epic was going to start with something like the following:
    “Yes. I need to speak to Professor Iguchi, urgently!”
    “I’m sorry”, replied the robosec, “The professor died last night. Would you like to speak with his ghost?”

    But I see the term “ghost” for a post-mortem avatar is already current!

  47. wzrd1 says

    With the development of new technology, the dangers of that technology were recognized and a new sort of crime was invented in the legal system out of thin air.

    So, the courts created law out of thin air? How odd, given it’s the legislature that crafts laws! Indeed, our legislature isn’t very proactive, and on the few occasions it has been, it was wildly off target in its legislative efforts. Instead, they’re reactive and craft laws in response to crimes that have already occurred.
    The legislature is so predictably poor at predictive legislation that the antiabortion crowd aren’t seeking much in new legislation, preferring to resurrect the nearly extinct Comstock law to forbid “anything used for an abortion”, which would de facto also prohibit shipment of quite a few medications, medical supplies and most surgical instruments, if followed the way the extremists are suggesting.
    I suggest for that last, let them run with it and see to it that they wallow in their own filth when thousands are denied necessary lifesaving surgery due to their efforts. Then, let’s see if they’ve the fitness to survive what they enraged.

  48. snarkhuntr says

    The various beliefs that make up TESCREAL have always struck me as just another kind of religion forming around our current tech-bro aristocracy. Much as previous religions did, their primary functions will be to reassure the Aristocrat that their good fortune was not, in fact, accidental – that they deserve to occupy their elevated position in society due to (pick one or more, as emotionally required) their [superior genes] / [work ethic] / [racial supremacy] / [value to the future] / [amazing intellect]. No inconvenient mention need ever be made about the starting advantages that a person might have had – nor any reference to ‘luck’.

    As the acolytes of these new faiths can schmooze their way closer to people with money and power, they will tune their offerings to appeal to the specific emotional needs of the specific aristocrat they’re trying to extract some crumbs from. Does the Aristo harbor vague feelings of guilt because they’re living an incredibly lavish lifestyle while children starve and the world burns? Effective Altruism ™ has the answer! For a low, low recurring donation of a few million dollars a year – the aristo can acquire a personal Futurist-Tech-Priest who will assuage any guilt or doubts they might have regarding their lifestyle. “Mr. Bankman-Fried – it’s not an obscenity that you’re spending millions of dollars each year on luxury catering alone – by creating and living this wealthy lifestyle you’re actually investing in the future of trillions of potential humans who might not exist if you can’t rest your exceptional brain while you come up with the ideas that will generate the money that you’ll use to really do some altruism. Please have another braised endangered sea-scallop.”

    Is the particular Aristo you’re courting terrified of death? Boring old-fashioned religions promised a vaguely-defined paradisiacal afterlife. New, modern, scientific religions have evolved past that. Now we understand that the real eternal life is living inside the computer! You’re already using it all day, why not be inside it? Or perhaps that doesn’t appeal, and the Aristo is too attached to their physicality – the Futurist-Tech-Priest can offer them an extended, possibly infinite, lifespan in their own physical body with concoctions of potions and the promises of future medical advances that are Just Around The Corner ™. This is a classic grift, of course – but it only gets more effective as the Aristo ages and gets ever-more-worried about their physical well-being. Boring, old-fashioned science-based doctors have an irritating habit of giving bad news. Luckily a Futurist-Tech-Priest isn’t bound by the limiting confines of what actually exists. By referencing new advancements that are “Just Around The Corner ™”, they can ease the aristo to a hopeful, comforted death while extracting money, grants and access to other powerful folks.

    Is your Aristo a racist? Don’t fret – there are hundreds or thousands of pseudo-scientific papers and books you can use to make sure they never need to question their genetic superiority to whomever they dislike. Why not indulge in some speculation about Eugenics? A practice where the Aristo can imagine that they’ll never have to look at a kind of person they don’t want to see – and tell themselves they’re improving the species by doing so.

    It’s just the same old religious con, dressed up in fancier language. They’re telling the kings and princes of this age whatever they need to to keep extracting money from them – same as it ever was.

  49. jo1storm says

    @53

    So, the courts created law out of thin air? How odd, given it’s the legislature that crafts laws! Indeed, our legislature isn’t very proactive and on the few times it has been, was wildly off target in their legislative efforts. Instead, they’re reactive and craft laws in response to crimes that have already occurred.

    Did I say anything about the courts? I said that it was invented in the legal system out of thin air. That involves the legislature. The legislature is a part of the legal system, by definition.

    When is a crime not a crime? When it is not on the books. For example, catfishing is not a crime (nor should it be one). But it can be an action that leads to a crime, like fraudulent misrepresentation and rape by fraud. You can commit a bank robbery and that’s one crime. Or you can rob a bank by hacking into their systems using the internet and that’s a different crime with different penalties. You still took money that doesn’t belong to you. The difference is that before the internet existed, you physically couldn’t commit the second sort of crime (well, you could use a phone to call a bank and say there’s a bomb inside, then rob it during the confusion or panic. Or say that you are the bank’s CEO and tell everyone to close it for a day).

    I agree that legislature is usually reactive but they are sometimes proactive. Comstock law is frankly horrific.

  50. John Morales says

    jo1storm:

    For example, catfishing is not a crime (nor should it be one).

    &

    Also, saying that (vulnerable) people are getting hurt and will be getting hurt by this technology is not an appeal to emotion or to conformity, it is stating a fact.

    “Victims of online deception are often left heartbroken, bankrupt, destitute or even suicidal. While some predators face charges for fraud or extortion, many who commit these irreparable harms will never face any consequences. Is it time to criminalise catfishing?”

    https://lsj.com.au/articles/the-lure-of-the-law-should-catfishing-be-a-crime/

    “Heartbroken and alone, Renae Marsden was just 20 years old when she sent her mother a loving text message and ended her life. It is unlikely she ever knew the horrifying truth of the man she loved; that he was a sickening work of fiction, concocted by a former friend.”

    So, catfishing should not be a crime, but pretending to channel the dear departed should be?

    Hm.

    Comstock law is frankly horrific.

    Unlike catfishing, which you hold should not be a crime.

    (You do know what you’d say about me if I made the very same claim you’ve made, no?)

  51. StevoR says

    Isn’t catfishing already kinda the crime of obtaining money and stuff by false pretences?

  52. jo1storm says

    Catfishing should not be a crime for the same reason investigative journalism shouldn’t be a crime. You can’t have one without the other. There is a huge difference between fraudulent misrepresentation and catfishing.

    “Victims of online deception are often left heartbroken, bankrupt, destitute or even suicidal. While some predators face charges for fraud or extortion, many who commit these irreparable harms will never face any consequences. Is it time to criminalise catfishing?”

    Look who is now appealing to emotions, or maybe coming around to my side of the argument? So, which of the two is it?

    No, it is not time to criminalize catfishing. Yes, catfishers should face charges of fraud or in certain cases, rape by fraud if there is evidence of those crimes because that’s provable harm but the act of catfishing itself shouldn’t be criminalized. But if you criminalize catfishing, the possible and likely consequence would be undercover journalists going to prison for catfishing and person responsible for uncovered misdeeds getting away scot-free. In a way, catfishing is the same as adultery and lying. Those are bad things that shouldn’t be criminalized either.

  53. John Morales says

    I see, jo1storm. You’re selective in what vulnerable people are worthy of protection via legislated regulation of technology.

    Wrecked lives, suicides, the suffering of those most vulnerable who get caught in catfishing scams are fine in your estimation — just the cost of not legislating about it. Because it’s not as bad as pretending a chatbot is actually some dear deceased, which most definitely is horrifying. Grieving process and all.

    Nevermind eternal shame, a broken heart, a feeling of having been used, etc on top of any financial losses. And hey, now the grieving can start!

    (Fucking hypocrite, you)

    Anyway.
    As I noted, plenty of other avenues for misuse.

    Here is a recent story:
    https://www.abc.net.au/news/2023-04-12/artificial-intelligence-ai-scams-voice-cloning-phishing-chatgpt/102064086

    (Many more, of course)

  54. jo1storm says

    Phishing is already illegal. Not all harmful behavior can or should be criminalized. Adultery, lying, catfishing… They are all in the same boat. People have committed suicide because of all of them. Pretending to speak with the dead is always a fraud.

    Real quick question: Are you proposing that adultery should be criminalized?

  55. John Morales says

    FFS, jo1storm.

    kome was asking if anyone else was horrified by a potential (but very niche) application of this tech. I responded to the effect that at most it would automate what already happens, and in the process do away with the actual people doing it right now. Because it’s just a schpiel from them, and the chatbot can do that perfectly well.

    Now you ask me “Are you proposing that adultery should be criminalized?”.

    Very stupid of you, of course. What do you reckon?

    (But hey, automated adultery! It’s alliterative :) )

  56. jo1storm says

    Now you ask me “Are you proposing that adultery should be criminalized?”.

    Very stupid of you, of course. What do you reckon?

    I don’t know what you are thinking. That’s why I am asking you. So?

    I responded to the effect that at most it would automate what already happens, and in the process do away with the actual people doing it right now. Because it’s just a schpiel from them, and the chatbot can do that perfectly well.

    But you are ignoring a really big detail here. Can you see what it is and what makes it fraudulent?

  57. John Morales says

    (sigh)

    Whatever weird mental meanderings made you imagine I was in any way proposing that adultery should be criminalized is left to the imagination.

    It has absolutely zero to do with chatbots or their use as pretend deceased people for the purposes of stopping the bereaved from completing proper grieving, which was the nub of this digression into which you interjected your stupidity.

    But you are ignoring a really big detail here. Can you see what it is and what makes it fraudulent?

    I can laugh at you.

    Heh.

  58. John Morales says

    Mind you, at least you brought to mind Mr Universe. Give you that, Jo.
    Let’s get Socratic. Is fucking a sexchatbot considered adultery?

    Probably not, right? Just a machine.

    Except, perhaps, to the vulnerable proportion of those who fuck them, who might think they are in a real relationship.

    (And hey, these people already exist)

  59. jo1storm says

    It has absolutely zero to do with chatbots or their use as pretend deceased people for the purposes of stopping the bereaved from completing proper grieving, which was the nub of this digression into which you interjected your stupidity.

    Wait, you are the one who tried to change that discussion into a discussion about catfishing and its legality/criminalization. That’s the only reason why I asked you about adultery. Catfishing should not be criminalized, and adultery shouldn’t be either.

    People pretending that they can train chatbots to be like deceased people and offering a sort of “e-psychic service” should be, as well as all psychic services. Agree or disagree?

  60. John Morales says

    Wait, you are the one who tried to change that discussion to discussion about catfishing and its legality/criminalization.

    You think that because you’re kinda stupid.

    Here is when I brought that in: “I mean, what about catfishing? Very obviously, this tech can automate that, too.”

    No response to that from kome.

    It was a specific instance of one of the many, many ways this tech can be leveraged.

    Here, again: “I shan’t elaborate, but I can offhand think of multiple other avenues of abuse of this new tech.”

    Again: kome was freaking out about one very, very niche application which in the scheme of things is trivial. This was their contribution to a discussion about AI and its purported dangers.

    That’s the only reason why I asked you about adultery. Catfishing should not be criminalized, adultery shouldn’t be either.

    It’s more harmful to more vulnerable people than chatbots pretending to be the dead, that’s the fucking point! Geez, you’re slow.

    People pretending that they can train chatbots to be like deceased people and offering a sort of “e-psychic service” should be, as well as all psychic services. Agree or disagree?

    What the fuck does that have to do with the tech? Read the post title, please.

    Since you are so very curious, here is the answer (@37):
    “Sure, legislate that particular application to be illicit. Fine by me.”

    Not saying it should be done, saying I’m fine with it being done.

    Had you actually perused my actual comments, you’d have seen how specific I was.

    Why you ask me to reiterate what I’ve already clearly stated is pretty obvious, though.
    Best you can do, I suppose.

  61. jo1storm says

    I see. So you agree with kome and me, you’re just an a-hole about it. As is your usual wont. Carry on.

  62. John Morales says

    I see. So you agree with kome and me

    Heh heh heh.

    If only you’d read my #23 before ejaculating.

  63. John Morales says

    I think that, other than being rather dated and having nothing to do with LLMs or talking to the dead, the only relevance I can see is that you think this is about the purported problem at hand.

    If so, then it shows that it already existed 11 years ago.

  64. jo1storm says

    @John Morales

    It is unlike you to be cagey like this. What’s your opinion on the video? Is making a twitter bot (a crude one compared to what we have now) that has a similar name to the real person and a heavily pixelated profile image with the merest hint of similarity catfishing? If so, how would you criminalize it or write a law to do it?

    What’s your opinion on the three researchers who created it? Do you find them annoying? Do you find their reasoning sound or unsound? Do you find their goal noble or misguided? Or is the goal noble but the way they go about it misguided?

  65. John Morales says

    It is unlike you to be cagey like this.

    Obviously, it’s entirely like you to imagine I’m being cagey, when I am being about as direct and explicit as one can be.

    Again, I think the only relevance it holds is that you entirely misapprehend the subject at hand.

    The content? Boring, banal, trite, uninteresting, and dated.
    And hardly about a vulnerable person. That was kome’s worry, no?

    (Won’t someone think of the vulnerable?)

    Is making a twitter bot (a crude one compared to what we have now) that has a similar name to the real person and a heavily pixelated profile image with the merest hint of similarity catfishing?

    FFS. Again: there’s a million and one possible applications, someone freaks out about mediums, and you wank on about some fucking ancient Twitter spambot as if that were in any way comparable.

    It’s not making a person fall in love with a fake person. It’s not making parents give money to a scammer thinking they’re helping their daughter. It’s not talking to a fake financial advisor. Etc etc etc.

    You really are clueless. And way behind the times.

    If so, how would you criminalize it or write a law to do it?

    Topic at hand is some conversation about AI.

    What’s your opinion on the three researchers who created it? Do you find them annoying? Do you find their reasoning sound or unsound? Do you find their goal noble or misguided? Or is the goal noble but the way they go about it misguided?

    Bah. I could hardly care less. Go for it, impersonate me on Twitter. Like I give a fuck.

    Again: kome’s fearful terror specifically about possibly using AI to milk saps who imagine they’re speaking to the dead has fuck-all to do with your wankings about some spambot back in the day.

    (Might as well worry about Eliza)

  66. jo1storm says

    Topic at hand is some conversation about AI.

    And possibly using that AI to catfish people. Thus I asked you if this crude AI is an example of catfishing and if so, how would you criminalize the practice and write a law to protect people against it?

    Again: kome’s fearful terror specifically about possibly using AI to milk saps who imagine they’re speaking to the dead has fuck-all to do with your wankings about some spambot back in the day.

    It is fairly relevant, actually. That crude spambot was specifically made for two reasons: 1) to trick those “milk saps” into thinking that it is really Jon Ronson’s account on twitter and 2) to mildly annoy Jon Ronson. In fact, if you paid attention to the video, you’d have seen that they made the bot to annoy Jon Ronson and then started heavily gaslighting him once he got annoyed and confronted them about it. That’s why I asked you if you got annoyed by them, because it should have infuriated you if you had any sense of empathy towards strangers.

    How is that related to the topic at hand? We have more convincing chatbots now and, unlike that one targeting and doing mild harm to a single person, they are now targeting and doing bigger harm to a larger population. It is literally the company’s business model.

  67. John Morales says

    jo1storm, I do like our repartee.

    And possibly using that AI to catfish people.

    Learning from experience is a thing about sapient beings.

    You (early on): “There is a huge difference between fraudulent misrepresentation and catfishing.”
    You (much later): “Is making a twitter bot (a crude one compared to what we have now) that has a similar name to the real person and a heavily pixelated profile image with the merest hint of similarity catfishing?”

    Let’s see if repeated exposure gets through:
    “Here is when I brought that in: “I mean, what about catfishing? Very obviously, this tech can automate that, too.”

    No response to that from kome.

    It was a specific instance of one of the many, many ways this tech can be leveraged.

    Here, again: “I shan’t elaborate, but I can offhand think of multiple other avenues of abuse of this new tech.”

    Again: kome was freaking out about one very, very niche application which in the scheme of things is trivial. This was their contribution to a discussion about AI and its purported dangers.”

    Thus I asked you if this crude AI is an example of catfishing and if so, how would you criminalize the practice and write a law to protect people against it?

    No, it isn’t.

    (It follows your “if so” is moot)

    It is fairly relevant, actually. That crude spambot was specifically made for two reasons: 1) to trick those “milk saps” into thinking that is really Jon Ronson’s account on twitter and 2) to mildly annoy Jon Ronson.

    Heh. Is that your attempt at “A good and lively conversation about bad, tired AI”?

    In fact, if you paid attention to the video, you’d have seen that they made the bot to annoy Jon Ronson and then started heavily gaslighting him once he got annoyed and confronted them about it.

    In fact, if you paid attention to my #56, you’d have seen me quoting “Heartbroken and alone, Renae Marsden was just 20 years old when she sent her mother a loving text message and ended her life. It is unlikely she ever knew the horrifying truth of the man she loved; that he was a sickening work of fiction, concocted by a former friend.”

    But that should not be proscribed, in your estimation, unlike annoying some dude and gaslighting him when he got annoyed.

    (You have no idea, do ya?)

    How is that related to the topic at hand? We have more convincing chatbots now and unlike them targeting and doing a mild harm to a single person, they are now targeting and doing bigger harm to a larger population.

    The harm is hardly comparable, is it?

    Yet you wish to criminalise the annoyance and allow the life-destroying applications.

    (And you call me callous!)

    That’s why I asked you if you got annoyed by them, because it should have infuriated you if you had any sense of empathy towards strangers.

    Presumably, you spend all your time being infuriated, since there are so many wrongs in the world and you have O so much empathy towards strangers.

    (Presumably, I am not a stranger to you, since your empathy towards me is not exactly effusive)

  68. John Morales says

    PS ah well, bored.

    to trick those “milk saps”

    No. The actual words were: ‘to milk saps’.

    See, there ‘milk’ functions as a verb, not a noun.

    milk
    verb [ T ]

    /mɪlk/
    To milk something or someone is to get as much from that thing or person as possible

    I speculate that perhaps you were trying to ape my techniques, as often happens during interactions with the doltish. Cargo-cult type of worship.

    (But hey, "milk saps" is almost as good as "raisin dates")

  69. jo1storm says

    No response to that from kome.

    You are also having a conversation with me and not bothering to respond to my questions either.

    Yet you wish to criminalise the annoyance and allow the life-destroying applications.

    Now you are just trolling. I never said that specifically. I asked you a question: do you consider that annoyance catfishing? "No" would have been sufficient, in which case I would have asked you a follow-up question.

    No, it isn’t.

    (It follows your “if so” is moot)

    What is catfishing for you, then, and how exactly would you criminalize it and write a law to protect people against it? You have been dancing around that question for a while now.

    Here, again: “I shan’t elaborate, but I can offhand think of multiple other avenues of abuse of this new tech.”

    And again and again, when I actually ask you to elaborate about catfishing, you refuse! Again I ask: what is catfishing FOR YOU, and how exactly would you criminalize it and write a law to protect people against it?

    The harm is hardly comparable, is it?

    It’s a scale, isn’t it? On the one side of it is what was done to Jon Ronson, on the other you have catfishing and actual fraud, which somebody wants to make a business model out of.

  70. jo1storm says

    PS: Isn’t the correct thing to write then “to bilk saps”, since we are talking about fraud and all?

  71. John Morales says

    jo1storm, this is great.

    No response to that from kome.

    You are also having a conversation with me and not bothering to respond to my questions either.

    Mate! That was from #67, and it was a direct and specific response to you.
    I quoted it again because it’s not sinking in.

    Yet you wish to criminalise the annoyance and allow the life-destroying applications.

    Now you are just trolling. I never said that specifically.

    Heh.

    You wrote: “Catfishing should not be criminalized”.
    This, after I’d already adduced its effects, and reiterated them since.

    For the third time: “Heartbroken and alone, Renae Marsden was just 20 years old when she sent her mother a loving text message and ended her life. It is unlikely she ever knew the horrifying truth of the man she loved; that he was a sickening work of fiction, concocted by a former friend.”

    Catfishing. You don’t think it should be criminalised.

    You wrote: “That crude spambot was specifically made for two reasons: 1) to trick those “milk saps” into thinking that is really Jon Ronson’s account on twitter and 2) to mildly annoy Jon Ronson. In fact, if you paid attention to the video, you’d have seen that they made the bot to annoy Jon Ronson”

    Now that’s infuriating, for you. Right?

    Again: a fucking long way from Jon Ronson to vulnerable people whose grieving process may be interrupted by employing a chatbot to pretend to be a dear departed one.

    What is catfishing for you then [etc]

    If you’d looked at my #56, you might have got an inkling.

    Basically, it is what it is.

    And again and again, when I actually ask you to elaborate about catfishing, you refuse!

    Here, again: “I shan’t elaborate, but I can offhand think of multiple other avenues of abuse of this new tech.”

    And again and again, when I actually ask you to elaborate about catfishing, you refuse!

    It seems so to you because you are… um, challenged.
    I understand you don’t get that I offered catfishing as anything other than an arbitrary example of the many, many (or, as you quoted, “multiple other avenues”) ways in which currently existing scams could be automated by LLMs.

    Again I ask, what is catfishing FOR YOU and how exactly would you criminalize it and write a law to protect people against it??

    The irony of your #66 surely eludes you.

    It’s a scale, isn’t it?

    Oh, sure.
    A continuum between suicide of a young woman and the annoyance of an adult man.

    (Though, according to you, the cause of the former is not to be criminalised, only the latter)

  72. jo1storm says

    This is getting very annoying.

    You offer something as an example of potential use of technology for evil. I ask you to elaborate. You refuse to do so. I ask you how would you prevent harm, then? You said, criminalize it. I tell you that you can’t and shouldn’t criminalize it (same way as you can’t and shouldn’t criminalize adultery and lying) and it shouldn’t even be attempted because of all the harm that attempt would do. You refuse to engage with that argument and instead call me callous and unfeeling.

    So I ask you for the last time: how exactly would you define and criminalize catfishing (and thus also criminalize using AI tools for that unsavory practice)?

  73. John Morales says

    This is getting very annoying.

    You offer something as an example of potential use of technology for evil. I ask you to elaborate. You refuse to do so.

    Heh.

    Point is, the only new and problematic aspect of this new tech is that such uses can be automated, but they are extant. So kome’s horrification is weakly-based, since all that will happen is that the (limited) set of people who are sufficiently vulnerable to fall for that scam will be scammed by a computer program instead of by an actual person, and the actual scamming person will be out of a job, because the chatbot will do it.

    I ask you how would you prevent harm, then? You said, criminalize it.

    Heh heh. Quote me, if you imagine that.
    I said no such thing. I was remarking on the peculiarly specific issue kome brought up.

    I tell you that you can’t and shouldn’t criminalize it (same way as you can’t and shouldn’t criminalize adultery and lying) and it shouldn’t even be attempted because of all the harm that attempt would do. You refuse to engage with that argument and instead call me callous and unfeeling.

    Heh heh heh.

    This, from the specimen who wrote thus: “Ignoring that fact makes you look like a psychopath who doesn’t care that other people are getting hurt.”

    Pure psychological projection; you are imagining that I have done what you do.

    (The irony is truly tasty, but)

    So I ask you for the last time: how exactly would you define and criminalize catfishing (and thus also criminalize using AI tools for that unsavory practice)?

    <snicker>

    https://www.esafety.gov.au/young-people/catfishing

    How you imagine I could criminalise it is left to speculation; presumably, I’d get into politics and then become leader of a nation and then inform myself of all the relevant issues and likely consequences and seek the best advice, and then I would influence the legislature, or something like that. Not very likely, but that’s how I’d likely do it.

    (Tell me again how I’m somehow avoiding answering you, it’s kinda funny)

  74. jo1storm says

    The definition you provided is meant to explain to children and teenagers what catfishing is. It is not fit to be a legal definition of a crime.

    How you imagine I could criminalise it is left to speculation; presumably, I’d get into politics and then become leader of a nation and then inform myself of all the relevant issues and likely consequences and seek the best advice, and then I would influence the legislature, or something like that. Not very likely, but that’s how I’d likely do it.

    Funny answer and practically meaningless. I ask you how you would legally define it, and you give me a definition for children which, if used as a legal definition, would make people like investigative journalists criminally liable, as I warned you it would. Unfortunately, you are one of those a-holes I sincerely hoped you weren’t.

    A person says they are worried about new technology being used to scam people, to reinvent the old grift and commit it in new ways. You say, “You know what? That’s one evil use of that technology, but there are worse uses, so don’t worry, be happy!”. When asked about those worse uses and how to prevent and punish them, you refuse to give an answer. The main issue here which you are ignoring is that most of those worse uses are already illegal (fraud, impersonation to gain illegal access etc). The one kome is worried about is explicitly not illegal. The one other worse use you mentioned (catfishing) is not illegal either. The solution kome has offered, to make it illegal, is actually viable and can be easily implemented. Unlike the solution for catfishing which is neither.

    So kome’s horrification is weakly-based, since all that will happen is that the (limited) set of people who are sufficiently vulnerable to fall for that scam will be scammed by a computer program instead of by an actual person, and the actual scamming person will be out of a job, because the chatbot will do it.

    Actually, more of them would fall for it than usual, because there are people who believe in technology but don’t believe in ghosts. So the group of potential victims is bigger than the traditional pool of psychics’ victims.

  75. John Morales says

    You do not disappoint.

    The definition you provided is meant to explain to children and teenagers what catfishing is. It is not fit to be a legal definition of a crime.

    Heh.

    You’re already on record as claiming that “Catfishing should not be a crime for the same reason investigative journalism shouldn’t be a crime.”, so you should be happy the definition I supposedly gave (heh) is not fit to be a legal definition of a crime.

    Aren’t I supposed to be, in your estimation, he “who tried to change that discussion to discussion about catfishing and its legality/criminalization.”

    Unfortunately, you are one of those a-holes I sincerely hoped you aren’t.

    Yeah, your sincerity is totally evident.

    A person says they are worried about new technology being used to scam people, to reinvent the old grift and commit it in new ways. You say, “You know what? That’s one evil use of that technology, but there are worse uses, so don’t worry, be happy!”. When asked about those worse uses and how to prevent and punish them, you refuse to give an answer.

    Only a dolt would imagine I have refused to give an answer.

    You imagine I have refused to give an answer.

    :)

    The one kome is worried about is explicitly not illegal. The one other worse use you mentioned (catfishing) is not illegal either.

    LOL. Yes, “the other one”.

    The solution kome has offered, to make it illegal, is actually viable and can be easily implemented.

    Your memory is like a sieve.

    Me @37, in response to exactly that: “Sure, legislate that particular application to be illicit. Fine by me.
    Good idea.”

    Unlike the solution for catfishing which is neither.

    So, you hold that the solution kome has offered, to make it illegal, is actually viable and can be easily implemented for scamming the bereaved by purporting to communicate with their spirit.
    And furthermore, you also hold that the solution kome has offered, to make it illegal, is actually not viable and cannot be easily implemented in regards to any other activity where a chatbot is used to pretend to be someone.

    (Such acumen!)

    Actually, more of them than usually would, because there are people who believe in technology but don’t believe in ghosts. So the group of potential victims is bigger than those of traditional victims of psychics.

    <snicker>

    Even more of the vulnerable people prone to being scammed by a scammer purporting to communicate with the dead!

    Well, that makes all the difference! More of them!

    (heh)

  76. jo1storm says

    So, you hold that the solution kome has offered, to make it illegal, is actually viable and can be easily implemented for scamming the bereaved by purporting to communicate with their spirit.
    And furthermore, you also hold that the solution kome has offered, to make it illegal, is actually not viable and cannot be easily implemented in regards to any other activity where a chatbot is used to pretend to be someone.

    Because the proposed solution is to make all sorts of psychics illegal, you dolt. It is not a ban on the technology but on the practice behind it! And catfishing is a much wider phenomenon than pretending to be a psychic. Are you really that blind?! This is the difference between using a bucket to drain a tub and using a bucket to drain the sea!

    “Sure, legislate that particular application to be illicit. Fine by me.
    Good idea.”

    And guess who is sidestepping the issue and ignoring what I have written after that point again? You are. It is not about banning the particular application but the activity itself.

    You really are one of the worst types of a-hole.
    C is Concerned citizen. A is a-hole.

    C: I am worried about Y. We should do this to solve Y.
    A: There is nothing to be worried about.
    C: Shows why there is something to be worried about Y
    A: You are right but there are worse things to be worried about like Z. And besides, the proposed solution won’t solve all, 100%, every single issue of Y in the world.
    C: Ok, how do you propose to solve issue of Z?
    A: Easy. *proposes a solution that will create more problems than it will solve*
    C: That won’t work for reasons E, F and H.
    A: *ignores the argument, keeps talking about Z and how important it is to solve it, without offering any other actual solution*
    C: Ok, how do you propose we solve issue Y then?
    A: You really shouldn’t focus on issue Y, because Z is more important…

    And in circles we go. The only way to actually solve the issue Y is to completely ignore person A. Because that guy has only two modes: 1) it is not an (important) issue at all and 2) proposed solution is not good.

    Without ever going to mode 3) actually offering a solution to issue Y.

  77. John Morales says

    It is not a ban on the technology but on the practice behind it!

    My very point to kome from the start.

    They rail against an application, not the technology, though the post is about the technology.

    And guess who is sidestepping the issue and ignoring what I have written after that point again? You are. It is not about banning the particular application but the activity itself.

    Mmmhmm.

    Me to kome @37:
    “Sure, legislate that particular application to be illicit. Fine by me.
    Good idea. Though all it will do is move it underground; one can’t legislate people to be protected from themselves.

    Point being, it’s nothing to do with the underlying tech, is it?”

    You’re kinda groping your way there. Well done!

    You really are one of the worst types of a-hole.

    Because it takes one of the worst types of a-hole to think people actually killing themselves is worse than people who supposedly may not complete the grieving process, or that were annoyed over a decade ago by a spambot.

    (Doesn’t get any more real than that)

    C is Concerned citizen. A is a-hole.

    And you are you. Yay!

  78. Silentbob says

    I suppose I shouldn’t be surprised that a decade in there are still suckers who don’t know Morales is nothing but a troll, utterly uninterested in even pretending to formulate a coherent argument, but here we are.
    Welcome to Pharyngula I guess. :-/