Haters Can’t See Us


Content Note:  This is a pro-AI post.  Haters don’t even comment.

The title refers to a West Side Connection track that is itself referring to a song I’m unfamiliar with.  Rap man says “Can they see us?  No, haters can’t see us.”  Something Marcus sometimes laments, when he’s talking AI, is the blinders people wear as human supremacists.  People underestimate what various AIs are and what they can do, but they also badly overestimate what humans are and what we can do.  These two strains of flawed thought add up to an abject incuriousness about the subject.  Powerfully interesting shit is going on, but they blithely glide on by it.  They can’t see it.

That’s fine, I’m not going to win over literally anybody in the fuckin’ leftiverse with my brand of argumentation.  History will have to do the convincing, and since AIs are being developed for both good and evil, who can say which will make a larger impact on public opinion?  I’d just like the ignorant arguments to die down so thoughtful conversations can finally be heard above the noise.

You don’t have to be a starry-eyed techbro, a singularity cultist craving escape from the flesh, or one of the silicon valley scumbags who both fear skynet and are the demographic most likely to create it, in order to see the amazing possibilities of this moment in technology, to see the way this technology reflects on who we are and thereby gives us an opportunity to learn something about ourselves.  You don’t have to be an anti-AI reactionary to see the limitations in the tech and look at it with an appropriate measure of skepticism and realism.  The middle path is being genuinely thoughtful about it, and that’s practically nobody right now.

This is my house and I’m gonna say what I will about it, even though I’m talking to a brick wall.  Human supremacy is real, and it is bullshit.  It is not an equivalent crime to white supremacy, not even remotely.  Supremacy is the word of choice here not to insult AI detractors (I’ll just call you assholes if I wanna do that), but because it’s the best word for the behavior.  Human supremacists presume that humans have unique abilities of thought that are not present in other animals and/or cannot be emulated by computers.  It is a presumption, and it’s a mistake.

Throughout the history of science, we’ve been constantly searching for why humans are so dominant over nature, a field of inquiry thoroughly corrupted by motivated reasoning.  We start with the observable fact of our dominance, quietly (or loudly) allow ourselves the prejudice of pride, and set to bullshitting.  This is not unlike how scientific racists started with the economic and political dominance of the Global North and sought justification for it, except in one key aspect.  We aren’t harming people with human supremacy, unlike white supremacy.  That lets human supremacists off the moral hook.  I don’t consider what you do evil.  I consider it infuriatingly wrong.

Humans can be pretty cool, but we are not cosmically special.  Humans are not as smart as we think we are.  Are you and I even living in the same human species, that you could make those arguments?  The more I consider all the arguments made against the feasibility of “AGI,” the more I think they all derive from an unspoken, even unconscious belief in the soul.  Like the puritan work ethic that informs USian proles far removed from puritanism proper, it’s in your head whether you want it there or not.

Instincts are programs.  Self-awareness is more complicated programs.  The self is a construct, so a constructed / programmed self is as valid as any.  Creativity is controlled chaos.  We now have programs that don’t require the computing power of a small nation to function like a human with a brain lesion that results in endless confabulation.  That’s goddamn amazing.  Of everything humans do, I would have presumed verbal thought to be the most difficult thing to emulate.  Scratch it off!

The rest of the blocks could fall like dominoes.  This should have sensible regulation, a body concerned with ethics presiding over it all.  We don’t live in that world so it isn’t happening.  Given the world we do live in, I’m very keen to see what good people can do with this technology, and wondering what can be practically done about the bad.  “Someone should pass a law to make art styles copyrightable” ain’t it, chief.  Jesus, taking the disney art style away from furries would be like making homosexuality illegal again.  Don’t do that.

Comments

  1. says

    i think the way LLMs work, while they seem alien, might be a lot closer to how humans actually formulate speech than how we think we do. speech is a beast unto itself, operating independently of the commands our “selves” give it. this is why speech can continue even after the self takes a hit, albeit damaged. see the babbling of some stroke sufferers, the incessant verbal stream from some mental illnesses, the garbage that falls out of a compulsive liar’s mouth…

    if true, the verbal process is like fingers or legs, just an organ we push around so automatically we don’t notice it’s nearly independent of our higher thought. if we give an AI directives / “instincts” and a way to manage its own long term memory, we could get closer to “agi.”
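The directives-plus-self-managed-memory idea in the comment above can be sketched as a toy agent loop. Everything here is hypothetical illustration: `call_llm` is a stub standing in for whatever text-generation backend you like, and the memory policy is deliberately naive.

```python
# Toy sketch: a language model wrapped with standing directives
# ("instincts") and a long-term memory the agent curates itself.

DIRECTIVES = ["be honest", "ask when unsure"]  # standing "instincts"

class Agent:
    def __init__(self):
        self.memory: list[str] = []  # self-managed long-term memory

    def call_llm(self, prompt: str) -> str:
        # Stub: a real system would call a language model here.
        return f"(reply to: {prompt.splitlines()[-1]})"

    def respond(self, user_input: str) -> str:
        # Directives and a recent slice of memory frame every prompt.
        prompt = "\n".join(DIRECTIVES + self.memory[-5:] + [user_input])
        reply = self.call_llm(prompt)
        # The agent decides what to remember, not a human curator.
        self.memory.append(f"user said: {user_input}")
        return reply

agent = Agent()
print(agent.respond("hello"))  # → (reply to: hello)
```

The point isn't the stub itself but the shape: fixed directives plus an append-and-retrieve memory loop is one minimal reading of what "directives and long term memory" could mean.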

  2. springa73 says

    I think that there are at least two different ways that people formulate speech. A lot of speech is done kind of on “autopilot” like you say – most small talk, rote responses, and often repeated phrases and stories probably fall in this category – but some speech is more carefully and consciously thought out, too. The carefully thought out speech is probably a minority of the total for the majority of people, though.

    I’m not sure what the implications of this are for LLMs, but I thought it would be worth pointing out.

  3. says

    if there’s one thing reading an oliver sacks book is good for (proper science probly aint it), it’s making all aspects of cognition seem more like computer programs – counterintuitive as that is. our idea of what we’re doing is a shorthand that helps us make sense of life, not an entirely accurate account. i’m suggesting that when we experience our thoughts and words operating in close alignment, we imagine the one to flow more or less perfectly into the other – that we’re making those noises happen by precise conscious effort – but that perhaps it’s more like we’re jockeying a process that’s a separate layer of programming unto itself.

    maybe it’s true, maybe it isn’t, but i’ve observed things in life that make it feel legit to me. the reason it doesn’t typically feel like that is because the control loop is so close that it feels direct, thought to mouth. my alternate version is thought -> internal chatbot -> mouth, so fast you could easily miss the inbetween shit that is happening.

  4. SpiritusIgneus says

    I think I agree with you (Bebe), generally speaking. This is anecdotal, but more than once have I found myself speaking without completely paying attention to what I was saying, and thinking a few seconds later “Wait, what the hell did I just say?” – the sentence was syntactically correct, but not quite sensical, and also not quite what I had initially intended to say. It was like the language part of my brain was doing a sort of auto-complete.

  5. John Morales says

    Me: turing’s spinning is gaining rpm, as in he’s even more spun by you. Pass the turing test, but not really, you.
    BTW, summarise the actual Turing test… was it not about gender
     
    BB: Yes, originally it was. Turing’s 1950 paper “Computing Machinery and Intelligence” introduced the Imitation Game, which involved three participants: a man (A), a woman (B), and an interrogator (C). The interrogator, isolated from the others, had to determine which was the woman by asking written questions. The twist: the man would try to deceive, and the woman would try to help the interrogator.

    Turing then proposed replacing one of the human participants with a machine. The question became: Can a machine imitate a human well enough that the interrogator can’t reliably tell the difference? Over time, this evolved into what we now call the Turing Test—a machine passes if it can produce human-like responses indistinguishable from a real person’s.

    So yes, the original framing was explicitly gendered, and that context is often glossed over in modern retellings.

    [It is not wrong, and that’s the title of a movie with whatshisname]
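The three-party protocol summarized in the quote above can be made concrete with a toy simulation. The players' answers and the interrogator's heuristic are invented stand-ins, not anything from Turing's paper; the point is just the structure: deceiver A, helper B, and an interrogator C guessing from hidden labels.

```python
import random

def deceiver(question: str) -> str:
    # Player A: tries to mislead the interrogator.
    return "I am the woman."

def helper(question: str) -> str:
    # Player B: tries to help the interrogator find her.
    return "I am the woman; don't trust the other one."

def imitation_game(seed: int = 0) -> bool:
    rng = random.Random(seed)
    labels = ["X", "Y"]
    rng.shuffle(labels)  # hide identities behind labels, as C can't see the players
    hidden = dict(zip(labels, [deceiver, helper]))
    question = "Which of you is the woman?"
    transcript = {lab: p(question) for lab, p in hidden.items()}
    # Naive interrogator heuristic: trust the longer, more specific answer.
    guess = max(transcript, key=lambda lab: len(transcript[lab]))
    return hidden[guess] is helper  # True if C correctly identified B

print(imitation_game())  # → True: the helper's fuller answer gives her away
```

Turing's proposal was then to swap a machine into one of those seats and ask whether C's success rate changes, which is the part modern "Turing Test" retellings keep while dropping the gendered framing.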

  6. says

    muffuckin grumbledick snaptisnatch, he’s always waiting to get namedropped. his face floats by this comment section like an indonesian ghost, smiling at us, invulnerable, eternal.

    that’s interesting info. not sure if i’ve heard it before.
