You have annoyed the Great and Mighty Professor Bostrom!


Nick Bostrom paced about his chambers, agitated and offended. The puling mob had accused him of racism for merely writing "Blacks are more stupid than whites" in his youth, and then refusing to admit that thinking entire ethnic groups, nay, even the population of whole continents, could be genetically inferior was racism. It was science! He was not going to repudiate Science!

He must rebuke these irrational people. They are distracting him from his important work. He must deliver a stunning riposte. He considers the most effective way to crush them. He shall accuse them of being…mewling infants? Lowing beasts? No — they are buzzing insects.

Hunkering down to focus on completing a book project (not quite announcement-ready yet). Though sometimes I have the impression that the world is a conspiracy to distract us from what’s important – alternatively by whispering to us about tempting opportunities, at other times by buzzing menacingly around our ears like a swarm of bloodthirsty mosquitos.

Perfect. Accusing those damned SJWs of being a swarm of bloodthirsty mosquitos will strike exactly the right chord with his fanbase of libertarian/conservative free speech warriors.

Buzz, buzz, buzz.

Comments

  1. lanir says

    I guess if your main point is vacuous nonsense it doesn’t matter if your rebuttal is childish sneering.

  2. bcw bcw says

    @1 Bostrom is directing the future of humanity in the same way your kid is driving the car in one of those coin-operated rides.

    As someone quite aware of the things I'm good at (certain types of puzzle solving), and also aware of all the things I am very bad at, the very idea of super-intelligence seems a misnomer: super-intelligent in what way, doing what? Google Maps is super-intelligent at one thing; ChatGPT is very good at plausible-sounding BS; you can have strengths in different things. The question in all these systems is what is your purpose or goal in life, or, in a more AI sense, what are you optimizing for? Something like ChatGPT is optimizing for something like the Turing test, sounding like a human, but imitating humans really well is certainly not a route to any kind of super-intelligence, if it is even intelligence at all.

    Looking at Bostrom's work, I think the buzzing is coming from inside his head. He seems to be trying to pitch Terminator movies as grand philosophy while confusing problems in human social structure (too much power in Elon Musks) with the effects of technology.

  3. wzrd1 says

    It's interesting that he equates detractors with the single animal that has killed the greatest number of people throughout history, and still does: the not-so-humble mosquito.
    May he enjoy the academic equivalent of dengue! Thrice.
    An infection by one strain, followed by infection by another, can be quite lethal…

  4. John Morales says

    Well, I have had a swarm of hungry mosquitoes buzzing around me, and it most certainly is distracting. Of course, mozzies literally suck your blood, leave stinging bite marks, and can carry disease. So not the most apposite metaphor.

    sometimes I have the impression that the world is a conspiracy to distract us from what’s important

    Prone to magical thinking, I see.

  5. says

    sometimes I have the impression that the world is a conspiracy to distract us from what’s important

    Is it a simulation or a conspiracy?
    And wouldn’t a proper simulationist conclude that IQ test performance was a programmed part of the simulation? You can hardly be a racist and a simulationist without also being a jackass.

  6. raven says

    I read his Wikipedia entry and there was nothing original or exceptional there.
    It’s all stuff I’ve read in science fiction stories since I learned to read in the 1950s.

    The AIs are going to take over and kill us all.
    The Singularity is going to happen any day now some century or another and the desiccated corpse of Ray Kurzweil will be reanimated and say, “I told you so”.
    Vernor Vinge did it better.

    Microsoft will get the contract for running our simulation and we will all be locked into the Windows operating system forever.
    It's not as exciting as the movie The Matrix, but it is more realistic.

    Shrug.
    I’d never heard of Bostrom before and all I learned since then is that I didn’t miss anything.
    He’s a nobody.
    A guy who thinks ripping off and repeating pop culture is philosophy.

  7. raven says

    Bostrom is behaving like he has tenure and is well connected at Oxford.
    He doesn’t have to worry about defending racist statements.
    No one important at Oxford seems to care.

    And yeah, he is tied up with a whole bunch of crackpots.
    The Institute for Reading Comic Books for Ideas shares office space with the Centre for Effective Altruism, which is another bunch of lunatic fringers. One of its founders was the uber-creep MacAskill, the weird longtermist.

    If MacAskill was serious about making the present and future of humankind better, he would grab his office mate Bostrom and drop him off in someplace obscure and impossible to get back to the UK from.

    I’m getting the impression that something isn’t at all right at Oxford.

    Wikipedia
    Future of Humanity Institute
    Purpose: Research big-picture questions about humanity and its prospects
    Headquarters: Oxford, England
    Director: Nick Bostrom
    Parent organization: Faculty of Philosophy, University of Oxford
    Website: fhi.ox.ac.uk
    The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School.[1] Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord.[2]

    Sharing an office and working closely with the Centre for Effective Altruism, the institute’s stated objective is to focus research where it can make the greatest positive difference for humanity in the long term.[3][4]

  8. chrislawson says

    I love how fighting racism is a distraction from what is important. It really helps convince me of the genuineness of the original apology.

  9. birgerjohansson says

    I am looking forward to his next comment about the criticism. He just keeps digging.

  10. Sphinx of Black Quartz says

    Of COURSE he's associated with "effective altruism" and "longtermism." Those selectively natalist techbro "philosophies" are just a goofy pop-sci-fi spin on white supremacy.

  11. raven says

    Longtermism and MacAskill have a lot of problems.
    Bostrom, BTW, is not just a racist but also a longtermist.

    Grady Booch @Grady_Booch
    Longtermism is the dangerous techbro equivalent of the evangelical Christian community, where true believers prefer to dream of the promise of some perfect and distant future at the absolute expense of the reality of the here and now.
    Quoting NPR (@NPR), Aug 16, 2022:
    In his new book, philosopher William MacAskill urges today’s humans to protect future humans — an idea he calls “longtermism.”

    Here are a few of his hardly modest proposals.
    https://n.pr/3QNPMAM

  12. raven says

    There is nothing wrong with trying to make the future better for our children. Most of us at least make the attempt, myself included.

    The longtermists just use the natural concern for the future as an excuse to be horrible people because they are horrible people to start with.

    It becomes an excuse for racism, out-of-control exploitation of the world and environment by the rich, various lunatic-fringe ideas, and ignoring present problems to worry about imaginary people who might exist 1,000 years from now.

    Here are a few problematic projects.

    Or consider Bostrom’s claim that minuscule reductions in “existential risk”—that is, any event that would prevent us from becoming a superior species of posthumans or colonizing space to simulate people—are morally equivalent to saving billions and billions of actual human lives.

    Or consider Nick Beckstead's assertion that, since what matters more than anything is shaping the very far future (up to billions of years from now),
    …we should prioritize saving the lives of people in rich countries over saving the lives of people in poor countries.

    Remember, one cis white old man from the First World is worth more than anyone from the other 90% of the world.

    And Bostrom is just wrong here.
    The lives of real people who live today are far more important than those of imaginary people who may live a million years in the future on Mars or Ceres.

  13. raven says

    One more for the road.
    The more I read about longtermism, the worse it looks.
    They aren’t serious about the long term.
    It is just an excuse for right wingnuts to be right wingnuts and lunatic fringers to be loons.
    It’s a con and a grift.

    The last quote in #19 was from https://www.longtermism-hub.com

    https://www.currentaffairs.org/2022/09/defective-altruism
    Nathan J. Robinson
    filed 19 September 2022 in PHILOSOPHY

    So what does “longtermism” add? As Émile P. Torres has documented in Current Affairs and elsewhere, the biggest difference between “longtermism” and old-fashioned “caring about what happens in the future” is that longtermism is associated with truly strange ideas about human priorities that very few people could accept.

    Longtermists have argued that because we are (on a utilitarian theory of morality) supposed to maximize the amount of well-being in the universe, we should not just try to make life good for our descendants, but should try to produce as many descendants as possible. This means couples with children produce more moral value than childless couples…

    Longtermism is a set of cuckoo beliefs covered by flimsy wrapping paper.

    The earth has 8 billion people, the climate systems and biosphere are obviously struggling, hundreds of millions aren’t too far away from starvation, and we need to have as many children as possible because the earth might run out of people soon.

  14. raven says

    Bostrom is an advocate for eugenics.
    An idea that was discredited before I was born.

    Emile P. Torres: … If less “intellectually talented individuals,” in Bostrom’s words, outbreed smarter people, then we might not be able to create the advanced technologies needed to colonize space

    OK, I now know why I never heard of Bostrom.

    1. Bostrom isn't a philosopher.
    All he is doing is taking common ideas from Science Fiction, popular culture, and comic books and repeating them.
    There is nothing novel or original there whatsoever.
    It’s all been done many decades ago and far better than this clown.

    2. Bostrom is a con man and this is just a grift.
    Follow the money.
    He gets a big paycheck for telling the tech bros they are the Crown of Creation and the future of humanity. It's indoor work with no heavy lifting.

  15. lanir says

    I think you've nailed it, raven. I guess this is a reminder that even if you're what most people might consider conventionally smart, it doesn't make you immune to cons. It's like looking down on people who click a link in a spam email, and then falling for a standard phishing email that's just tailored more toward your interests, all the while talking about them as if they're two completely different things.

    That email example is made up but it captures the attitude more precisely than any story I could relay. I’ve known plenty of people who are capable of something like that in the IT field. The worst part is some of them seemed incapable of learning and getting over it.

  16. StevoR says

    @7. birgerjohansson : “It is four in the morning, I can hardly coordinate my fingers.”

    I can relate. It isn't 4 a.m. here now, but there have been plenty of times it's been almost as late at night / early in the morning for me when I can't sleep and have been commenting here. It can certainly result in some typos and other errors.

  17. petesh says

    @21: 1. Techno-eugenics is a thing. It’s dumb as well as cruel but the fact that eugenics was discredited decades ago does not mean that we can now ignore it. New names (some mere hyphenates) get painted on the same old rusty frame.
    2. Also, of course Bostrom is a philosopher. You and I disagree with him vehemently but he has all the credentials to claim the title, in the moral cesspit that is Oxford (where I got my undergraduate degree) and many similar institutions, which validate each other.
    3. I think he is sincere. He has been pushing varieties of this stuff for decades now. The oldest cite I have at my fingertips is from the New York Times in 2007: https://www.nytimes.com/2007/08/14/science/14tier.html. My published commentary on that (https://www.geneticsandsociety.org/biopolitical-times/he-real-are-you) concludes:

    Bostrom’s specialty is coming up with ludicrous premises and then explicating the inferences that can logically be derived from them in pedantic detail. His presentation on “Posthuman Dignity and the Rights of Artificial Minds” last year at Stanford was perhaps the finest example of academic humor since the Appendix to Carlos Castaneda’s first book — or it would have been had it been intended as parody. Let us indeed postulate that “procreators have a pro tanto moral reason to select to create, of the possible beings they could create, the one that is expected to have the life most worth living.” Not to mention the importance of “non-discrimination with regard to substrate.”

    But a “gut feeling” of a “20 percent chance”? If I were paying this guy to think, I’d want a refund. Unless perhaps I was just doing it for the laughs.

    Yup, I stand by that. But he has an audience, immense self-belief and unfortunately a platform.

  18. says

    So basically when Nick Bostrom hears any sort of criticism, he simply pretends he only hears a buzzing noise. Yeah, that's how real innelekshals respond to criticism, innit?

  19. says

    2. Also, of course Bostrom is a philosopher. You and I disagree with him vehemently but he has all the credentials to claim the title…

    And those are…?

  20. unclefrogy says

    I see he is a good example of someone living in an ivory tower and being out of touch with reality. I see it clearly: he's even a paid resident!

  21. raven says

    I’m just going to repeat what I already concluded about Bostrom.

    OK, I now know why I never heard of Bostrom.

    1. Bostrom isn't a philosopher.
    All he is doing is taking common ideas from Science Fiction, popular culture, and comic books and repeating them.
    There is nothing novel or original there whatsoever.
    It’s all been done many decades ago and far better than this clown.

    If that is philosophy, then you have drastically lowered the bar on who gets to call themselves a philosopher and also on what philosophy is.

    And what about where he gets all his ideas?
    That makes all those Science Fiction writers, Comic Book writers, and popular culture creators (movies, articles, YouTube, the internet, etc.) into philosophers as well.

    That is OK. Lowering the bar makes a huge number of people into philosophers.
    Including those of us on this thread.
    I’ve read Asimov, Pohl, Vinge, etc. as well as Wikipedia and some articles from magazines.
    I’m certainly now well qualified to criticize Bostrom, the unimaginative hack philosopher.

    If you call guys like MacAskill and Bostrom philosophers, you have to add the adjectives hack, second rate, derivative, intellectually mindless philosophers.

    “All he is doing is taking common ideas from Science Fiction, popular culture, and comic books and repeating them.” Really, a high school kid could do that.

  22. raven says

    If you look at where Bostrom gets all his ideas, they are from ancient sources, from before I and he were even born. Here is one example: R.U.R. was written in 1920.

    The killer robots are going to take over and kill us all.

    Wikipedia: R.U.R., a play by the Czech writer Karel Čapek, 1920.

    Premise
    The play begins in a factory that makes artificial people, called roboti (robots), whom humans have created from synthetic organic matter. (As living creatures of artificial flesh and blood rather than machinery, the play’s concept of robots diverges from the idea of “robots” as inorganic. Later terminology would call them androids.) Robots may be mistaken for humans and can think for themselves. Initially happy to work for humans, the robots revolt and cause the extinction of the human race.

    In the first story where the word robot appeared, they ended up killing off the human species.

    Asimov was writing his positronic brain robot stories in the 1940s.

    As previously mentioned, I’m a philosopher (as of an hour ago) not a historian.
    I’m sure if you look at Bostrom’s other ideas such as eugenics, the Singularity, the Matrix movie that he calls the Simulation(s), they are old ideas from someone else.

    I'm going to have to go to the library and ransack their collection of graphic novels for my next paper on philosophy. I'm not sure where Darkseid and Apokolips (Superman characters, for those not up on the latest) fit in with our future, but I'm sure DC Comics has something to say about it.

  23. raven says

    The idea of autonomous robots is actually very old and widespread.
    It dates back in one form or another to the ancient Greeks, Egyptians, and Hindus.

    When robot assassins hunted down their own makers in an ancient Indian legend

    MADE IN (ANCIENT) INDIA
    By Adrienne Mayor, published March 18, 2019, Quartz

    As early as Homer, more than 2,500 years ago, Greek mythology explored the idea of automatons and self-moving devices. By the third century BC, engineers in Hellenistic Alexandria, in Egypt, were building real mechanical robots and machines. And such science fictions and historical technologies were not unique to Greco-Roman culture.

    In my recent book “Gods and Robots,” I explain that many ancient societies imagined and constructed automatons. Chinese chronicles tell of emperors fooled by realistic androids and describe artificial servants crafted in the second century by the female inventor Huang Yueying. Techno-marvels, such as flying war chariots and animated beings, also appear in Hindu epics. One of the most intriguing stories from India tells how robots once guarded Buddha’s relics. As fanciful as it might sound to modern ears, this tale has a strong basis in links between ancient Greece and ancient India.

    The story is set in the time of kings Ajatasatru and Asoka. Ajatasatru, who reigned from 492 to 460 BC, was recognised for commissioning new military inventions, such as powerful catapults and a mechanised war chariot with whirling blades. When Buddha died, Ajatasatru was entrusted with defending his precious remains. The king hid them in an underground chamber near his capital, Pataliputta (now Patna).

    Traditionally, statues of giant warriors stood on guard near treasures. But in the legend, Ajatasatru’s guards were extraordinary: they were robots. In India, automatons or mechanical beings that could move on their own were called “bhuta vahana yanta,” or “spirit movement machines” in Pali and Sanskrit. According to the story, it was foretold that Ajatasatru’s robots would remain on duty until a future king would distribute Buddha’s relics throughout the realm.

    Ancient automatons
    Hindu and Buddhist texts describe the automaton warriors whirling like the wind, slashing intruders with swords, recalling Ajatasatru's war chariots with spinning blades. In some versions the robots are driven by a water wheel or made by Visvakarman, the Hindu engineer god. But the most striking version came by a tangled route to the… (article continues)

  24. StevoR says

    See also, in connection with Europa:

    https://en.wikipedia.org/wiki/Talos

    Not to be confused with Europa, the Galilean moon of Jupiter, named after the girl abducted by Zeus in the guise of a white bull… rather than the swan that gave birth to Pollux, Castor, Helen & Klytemnestra from a clutch of human/avian dino eggs.

  25. says

    What a horrible comments section. You know, there are different ways to read philosophers, and one of the worst ways is reading them through your political lense. This is intellectual laziness that reminds me of those anti-vaxxers who do not want to engage with the actual science behind vaccination but instead rely on conspiracy theories.

    Now, Nick Bostrom's remark about blood-sucking insects is not racist, but it has an antisemitic connotation that is indeed hard to ignore. So, if you want to read his writing through a political lense, at least do it the right way.

  26. Olivier Audet says

    I know several smart people who have gravitated (mostly in the past, afaik, thankfully) around the whole Bostrom, LessWrong, Effective Altruism milieu, and I always wonder why. Bostrom as a philosopher isn't totally uninteresting, but his whole shtick seems predicated on an extremely dubious use of statistics to turn complete thought experiments into ethical imperatives. By the time you reach the end of one of his chains of reasoning, he's piled complete unknown on complete unknown, so high that it's impossible to evaluate whether most of his claims bear the slightest relationship to reality at all.

    We are supposed to accept all this based on the postulate that even an extremely improbable scenario must be taken seriously if its ethical consequences are serious enough (essentially: I can take a course of action that I know with 100% certainty will lead to a moral wrong if the wrong is negligible, but I should shy away from taking a course of action that has even a small chance of leading to an absolute ethical catastrophe; the more terrible the ethical consequences, the lower the acceptable odds, and the further I should go to avoid or prepare for the worst-case scenario). A decent heuristic in many cases. The problem is, what are the odds of his scenarios being true? For the most part, the answer as far as we know is ????, and he piles them on top of each other, so that in order to calculate the odds of the final outcome being correct, you have to multiply ???? by ????.

    He solves that by saying “ah, yes, but the ethical outcome would be so catastrophically bad, that even if ???? turns out to be an infinitesimally small number, you should still act as if the scenario were true and seek to avoid it”. The truth, of course, is that you can come up with an infinite number of horrifying, ethically abysmal potential futures starting from credible next steps, and the reason why they should not be acted on isn’t that they’re insufficiently bad scenarios, it’s simply that they’re effectively fiction.
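    To make the objection concrete, here is a minimal toy sketch (all numbers invented for illustration; nothing here comes from Bostrom or from the comment above) of how multiplying a chain of speculative probabilities by an astronomically large stakes figure yields whatever conclusion the stakes were chosen to produce:

```python
# Toy numbers only: a sketch of the expected-value arithmetic being criticized.
# Every figure below is invented for illustration.

def expected_value(step_probabilities, claimed_stakes):
    """Multiply a chain of speculative probabilities, then weight by the claimed stakes."""
    joint = 1.0
    for p in step_probabilities:
        joint *= p
    return joint * claimed_stakes

# Five speculative steps, each generously granted a one-in-a-thousand chance.
speculative_chain = [1e-3] * 5        # joint probability: 1e-15
astronomical_stakes = 1e30            # hypothetical count of future simulated lives

print(expected_value(speculative_chain, astronomical_stakes))  # prints 1e+15
# However small the joint probability gets, a sufficiently large stakes figure
# keeps the product enormous: the conclusion is driven by the number you
# invented for the stakes, not by anything actually known about the scenario.
```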

    The most genuinely interesting way to read Bostrom is to take much of his work as an attempt to craft the most serious philosophical arguments possible about what seem like science-fiction scenarios – in other words, the most interesting way to read him is not to take him too seriously. Unfortunately, he takes himself very seriously and he is very invested in surrounding himself with people who take him very seriously.

  27. KG says

    basementboi@35,
    Bostrom’s work has blindingly obvious political implications, so it is impossible to read his work intelligently without using a “political lens” (note: “lens” has no final “e”). And antisemitism is a form of racism, so your second paragraph is otiose.

    Olivier Audet@36,
    One “low probability but catastrophically bad” scenario Bostrom might consider is that his entire population of umptyzillion simulated people end up being horrifically tortured for trillions of years by an evil tyrant. Since the only way we can be certain of avoiding this is by bringing about human extinction (better sterilise the entire planet in case a technological species re-evolves), that’s where Bostrom should be focusing his efforts. Indeed, maybe he’s already reached that conclusion, and all the guff he comes out with is just to distract us from the real plan…

  28. Olivier Audet says

    KG@37
    Ah, but he's decided that a life is better than no life, and because there's a potential sci-fi post-scarcity future where billions upon billions of humans can live fulfilling lives, we should base all kinds of ethical decisions today on the impact Bostrom and friends think those decisions will have on this possible future, based on a chain of events that is both highly uncertain and almost completely inscrutable. Entirely coincidentally, the ethical decisions we should make to get there are highly attractive to a certain number of plutocrats, who are eager to promote Bostrom, his views, and his collaborators. If you're not willing to let the poor die today, there's a completely unknowable chance that billions of people won't get to live in post-scarcity sci-fi paradise, and that's on you!

    (At first glance this seems like it could make for a highly convenient argument against abortion, but Bostrom concedes that we don’t currently live in post-scarcity paradise and family planning is probably overall an ethical good. Support for eugenics is, I guess, an unfortunate side effect.)

  29. Olivier Audet says

    KG@37
    I guess I just repeated a lot of obvious stuff about Bostrom's arguments, though. I suppose my point is, there's a lot less money and prestige in telling rich people to, more or less, in the long run, kill themselves (I would say he could try quoting Keynes at them, but the rich don't like that either).