Existential alarms about AI and longtermism


AI has been much in the news recently. The initial splash came with ChatGPT: its potential to let students outsource their writing assignments, and the threat it poses to the jobs of people whose work consists mainly of writing. But things then took a much darker turn, and warnings that AI threatens the future of humankind are now all over the media. We now have a public statement signed by 350 tech executives and AI researchers warning that this technology poses a danger of human extinction. The signatories include Sam Altman, CEO of OpenAI (the creator of ChatGPT), who testified before Congress. The statement says in its entirety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Extinction is a pretty dire word, and this naturally set off alarm bells.

But there has also been a backlash to this statement, with critics arguing that the dangers are being overblown and that some of the signatories, especially those associated with the tech industry, are fearmongering as cover for their own self-interest.

[N]ot everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board. 

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry. 

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote. 

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,’” Lomas added.

Other field experts promptly shot back at the tech execs’ statement. Retired nuclear scientists, AI ethicists, tenured tech writers and human extinction scholars all called the industrialists to the carpet for the use of inflammatory language. 

“This is a ‘look at me’ by software people. The claim that AI poses a risk of extinction of the human race is BS,” retired nuclear scientist Cheryl Rofer said in a Tuesday tweet. “We have real, existing risks: global warming and nuclear weapons.” 

“A few weeks ago [Altman] was pontificating about leaving the EU market due to proposed training data transparency requirements. Do not take these statements seriously,” said tech writer Robert Bateman in a tweet. 

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts. 

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

This effort to warn of the existential danger posed by AI reminded me of the ‘longtermism’ movement’s warnings about humanity’s future. This movement is associated with the Effective Altruism movement of William MacAskill. Sam Bankman-Fried, the disgraced founder of the cryptocurrency exchange FTX who is now being tried for fraud, was a big supporter and advocate of the movement.

Longtermism has also been criticized because it seems to prioritize nebulous long-term threats over ones that are real and much more immediate, such as global warming, nuclear war, and pandemics. Émile P. Torres provides a more detailed critique of longtermism’s argument for developing more technology to save humanity from extinction.

[O]ver the past two decades, a small group of theorists mostly based in Oxford have been busy working out the details of a new moral worldview called longtermism, which emphasizes how our actions affect the very long-term future of the universe – thousands, millions, billions, and even trillions of years from now. This has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy. It has been defended most publicly by the FHI philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020). Longtermism is the primary research focus of both the Global Priorities Institute (GPI), an FHI-linked organisation directed by Hilary Greaves, and the Forethought Foundation, run by William MacAskill, who also holds positions at FHI and GPI. Adding to the tangle of titles, names, institutes and acronyms, longtermism is one of the main ‘cause areas’ of the so-called effective altruism (EA) movement, which was introduced by Ord in around 2011 and now boasts of having a mind-boggling $46 billion in committed funding.

It is difficult to overstate how influential longtermism has become. Karl Marx in 1845 declared that the point of philosophy isn’t merely to interpret the world but change it, and this is exactly what longtermists have been doing, with extraordinary success. Consider that Elon Musk, who has cited and endorsed Bostrom’s work, has donated $1.5 million to FHI through its sister organisation, the even more grandiosely named Future of Life Institute (FLI). This was cofounded by the multimillionaire tech entrepreneur Jaan Tallinn, who, as I recently noted, doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology.

The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today.

Why do I think this ideology is so dangerous? The short answer is that elevating the fulfilment of humanity’s supposed potential above all else could nontrivially increase the probability that actual people – those alive today and in the near future – suffer extreme harms, even death. Consider that, as I noted elsewhere, the longtermist ideology inclines its adherents to take an insouciant attitude towards climate change. Why? Because even if climate change causes island nations to disappear, triggers mass migrations and kills millions of people, it probably isn’t going to compromise our longterm potential over the coming trillions of years. If one takes a cosmic view of the situation, even a climate catastrophe that cuts the human population by 75 per cent for the next two millennia will, in the grand scheme of things, be nothing more than a small blip – the equivalent of a 90-year-old man having stubbed his toe when he was two.

Bostrom’s argument is that ‘a non-existential disaster causing the breakdown of global civilisation is, from the perspective of humanity as a whole, a potentially recoverable setback.’ It might be ‘a giant massacre for man’, he adds, but so long as humanity bounces back to fulfil its potential, it will ultimately register as little more than ‘a small misstep for mankind’. Elsewhere, he writes that the worst natural disasters and devastating atrocities in history become almost imperceptible trivialities when seen from this grand perspective. Referring to the two world wars, AIDS and the Chernobyl nuclear accident, he declares that ‘tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life.’

This way of seeing the world, of assessing the badness of AIDS and the Holocaust, implies that future disasters of the same (non-existential) scope and intensity should also be categorised as ‘mere ripples’. If they don’t pose a direct existential risk, then we ought not to worry much about them, however tragic they might be to individuals. As Bostrom wrote in 2003, ‘priority number one, two, three and four should … be to reduce existential risk.’ He reiterated this several years later in arguing that we mustn’t ‘fritter … away’ our finite resources on ‘feel-good projects of suboptimal efficacy’ such as alleviating global poverty and reducing animal suffering, since neither threatens our longterm potential, and our longterm potential is what really matters.

What’s really notable here is that the central concern isn’t the effect of the climate catastrophe on actual people around the world (remember, in the grand scheme, this would be, in Bostrom’s words, a ‘small misstep for mankind’) but the slim possibility that, as Ord puts it in The Precipice, this catastrophe ‘poses a risk of an unrecoverable collapse of civilisation or even the complete extinction of humanity’. Again, the harms caused to actual people (especially those in the Global South) might be significant in absolute terms, but when compared to the ‘vastness’ and ‘glory’ of our longterm potential in the cosmos, they hardly even register.

[T]he crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it. If you, like me, value the continued survival and flourishing of humanity, you should care about the long term but reject the ideology of longtermism, which is not only dangerous and flawed but might be contributing to, and reinforcing, the risks that now threaten every person on the planet. [Italics in original-MS]

The economist John Maynard Keynes famously wrote in his 1923 work A Tract on Monetary Reform, “The long run is a misleading guide to current affairs. In the long run we are all dead.” This statement has been caricatured as suggesting that he was advocating exclusively short-term policy making. But he was not arguing for looking only at the present moment; rather, he was warning that an excessive focus on the very long term can blind us to the immense harm that can happen in the nearer term.

It might be good to bear his warning in mind as we deal with AI and longtermism.

Comments

  1. billseymour says

    When I heard about the statement mentioned in Mano’s first paragraph, I immediately wondered what was in it for the CEOs; and it occurred to me that generating fear of the technology could limit the amount of money available to new startups. It’s possible that it’s just another anti-competitive move that capitalists (in Marx’ dystopia which we seem to be headed for) are good at. If I guessed right, then “film at eleven”.

  2. xohjoh2n says

    Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

    What? We should just ignore it and carry on as normal? Okey dokey.

    @1:

    Also, raising new regulatory impediments allows existing players to pivot to full monopoly more easily.

  3. says

    They don’t fear AI -- they fear AI that they don’t own. They’re going to have a great time replacing workers and abusing labor, and they’ll complain the whole time.

    I am impressed by what the AIs can do, and there will be some industry shakeout.

  4. Pierce R. Butler says

    … the two world wars, AIDS and the Chernobyl nuclear accident, … even the worst of these catastrophes are mere ripples on the surface of the great sea of life.

    Doesn’t he understand that three out of those four events killed some rich people?!?!!!

  5. Dunc says

    David Gerard’s most recent blog post is worth reading: Crypto collapse? Get in loser, we’re pivoting to AI. He also notes the links between the signatories of this statement, various longtermist / LessWrong / Effective Altruism weirdos, and the crypto community.

    The real threat of AI is the bozos promoting AI doom who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they’re running a grift.

    Anil Dash observes (over on Bluesky, where we can’t link it yet) that venture capital’s playbook for AI is the same one it tried with crypto and Web3 and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.

    The VCs’ actual use case for AI is treating workers badly.

  6. outis says

    I don’t know, for the mo the main danger looks to be people using the stuff inappropriately, for instance employers firing staff thinking ChatGPT is going to work in their place, and similar. Natural stupidity using artificial intelligence, what?
    There IS alarmism galore. Some wit interviewed by the Guardian wondered, what if an AI decides to take all the oxygen out of the atmosphere, in order to protect its precious chips from oxidation? Plausibility was strong in that one… or maybe it was just beer o’clock.

  7. Ketil Tveiten says

    The key to understanding the AI industry’s cry to be regulated is that their main competition is open source stuff. If there were regulatory barriers to entry in the field, they would get to protect themselves from competition, keep a nice little cartel situation going, and make lots of money, rather than watch users shift to the open source alternatives, which are free.

  8. Deepak Shetty says

    @Dunc @5
    Beat me to it. David Gerard’s crypto posts are my weekend fun read.

  9. KG says

    The current “generative AI” systems, such as Large Language Models and their image-generating counterparts, are the subject of simultaneous hype (Oh noes, by next year they’ll be deciding whether to exterminate us!!!) and “anti-hype” that dismisses them as nothing more than “spicy autocomplete”, while some actual experts in the area, such as the first listed signatory to the public statement, Geoff Hinton, have recounted how surprised they have been by their capabilities and have revised their view of when actual “AGI” (Artificial General Intelligence) is likely to be developed. Certainly one should be sceptical of the motives of the signatories to the statement, but many of the academics are apparently not working in the field -- although of course, that in itself implies that their expertise is not directly relevant! In any event, prospects for regulating advances in the area seem pretty bleak: even if the “tech giants” could be tamed, it is unlikely that the rival military-industrial complexes of the USA, China and others can be -- and they will be working on systems designed to kill people, although presumably not to exterminate humanity.
