They might turn me into a Luddite at this rate


It’s all good, their lives were worse than average anyway

All these raving mad techbro loonies keep ranting about how AI, unless properly nurtured (and paid for), might lead to extinction, and how AI ought to be a high priority for humanity (meaning “give us money”), and it’s confusing, because they use words differently than normal people do. In particular, the word “extinction” means something very different from what a biologist might understand it to mean.

When TESCREALists [transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism] talk about the importance of avoiding human extinction, they don’t mean what you might think. The reason is that there are different ways of defining “human extinction.” For most of us, “human extinction” means that our species, Homo sapiens, disappears entirely and forever, which many of us see as a bad outcome we should try to avoid. But within the TESCREAL worldview, it denotes something rather different. Although there are, as I explain in my forthcoming book, at least six distinct types of extinction that humanity could undergo, only three are important for our purposes:

Terminal extinction: this is what I referenced above. It would occur if our species were to die out forever. Homo sapiens is no more; we disappear just like the dinosaurs and dodo before us, and this remains the case forever.

Final extinction: this would occur if terminal extinction were to happen — again, our species stops existing — and we don’t have any successors that take our place. The importance of this extra condition will become apparent shortly.

Normative extinction: this would occur if we were to have successors, but these successors were to lack some attribute or capacity that one considers to be very important — something that our successors ought to have, which is why it’s called “normative.”

The only forms of extinction that the TESCREAL ideologies really care about are the second and third, final and normative extinction. They do not, ultimately, care about terminal extinction — about whether our species itself continues to exist or not. To the contrary, the TESCREAL worldview would see certain scenarios in which Homo sapiens disappears entirely and forever as good, because that would indicate that we have progressed to the next stage in our evolution, which may be necessary to fully realize the techno-utopian paradise they envision.

I think maybe “we” and “our” might mean something different to them, too, because the words don’t include me or my family or my friends or even distant acquaintances. Heck, they probably don’t include most of the life on this planet.

Later in his book, MacAskill suggests that our destruction of the natural world might actually be net positive, which points to a broader question of whether biological life in general — not just Homo sapiens in particular — has any place in the “utopian” future envisioned by TESCREALists. Here’s what MacAskill says:

It’s very natural and intuitive to think of humans’ impact on wild animal life as a great moral loss. But if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain), then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing.

The lives of wild animals as being worse than nothing on average…who assesses that “worse”? People? TESCREALists? I was just watching an adorable little Theridion constructing a cobweb in a signpost — what was “worse” about that? It’ll probably thrive all summer long and leave behind a family of spiderlings who I’ll see building cobwebs next summer.

I don’t think the monarch butterflies and mayflies consider the expansion of Homo sapiens to be a good thing either — they’re dying and declining in numbers. Were passenger pigeons grateful for what we brought to them? I think MacAskill is playing a weird numbers game here. He thinks he can arbitrarily assign a value to an organism’s life, either negative or positive or “average” (relative to what, I have no idea), and if it’s less than zero…pffft, it’s OK to exterminate them.

People who think that way about animals tend to eventually do the same thing to people, you know.

So where does this leave us? The Center for AI Safety released a statement declaring that “mitigating the risk of extinction from AI should be a global priority.” But this conceals a secret: The primary impetus behind such statements comes from the TESCREAL worldview (even though not all signatories are TESCREALists), and within the TESCREAL worldview, the only thing that matters is avoiding final and normative extinction — not terminal extinction, whereby Homo sapiens itself disappears entirely and forever. Ultimately, TESCREALists aren’t too worried about whether Homo sapiens exists or not. Indeed our disappearance could be a sign that something’s gone very right — so long as we leave behind successors with the right sorts of attributes or capacities.

Again, the extinction they speak of is not the extinction we think of. If their strategies lead to the death of every person (and animal!) on the planet, but we leave behind blinking digital boxes that are running simulations of people and animals, that is a net win.

I’m beginning to worry about these people. If I assign them a value of -1, will they all conveniently disappear in a puff of smoke?

Comments

  1. wzrd1 says

    Ultimately, TESCREALists aren’t too worried about whether Homo sapiens exists or not. Indeed our disappearance could be a sign that something’s gone very right — so long as we leave behind successors.

    Something’s wrong here.

    Ultimately, TESCREALists aren’t too worried about whether Homo sapiens exists or not. Indeed our disappearance could be a sign that something’s gone very right — so long as we leave behind *their* successors only.

    There, I fixed it. They don’t care about simulations or some mythical next stage that they predict; they care about money and power, and a mindset of “not only must I succeed, all others must fail”.
    I happily assign them a value of -100 and advise them that I’m considering the remainder of their exterminationist philosophy. They promptly move on to, if not more fertile ground, decidedly safer ground.
    An odd choice, since they claim to basically not mind individual extinction.

  2. robro says

    I don’t think we have anything to worry about from auto-complete, not even souped-up auto-complete, any time soon. However, abject stupidity is an immediate problem.

  3. raven says

    Later in his book, MacAskill suggests that our destruction of the natural world might actually be net positive, …

    Oh my Cthulhu, this is about as stupid as it gets.

    We are on a large space ship and the natural world is our life support system.
    Only total idiots wreck their life support system.

    And, we are part of the natural world as well.
    By the time we’ve wrecked the natural world, things will be bad enough that a whole lot of people will likely be dead, or will never exist at all, because much of the earth is uninhabitable.

    This doesn’t even rise to the level of pseudo-intellectual nonsense.
    It’s just gibberish.
    But it is nice of MacAskill to let everyone know he is an evil idiot who can’t think his way out of a paper bag.

  4. birgerjohansson says

    There is a German phrase that means “life unworthy of life”. It has rarely been used since 1945.

  5. daulnay says

    Worth reminding people that ChatGPT and similar ‘AI’ systems are merely parrots which are very good at pattern matching and at generating according to pattern. There is no underlying understanding or intelligence; your dog or cat has much more of both (granted that intelligence is poorly defined).

    The people hyping ‘AI’ at the moment are hucksters, maybe con artists at worst. There are earnest attempts at machine intelligence – Wolfram Alpha for example – but the ChatGPT kind of stuff is not one of them. Potemkin village intelligence, basically.

  6. robro says

    daulnay @ #5 — I don’t know what to say about ChatGPT or OpenAI specifically, but from my vantage point, the people behind “Large Language Models” and “generative AI” are not just a bunch of hucksters and/or con artists. There are earnest attempts in this area of ML, and a history of research behind it. Depending on the data sources and prompts, you can get useful results that don’t depend on so much supervision. That said, it’s not magic and possesses no innate intelligence despite the over-hyped rhetoric we’re getting now. Most importantly, it isn’t likely to destroy the world or cause the extinction of humans. Humans are quite capable of that without any help from artificial intelligence.

  7. says

    If their strategies lead to the death of every person (and animal!) on the planet, but we leave behind blinking digital boxes that are running simulations of people and animals, that is a net win.

    I think we’re already dealing with simulations of people here.
    And if you’ve ever watched Life After People, you know those boxes won’t be blinking for very long without maintenance.

  8. nomdeplume says

    “from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing” – exactly the argument used in parts of the world (USA, Canada, Australia, NZ, most of Africa) where an invading European culture caused death and destruction to the people who lived there before colonisation/invasion. In fact an Australian right wing politician used exactly that argument just last week!

  9. says

    But if we assess the lives of wild animals as being worse than nothing on average…

    First, what the AF does that even mean? And second, why should we make such an “assessment?” On what evidence, logic or priorities would such an “assessment” be based?

    To the contrary, the TESCREAL worldview would see certain scenarios in which Homo sapiens disappears entirely and forever as good, because that would indicate that we have progressed to the next stage in our evolution, which may be necessary to fully realize the techno-utopian paradise they envision.

    In other words, the TESCREAL folks are nothing but a bunch of clueless twits fantasizing about a “techno-utopian paradise” that has absolutely nothing at all to do with real people in the real world, whom they don’t even pretend to give a shit about, let alone understand. Why should anyone listen to these morons, when there are lots of stoners, crackheads, methheads and junkies with a better grasp of reality?

  10. Nemo says

    People who think “if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible” are people who I don’t want making any decisions about anything. That’s demented.

    But the distinction between types of extinction seems on point enough. Today, for instance, we think of Neanderthals as extinct, but some of their genes live on in us. That would presumably make them “terminal”, but not “final”. (As for “normative”… well, that category seems less useful.)

    AFAICT, no species exists forever. Humans have more capacity than any other species to reshape the environment to ourselves rather than the other way around, which ought to be something of a conservative force in our own evolution. Yet still, we change. I can imagine a future where some portion of humanity takes intentional and extreme measures to try to remain the same (kind of the opposite of transhumanism)… but after a million years, despite their best efforts, they’ll probably be different.

  11. Artor says

    I am almost offended that the words “effective altruism” appear in their cumbersome acronym. I don’t see these chucklefucks as being either effective or altruistic. Wishful thinking maybe?

  12. tuatara says

    Artor, if the “effective altruists” are excluded the remaining cohorts are collectively RECTLS. I am happy with that.

    Mind you, considering their altruism is not effective, the E and the A need not be adjacent so perhaps the entire collective can be called ERECTALS.

  13. DanDare says

    There is a whiff of “I’ll be OK, and one of the new transhuman overlords” about it.
    The extinction of all biology is very “some of you may die, but it’s a sacrifice I’m willing to make”.

  14. chrislawson says

    I used to think the Utility Monster was a ridiculous critique of utilitarianism with no bearing on reality, and yet here we have MacAskill embodying it.

    Most of the TESCREALists (terrible term, btw) are not utilitarians because they think utilitarianism is a useful tool for improving lives. Most of these people are utilitarians because it allows them to hack assumptions until they find a justification for whatever they wanted in the first place.

  15. chrislawson says

    Artor@11–

    ‘Effective altruism’, as developed by philosophers (especially Peter Singer), was easily co-opted by tech billionaires and their sycophants as a tool to further their own agendas (get more rich and powerful) while pretending to act for the global good. In the most obvious cases, e.g. Sam Bankman-Fried, it was nothing more than a cover story for his scam that served to make the rubes feel good about their greediness, and distract journalists and investigators. It worked very well for SBF for many years.

    This is the fundamental problem of utilitarianism: it relies entirely on the metrics you use to define ‘goodness’ or ‘happiness’. As one critic put it (can’t recall who), we haven’t yet figured out how many headaches equals a broken leg. And that’s just on the matter of pain, not all the vast and unquantifiable measures of existential happiness. But being a mathematical model (you add up the goodness of different approaches and choose the path of maximal benefit) also makes it appealing to techbros and financial scammers whose entire ethos is based on cleverly manipulating math to get what they want.

  16. John Morales says

    It’s like they read and decided it was a road map.

    Features all the tropes.

    chrislawson:

    It worked very well for SBF for many years.

    Um, he only started FTX in 2019, and failed in 2022.

  17. John Morales says

    [heh. I didn’t attach the link to anything. Getting careless in my old age]

  18. says

    AI is really really far down on my list of scary things that could kill me. I put it pretty close to choking on a balloon at this point.

  19. wzrd1 says

    daulnay @ 5, huckster is exactly on target. I’m already using that fact to improve my daily life with new spam filters – especially for the various headhunting firms and search services that hawk that ware as magic.
    Phrases such as “our latest ChatGPT search” or “Improve your resume with ChatGPT” make quite effective triggers, with laughably little variation to complicate a basic RegEx pattern.
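    For instance, a minimal sketch of such a trigger in Python (the phrases are illustrative stand-ins, not a tested filter; a real one would hang off the mail server’s rules):

    ```python
    import re

    # One case-insensitive pattern covers the whole family of pitches,
    # since the spam varies so little. Phrases here are examples only.
    CHATGPT_SPAM = re.compile(
        r"our\s+latest\s+chatgpt\s+\w+|improve\s+your\s+(resume|cv)\s+with\s+chatgpt",
        re.IGNORECASE,
    )

    def looks_like_chatgpt_spam(subject: str) -> bool:
        """Return True if the subject line matches a spam trigger phrase."""
        return CHATGPT_SPAM.search(subject) is not None

    print(looks_like_chatgpt_spam("Improve your resume with ChatGPT today!"))  # True
    print(looks_like_chatgpt_spam("Quarterly report attached"))                # False
    ```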

  20. jo1storm says

    I daresay that the term Luddite becoming a slur is one of the best examples of successful capitalist propaganda ever devised. Let’s just say that the worries of the frame-breakers who called themselves Luddites were completely justified, their predictions completely correct, and their tactics so successful that a new crime had to be invented and put on the books just to punish that behavior more harshly. More than two centuries later, we are facing the same problems and philosophical quandaries they faced, and some are proposing similar solutions.
    I’ll now talk about the dream of automation, what happens in reality, and what the Luddites’ quite successful answer to it was.

    The dream of automation is this: if we automate dreary and repetitive and horrible tasks, it will free more time for more fulfilling and useful tasks. The Eldorado of automation is that it will free so much time that leisure time for everyone will increase while the salary remains the same.
    In practice, the dream turns into a nightmare really fast. All that “freed time” is instead turned into a profit for the owner of the automation while the workers are fired and left to starve to death.
    The historical example was the automated weaving machine, the steam loom.
    Weaving is what you might call an average job. It is not as simple as fetching, the simplest job possible (pick a thing up, move it, put it down somewhere else). Newspaper delivery is a fetching job; so is ditch digging. Fetching is not complicated – it is simple, but it is hard work. Nor is weaving as complicated as engineering: building and maintaining the steam loom is a more complicated engineering job that requires years of training. Weaving takes three months at most to become reasonably productive, and after two years you are considered a master weaver. A lot of people are weavers; very few are steam loom builders.
    The dream: if I put in a steam loom, I can replace 40 weavers with a machine and 4 operators. I free up those 36 people to open a new shop somewhere else. Or, if I have money, I can buy 10 machines, train the 40 weavers to become 40 steam loom operators, and my productivity goes up 10 times or more – it goes through the roof. Now those 40 weavers don’t have to do 12 hours of hard labor; they can each work 6 hours of easier labor operating a machine, for the same salary. Everybody wins!
    The nightmare that came true: machine owners are greedy bastards. If they were not, they wouldn’t have been able to afford the machines, right? So what happens instead is that the owner doesn’t buy 10 machines; he buys 5 and hires two operators for each (12-hour workdays). And because the work of a loom operator is comparatively easier than that of a weaver, he refuses to pay a weaver’s salary to a loom operator. So what we now have is 5 machines and 10 loom operators working the same 12-hour shifts at a lower salary, replacing 40 weavers. All the money he saved by automation he pockets himself. What happens to the other 30 weavers? They get fired, and they and their families starve.
    Long term, they will find other jobs. Suddenly weaving is not a good profession to have, so they’ll find other jobs – less well-paid jobs, fetching jobs. They and their families will still have less money and less food on the table, but at least they’ll survive. And the worry is that it will happen to everyone! The money that was equally spread across a community of 41 people is now shared like this: 34 salaries go to one man (the machine owner) and 6 salaries go to 10 loom operators – see the arithmetic sketched below.
    The Luddites’ answer was simple: smash the machines. Show the machine owners that there are more weavers than owners, and that if they continue down that path they’ll face riots and consequences. Get concessions; make sure loom operators get the same salary weavers used to get, and that well-paid jobs and training are provided for the weavers who lost theirs to automation. And it worked, for a while. It was very effective, in fact.
    The greedy bastards’ answer to that? Call their lawmaker friends and turn machine-smashing – a simple property crime that used to be punished by paying for what you smashed – into a much worse crime with much harsher penalties. And thus what began as an act of protest became the crime of industrial sabotage. The penalty? Imprisonment and hard labor in prison. And your family still starves while you are in prison, because it is the early 19th century and the safety net is non-existent.
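    To make the salary arithmetic concrete, here’s a toy sketch in Python (units of one weaver’s salary; the numbers come from the scenario above, the variable names are mine):

    ```python
    # Before automation: 40 weavers each earn one weaver's salary.
    WEAVERS = 40
    WAGE = 1.0

    # After: 5 machines, 2 operators each, paid 0.6 of a weaver's wage,
    # so 10 operators share 6 salaries; the other 30 weavers get nothing.
    operators = 10
    operator_wage = 0.6 * WAGE

    operators_total = operators * operator_wage      # 6 salaries among 10 people
    owner_total = WEAVERS * WAGE - operators_total   # 34 salaries to one man

    print(f"{operators} operators share {operators_total:.0f} salaries")
    print(f"The owner pockets {owner_total:.0f} salaries; 30 fired weavers get 0")
    ```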

  21. jo1storm says

    Sorry for the wall of text, should have previewed. It looked okay in Word and comment box.

  22. John Morales says

    jo1storm,

    The dream of automation is this: if we automate dreary and repetitive and horrible tasks, it will free more time for more fulfilling and useful tasks.

    The dream of capitalist automation is this: if we automate dreary and repetitive and horrible tasks at lower cost than human labour, more profit will ensue.
    Turns out it’s generally capitalists who tend to have the capital with which to set up automation.

    The ancillary benefit of freeing the time of the peons labouring away at dreary and repetitive and horrible tasks is basically an epiphenomenon.

    No less real for that, of course.

  23. jo1storm says

    @22, you didn’t read the rest of my wall of text, did you? That’s the nightmare of automation, actually. The dream turned into a nightmare.

    Anyway, the modern version of this you can read on r/AntiWork and r/MaliciousCompliance when redditors automate themselves out of a job. The modern version goes like this:

    1) A redditor with an engineering bent or some engineering training is hired to do a job. The salary is average, the work to be done is average.
    2) They automate the tedious, time-consuming part of the job. Something that took 4 hours now takes 15 minutes at most.
    3) The boss notices and gives them more things to do. They automate those as well. The end result for the redditor: instead of doing two things for 8 hours a day, they are now doing ten things for six hours a day. The end result for the boss: the redditor is not working the job they were hired to do, or as hard as they were hired to do it.
    4) Gears in the boss’s head start turning. With this automation they can hire two much cheaper laborers! A trained monkey could do it! No need to pay an average salary when you could pay two low salaries; we are saving money here.
    5) They hire two interns and make the redditor train them in how to use the automation. Then they fire the redditor.
    6) The automation breaks, and the two interns quit (because the tedious pre-automation work is not what they signed up for).
    7) The redditor has to be replaced with 3-6 average people without an engineering bent.

  24. John Morales says

    I see. So, the modern version of automation means, in your head-canon, that one person’s engineering job is replaced with 3-6 average people without engineering.

    @22, you didn’t read the rest of my wall of text, did you?

    Of course I read the entire comment; thing is, I only commented on the conceit you mentioned. And, trust me, I used to read Dan Fincke’s blog, so what to you is a wall of text to me is a tiny curb to step over.

  25. Silentbob says

    @ ^

    By George, I think Morales just made a joke.
    *monocle pops out of eye*

  26. KG says

    There are earnest attempts in this area of ML, and a history of research behind it. Depending on the data sources and prompts, you can get useful results that don’t depend on so much supervision. That said, it’s not magic and possesses no innate intelligence despite the over-hyped rhetoric we’re getting now. Most importantly, it isn’t likely to destroy the world or cause the extinction of humans. Humans are quite capable of that without any help from artificial intelligence. – robro@6

    QFT. LLMs are the subject of both hype and anti-hype. The key question is whether they are a genuine step on the way to AGI, or a dead end, and on that, relevant experts are divided (e.g. Geoff Hinton on one side, Rodney Brooks and Ernie Davis on the other – the divide tends to mirror that between neural net and other approaches to AI), so the sensible option is to remain open-minded. That the technology underlying LLMs – deep learning – has produced notable advances in areas such as protein folding is not in real doubt, and LLMs themselves are certainly useful (but do make significant errors) in machine translation, speech recognition and programming.

    The “lives of wild animals as being worse than nothing on average”…who assesses that “worse”? – PZM

    I agree that we humans (the only possible candidates) can’t in practice make that assessment, but I’ve seen anti-vegan “arguments” that it would be wrong to abandon livestock farming because then domesticated cows/pigs/sheep/chickens/etc. would become extinct, and wouldn’t that be terrible for them? While vegetarians and vegans often rely on arguments that imply, if not stating outright, that the lives of most individual livestock animals are indeed “worse than nothing on average”. If we can (as I think we can) reasonably make that assessment, it’s hard to maintain that we can never in principle make the same judgement about those wild animals which have the capacity for suffering and enjoyment (whether that includes spiders, I don’t know).

  27. jo1storm says

    @24 John, that is the original meaning of automation: an engineer makes machines that do the work. The fact that these are now virtual machines made of software makes no difference. It is not the engineering job that is being replaced; it is the automation that is replaced by people doing the work manually.

    The conceit is on the boss. They got a higher-skilled worker than they paid for, and they got them cheap. They thought they were paying for a grunt; they got an amateur automation engineer. If they had hired a professional automation engineer to build what the amateur built for free, it would have cost them between 5 and 100 times what they were paying. Now that the machine that did the job is broken, one person without a machine can’t accomplish the same labor as one person with a machine. But 3-6 people can, depending on the machine.

  28. wzrd1 says

    jo1storm @ 27, but the more common event is this: the pointy-haired boss brings the original amateur engineer back as a consultant, for a hell of a lot more money, to fix the automation – after first accusing them and threatening litigation (which only raises the amateur’s price), trying to screw the engineer, and finally paying through the nose while the engineer pads their resume as a consultant.
    The boss continues such nonsense, the company fails and gets bought by a competitor, leaving the boss out of work and the owner even wealthier than before. The boss then ends up fetching.

  29. says

    The glaring problem that always jumps out at me with “we must all sacrifice, and not worry about being happy now, for the sake of our descendants (meat or virtual) being happy in the techno-utopia” is that there is no real end point. Even if the techbros brought about a future a thousand years hence with ten trillion virtual people, the message would still be “your happiness is not as important as the happiness of the ten quadrillion virtual people who will exist in another thousand years.”

  30. Daniel Martin says

    The specter of AI posing an existential threat to all of humanity is a convenient distraction and smokescreen.

    It is what first-movers in the current AI hype cycle (e.g. ChatGPT) throw up to try to get the government to come in and regulate further AI development for everyone’s “safety”. The truth is that current AI poses nothing like that threat, but also the current first movers have no moat against second-movers, and desperately want to create one with needless regulation.

    There is, though, a very real and current threat from AI, and that’s discrimination/bias. Deep learning/ML/AI-based solutions (whatever you call them) are well known to identify and replicate (and in some cases amplify) racial and gender biases present in their training data. Also, the vast majority of current solutions built on these technologies render a black-box judgement. “Why does the system recommend that this person not be granted bail?” You can’t answer that, because how the system turned its inputs into its outputs is often a proprietary trade secret, and even when it isn’t, it is very difficult to debug.

    That’s the danger of AI technology: biased systems used to render judgements that can’t be inspected or queried, then enforced by humans “just doing their job” as the machines’ output told them to. Addressing that, however, would require reworking the business models of loads of ML/AI startups, so we aren’t going to see much willingness to investigate it (aside from individual researchers/activists like Timnit Gebru).
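    To illustrate the “can’t be inspected or queried” point, here is a toy sketch of the only audit available from the outside: probing the black box by perturbation. Everything in it is hypothetical – the scorer stands in for some vendor’s hidden logic, and the zip code is an arbitrary stand-in for a demographic proxy:

    ```python
    # Toy sketch: auditing a black-box scorer we can call but not inspect.

    def proprietary_score(applicant: dict) -> bool:
        # Hypothetical stand-in for a vendor's hidden logic: it quietly
        # uses the zip code (a classic demographic proxy) alongside history.
        return applicant["prior_offenses"] < 2 and applicant["zip_code"] != "60624"

    def decision_flips(applicant: dict, field: str, new_value) -> bool:
        """True if changing a single field changes the black box's output."""
        probed = {**applicant, field: new_value}
        return proprietary_score(applicant) != proprietary_score(probed)

    applicant = {"prior_offenses": 0, "zip_code": "60624"}
    # Same record, different zip code: the recommendation flips.
    print(decision_flips(applicant, "zip_code", "60601"))  # True
    ```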

  31. jenorafeuer says

    Daniel Martin@30:
    I remember hearing about a situation like that back in university (1990 or so): a centuries-old university with a bad reputation for bigotry in its acceptance criteria decided to ‘fix’ this by training a neural net on its previous acceptance data, on the theory that the computer itself couldn’t possibly be prejudiced.

    Tests arranged by other people using classic techniques (send out the exact same CV but with different male/female/foreign names and see if there’s any difference in acceptance) still found biases. Because the people who trained the neural net had included the names in the training data, the computer had internalized that certain foreign-looking or feminine-looking names were less likely to be accepted, and so carried significant weights associated just with name variations.

    The public denunciation was enough to get them to re-run the whole training sequence from scratch with the names removed from the training data.

  32. xohjoh2n says

    @31:

    …so now it generates capricious and perverse results, because the bias is still there in the training data but the net no longer knows how to assign that bias to its inputs…

  33. jo1storm says

    @31 Oh, I heard about that one. So they removed names and all mentions of sex from the training data. It still removed 95% of women from the candidate pool. It turned out to be rejecting them based on 1) the colleges they studied at (all “female” colleges were automatically a no) and 2) work experience at “female institutions”, as decided by the language used in the description – nursing, secretary, dactylographer, etc.

    Further investigation found 2 (two!) guys who worked at the university and had the final say in the hiring process, and who were, frankly, misogynistic assholes. One worked the position for over 20 years, the other for over 30 as his successor. Combined, in nearly sixty years they never hired a woman for a leadership position unless there was heavy political or funding pressure – 3 (!) hires in total, across 120+ positions.

  34. xohjoh2n says

    @31,33:

    …or, indeed, it finds something else as a proxy for the redacted names…
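    To make that failure mode concrete, here’s a minimal sketch with synthetic data (not any real admissions or hiring system): train on historically biased labels with a name-derived feature and the model loads the bias onto that feature; redact the name but keep a correlated proxy, and the bias simply migrates:

    ```python
    # Toy sketch of bias laundering -- synthetic data, invented features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    grades = rng.normal(size=n)          # legitimate qualification signal
    female_name = rng.integers(0, 2, n)  # 1 if the name "looks female"

    # Historically biased labels: grades matter, but female-looking names
    # are penalised regardless of qualification.
    accepted = (grades - 1.5 * female_name + rng.normal(0, 0.5, n)) > 0

    # Train with the name feature included: the model learns the bigotry.
    with_name = LogisticRegression().fit(np.column_stack([grades, female_name]), accepted)
    print("weights, name included:", with_name.coef_)  # large negative name weight

    # Redact the name, but keep a correlated proxy (say, a women's college):
    college = female_name * (rng.random(n) < 0.8)
    no_name = LogisticRegression().fit(np.column_stack([grades, college]), accepted)
    print("weights, proxy only:  ", no_name.coef_)     # bias migrates to the proxy
    ```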

  35. wzrd1 says

    It is interesting to see AI’s fall for the same traps humans fall for.
    When I look at a personnel file or resume, I ignore the name, the sex, and, in military files, the personnel file photograph. It never ceased to amaze me how many leaders judged, and even suggested, people by showing the personnel file portrait.
    I was weird: I looked for qualifications, experience, signs of advanced study, and greater diversity in experience. That whole “is the candidate qualified” thing being critical.

  36. John Morales says

    wzrd1:

    It is interesting to see AI’s [sic] fall for the same traps humans fall for.

    Um. You are both giving those systems too much credit, and not enough.

    Put it this way: The very same system if trained on a different dataset would give different results. And I’m pretty sure you yourself claimed they are not intelligent.

    Of course, the training dataset need not be static — that is, used once and never again updated. And, of course, there’s nothing to stop fine-tuning by curation.

    In short, you too are falling into the traps of (a) thinking the system as a whole is the same as the decision engine, and (b) assuming no improvement over time.

    (As with any other tool, unless used properly it may not achieve the desired result)

  37. wzrd1 says

    It was a joke. A brainless model failing in the same way that a brainless human fails.

    An entertaining item: early neural networks could take the same dataset for training and yield different results.

    I’m uncertain about improvement over time, as I’ve seen nothing on whether the bot is currently still in learning mode or not. If it’s in continuous learning mode, learning from users who press for an accurate answer when an erroneous one was initially supplied would move it toward improvement. Otherwise, one has to rely on the developers alone, and hence on the good graces of the company paying them.

    But, you are spot on, the right tool for the right job.

  38. wzrd1 says

    Yep! But not so old as to remember a moth stuck in a relay. ;)

  39. GerrardOfTitanServer says

    we haven’t yet figured out how many headaches equals a broken leg

    This is where John Rawls’ Veil of Ignorance comes to the rescue, partially. While we can’t determine a formula or come to complete agreement on individual cases, asking “OK, suppose you were randomly assigned a position in this society, including parents and native talents – which society would you prefer?” seems to tease out our real feelings on the matter.