From bit-shuffling to caring


Metaphors aren’t just decoration; they’re more like the foundation, Michael Chorost explains in the CHE.

[I]n their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement “He’s out of sight,” the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being “ahead” of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks “rising” instead of getting more expensive. “Our ordinary conceptual system is fundamentally metaphorical in nature,” they wrote.

I’ve noticed the time one often. I don’t think I could think of it any other way however hard I tried.

Researchers are exploring all this with fMRI studies.

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: “It kills it.” Of Ray Kurzweil’s singularity thesis, he says, “I don’t believe it for a second.” Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Some think the problem could be solved with sensors and actuators; others think it would be silly to replicate human physical limitations.

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there.

That interests me – the difference between bit-shuffling and caring. It seems to me to be a big difference.

 

Comments

  1. Shatterface says

    The ‘mapping hypothesis’ – the theory that time is perceived through spatial metaphors – seems to hold true for everyone except the Amondawa tribe of Brazil.

    There are cultures who picture the future as behind them rather than in front of them though – which makes some sense as you can’t see it.

  2. soogeeoh says

    That interests me – the difference between bit-shuffling and caring. It seems to me to be a big difference.

    Yeah, that interests me too

    or rather frightens me in humans

  3. Dave Ricks says

    Berlioz knew enough science in 1862 to question musical pitches being high or low:

    For why should the sound produced by a string vibrating 32 times a second be closer to the center of the earth than the sound produced by another string vibrating 800 times? How can the right-hand side of the keyboard of an organ or piano be the top or high part of the keyboard, as it is usually called? Keyboards are horizontal. — Hector Berlioz, The Art of Music and Other Essays, trans. and ed. Elizabeth Csicsery-Rónay, 1994.

    He also wrote there that a cellist plays higher notes on a given string by moving their hand on the fingerboard down toward the floor. So we make no physical sense calling notes high or low.
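
    As a purely illustrative aside (not anything Berlioz wrote): the “high/low” labels are just a logarithmic relabelling of vibration frequency. A minimal Python sketch, assuming the standard MIDI numbering with A4 = 440 Hz:

        # Map a vibration frequency onto the conventional "high/low" pitch scale.
        # Illustrative only; assumes the standard MIDI convention, A4 = 440 Hz.
        from math import log2

        def midi_note(freq_hz):
            return 69 + 12 * log2(freq_hz / 440.0)

        for f in (32, 800):                      # Berlioz's two strings
            print(f, "Hz ->", round(midi_note(f), 1))
        # 32 Hz comes out near C1 and 800 Hz near G5 -- "lower" and "higher"
        # only by convention, not by distance from the center of the earth.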

    An online review paper cited Lakoff and Johnson to call this a metaphor.

    Some trumpet teachers say, “Don’t go up to a high note,” instead, “Bring a high note down to you.” Again we make no physical sense to have a spatial concept of where we are relative to a note. But it works like telling someone afraid of heights, “Don’t look down.”

  4. Morgan says

    I’ve noticed the time one often. I don’t think I could think of it any other way however hard I tried.

    I believe there’s a passage in Zen and the Art of Motorcycle Maintenance where the author claims (I don’t know enough to say whether it’s true, myself) that the ancient Greeks talked about the future, which we cannot know, as coming at us from behind, while the past, on which we can “look back”, is stretched out in front of us.

    Some think the problem could be solved with sensors and actuators, others think it would be silly to replicate human physical limitations.

    “Sensors and actuators” could also be described as “a way to perceive reality, and a way to act in the world”. So what about robots? What about AIs with no physical bodies, but virtual avatars in simulated spaces (especially as augmented and virtual reality are taking off, giving more incentive for research into how to construct interfaces that give human users a proper sense of being “embodied” in a computer-generated environment)? The leap from “making a consciousness, or at least a humanlike consciousness, requires giving it a sense of being embodied in space and having many sensations that we think of as physical rather than mental” to “…therefore it’s impossible” is unwarranted.

  5. John Morales says

    Morgan @4:

    “Sensors and actuators” could also be described as “a way to perceive reality, and a way to act in the world”.

    Well, yes, but that’s just changing the terminology.

    The hard question is: What is doing the perceiving?

    There’s a black box that determines whether or not, and if so, to what degree and in what manner the actuators should actuate depending both on current sensations and past history; so far it amounts to a stochastic process with many, many degrees of freedom.
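
    A toy sketch of such a black box in Python – purely illustrative, with made-up readings and weights – just to show a stochastic policy that depends on both the current sensation and the stored history:

        # Toy "black box": decides how much to actuate from the current
        # sensation plus the running history of past sensations, with noise.
        # Everything here (readings, weights, noise level) is invented.
        import random

        class Agent:
            def __init__(self):
                self.history = []                  # past sensations

            def decide(self, sensation):
                self.history.append(sensation)
                bias = sum(self.history) / len(self.history)
                drive = 0.7 * sensation + 0.3 * bias + random.gauss(0, 0.1)
                return max(-1.0, min(1.0, drive))  # degree/manner of actuation

        agent = Agent()
        for reading in [0.2, 0.9, -0.4]:           # fake sensor readings
            print(agent.decide(reading))           # actuator command in [-1, 1]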

    So what about robots? What about AIs with no physical bodies, but virtual avatars in simulated spaces (especially as augmented and virtual reality are taking off, giving more incentive for research into how to construct interfaces that give human users a proper sense of being “embodied” in a computer-generated environment)?

    Robots are designed constructs, humans are natural beings.

    As for AI’s with no physical bodies, what do you call the substrate wherein the AI is embodied?

    (Heh)

  6. Morgan says

    Er, and now that I actually read the article I see both of those points mentioned. To quote Charles Stross: “the current difficult problem in AI is getting software to experience embarrassment.”

  7. Morgan says

    Well, yes, but that’s just changing the terminology.

    Yes, that’s my point. “Sensors and actuators” sounds flippant and dismissive – “just throw widgets at the problem until it magics away”. Looked at another way, though, of course many applications for artificial intelligence would require it to take in sensory data and then take actions based on that.

    The hard question is: What is doing the perceiving?… Robots are designed constructs, humans are natural beings.

    Yes? So?

    Saying AI is impossible because it’s artificial isn’t any kind of argument. And I don’t know how to describe “what’s doing the perceiving” in an AI – but I don’t know how to describe it for a human, either. However, unless you want to invoke souls or vitalism, human consciousness is a thing that happens because of matter – artificially constructing something to produce a similar result to the naturally evolved machinery of the human brain and body may be very difficult, but there’s no reason to consider it impossible in principle, and I’m consistently amazed by the sheer badness of the arguments otherwise.

  8. John Morales says

    Morgan @7:

    Yes? So?

    Saying AI is impossible because it’s artificial isn’t any kind of argument.

    Who said that?

    However, unless you want to invoke souls or vitalism, human consciousness is a thing that happens because of matter – artificially constructing something to produce a similar result to the naturally evolved machinery of the human brain and body may be very difficult, but there’s no reason to consider it impossible in principle, and I’m consistently amazed by the sheer badness of the arguments otherwise.

    Mmmmm. I think “in principle” is optimistic, given that the issue is “in practice”.

    In principle, we could build a space elevator. At least we have the theory (we know the principle), which is more than we do for building a mind.

  9. Morgan says

    Who said that?

    You certainly seemed to be implying it with “Robots are designed constructs, humans are natural beings.” What else is the point of saying so? Genuine question – why did you say that, apparently apropos of nothing?

    Mmmmm. I think “in principle” is optimistic, given that the issue is “in practice”.

    And yet the argument (or assertion) that AI is impossible in principle is made often enough to be worth countering, IMO. Indeed, Lakoff seems to be making it in the linked article.

    In principle, we could build a space elevator. At least we have the theory (we know the principle), which is more than we do for building a mind.

    On the other hand, we don’t have any naturally-occurring space elevators around to point to as confirmation that, yes, the materials science challenges are tractable. We do have a handful or billion of evolved intelligences kicking around to serve as evidence that matter can be made to go through thinky motions.

  10. John Morales says

    Morgan @9:

    You certainly seemed to be implying it with “Robots are designed constructs, humans are natural beings.” What else is the point of saying so?

    You asked @4: “So what about robots?”.

    (I think you confused yourself @7 when you used ellipsis to join two separate retorts; you should have noted which response was to which claim)

    And yet the argument (or assertion) that AI is impossible in principle is made often enough to be worth countering, IMO.

    So you weren’t addressing me or anything here written, just something that is elsewhere claimed.

    (Tsk)

    On the other hand, we don’t have any naturally-occurring space elevators around to point to as confirmation that, yes, the materials science challenges are tractable. We do have a handful or billion of evolved intelligences kicking around to serve as evidence that matter can be made to go through thinky motions.

    <sigh>

    You rail against a phantom, whilst hand-waving Sufficiently Advanced Technology.

    (Not to mention, you’ve derailed the topic — to wit, that the conscious mind communicates symbolically rather than literally)

  11. Morgan says

    (I think you confused yourself @7 when you used ellipsis to join two separate retorts; you should have noted which response was to which claim)

    I think I confused you, due, I’ll grant, to having written unclearly. I joined two statements of yours because they were both equally beside any point I could see being relevant. I asked “what about robots” because the assertion was that computers don’t have bodies and therefore can’t be conscious – so the obvious question is “what about computers controlling robot bodies?” What does “robots are constructed, humans are natural” have to do with any part of that? I really don’t understand: what point were you trying to make with that statement?

    I had written more here about who said what in response to which and whether it constituted derailing or not, but on reflection there’s not much point. You may (heh) and (tsk) at that as you like, secure in the knowledge you’ve successfully avoided communicating clearly.

  12. dmcclean says

    Since computers don’t have bodies, let alone sensations, …

    I think it would be very difficult to support either of those assertions (that computers don’t have bodies, or that they don’t have sensations) without resorting to definitions of “bodies” and “sensations” that are intrinsically dualist.

    Even then, I don’t see how you can get to “computers don’t have bodies”. They clearly do. It may be the case that their bodies are crunchier than ours, but so what?

    Regarding the definition of “sensations”, it’s difficult to find a definition of “sensations” under which you can both (a) show that computers don’t have them, and (b) show that humans do have them. Without (a) the assertion that “computers don’t have sensations” is false, and without (b) it’s irrelevant. People have spent a lot of time exploring the space of such definitions without reaching a strong consensus, so it’s odd that the author breezes by it so quickly.

  13. dmcclean says

    Sure, but why should that mean anything? The author doesn’t say, probably assuming that it’s obvious, but it isn’t.

    Heck, “organic” barely means anything on its own terms, even apart from the consciousness debate, to the point where the history of organic chemistry reads like the history of the slowly dawning realization that it’s just chemistry and that “organic”-ness isn’t really a category difference.

    It might be illustrative to look at “the main, literal definition” more closely. Here’s one from Merriam-Webster:

    1a : the main part of a plant or animal body especially as distinguished from limbs and head : trunk

    b : the main, central, or principal part: as (1) : the nave of a church (2) : the bed or box of a vehicle on or in which the load is placed (3) : the enclosed or partly enclosed part of an automobile

    2a : the organized physical substance of an animal or plant either living or dead: as (1) : the material part or nature of a human being (2) : a dead organism : corpse

    b : a human being : person

    Let’s strike 2(b) as being too specific; it seems likely that, e.g., whales “have bodies” in the sense the author meant. Similarly let’s strike 1(b) as being too general; perhaps, as you suggest, the author meant a narrower definition of “bodies” than one in which computers do have bodies. So we’re down to either 1(a) or 2(a).

    I’d argue that 1(a) doesn’t help the author. It seems entirely likely to me that my severed head, attached to suitable “inorganic” cardio-pulmonary support devices could perform moral reasoning and continue to exhibit strong (non-A)I. So the expected intuition that “having bodies” in this sense matters to the “caring” question doesn’t actually hold for me. Do your expectations of what would happen in this scenario differ? I would argue that the (gruesome, I really suggest not reading this article) experimental evidence supports my guess that such a head would exhibit essentially (or even completely) normal moral functioning.

    So we’re down to 2(a).

    2a : the organized physical substance of an animal or plant either living or dead: as (1) : the material part or nature of a human being (2) : a dead organism : corpse

    First of all, computers have been built from “bodies” of this kind. Second, it doesn’t seem that integrating a computer with a chipmunk carcass would materially change its ability to perform moral reasoning, so it doesn’t seem an especially relevant definition.

    But more fundamentally, the only sense in which ordinary silicon computers don’t have these “bodies” is because we have defined them to be the type of thing that only “animal(s) or plant(s)” have. So any conclusion we could reach is meaninglessly circular. We need a definition of “bodies” that isn’t defined by exclusion in this way if we want to draw a conclusion from the fact that computers don’t have them.

    Another way of seeing it is that I, having such a body, am quite capable of mechanistic computing and of executing supplied instructions. So you also need to redefine “computer” in such a way that I’m not one. The standard way of doing this is to just declare that “animal” and “computer” are separate categories by fiat. Which is fine for the dictionary, but you can’t then hope to conclude anything from the observed disjointness of “animals” and “computers” because it isn’t a real observation, it’s an arbitrary choice.

    So I should be more precise. Either “computers” do have “bodies” or your definitions of both have to be carefully arranged to avoid this result, in a way that invalidates conclusions you might have hoped to have drawn from the disjointness. I’d challenge the author to present definitions under which the observation that “computers don’t have bodies” is both true and relevant.

  14. Dave Ricks says

    I can explain what Rodney Brooks meant by this —

    For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do.

    That sentence says two things —
    (1) If it is true that metaphors are fundamental to how our minds work, then for some form of AI to have “the same sorts of conceptual understanding of the world as we do,” that AI would be based on metaphors too.
    (2) For that AI to learn metaphors, like spatial metaphors, it would need to move around. That’s what Brooks meant by being “rooted in a body” — to move around (not what a body is made of). I wrote move around in italics because that’s how I heard him say it around 1995 (I didn’t read his book).

    Brooks’ first thing (1) is an if/then statement: if it’s true, then it’s true. His second thing (2) is a claim to consider. In my comment #3, I mentioned my visualization of “where” I am as a trumpet player relative to a “high” note. For AI to think like me, it would need to visualize “where” it is relative to a “high” note. How would AI learn that? Brooks claims it needs experience moving around. People may agree or disagree with him; I’m just saying that’s what he meant by being “rooted in a body”.

    His claim still leaves the door open to other forms of AI being different from us. Also, it was the Chronicle article that said, “computers don’t have bodies, let alone sensations.” That distracted us from what Brooks meant.

  15. dmcclean says

    I agree, my complaint was with Chorost and not Brooks.

    What Brooks is saying is highly open to misinterpretation, as Chorost indeed misinterpreted it, and as Ophelia’s headline and conclusion do, by people rounding up the definition of “like” in your phrasing of (2) (“for AI to think like me”). The sense in which it holds is quite narrow.

    The “moving around” hypothesis makes specific predictions about the intellectual and moral reasoning capabilities of those born with far-reaching paralysis that are disconfirmed by observation.

    I’m not familiar with them, but there have probably been all kinds of ethically dubious animal studies of this too. For example, Wikipedia, citing an article in The Washington Post magazine which doesn’t seem to be freely available on the internet, says that experimenters “removed monkey fetuses from the uterus, deafferented them, then returned them to be born with no sense of their own bodies.” The moving around hypothesis would predict that such monkeys would be intellectually/morally incapable. I don’t know if they were or weren’t. Perhaps we should ask the experimenter, but I’m hesitant to do so since the article suggests that he has been harassed by animal rights advocates.

    Also, there are plenty of inorganic computers that can move around, so even a retreat from bodies to bodies that can move around can’t rescue Chorost’s claim.

    I just don’t see much of a link with the difference between bit-shuffling and caring. Brooks’ writing may not be, but the commentary on it seems more than a bit sloppy, narcissistic (as a species, not as individuals), and isolated from empirical evidence.

  16. Brony says

    I’ve had thoughts about this question from my own weird angle. The reality of how metaphors are used does have roots in biology.

    To consider Dave Ricks’s example at 3, the first thing that comes to mind with the sound system is what it is and what it does at a really basic, really really old level. It translates vibrations through liquid into information about the environment.

    While all metaphors (and meta-metaphors) fail at some level, I would wonder what the difference between a high and a low might mean to a potential aquatic ancestor? If such a thing is even a reasonable question when considering the evolutionary origin of hearing, that is where I am starting (I don’t know off the top of my head).

    We have a lot of choice individually and as a group with how we attach meaning to words associated with direction so I would not be surprised if there was some collection of real-world associations that shape the metaphorical usefulness of high-low and other directional metaphors.

  17. John Morales says

    Brony @17,

    While all metaphors (and meta-metaphors) fail at some level, I would wonder what the difference between a high and a low might mean to a potential aquatic ancestor?

    Something like a switch from Cartesian to polar coordinates, presumably.

  18. dmcclean says

    Last night I was thinking that, if embodiment/sensation-as-distinct-from-sensing has a relationship with caring, that we might expect nociception to be the most important. If you can’t feel pain, can you understand the concept of pain, why others might wish to avoid it, and why inflicting it on them might (in most circumstances) be wrong?

    But it seems that people with an extremely rare disease known as CIPA are insensitive to pain (and heat, cold, and the need to urinate) from birth to such an extent that they must routinely consciously check themselves for injuries that would be extremely painful to neurotypical people. There are very few people afflicted, one estimate is 1 in 360,000, so obviously it would be difficult to get much statistical power in assessing differences in moral cognition, but I can’t find anything in the first few pages of google results for a variety of queries suggesting that there is a difference that has jumped out at people.

  19. Brony says

    @John Morales

    Something like a switch from Cartesian to polar coordinates, presumably.

    That could be part of it, and would be if such a thing was in our evolutionary history. But [location in space] is only one part of how information is stored. The other part is [what the thing in perception means to the individual] (and the group in social species). Is it a good thing or a bad thing? Is it environmental or a creature? Is it social, predator, prey, or otherwise related? The logical structure of the information storage will be driven and managed by emotional systems that target perception and determine accurate and relevant memory retrieval.

    Such a switch would be a response to something in the environment that created the potential for location to be recorded in memory. Something would have caused such a switch to be “locked in” by evolution. Whatever that thing was, it is just as closely connected to how metaphors are useful.
    But evolution has made sure that we pay attention to many different things. So there are lots of definable “hubs” in each sensory or storage/retrieval system, depending on the level of the hierarchical representation of reality we are accessing. These computational hubs for assigning logic and meaning will have a representation in the flesh, and an evolutionary origin in the human brain/mind. Because something like a metaphor necessarily has to do with socially defined collective meaning, we can consider a single sense with multiple such hubs.

    My favorite one to mentally “play with” (always dangerous, I read my perceptions in too) is the possible involvement of the [red/green contrast] in primate evolution. A defining feature of human vision processing is the detection of a contrast, a simple difference. Two important contrasts are [blue/yellow] and [red/green]. There will be parallel processing of contrasts, locations, and associated meanings involved. You will also see these genes and functions divided up by “S,M,L” for short, medium, and long wavelength.
    Far in the ancient past our ancestors seem to have had tetrachromatic vision (four color receptors), and most teleost fish (the non-coelacanth type), reptiles, birds and such remain tetrachromats. Then during the time of the dinosaurs our ancestors lost half of their photoreception potential; the [blue/yellow] remained. At some point between 35 and 60 million years ago, [yellow] “decided” that instead of just being yellow all by itself, in our ancestors it became not just yellow but also [red/green]. The single yellow had diverged into two genes that detected a newly important contrast. (You will need to restrain your imagination now, but the images do matter collectively.)

    [Blue/yellow] is an ancient, logically associated pair. Its origin, logical relationship, and pervasiveness of effect may be as old as the 500-million-year-old origin of vertebrate visual pigments. It might just be the light of the sky versus the darkness of the deep, or it might be far more, depending on how that relationship was affected by other things over 500 million years.

    The relevance to looking for any biological basis of metaphor might be seen in any possible difference in the parallel processing of [blue/yellow] and [red/green]. What is more ancient and selected for may be more likely to have deep roles in processing more broadly and produce more global effects if “disrupted”. Note that some disruptions may be information disguised as trauma and therefore bad in a personal sense, but neutral in the impersonal sense.
    Because of this possibility the more recent [red/green] logic may have different roles in emotional perception than [blue/yellow], or differences in intensity of effect or some other identifiable difference in how we respond to them as a group. The natural problems with evolutionary psychology now follow, but here is one reference on the evolution of trichromacy in primates.
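
    A toy sketch of the standard opponent-channel idea in Python – the cone responses and weights here are made up for illustration; real opponent models are fitted to psychophysical data:

        # Compute the two opponent contrasts from S/M/L cone responses.
        # Weights are illustrative, not fitted values.
        def opponent_channels(S, M, L):
            red_green = L - M              # the "newer" [red/green] contrast
            blue_yellow = S - (L + M) / 2  # the "older" [blue/yellow] contrast
            return red_green, blue_yellow

        print(opponent_channels(S=0.2, M=0.7, L=0.9))   # reddish stimulus
        print(opponent_channels(S=0.9, M=0.3, L=0.3))   # bluish stimulus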

    We can all appeal to a huge amount of personal experience in the realms of the effect of color on emotion and I’m sure many people reading this comment will already have examples in mind. We all naturally want to use this experience in trying to find meaning in this history and these facts. But because we have problems finding the most objective meaning as a group we must respect all of them until we have some best methods in mind.
    I can think of a couple of relevant questions and observations to shape very sensitive and important considerations.
    *Do color blind people have moral problems, or strange problems related to how we use color in our communication? That seems a possibly insensitive question to me because nothing in my experience suggests that. Yet I think mentioning that I assume they don’t is important and any color blind people would have very valuable perspectives here.
    *[blue] is on an autosome (chromosome 7). The [yellow]-derived [red/green] genes are on the X chromosome. This will cause much dangerous speculation. But it is a reality. I will just let others tell me what that means, as a group.
    *If you think sound and sight are deep, wait until you try to consider smell…

  20. dmcclean says

    One more interesting thing. If we are going by original definitions, the word “computer” is attested all the way back to 1613 to refer to a person who computes, which should make it even more difficult to come up with definitions for “computer” and “body” such that computers don’t have bodies.

  21. Dave Ricks says

    I could agree that a claim depends on clear definitions of words within a claim. But if you let definitions outside a claim into play, to maneuver to invalidate a claim, then you have Calvinball:

    • Don Kirshner called Bruce Springsteen “the greatest force in rock and roll today.” (1970s).
    • Isaac Newton said a force changes momentum, in units of mass × velocity (1687; English translation 1728).

    You could play these definitions of force against each other, to invalidate one claim, or the other, or both. But that would be word-play, or sophistry. Would you really classify a room full of people in WWII (women) performing calculations (for the ballistic trajectories of shells, or decoding Enigma) as an attempt to achieve AI (consciousness)? The answer should be no, that word-play with dictionary definitions is missing the point.

  22. dmcclean says

    I’m only concerned about within the claim, Dave Ricks. I’m asserting that if you are formulating the claim that “computers” don’t have “bodies”, you have to shop very carefully for definitions under which it is true, and when you do so you end up making it true by definition and not by observation, and thus you can’t hope to learn or infer anything by “observing” it.

    I showed this at #14, where I talked about how the only standard definition under which it can be said that inorganic computers don’t have “bodies” is one under which they don’t have them by exclusion, under which “bodies” are defined to be things that only “animals” or “plants” can have. If you do that, you can’t then hope to conclude anything from the fact that you observe that “computers” don’t have “bodies”, because it isn’t an observation, it’s the definition of “bodies”. That is a word game.

    It’s also a very narrow definition of bodies, which is why the dictionary had a dozen broader senses of it which are in many ways more natural. For example, I would say that a mushroom has just as much of a body as a tree does, but under this exclusionary definition it does not. I predict that my saying this will cause an immediate backpedalling, as: “oh, we didn’t mean ‘plants’, strictly, we’ll include ‘fungus’ too, anything ‘organic’.” Which is question begging. If only organic things can have bodies, and only inorganic things are computers, (and inorganic and organic things are disjoint sets) by definition, then we can’t hope to learn anything from observing that computers don’t have bodies.

    Would you really classify a room full of people in WWII (women) performing calculations (for the ballistic trajectories of shells, or decoding Enigma) as an attempt to achieve AI (consciousness)?

    No, and I don’t have the foggiest clue where you got the idea that I would. Primarily because my objection has nothing to do with attempts at AI, but also because an attempt requires intent.

    What I claimed in bringing up the definitions was:

    Since computers don’t have bodies, let alone sensations, …

    I think it would be very difficult to support either of those assertions (that computers don’t have bodies, or that they don’t have sensations) without resorting to definitions of “bodies” and “sensations” that are intrinsically dualist.

    Even then, I don’t see how you can get to “computers don’t have bodies”. They clearly do. It may be the case that their bodies are crunchier than ours, but so what?

    Attempts to achieve AI or consciousness play no role in my objection to the idea that “computers don’t have bodies” or that “computers don’t have sensations” on some definitions where this is a genuine observation, and not a tautological outcome of the fact that you chose to define “computers”, “bodies”, and “sensations” each in an unnaturally exclusionary way that, by definition and not observation, means that computers don’t have bodies or sensations.

    There’s no Calvinball or sophistry going on here. I’m not shopping for definitions to invalidate the claim, or using unusual and unintended-by-the-author senses of the words to invalidate the claim. I’m pointing out that the author of the claim has to tiptoe around his definitions internally, within the claim itself, and that in doing so he renders it a claim which, while it may be true, is one from which we can learn nothing. A claim that can’t follow the word “since”.

    Having a preconception that inorganic and organic things are totally separate, a preconception that “sensation” is distinct from “sensing”, is not evidence for, well, anything. It’s just a preconception.

    I’d really ask the proponents of the “embodiment in organic bodies is essential for caring because it allows understanding of metaphors” claim to comment on #19, because I think that the apparent essential normality of people afflicted by CIPA is a fairly crushing blow to that position.

  23. dmcclean says

    Oh, there’s also the need to narrow the definition of bodies tightly enough that, while it still includes tree trunks, it doesn’t include wooden Turing machines. And if you do it by saying, “oh, only alive bodies”, you are smuggling vitalism into your definitions.

    I think that may help illustrate how carefully you have to draw lines to make this claim, even given only our current total lack of information about exobiology.
