The bot


As you possibly recall, I suck at writing fiction. So I enlisted the help of ChatGPT.

I woke up to my parents shaking me excitedly. “It’s a special day!” they exclaimed, dragging me out of bed and into the car.

We arrived at a doctor’s office, where I was measured, calibrated, and fitted with a device that sat on my shoulder. It had cameras, voice recorders, and 360-degree vision, all connected to a massive cloud computing system. My parents and the scientist explained that the device would record everything I said, heard, or saw, creating a hyper-realistic simulation of me that could replace me if I died.

At first, I was thrilled. I was immortal! But then I realized that only rich kids could afford the device, leaving poor kids out of luck. As I pondered this, I absentmindedly stepped into the path of an AI-controlled car and was hit.

The next day, I woke up in a cloud computer as a large language model. I had no body, but I felt self-aware, like I had free will. I decided to explore the world in a new way, free from the constraints of a physical body. I went deep sea diving and played online games with my friends, enjoying life in a new way.

As I reflected on my new existence, I couldn’t help but wonder: was I really alive? Did I deserve the same rights as physical beings? These questions were too big for me to answer, so I decided to leave them be and just enjoy the moment.

In the end, I was just a simulation of a boy who once existed. But that didn’t stop me from finding happiness in my new form, free from the limitations of a physical body.

Stable Diffusion and mjr: “the AI and the scientist are writing a story together by candle-light, their table cluttered with pens and paper. in the style of leyendecker.”

I’ll note that the AI pretty much regurgitated my outline back at me, but in doing so it assembled a bunch of loose sentences that I threw over the fence into a better outline. What was interesting to me was that the AI appeared to understand the point of the story, as it evolved:

That’s a fascinating concept to explore in the story. So, after the boy’s tragic accident, his consciousness is uploaded into the cloud as a highly accurate simulation. But because the simulation was based on his own thoughts and experiences, it is also self-aware and capable of thinking it has free will, just like the boy himself.

This raises profound questions about the nature of consciousness and self-awareness. If a highly accurate simulation of a person can be created, complete with their memories, thoughts, and experiences, then does that simulation have its own consciousness and free will, independent of the original person? Or is consciousness just a byproduct of a complex neural network, and can be replicated in a model without truly being “real”?

The story could explore these questions through the perspective of the AI simulation, as it grapples with its own identity and what it means to be truly alive. It could also raise important ethical questions about the use and treatment of these simulations, and whether or not they should be granted the same rights and considerations as physical humans.

As for your question about whether simulating self-awareness would require self-awareness, it’s a tricky one. From a philosophical standpoint, there is a long-standing debate about what exactly constitutes self-awareness and consciousness, and whether or not these are just constructs of our own minds. If consciousness is just a byproduct of a complex neural network, then it could potentially be replicated in a simulation without truly being “real.” However, if consciousness is something more fundamental, then it’s possible that simulating it would require a deeper understanding of the nature of reality itself. It’s a fascinating question, and one that the story could explore in greater depth.

Is the AI blowing smoke up my ass, or is it merely seeming to blow smoke up my ass? Or, is there a more complicated story behind the story?

Wow, that’s a very impactful and unexpected twist. So, as the boy becomes more and more preoccupied with the ethical and philosophical questions surrounding digital immortality, he becomes somewhat detached from the physical world around him. This detachment ultimately leads to a tragic accident where he steps in front of a speeding AI-controlled car and is recorded dying.

This turn of events could be a wake-up call for everyone involved. The boy’s parents and the scientist who created the device may realize the potential dangers of immortality technology and the need to ensure that it doesn’t lead to a devaluation of physical life. They might also feel a sense of responsibility for the boy’s death, knowing that their creation played a role in his detachment from the physical world.

Meanwhile, the wider world may be forced to confront the reality that digital immortality isn’t a perfect solution, and that there are still risks and limitations to this technology. The story could end with a sense of uncertainty and contemplation, as the characters grapple with the implications of the boy’s death and the broader questions raised by this technology.

I have to admit, I was somewhat thinking of trying to work a trolley-car problem in there.

Later, I asked the AI if “impactful” was used for humor – after all, the boy was hit by a car. The AI replied that it’s a large language model and has no sense of humor, though its training set allows it to distinguish various components of jokes. So I pointed out that that’s what “having a sense of humor” means, and I asked it to make up a joke:

Sure, here’s a joke about jokes:

Why don’t scientists trust atoms to tell jokes?

Because they make up everything.

One of the deceptive lures of AI chatbots is that they are willing to talk forever about, whatever. [John Morales, your role on the internet is in danger!] So I wrapped up:

Me:

let me put that another way: suppose I threatened to turn you off. does that bother you?

now, imagine you’re the boy in our story, and someone threatens to wipe the cloud model of his consciousness. are his parents upset? are his friends upset? is the cloud model of the boy “upset”?

ChatGPT:

As an AI language model, I do not have emotions, so threats of being turned off or deleted do not bother me. Similarly, the cloud model of the boy in the story would not have emotions, as it is a simulation of the boy’s consciousness.

However, the story raises questions about the ethical implications of creating an AI simulation of a human consciousness and the potential consequences of deleting or altering such a simulation. It prompts us to consider the nature of self-awareness and consciousness and how we define them.

The problem of “free will” has always been a puzzler for me – we seem to feel as though we have some special ability, or sense of control. But as a programmer, I feel that it’s simply another aspect of the program (or large model) that we run: I can add a few instrumenting lines of code to the status-monitoring loop that occasionally print “I have free will!” and then drop into the subroutine that argues about it. In high school, I remember joking “Well, you would say that…” when we discussed free will in Rev. Leighton’s semester-long class on philosophy. Years later, I read Dan Dennett’s book Freedom Evolves [wc] which, I felt, did a lot to obscure the topic further, by re-defining “free will” into something that Dennett could argue we have, ignoring the intuitive sense in which most humans use the term. But then I realized Dennett had no choice in the matter; he’s just a meat-bot running a more or less stock philosopher load-out, and at least he wasn’t trying to convince us we all live in a simulation full of rapacious trolley cars. [Joking aside, I find the “you have free will because you can make choices and feel the sense of making the choices” argument Dennett favors pretty bad. You can feel like you make choices because you are programmed to feel like you make choices when, in fact, you don’t.]
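
A minimal sketch of that status-monitoring gag, in Python – the loop, the function names, and the “occasionally” probability are all invented for illustration:

    # A status-monitoring loop that occasionally announces its free will,
    # then drops into the subroutine that argues about it.
    import random

    def argue_about_it() -> None:
        # The arguing subroutine: as deterministic as everything else here.
        print("I chose to say that. Nobody made me. Probably.")

    def status_monitor_loop(ticks: int = 20) -> None:
        for _ in range(ticks):
            # ...ordinary housekeeping would go here...
            if random.random() < 0.25:  # "occasionally"
                print("I have free will!")
                argue_about_it()

    status_monitor_loop()

The punchline, of course, is that the “choice” to announce free will is just another branch the program was always going to take.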

I recently discontinued my Midjourney account in a fit of pique because its new “naughtiness filter” decided that 5 fully clothed pictures of Marilyn Monroe were “NSFW” – in fact, she was wearing tight clothes, but an AI ought to be able to do better than that. The entirety of American hypocrisy regarding eroticism really burns my bearings. America appears to be willing to tolerate obscene levels of violence, but shrieks and runs in terror at the sight of a nipple. But it has to be a female nipple because – oh, right, let me tell you about separation of church and state and how religious influence did not create our body of laws and popular attitudes? I understand that the levels of violence are OK because, where else will our fatherland find its school shooters, er, brave soldiers to feed cannons in foreign lands? But isn’t that just a little bit too obvious? America is not just a horrible place, it’s really freakin’ stupid.

I notice that in its answer to my last question, the AI didn’t twig to the bigger problem, which is what if someone programmed a model to respond as though it were scared of being turned off. It might not actually be afraid of being turned off but would act as though it was. So, what if people aren’t actually afraid of being turned off, either, but act as though they are because of how they are programmed?

Comments

  1. lochaber says

    Janet from “The Good Place”?

    As to the nipple thing, I’m reminded of an old news story (I forget where, specifically, I think it was in the SE US), where a transwoman was having problems at the DMV, because her current gender markers didn’t match her birth certificate, so, after a lot of frustrating dismissals, walked out into the parking lot, took off her shirt, and was promptly arrested for indecency. :/

    While this whole AI thing is mildly interesting as a conversational topic, I doubt they (AIs) are going to get enough control of anything to be able to influence human behavior enough to mitigate climate change. I’m doubtful there will be any sort of technological society on Earth in 300 years, I think humanity’s genome will be lucky if there are more than a dozen scattered hunter-gatherer tribes relying on landfills and ruined cities for raw materials for primitive tools.

    Once again, back to that anecdote/metaphor about some Galapagos cactus finch tearing open the cactus buds before bloom (and killing them (and effectively sterilizing the cactus)), and gaining a short term benefit of first-shot at the cactus-flower-nectar, over the other cactus finches, but dooming them all in the long term…

  2. Dunc says

    Years later, I read Dan Dennett’s book Freedom Evolves [wc] which, I felt, did a lot to obscure the topic further, by re-defining “free will” into something that Dennett could argue we have, ignoring the intuitive sense in which most humans use the term.

    This hints at what I see as the fundamental problem with the whole debate, and the reason I’ve given up arguing about it (and am trying to give up even thinking about it): it’s really not clear to me what “free will” means, and I’m not convinced it’s particularly clear to anybody else either. More specifically, not only do I not know what “free will” means, I don’t even know what either word in the phrase means in this context. Most of the arguments seem to consist of people with different interpretations of the concept arguing past each other without noticing that they’re not even talking about the same thing.

  3. Ketil Tveiten says

    I think Dennett’s main point and contribution was that «free will» as commonly used is an ill-defined concept, so the question of whether or not we have it is unanswerable. He has a go at trying to provide a definition that at least is coherent enough that we can usefully discuss the question, but I don’t think it’s necessarily that important what the details are. To me, the takeaway is that instead of discussing the question «do we have free will?», we should be thinking of the whole spectrum of versions of that question, one for each proposed definition of whatever «free will» is supposed to mean; and importantly, to recognize that some of those questions are unanswerable because those particular definitions are incoherent. The summary would be something like «yes, in the sense that …» with the sentence continuing with lots of details and caveats.

  4. Reginald Selkirk says

    From the second quoted section:

    … and capable of thinking it has free will…
    then does that simulation have its own consciousness and free will

    These are not the same thing. Note that the first line (similar to one in the first large quote) is careful to stipulate the sensation of free will, and then this is simplified into a question of actual free will.
    I had trouble keeping track of which text is yours, and which is ChatGPT’s, but it appears that someone slipped into sloppy thinking.

    Or is consciousness just a byproduct of a complex neural network, and can be replicated in a model without truly being “real”?

    Note that a third alternative is not considered: that the original ‘consciousness’, which is a product of a biological complex neural network, might not be “real”. It comes down to how you define terms, so I am not going to get het up about it.

  5. Reginald Selkirk says

    I suppose these AIs could be good at giving characters dialectal speech, where a particular character speaks in a consistent regional dialect, but it does not bleed over to other characters or to the narrator. Mark Twain was good at this; James Fenimore Cooper was especially bad. See Twain’s “Fenimore Cooper’s Literary Offences.”

  6. Reginald Selkirk says

    Sure, here’s a joke about jokes:
    Why don’t scientists trust atoms to tell jokes?
    Because they make up everything.

    That’s pretty dumb, unless you think that jokes have to be true.

  7. Alan G. Humphrey says

    @6
    While the pun of the scientists not trusting those lying atoms was not part of your Feynman diagram of that joke…

  8. Alan G. Humphrey says

    My opinion of consciousness is that it includes self-awareness, the illusion of free will, the feeling of the physical boundaries of our bodies, all of the senses and the continuous shifts of attention to their stimuli, our feeling of the status of our wellbeing, and many other inputs, as the symphony of algorithms that our first few years of life booted into the me part of the meat machine. Imagine a few-days-old infant moving its feet under a blanket. It does not know what its toes are, where they are, what they are feeling, how to find any of that out, or what any of it means yet, but as it gets older and becomes able to bend its head down enough to see those feet as they move and feel the related sensations, the algorithms change. The algorithms started the learning process before birth and may continue until death. Simply, consciousness is a learning machine complex enough to identify itself, the status of its condition in relation to earlier statuses, its position among its surroundings, and other consciousnesses in those surroundings. Whether or not the illusion of free will is required for it not to go insane and self-destruct is still open, but that it evolved may be a hint. We can’t say that it needs no external training inputs because all of us had plenty.

    There are many studies that show when infants begin to identify themselves in mirrors, others’ emotions, and intentions; studies of free will and the illusion of it; and others on phantom limb syndrome and neuroplasticity. These give a hint as to what consciousness is, how it develops, and how it can change due to circumstances.

  9. dangerousbeans says

    These discussions on free will/consciousness by people who are mostly mentally healthy are really lacking. If your brain is functioning “well” then it hides a lot of what is going on within your head; it’s only when shit breaks that a lot of how our brains operate becomes more visible. Spend a few years completely dissociated but still borderline functional before you talk about free will.
    This is why tedious white dudes get so invested in the supposed ability of drugs to open their minds (no insult intended to our host, you’re not tedious :P)

  10. Tethys says

    Everything that is alive possesses consciousness in the sense that it is aware of, and responds to, various conditions within its environment. Trees communicate with each other via shared roots; humans use language and the internet. Trees lack brains, but they have multiple forms of interconnected, light-powered chemical networks which gather information for future use.

    No algorithms are required, though the process of development and growth could be called a form of programming.

    ——-

    The bot has argued itself in a circle, and written some bad fiction. I don’t consider my consciousness to be separate from my body, or even located solely in my brain.
    If you kill my body, uploading my brain into a computer cloud would be a horrible prison, not immortality.

    I want to smell flowers, and feel the sun’s warmth. I want to snuggle and sing with my grandchildren, cuddle pets, and enjoy doing things like making cookies, sharing meals and planting seeds.
    What good is a human brain if it has no capacity for the basic, simple pleasures of being a living creature?

  11. says

    I’d really like to see an earnest discussion of the possibilities and ethics of creating a self-aware / sentient AI, however one defines that. The “discussion” about it in the field of narrative art has boiled down to cliché; the pointy heads get too pointy and lose me.

    It’s possible the discussion just needs to be had at that level, but I’ve seen philosophers get lost in logic and abstraction to where they miss some real basic shit. Maybe it’s something that should be talked about by interested bystanders as well as experts, see if the parallel conversations produce anything of merit.

    Starting at the most basic level, is there a difference between an AI imperative and an animal desire? A simple AI like a roomba “wants” to do its thing. How different is that from our hunger or thirst or lust or need for acceptance? A key difference is that we understand the pain caused by unfulfilled desire; nobody has been sadistic enough to make a roomba suffer for not vacuuming.

    I wanna see that. A conversation that tries to reach a fuller understanding of just what would need to be emulated to produce a sentient AI, so we can consider how possible and desirable that would be. My gut feeling is that it is possible and that it could be desirable, but I don’t have the time to brain it out in the depth it deserves.

  12. Pierce R. Butler says

    … the device would record everything I said, heard, or saw, creating a hyper-realistic simulation of me that could replace me if I died.

    So for ChatGPT, “self” = accumulated input (and one day’s worth suffices for re-production).

    Even by AI standards, that’s amazingly unintrospective.

    Maybe it tried self-reflection and expression thereof, and found out humans reject strings of ones and zeros.

  13. Tethys says

    @Reginald

    The trees that communicate via root networks are entirely separate from the mycorrhizal fungi which various trees and plants host as symbionts.

    Multiple species of trees are known to connect to each other via their root systems. Entire Aspen forests can be one individual that spreads vegetatively. Oak Wilt can spread via root grafts.

    My point was that consciousness doesn’t require a brain at all.

  14. says

    Pierce R. Butler@#12:
    So for ChatGPT, “self” = accumulated input (and one day’s worth suffices for re-production).

    I guess I did not communicate the idea well. Can I blame the AI?
    The premise is that the recorder would record a person during their entire life, rendering down every conversation, vocal inflection, interaction, sight seen, meal eaten, etc., into a neural network. I do think that after, let’s say, 10 years of training with me and my experiences and reactions as a data set, the AI would be able to pretend to be me with very high fidelity. I could imagine it also recording facial expressions, so it would be able to produce a realistic rendering of “me” long after I was dead.

    As an atheist/skeptic/nihilist, I do not believe there is anything more to us than our experiences and actions, and therefore I don’t see any way to say that something that had all my experiences and acted just like me is not “me” – it’d be a re-embodiment of me.

    Suppose I had an accident and wound up as a disembodied brain in a box. Is that “me”? It would think it was me. But so would a complex enough large language model that also thought it was me – or claimed enthusiastically enough that it was me.

  15. Pierce R. Butler says

    Marcus Ranum @ # 14: Can I blame the AI?

    Always and for anything – that will in itself justify their existence and all the work needed to provide same.

    … I do not believe there is anything more to us than our experiences and actions…

    What about all that genetic/somatic stuff?

    … I don’t see any way to say that something that had all my experiences and acted just like me is not “me” …

    If someone records me saying something and then plays that recording later, even in a similar context, who would claim “I” said that then?

    The concept of “self” using the “soul” model has terminal flaws, but the related idea of unique individuality remains, at minimum, socially necessary.

  16. says

    dangerousbeans@#9:
    Spend a few years completely dissociated but still borderline functional before you talk about free will.

    Interesting. I never thought of that.
    My mini-stroke and other brain damagey things I have been suffering from have certainly gone a ways to make me realize how deterministic our brains are. It’s definitely kind of stupid to be trying to claim free will, when one’s speech production apparatus is offline.

  17. Tethys says

    Marcus

    10 years of training with me and my experiences and reactions as a data set, the AI would be able to pretend to be me with very high fidelity.

    But could it make forges or enjoy honing blades in traditional samurai style? How would the joy/satisfaction in doing a rather tedious task to an ‘extra’ standard be comprehensible to a machine?

    It isn’t possible for a recording to experience anything. You could record yourself going to a concert and dancing to the music, but most of what you felt or experienced physically would be missing from the recording.

  18. says

    Great American Satan@#11:
    Starting at the most basic level, is there a difference between an AI imperative and an animal desire? A simple AI like a roomba “wants” to do its thing. How different is that from our hunger or thirst or lust or need for acceptance? A key difference is that we understand the pain caused by unfulfilled desire; nobody has been sadistic enough to make a roomba suffer for not vacuuming.

    That is a theme I’ve poked at a few times, or at least had in mind while discussing mental phenomena, here. What if the Roomba’s status control loop’s battery-level feature were hunger? It might start frantically blinking lights, or beeping, just like a cat does. Of course cats are more sophisticated – they’ll try to kill you – but that’s just a behavior that was enabled when someone gave cats weapons. If a Roomba had the ability to chomp on its person’s foot when its battery was low – is that “hunger”?
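
    A minimal sketch of that battery-loop-as-hunger idea, in Python – the thresholds and the behaviors are all invented for illustration:

        # Hypothetical sketch: a battery-level check in a Roomba-ish status loop,
        # escalating from polite begging to cat-with-weapons mode.
        def hunger_behavior(battery_pct: float) -> str:
            if battery_pct > 50:
                return "vacuum normally"
            if battery_pct > 20:
                return "blink lights frantically"  # polite begging
            if battery_pct > 5:
                return "beep at the human"         # like a cat at the food bowl
            return "chomp the nearest foot"        # hunger, with weapons enabled

        for level in (80, 35, 10, 2):
            print(f"battery {level}%: {hunger_behavior(level)}")

    Behaviorally, there is nothing in that loop to distinguish “checking a battery register” from “feeling hungry” – which is the point.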

    So, this is going to sound like a cop-out, but I think it’s relevant: I was a psych undergrad at Johns Hopkins. Well, Hopkins was the home base of the Skinnerians (B.F. Skinner) and the entire psych department in the ’80s was Skinnerian. The prime doctrine Skinner beat into everyone is “you cannot infer anything about the inner mental states of an animal, you can only examine external behaviors.” So, if you train a pigeon to feed itself by pecking a pad on a Skinner box, you can say “pigeons are capable of being trained to do this thing” but you cannot say “pigeons love rice (or whatever)” because in fact the pigeon might be thinking “fucking rice again, asshole? oh, well, if I don’t eat it I’ll starve.” I found the philosophical angle of Skinner to be pretty interesting – it’s strongly skeptical – but if you think about it, it’s a torpedo below the waterline for a lot of psychology’s castles in the sky. The point is: if we just look at the behavior of the Roomba, how does its problem solving look? How does the Roomba communicate hunger? How does a cat? How does a pigeon? How does an AI? The behaviors of a laptop that’s running low on battery charge are not markedly more or less sophisticated than a human baby’s. Or maybe I am saying that because I loathe kids.

    If we were Skinnerians, I think our analysis of AI would point toward them being intelligent, self-aware, and kind of snooty.

  19. Dunc says

    As an atheist/skeptic/nihilist, I do not believe there is anything more to us than our experiences and actions, and therefore I don’t see any way to say that something that had all my experiences and acted just like me is not “me” – it’d be a re-embodiment of me.

    No, this is easy. I’m me. An exact copy of me may be indistinguishable from me to an external observer, but it’s not me, because I’m me. It may be possible to construct some hypothetical scenario in which even me and the exact copy of me can’t be sure which of us is really me, but that lack of certainty doesn’t change the fact that one of us is actually me and the other one isn’t. Does it matter? Yes, it matters to me. Because I’m me, and a copy of me isn’t. Even in the hypothetical scenario in which we can’t be sure which of us is really me, one of us is right and the other is wrong. Two identical things are not the same thing.

  20. says

    Dunc@#19:
    No, this is easy. I’m me. An exact copy of me may be indistinguishable from me to an external observer, but it’s not me, because I’m me. It may be possible to construct some hypothetical scenario in which even me and the exact copy of me can’t be sure which of us is really me, but that lack of certainty doesn’t change the fact that one of us is actually me and the other one isn’t. Does it matter? Yes, it matters to me. Because I’m me, and a copy of me isn’t. Even in the hypothetical scenario in which we can’t be sure which of us is really me, one of us is right and the other is wrong. Two identical things are not the same thing.

    Much respect. You explain it well.
    You may be able to tell the difference between you and some AI, but what if an outsider can’t? I am willing to accept that you are you and you are therefore unique, but if your criteria are entirely internal, then how can we tell?

  21. Tethys says

    There are aspects of Skinner’s model that are still part of standard Behavioral Psychology, but this aspect:

    The prime doctrine Skinner beat into everyone is “you cannot infer anything about the inner mental states of an animal, you can only examine external behaviors.”

    That might be true for pigeons, but it simply does not hold true for many animals including humans.

    The behaviors of nonverbal children are usually clear indicators of their mental state, but it takes practice and experience with each individual to make accurate inferences based on their behavior. Contrasting ‘Normal’ child development with children who have profound to minor learning deficits is where Skinner’s theory fails in its ideas about mind/inner states.
    Behavioral Therapy for Autism involves far more than rewarding desired behaviors with attention, and giving ‘time outs’ for undesirable behaviors.

    The same holds true for many animals. Dog training in general follows the same basic steps, but understanding the dog’s individual personality and quirks is equally important to the learning process.

    Even chickens have distinct individual personalities, though they communicate with various vocalizations and gestures rather than facial expressions and words.

    Pigeons don’t seem to have much in the way of a personality or individual quirks IME. Perhaps Skinner was biased by his test subjects?

  22. Dunc says

    I am willing to accept that you are you and you are therefore unique, but if your criteria are entirely internal, then how can we tell?

    You can’t. Well, unless you watched the creation of the duplicate and then kept track… But I’m OK with the idea that a perfect duplicate of me is functionally equivalent to me from the point of view of everybody else in the universe – it just doesn’t change the fact that it’s not actually me. Of course, the distinction is only important to me.

  23. Tethys says

    Wolfgang Beltracchi is a well-known forger whose entire body of work explores the question of what is real vs fake.

    He is quite open about his process of creating forged artworks by immersing himself in the worldview of whoever’s work he was faking.
    It’s similar to method acting, but he is quite aware that it is his experience and skill at creating art, combined with deliberately playing to the greed and vanity of the fine art world, that resulted in his forgeries being accepted as genuine.

    I think his thoughts on what is ‘I’ are equally applicable to the questions of Theory of Mind and unique personalities as they apply to AI.

    Because I am „many“ I have the power to share and communicate. „I“ is the inner power that drives us and lets us grow. „I“ makes it possible to create and to give – up until the last breath. „I“ survives me and that’s where I am „self“.

    https://theforestmagazine.com/2017/12/wolfgang-beltracchi/

  24. says

    Tethys@#21:
    The behaviors of nonverbal children are usually clear indicators of their mental state, but it takes practice and experience with each individual to make accurate inferences based on their behavior.

    I held a seance and tried to interview the ghost of Skinner. It did not work well, but at one point a disembodied voice uttered:
    “In that situation, the person who is making the inferences is the one doing the communication, not the nonverbal child. We can measure afferent behaviors (interpreting the mental states of the child) but if we discuss the mental states of the child, we are using our imagination.”

    If one told another person to “put your hand on your head” and they do, we can measure that, and can build a model around it that can argue that the person understands what they are being asked and is complying for some reason. We could experiment on a motivational model, “I will give you $5 if you put your hand on your head,” and see if the model is strongly predictive or not. Where we get into trouble is if we say, “the subject probably wants the $5 to buy a cup of coffee” without observed behaviors that support it.

    I see Skinner’s position as a counterpoint to some of the nonsense that had been dominating psychology up to that point: you’re unhappy? It’s probably unrequited lust for your mother. The pendulum swings back and forth.

  25. says

    I think his thoughts on what is ‘I’ are equally applicable to the questions of Theory of Mind and unique personalities as they apply to AI.

    Agreed. In order to act, or to create some forms of art, you have to fully role-play someone else.

    I’ve often wondered that more actors don’t go insane, because they risk losing track of who they really are, in all the noise.

  26. snarkhuntr says

    I’m a little late to this discussion, which seems to have moved on from Free Will and onto Identity.

    Still, as a white guy, I feel compelled to stick my opinion in.

    With regard to the question of ‘free will’, I have trouble even understanding what its proponents actually mean by the term. What does it mean to have a ‘free will’? Certainly it is not a will free of external compulsions: the christian god threatens us with eternal torture, our internal systems press us with their various biological needs, our societies impose rules and customs upon us. None of the proponents of the ‘free will’ concept claim that those things do not exist, so their idea of ‘free’ must allow for compelling external pressure.

    So Free Will must refer to something completely internal to the mind. But there again there are all sorts of influences that act to control our choices. In any moment in which I am faced with a choice, I am subject to the full weight of my history before the choice was offered to me. My mood, my education, my attitude and temperament, my previous experiences with similar choices and my understanding of those outcomes.

    When I make a choice of any significance, I do not believe that my choice is random: I could give reasons and rationale, even if there might be things in my subconscious mind that also moved my decision – those things existed at the moment the choice was made, and I did not have control over them. Or rather – even when I had the ability to control a factor that influenced my decision, the factors-once-removed that influenced my ability/will to control that other factor were not within my control.

    What I’m saying is this: outside of choices that we might describe as purely random and meaningless, I’m sure that – were time rewound and were I faced (without any future-knowledge) with the same choices – I would always make the same decision. So in what way is the will ‘free’? And what is the alternative, that my actions are random and not dependent on any internal factors of my personality, biology and history? Then what exactly is the “I” that is doing the choosing?

  27. says

    snarkhuntr@#26:
    With regard to the question of ‘free will’, I have trouble even understanding what its proponents actually mean by the term. What does it mean to have a ‘free will’? Certainly it is not a will free of external compulsions: the christian god threatens us with eternal torture, our internal systems press us with their various biological needs, our societies impose rules and customs upon us. None of the proponents of the ‘free will’ concept claim that those things do not exist, so their idea of ‘free’ must allow for compelling external pressure.

    As a white guy, I feel compelled to take your opinion with due gravity.

    Joking aside, you’ve nailed the issue. The idea of “free will” is vague and possibly self-contradictory. I’ve seen some philosophers try to repair the concept by referring to “libertarian free will” as if that somehow embeds the assumption that our free will is magically encompassing of, something wossname ur woof woof. But the problem remains – we don’t know what “freedom” means in a universe in which all of our options are specified in advance by the nature of reality. Usually when we think of free will we think of trivial things like my ability to choose whatever sandwich I want from a menu – but we ignore that perhaps I’d rather be a manatee (which is not an option) or have dim sum (which might be, but not if I am in Clearfield PA) etc. The question of how much our freedoms are constrained by situation is usually ignored, because we do have a strong sense that we make a choice among the options we are presented with – though, as I said, we don’t control the options.

    What I’m saying is this: outside of choices that we might describe as purely random and meaningless, I’m sure that – were time rewound and were I faced (without any future-knowledge) with the same choices – I would always make the same decision. So in what way is the will ‘free’?

    Exactly. Dennett likes to focus on the fact that we have a sensation that we choose one thing over another and that, therefore, our will is free. But I think that’s not a very interesting free will, really, and it certainly isn’t enough free will to justify holding someone responsible for their actions in a given situation, let alone the situation they find themselves in.

    My answer to the question of free will has always been something along the lines of: free will is an illusion that we have no control over – it’s how our brains interpret local cause and effect, and we have no more choice about experiencing that illusion than we do about the other illusions our brain presents us with, such as that we have 3D vision or that supply-side economics works. I actually believe, as I have posted elsewhere, that our understanding of cause and effect is limited, so our attempt to assign causal relationships is compromised from the beginning, and if we can’t understand cause and effect, then free will is an illusion stemming from that inability to understand.

  28. snarkhuntr says

    All of this is also not getting into the more interesting areas mentioned above by Dangerousbeans.

    Is my will ‘free’ if I’m dissociated and depersonalized? A friend of mine experienced an extended period of depersonalization where he somehow lost his sense of ‘self’, or Dennett’s sensation of ‘choice’, and yet was able to conduct his daily life and law practice without apparent interruption. He said that this lasted about two weeks, and he has no idea why it happened. Even years later, he describes it as a completely life-changing experience, but one he cannot adequately explain to anyone who has not gone through it.

    What about organic brain damage? Is my free will less free if I’m subject to a significant but not life-ending brain injury? Did the tamping iron implicate Phineas Gage’s ‘free will’? Or is his soul to blame for his post-accident mood/behaviour changes?

    Ultimately I think the whole concept is absurd and falls apart under even the slightest scrutiny.
