In a previous post, I quoted John Richards, who said of Lawrence Krauss:
He was a personal friend of Christopher Hitchens, who sadly died nine years and three days ago (there’s been talk about designating December 15th “Hitchmas”) and he was an expert witness at the Kitzmiller vs Dover Area School District trial of Intelligent Design*.
I pointed out the “Hitchmas” nonsense, but I should have also mentioned that no, Krauss was not an expert witness at the Kitzmiller vs Dover Area School District trial of Intelligent Design. In fact, he had nothing to do with the Kitzmiller trial.
What a curious little claim. Why would Richards just make that up?
John Morales says
Because he would’ve got away with it, were it not for pesky fact-checkers. :)
—
He probably opined on it, at the time. Close enough for fanbois.
Phrenotopian says
I know it’s an unpopular opinion here, but the direction the “Atheist Establishment” has taken the past decade has made it so much easier for me to flirt with panpsychism. It now seems like a much more wholesome and sensible alternative to both dualistic woo and materialist cynicism.
Phrenotopian says
It’s just that the diatribes on the nature of reality and morality by the likes of Dawkins, Krauss and Dillahunty, that I once found lucid, now sound hollow and pretentious.
PaulBC says
Hey, if it works for you. But you’re setting up a false dichotomy (or maybe trichotomy). What’s cynical about materialism?
Dago Red says
Phrenotopian, I am quite sure your brand of crazy is no better than mine, but glad you finally found a way to feel superior to everyone else. It must be nice to scrape off that cynical little dirty label of materialism, and slip on a wholesome and sensible ten-dollar one, like panpsychism. And, if that’s what makes you feel sexy at night, then go for it. I hope all the best for you.
Knabb says
Maybe someone else told Richards, and he just assumed it was true. You know, as if he wasn’t, what’s that word, skeptical, about the claim.
John Morales says
[in the interest of fairness]
Dago @5,
There’s nothing Phrenotopian wrote that even suggests such a sense of superiority.
—
Paul @4, re the same:
There’s a claim about materialist cynicism, but there’s no claim that materialism is perforce cynical, as your retort intimates.
Intransitive says
Probably for the same reason he didn’t mention that Ken Miller (Brown U), a reputable scientist and Catholic, testified.
They have become the extreme right that they always claimed to oppose. So much for being scientific.
birgerjohansson says
Another weird statement: Pence is accidentally giving the game away.
A real statement by VP Mike Pence, telling listeners why Democrats are evil:
“They want to make rich people poor… and poor people comfortable!”
Susan Montgomery says
@8 That makes the potentially flawed assumption that they were ever not on the right to begin with. At best, they likely thought that liberalism only meant cynicism, nihilism and iconoclasm for its own sake. Maybe all they wanted was to look edgy and cool as a political fashion statement, and they’re getting more edgy/cool mileage out of being on the right than they would on the left.
oddie says
Hitchens hated Christmas. He said it was the closest many of us would come to experiencing what life is like in North Korea. He also hated people being deified. But sure, let’s honor his memory by celebrating “Hitchmas.” I’m sure he would have loved that 🙄
cervantes says
Hitchens was also a neo-con — he made the not uncommon journey from Trotskyite to reactionary. Also a misogynist and generally a jerk who hurled gratuitous and ugly insults at people who criticized him. Otherwise sure, he should have a holiday named after him.
Reginald Selkirk says
I only care about the thing that is not on your list: What is true? That is the thing I will try to believe in, no matter whether it is easier, or wholesome, or sensible to believe elsewise.
PaulBC says
John Morales@7
The claim was that panpsychism is “a much more wholesome and sensible alternative to both dualistic woo and materialist cynicism.”
While it’s true Phrenotopian never ruled out other choices, the implication is that he’s justifying his choice. Given that no other argument is presented, I thought the point was that this trichotomy covered everything important. If it was something else, please correct me.
I’m not really sure of the term for it, but my materialism is along the lines of Carl Sagan’s. We’re all star stuff, and isn’t that cool. I would add that compassion also fits very well into a framework of materialism. If we’re all flesh and blood, the sacred and mundane are the same thing. Offering a cold person a bowl of hot soup is a spiritual act.
So in short, while I can rule out “dualistic woo and materialist cynicism”, panpsychism would not be my next stop. Also, I am quite sincere in saying “whatever works”. I am not the spirituality police.
unclefrogy says
I have no idea what a psychic aspect or anything psychic is. It sounds like reaching for a nebulous magic answer to me, no better than what the nuns tried to teach in grammar school.
I took the comment as a humor attempt myself.
As for the subject, I thought the “he” in question was Hitchens (also wrong). That clown has so little to say that interests me that I didn’t even care to check it out. His doublethink, double standards and hypocrisy were too thickly displayed.
uncle frogy
Giliell says
Well, well, they’d rather believe that Krauss was an expert witness, a claim that can be easily checked and dismissed as false, than believe a woman that Krauss made inappropriate remarks towards her.
DanDare says
Thanks Giliell, hit the nail on the head.
Phrenotopian says
Yikes! I was just emotively expressing a sentiment or personal feeling. I wasn’t really making a necessarily cogent statement.
It would seem that only John Morales @7 understands what I was trying to say. Never mind me, though. I’ll let myself out.
PaulBC says
Phrenotopian@18 Hey, so was I. I kneejerk at the suggestion (real or imagined) that there is anything cynical about a materialist worldview.
WMDKitty -- Survivor says
Phrenotopian — I want what you’re smoking.
John Morales says
Paul:
Hm. This very post is an example of an instance of it. No?
So that’s an existence proof, right there.
—
Phrenotopian, your nym is not unfamiliar to me — I’ve been around here for a long time, in internet time. I think you do belong.
jenorafeuer says
@11:
@12:
The juxtaposition of those two makes me wonder if Hitchens would sympathize with Lenin… someone else who said they didn’t want to be deified, but ended up becoming so anyway to serve other political purposes.
I rather suspect Hitchens would not have appreciated the irony. And quite possibly would have tried to deny any similarity at all.
History may not repeat exactly, but it certainly does rhyme.
PaulBC says
oddie@11
I haven’t really looked forward to Christmas since I was a kid, but this is a ludicrous comparison and if accurate removes whatever shred of credibility I might have attached to Hitchens. (Though his support for the Iraq war was enough by itself.)
Even as humorous hyperbole it makes no sense… it’s pretty easy to opt out of Christmas, and that’s been true for as long as I can remember. I passed plenty of non-Christmas Christmases as a student. You can’t opt out of North Korea if you happen to live there. I suppose (and looked it up a little) he meant the aesthetic. But that’s bogus too. First off, the reason life sucks in North Korea is (a) that you’re starving most of the time and (b) (probably not even in close second place) that you’re living in a police state where you’re under constant surveillance and might wind up in prison just for slipping up. Neither of these is much like Christmas, whatever else you may think of it.
The pictures of Dear Leader? For all I know, a lot of people think they’re just great. And why compare it to Christmas? Why not compare it to Chinese Lunar New Year or any other big holiday? How about July 4 in the US?
So you don’t like big holidays with lots of decorations and pressure to conform? Fine, I am not a great fan myself, but some people like it. It’s like Homecoming or the prom in high school. You don’t like it? Skip it. I have heard there are people into it, and that’s fine for them.
PaulBC says
John Morales@21
Not that I’m aware of, but if you think so.
Phrenotopian says
PaulBC @19
No worries! Anything expressed in black-on-white letters can be prone to misinterpretation. I hope it’s clear now that I never implied that a materialist worldview in itself is cynical. We’ve got PZ Myers here, after all, to provide a salient example of the opposite. However, the cynicism of the “Atheist Establishment” is tangible and poignant.
It would almost seem that some people, especially those in positions of privilege, take the notion that we are all insignificant specks in a cold and uncaring universe, arrived at through the proverbial “survival of the fittest,” and gravitate towards social Darwinism or even fascism. That is the exact kind of hubris that arises from thinking you’ve got all the (correct) answers. Fortunately, there are many non-religious people who still have the good sense and heart not to roll into that particular ditch.
John Morales @21
I’m sorry, but I’m not following what you’re trying to say.
PaulBC says
Phrenotopian@25 Of course it’s impossible to tell if something like that is tongue-in-cheek, or even guess, if I don’t recognize who’s writing it. People set up false choices all the time. We wouldn’t have an advertising industry if they didn’t.
(At least I kept my first reply pretty short and didn’t go out guns blazing in defense of compassionate materialism as I had before a serious edit.)
I had to look up panpsychism and I’m not sure if I’d be able to tell it from animism. I may believe in a weak form of panpsychism. If mind is an emergent property of matter and energy then where do you draw the line? Maybe a rock just has less mind, some kind of diffuse consciousness (though I doubt it). There is probably more “mind” than we think there is, assuming self-awareness is some inevitable result of self-reference.
Phrenotopian says
PaulBC @26: And I appreciate you assuming good faith.
I don’t feel particularly inclined to lay out the case for panpsychism in this forum. I’ve just personally become persuaded to take the idea seriously through reading a general audience book by the British academic philosopher Philip Goff called “Galileo’s Error”. However, I am willing to give a brief outline of my, admittedly, limited understanding.
Let me tell you that, as per the definition of Goff and those he references, it is nothing like animism, since animism is by definition dualistic and panpsychism is monistic. What’s more: The concept of “emergent properties” belongs in principle to the domain of materialism where it is invoked to explain phenomena such as “consciousness”. The objection is that materialism actually doesn’t give a fundamental explanation for how these mental phenomena arise.
Panpsychism, on the other hand, tries to explain this without resorting to either dualism, where mental phenomena are separate from material ones, or idealism, where mentality is the end-all of everything. It simply states that the most fundamental properties of reality are in themselves infinitesimally tiny bits of consciousness. Not in the sense that e.g. a quark has a mind of its own, but all are fundamental building blocks of mental phenomena. The interesting thing about the concepts Goff is advocating is that he invokes Integrated Information Theory to distinguish random clumps of matter/energy (like a rock) from complexly intertwined aggregates that form brains and thus minds.
None other than Bertrand Russell favoured a weak form of panpsychism that he termed neutral monism, which settles at a middle ground of sorts. Although his writings are rather dated (I know because I tried (and failed) to work through them), they are nevertheless lucid. Perhaps this is something for you to consider?
PaulBC says
Phrenotopian@27
Yeah, I did consider that difference.
In fact, I don’t really find emergence to be a very satisfactory explanation of consciousness. I just don’t consider that to be a major blocker to understanding other things that are explained by emergence. I am more interested in considering tractable problems, of which there are many, during my ephemeral existence, before I turn back into sea foam as the little mermaid feared so unnecessarily.
I doubt the explanation relies on special properties of the underlying components. I.e., while it’s a complete mystery to me why I experience awareness in addition to merely having self-sensing ability, if a robot were developed with self-sensing and behavior that appeared highly intelligent, it would be shocking to me if that robot did not also experience awareness. I can’t really see how to test this. You’d ask it, and then I guess if you were willing to take its word for it, that is about as reasonable as taking my word for it (not entirely, since there could be some other reason for humans to share this trait).
And needless to say, it would still apply if it turned out it was driven by some kind of “Chinese Room” scenario like Searle’s, a Turing machine that was reading and writing symbols in serial fashion, a trained chicken laying out Wang tiles, or anything else. (Assuming that these processes are sufficient to emulate human intelligence, which is also only a hypothesis at this point.) I think the notion of an instant would be very different, though, if it could only manipulate one symbol at a time.
That’s far from saying I understand it. If ET flew by on Oumuamua, I wish they had stopped by to say hello, because maybe they have it figured out. One of my core beliefs is that the human brain is inherently limited. Therefore, if something strikes me as mysterious, it may simply be that I am not equipped to understand it. I find it very funny that many people feel the need to tie up the loose ends of understanding by giving some open issue a name and then claiming they get it.
KG says
Er… that’s (in briefest outline) how a materialist explanation of consciousness works. If you haven’t, read Daniel Dennett’s Consciousness Explained. I think he under-emphasises the role that action plays in consciousness, but it’s certainly way better than any mysterian pseudo-explanation such as panpsychism.
It’s an utterly absurd hypothesis. For the “Chinese Room”, how could it possibly deal with the potentially infinite range of questions about: “The last question but n I asked you”? A Turing machine (of reading head, rules and tape) is a closed system – actual consciousness involves dealing with a continually changing environment. As I did Phrenotopian, I urge you to read Consciousness Explained if you haven’t!
PaulBC says
KG@29 Searle’s Chinese Room seems kind of ridiculous (though I admit I haven’t read the scenario in detail).
I don’t really understand the other distinctions you’re making here. A Turing machine is not a closed system. The tape is potentially unbounded. Even a process that began with a small program to fill a large region of the tape could make something that appeared vast and complex, and contained segments that effectively reasoned about themselves.
Do you have some reason to rule out consciousness in such a scenario? (Granted, no sequential computer could carry out anything very interesting except over many eons.)
No, I have not read Dennett and I should.
PaulBC says
KG@29
I’ll happily concede the inadequacy of Searle’s specific model, but I don’t really believe my own brain is all that good at dealing with infinity. In mathematics, we can set up formal systems that model certain aspects of infinite systems, but in the end, we’re still just manipulating formal symbolic systems.
BTW, I do not accept “strong AI” as necessarily true. I just don’t see any good reason to rule it out. It’s not at all clear what the brain is doing, i.e. whether a faithful simulation of neuron-like elements to sufficient precision (and therefore embeddable in a Turing machine) would give brain-like behavior or if there is some more surprising physics going on. The former seems more plausible to me, but I don’t see how I can reasonably claim to know.
If a system could pass the Turing test and expressed that it felt aware the same way I do, it would seem absurd to rule this out. Suppose it is running on a computer much slower than the brain and I use some contrivance (relativity or suspended animation) to come back a million years later to see an answer indicating it has experienced awareness. Do I rule out actual awareness just because of the lack of simultaneity of its reasoning steps or some other thing?
For that matter, I only really know that I’m conscious of some instant. It’s possible and even likely that large parts of my day go by on unconscious autopilot the same way I assume I pass my non-dreaming sleep. It’s unclear to me what I even mean by an instant.
PaulBC says
To clear up something I said:
In this case, the TM would represent the entire universe rather than an individual potentially conscious entity. It might not be sufficient to model our universe. I am not making a claim about real physics. It would still be a very robust system, capable of enumerating and checking proofs of theorems for instance, and these could concern the content of parts of the tape.
I don’t see why entities in the above universe wouldn’t experience “a continually changing environment,” though it appears to change at a very slow pace to us. It’s possible they would not, and it’s possible that there is something in actual physics that makes everything different and results in the subjective experience of consciousness. I just can’t think of a good reason, except personal incredulity, to believe this.
John Morales says
[A decade or so ago, we had a Seagull (OK, Segall) who wanked on about process theology and panpsychism, claiming it was cosmology. He was funny.]
…
Ah yes, still has some presence; https://footnotes2plato.com/
Heh.
—
Paul:
It is a closed “system”. No input or output, only initial conditions and process, and the tape is necessarily infinite, not just potentially. No stack overflows!
But it’s only an abstraction.
In short, it’s a thought experiment, not an actual thing.
PaulBC says
John Morales@33 The universe is also a closed system. It’s unclear if it’s infinite. It’s unclear if it’s embeddable in the set of natural numbers. A TM is infinite and has storage capacity that maps 1-to-1 on the natural numbers, making it easier to discuss.
A physicist would have a better informed opinion than mine on whether the universe is discrete the way a TM is, but they can’t honestly claim to know. What we perceive as quantum superposition and are trying to use to build faster computers could still just consist of discrete symbolic calculations occurring fast enough to support quantum behavior to the extent we can observe it.
A TM is a computational model. It can be emulated readily and in fact won’t run out of tape in practice as long as you supply it with more tape as necessary. There’s nothing abstract about that. You can not only ask whether consciousness would exist in such a micro-universe, you could literally build one of sufficient scale to model a vast interacting system. This would be a waste of time in my view, since you’re probably better off using a parallel architecture, but there’s certainly nothing infeasible about it.
Anyway, I’m not sure what its existence as an “actual thing” has to do with whether consciousness would exist in it as a subjective experience.
The reason for my focus on a TM architecture is that it removes nearly every place where you might localize consciousness. The automata head is doing something easily explained and mechanical. The tape consists of static storage that is very rarely altered. There is no simultaneous action occurring in any instant that looks like intelligence. On the other hand, given sufficient time, it could at least simulate any other computer architecture, including one that simulates a brain as faithfully as possible.
If you assume (and you don’t have to) that a brain can be simulated well enough by a parallel architecture to exhibit recognizably intelligent behavior, make reasonable self-observations and claim to have a subjective experience of consciousness, then the question is what happens (and yes it’s a “thought experiment” but actually doable) if you run the same simulation on a TM. Does this also have a subjective experience of consciousness.
My claim is, yes, it probably does. Why wouldn’t it? Though an instant would have to encompass a much longer time period.
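[For the curious, the "finite head plus tape extended on demand" setup being described can be sketched in a few lines. The machine and its rules below are made up purely for illustration; the point is only that the control is a tiny lookup table and the tape grows as needed, so nothing about it is mysterious.]

```python
# A minimal sketch of a Turing machine: a finite rule table plus a tape
# that sprouts blank cells on demand, so it never "runs out" in practice.
# The example machine (hypothetical, not from the thread) just overwrites
# a unary string of 1s with 0s and halts at the first blank.

from collections import defaultdict

def run_tm(rules, tape, state="start", max_steps=10_000):
    """rules: (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    tape = defaultdict(lambda: "_", enumerate(tape))  # blanks appear on demand
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, tape[pos], move = rules[(state, tape[pos])]
        pos += move
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

rules = {
    ("start", "1"): ("start", "0", +1),  # overwrite each 1, move right
    ("start", "_"): ("halt", "_", 0),    # hit a blank: done
}

print(run_tm(rules, "111"))  # -> 000
```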
John Morales says
Paul, it’s a Turing machine, not a physical thing.
This talk of mapping it to the universe is just silly; at best, a category error.
Heh. There’s Wolfram and his New Kind of Science.
(Hasn’t had much traction, hitherto)
Ahem. You, earlier: “A TM is infinite and has storage capacity that maps 1-to-1 on the natural numbers”.
(sigh)
Nothing. That’s the very point! There is no such existence.
(Abstractum, not concretum)
There is no “architecture” to a TM. It works by fiat.
You might as well just say “mathematics could at least simulate any other computer architecture, including one that simulates a brain as faithfully as possible”. It’s even broader, and no less vapid.
Heh. You know what simulates a brain? A brain. A lump of matter.
Look, it’s bleedingly obvious a physical stratum can generate consciousness, since we ourselves are living proof of it.
Yet again: you can’t. A TM is an abstraction.
So, no. Not the question, not even a question.
PaulBC says
John Morales@35
Yes, but that wasn’t my point. I thought the question was whether consciousness arises from simultaneity in some mysterious way inherent in the physical universe, or if it would exist in any system that produced the same outward behavior as a human brain.
As for your insistence that a TM is purely an abstraction, OK, we start with an actual piece of digital hardware that implements a state machine. There’s no need to minimize the number of states, but let’s say it has 100 or so, well beyond what’s needed for the finite head of a UTM. It is still a very simple piece of hardware, something you could breadboard with TTL chips. There are literal reels of magnetic tape on which it can read and write bits. When it reaches the far end of a reel, it beeps and someone replaces it with the reel going left or right. The operation is slow enough that you can make more tape a lot faster than this machine ever consumes it. TTL circuitry doesn’t last forever, but we can repair this component as necessary without changing the nature of the system.
What about this is abstraction? It’s entirely tangible.
You could make some other assumptions about the contents of the initial tape reel. It could contain information about the physical universe or it could contain a generative program to build a self-contained universe with artificial life entities.
Suppose the initial programming is such that subsystems could reason about themselves and deduce their own existence. Would a subjective sense of consciousness exist? Again, my point is simply that I don’t see why not. Which was intended mainly as a rebuttal to KG’s statement @29 that it’s “an utterly absurd hypothesis”.
I don’t see why it’s absurd at all to imagine that any Turing complete system would experience conscious over its execution.
PaulBC says
Maybe I should have stuck with the trained chicken and Wang tiles. Is that an abstraction?
I grant that it may not really be feasible to train the chicken.
John Morales says
Paul,
Not much of a question.
If it exists in some system, either it’s inherent to that system or it was caused by influence from an external system. Since you hold that the system in question is the universe — which encompasses the totality of everything — there can be no external agency.
It follows that it’s inherent, from your own premises.
BTW, you need to exercise more clarity with your ontological dependencies; clearly, since the universe is the superset of systems, any and all systems exist within the universe — are part of it, even.
(sigh)
I quote you someone whose opinion you presumably respect:
“A TM is infinite and has storage capacity that maps 1-to-1 on the natural numbers”.
Infinite. Your own word.
You really think an infinite physical system is other than an abstraction?
… or you could consult the Magical egg of Nog, which will tell you the answer if you can only find it. I’m sure it exists, in the infinite possibilities of the Turing machine phase-space.
Heh.
Mmmhmm…
https://en.wikipedia.org/wiki/Rule_110
John Morales says
PS if you like thoughtful SF, and if you’re hitherto unfamiliar with his corpus, I recommend Greg Egan. He explores some of those ideas, cleverly.
PaulBC says
John Morales@38
For purposes of computational equivalence, the point is that available space is unbounded. The amount the machine actually uses is limited by the number of steps it performs.
Yes, so what? An embedding in rule 110 serves about the same purpose as a Turing machine.
PaulBC says
John Morales@39 Yes, I’m familiar with Egan.
John Morales says
Paul:
A particularly clumsy way to evade the question. Futile, of course.
But fine, you don’t want to commit to a ‘yes’ or a ‘no’ or even an ‘I don’t know’.
“I don’t see why it’s absurd at all to imagine that any Turing complete system would experience conscious[ness] over its execution.”
So you find it not at all absurd that Rule 110 (an abstract thing) experiences consciousness if and when implemented. In some infinite machine, which is not at all abstract.
(Seagull territory, that)
John Morales says
Ah, so you’re familiar with Permutation City.
That’s basically the same idea.
PaulBC says
@42
It would depend on the initial conditions not resulting in a trivial terminal state, but sure, why not? It would be extremely slow. Neary and Woods apparently provided a polynomial reduction https://link.springer.com/chapter/10.1007/11786986_13 though I haven’t tried to read it. I believe that you also need to assume a periodic assignment to cells outside the starting state (i.e. not just empty) but this can be handled as the active pattern expands into it.
Again (this is a thought experiment): I have my supersmart, superslow Rule 110 computer. I ask it if it experiences consciousness and to tell me a little about this, in some shared encoding (yes, I need some kind of I/O, maybe just assigned cells at the frontier). Then I go off on a relativistic trip and come back a million years later by its time, giving it enough time to think. Its answer suggests self-awareness. Do I have any good reason not to assume this was accompanied by a subjective experience of consciousness?
@43
Yes, but I was familiar with the concept already.
Actually, a more basic question than consciousness is what it even means for some systems to be mere abstractions and others to be realized in some sense. Assuming there is consciousness associated with “running” rule 110, it presumably would make a difference whether it “really” executed or if I just wrote out notation that said “This starting state taken out to a million zillion steps.”
But honestly I have no idea. I’m only aware of the physical reality I inhabit and only experience consciousness within it. It’s at least a little interesting to me to ask what explains this subjective experience. It’s also intractable as far as I’m concerned. This was my initial point. I don’t see a good start on these questions, so I will stick to the ones I can answer. This is not the same as resolving the questions. I’m intentionally ignoring them.
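[Since Rule 110 keeps coming up: its update rule really is just an 8-entry lookup table, which is part of why its Turing completeness is so striking. A quick sketch, with the caveat that this toy version pads the boundary with 0s, whereas the universality construction mentioned above needs the periodic background.]

```python
# One synchronous Rule 110 update: each cell's next value is determined by
# (left neighbour, cell, right neighbour), read as a 3-bit index into the
# rule number 110 = 0b01101110. Boundary cells here just see 0s.

RULE = 110  # the 8 output bits, packed into one byte

def step(cells):
    """Apply one Rule 110 step to a finite row of 0/1 cells."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0, 0, 0, 0, 1]  # a single live cell
for _ in range(4):
    row = step(row)
print(row)
```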
John Morales says
Paul,
Heh. How inchoate.
You give yourself away with your phrasing.
(“mere abstractions” implies abstractions lack something that non-mere abstractions don’t)
Again, ontological categories are your friend. Reification ain’t.
If you’re unfamiliar with the distinction between abstract and concrete entities, you won’t make much progress in clarifying your thinking.
If you’re not, you’re doing it wrong.
I know. Thus your vacuous speculations and suppositions and withholding of judgement.
You here allude to Chalmers’ “hard problem of consciousness”, but even he asked “how” instead of “why”.
But sure, “Why aren’t we philosophical zombies?” ;)
(Perhaps something something Turing machines something accounts for it!)
PaulBC says
John Morales@46 I’m honest about my ignorance. Do you have a really good resolution here?
You sound fairly certain that a physical implementation of rule 110 automaton with sufficient time and memory expansion as needed would not experience subjective consciousness. Or did I misunderstand what you said?
John Morales says
Paul,
There’s nothing to resolve. Just chewing the fat. :)
Sorta. As you yourself noted, it needs infinite memory.
And no, page-swapping is not a solution; the limit is total physical memory (must be infinite), not core memory.
That ain’t gonna happen, ever.
You’d be better off talking about Boltzmann brains.
(No infinity required for those)
PaulBC says
Lalala whatever. It looks like my stance at least has a name: “Mysterianism”.
I don’t subscribe to any particular reason for why I don’t understand consciousness. I am simply pretty sure that I’m not cognitively equipped to understand it, because I don’t even see a really good start on it, and it is a problem that has interested me enough to think about and sometimes read about for many years. Maybe other people are just a lot smarter about these things, though it does not seem to be fully resolved even among people who claim to study it.
It’s incorrect to say I believe
I bring up universal computation because it’s the most obvious approach to simulating cognitive function. You have something better? Plumbing maybe. I can do plumbing too, but I’d still probably first build something that looked like a computer out of hydraulic logic. It’s my one trick.
Given my limited grasp of what I mean by consciousness, I am reluctant to rule it out in cases that bear enough similarity to where I observe it. I.e., I’m conscious, at least some times, so if I were to have a conversation with an artificial intelligence, whatever its implementation and whatever time frame involved that touched upon the subject in a meaningful way, I would take it at its word if it told me that it experienced something similar.
Dennett (and I haven’t read him in full, just summaries) doesn’t really seem to be addressing the subjective question at all. This is justifiable, because it is unclear how to go about addressing it. That doesn’t mean it is not a legitimate question.
PaulBC says
John Morales@47
No you don’t. A space-bounded Turing machine is a very standard computational model, and it’s used to define complexity classes. https://people.eecs.berkeley.edu/~luca/cs172-07/notenl.pdf I don’t know why you place so much emphasis on the infinite part.
In fact, time is a more serious issue. I could trivially write a Turing machine theorem prover in very limited memory (and no, I do not claim this would make it conscious). It would start with a formal theorem statement and enumerate and check possible proofs up to a certain length. If it found one, it would halt with success. If it eventually ran out of tape, it would halt with failure: not that there is no proof, but that there is no proof within the required space limits.
Using a very modest amount of memory, something on the order of a 16-bit microcomputer’s, I could surely find a formal proof of Fermat’s Last Theorem this way (assuming Wiles and his reviewers didn’t make a mistake). It is uninteresting because it would take more than the expected lifetime of the universe to run, but the sticking point isn’t the infinite tape. It’s the lack of an efficient algorithm.
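[The brute-force prover described above has a simple shape: enumerate every candidate proof up to a length bound and check each one. A sketch of that shape, with a toy stand-in for the proof checker (it just matches a target string), since a real formal checker is beside the point here:]

```python
# Bounded exhaustive search: try every string over the alphabet up to
# max_len and return the first one the checker accepts. Halts with success
# if a witness exists within the bound, and with failure (None) otherwise,
# which says nothing about longer candidates. "qed" is a made-up target.

from itertools import product

def bounded_search(checks_out, alphabet, max_len):
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            s = "".join(candidate)
            if checks_out(s):
                return s    # halt with success
    return None             # exhausted the space bound

print(bounded_search(lambda s: s == "qed", "deq", max_len=3))  # -> qed
```

The time cost is the sticking point, exactly as the comment says: the loop visits on the order of |alphabet|^max_len candidates, so the memory footprint stays tiny while the running time explodes.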
John Morales says
Paul:
You yourself wrote: “A TM is infinite”.
It follows that if it’s not infinite, it’s not a TM.
(Or are you now resiling from that claim?)
Think about it this way: if the TM is limited to some arbitrarily large (but not infinite) amount of storage, there will be categories of problems it will not be able to solve even in infinite time. And the categories of problems it will not be able to solve will remain infinite, no matter how many are actually soluble.
Of relevance: infinities are weird; our naive expectations about arithmetic don’t really apply. (Though, cf. https://en.wikipedia.org/wiki/Transfinite_number)
An efficient algorithm for an imaginary but impossible machine. Right.
PaulBC says
John Morales@50 Sure, a TM needs an infinite tape to be universal, and it needs the infinite tape for its original application to the Entscheidungsproblem. E.g., demonstrating the halting problem to be undecidable requires infinite tape, because any device with finite memory will trivially reach a repeating state if it does not halt, making it “easy” to test, though not necessarily within the lifetime of the universe, assuming some reasonable time for each step.
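The “easy” finite-memory halting test works by cycle detection: run the machine, record every state visited, and a repeat before halting means it loops forever. A minimal sketch, with the names made up for illustration:

```python
def halts(step, state, halted):
    """Decide halting for any deterministic machine with finitely many
    states: run it, recording each state visited; revisiting a state
    before halting means it will loop forever. With n reachable states
    this can take ~n steps, hence the caveat about the lifetime of the
    universe."""
    seen = set()
    while not halted(state):
        if state in seen:
            return False   # state repeated: it will never halt
        seen.add(state)
        state = step(state)
    return True

# A 4-bit counter that halts when it wraps around to 0: always halts.
assert halts(lambda s: (s + 1) % 16, 5, lambda s: s == 0)
# A machine bouncing between 1 and 2, never reaching 0: loops forever.
assert not halts(lambda s: 3 - s, 1, lambda s: s == 0)
```

No such test exists for a genuinely infinite tape, which is exactly why the halting problem is undecidable for true TMs.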
That’s fine for recursion theory, but if the point is to look at it as a computational model, it only needs to be as powerful as any other computer, none of which literally have infinite memory. The human brain, whether you want to consider it reducible to a digital computer or not, does not seem to have infinite capacity, or at least not infinite accessible capacity. I.e., at some point the state of my brain at its finest resolution is probably something reasonably modeled as a fixed number of bits with some noise (at least, I believe so).
So it really depends on the application. Actually a Turing machine is a lot more like a conventional computer than an abstraction such as lambda calculus. You can build something like it in hardware, e.g. using a series of shift registers, and for many applications it does not matter that the memory is finite, because you won’t reach the end. And again, there’s an enormous body of old work on space-bounded computation; when a TM is mentioned in this context, nobody apologizes for the fact that we are limiting the amount of tape that is used. The subset of problems solvable in https://en.wikipedia.org/wiki/PSPACE is much smaller than the set of computable problems, though it encompasses all tractable problems and many intractable ones.
John Morales says
Paul:
Heh. By ‘solvable’, you mean an output of either ‘1’ or a ‘0’ to some input.
A fair way from consciousness. :)
Look, I get you like TM as a concept, but you keep reifying it as an actual thing.
I mean, if you want to model an abstract universal computing machine, why stick to the very simplest? Use random access instead of sequential. Use any number of registers instead of just one. And so forth.
It wouldn’t do any more, but it would sure as fuck be far, far less inefficient.
—
PS
Again: You yourself wrote: “A TM is infinite”.
(Why say it, if it doesn’t matter?)
—
BTW, you seem confused about UTMs vs TMs. The ‘universal’ there refers to its being able to simulate any arbitrary TM, not to it being able to do any extra computations.
PaulBC says
John Morales@52
No, I am less confused about theoretical computer science than about most other things. I am not sure where you got the idea I was referring to “extra computations.” Obviously, a UTM would need an infinite tape to be able to simulate any arbitrary TM. That may be what I was getting at.
First off, I am not that wedded to TMs. I brought them up mostly as an unlikely place to expect to find consciousness, mainly because of their processing a single symbol at a time. They’re in fact considerably simpler than random access machines, since they don’t need an addressing scheme, just left and right movement. So assuming one was running an AI program that could emulate human-level intelligence, it provides a stark illustration of not being able to localize the “intelligence”, which is not in the head, a very simple state machine, nor in the tape, consisting of static storage.
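To make the “very simple state machine plus static tape” point concrete, here is a minimal Turing-machine simulator: the entire head is one small lookup table, and everything else is inert storage. All names are invented for illustration, and the finite `max_steps` cap stands in for the finite-hardware concession discussed above.

```python
def run_tm(delta, tape, state="q0", accept="halt", max_steps=10_000):
    """Minimal Turing-machine simulator. The whole 'head' is `delta`,
    a dict mapping (state, symbol) -> (next_state, write, move), small
    enough to memorize; all other structure lives on the tape."""
    cells = dict(enumerate(tape))   # sparse tape; "_" means blank
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        sym = cells.get(head, "_")
        state, cells[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A two-rule machine that flips bits until it reaches a blank.
flip = {("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", "_"): ("halt", "_", "R")}
print(run_tm(flip, "0110"))   # -> 1001_
```

Wherever the “intelligence” of a program run this way resides, it is plainly not in `delta`, which is a table one could recite from memory.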
If it’s more like a conventional computer, there’s a lot more room for mystification, though you still may be processing one symbol at a time if it has a single processor. I suspect it is easy for many people to fool themselves into thinking there is some kind of localized “brain” in their computer. So my aim is to use a model that in no way resembles a brain even superficially. (But despite that, I have no reason to believe it would be a “philosophical zombie” just because I suffer from a failure of imagination in seeing how it could be conscious as it does this emulation.)
If you really have a problem with referring to it as an actual device, I suppose I could refer to “hardware inspired by a Turing machine.” I.e., static storage and a very limited processing unit. I realize that unlike a von Neumann architecture, the TM was proposed to solve a mathematical problem, but in fact it was explicitly proposed as a model of how a machine might carry out symbolic manipulation, which makes it a little different from lambda calculus, which is an abstraction of symbolic manipulation without the definition of a machine.
In fact Turing, in contrast to Church, was using the metaphor of a real machine, and he makes this clear in his description, which is unusually concrete for a mathematics paper.
So yeah, it’s a mathematical abstraction, but that’s necessary to use it in a proof. Turing is clear that the goal of this abstraction is to model an actual computing device, a subject that interested him as evidenced by his later work on the ACE computer. In fact, I can’t find it in the paper, but I remember reading that Turing was inspired by the process of writing symbols on a blackboard and switched to a tape description in order to yield a simpler formalism.
John Morales says
Paul, righto:
Probably because a TM definitionally has an infinite tape, else it would not be a TM.
As you noted earlier.
But then, you also claimed “it does not matter that the memory is finite”.
—
Anyway, fine. You want to think the concept of a Turing machine somehow supports panpsychism, I can’t stop you.
Phrenotopian says
@Paul BC #28:
What you’re describing is covered by the concept of “philosophical zombies”, which is one out of many devices meant to tease out the phenomenon of “consciousness” described in Goff’s book. In theory, it’s entirely possible to conjure up a thoroughly convincing simulation of a human being, mind included, responding adequately through a series of algorithms tuned to near-perfection, but without actually having conscious awareness. We cannot ever really know whether any person in front of us experiences consciousness more or less like we do. We can only take their word for it.
The main point of Goff’s book is that, besides empiricism, which has been greatly successful, there is one other pathway to truth, and that is a purely philosophical elimination of logical contradictions. Galileo did this by logically, and not empirically, proving that all objects fall at the same speed under ideal circumstances, whatever their weight. The stories that he tested this by throwing things off the tower of Pisa were just lore. Goff attempts this method to prove panpsychism while standing on the shoulders of giants like Russell.
I’m perfectly fine with an agnostic approach to these issues, and in fact, I’m still largely agnostic about these ideas and waiting for some more empirical confirmation of whatever nature. I claim no certitude, but for me personally it does resolve a lot of issues I’ve been pondering my whole life. That doesn’t mean I’m ready and able to defend these musings, so please be gentle. If not for general civility, then by knowing I’m currently recovering from an anxiety attack I had over a week ago.
The otherwise fascinating back-and-forths on Turing Machines and computation go a bit over my head BTW, and I’m woefully behind.
KG says
I think we were somewhat at cross-purposes. (I have read the subsequent exchanges between you and John Morales, but best to clear this up and clarify my own views first.)
My view is that consciousness consists of interaction between an information processor and its environment. It’s because a TM is a closed system – in the clear sense that no symbols or rules are added or changed during its operation – that it, as a whole, can’t be conscious – it has nothing to interact with, or to put it another way, nothing to be conscious of. I’m not trying to rule out strong AI (I don’t believe protoplasm is magic).
I also don’t believe in “zimbos” (the term is Dennett’s AFAIK). A zimbo is a (philosophical) zombie – i.e. something that has behaviour but no consciousness – that gives a perfect impression of being conscious, including insisting that it is conscious, possibly expressing its puzzlement about the “hard problem”, and so on. I consider that if you are faced with something able to do that, then it must be conscious, in the everyday sense of being able to take in and comprehend information from its environment, act on that environment, know what it’s doing, and so on. And I think that implies subjective experience, and there is no “hard problem”, except in the sense that e.g. protein folding or Fermat’s last theorem are hard problems.
So if a robot could perform a sufficiently wide range of actions requiring the ability to take in and use complex information about the environment, and said that it was aware of itself and what it was doing, I would unhesitatingly accept it as a conscious being.
I agree with John that a TM – or rule 110, or Conway’s Game of Life – are mathematical abstractions, to which we can (only) build physical approximations. I don’t rule out such approximations containing conscious beings – parts that are interacting with their environment in a sufficiently sophisticated way to require consciousness – although any actual conscious artefacts we build would almost certainly be robots, able to interact with the external world much as we do.
One more point that may be of some interest: dreaming. Here, consciousness is, I would say, present but attenuated (even in the case of lucid dreaming, something I occasionally do); and I think what’s going on is that part of the brain (probably a functional rather than an anatomical part) interacts with an environment consisting of another (functional) part of the brain – but consciousness lacks many of its normal features because this is not a very good substitute for the real external world. (Incidentally, another recommendation is Andy Clark’s Supersizing the Mind, which argues that the brain is not the totality of the “hardware” of the mind, which can extend outside the body. Not directly relevant to consciousness, but it deals with the importance of interaction and the fluid boundaries of the “self”.)
Phrenotopian says
Again, I’m ill-equipped to partake in this otherwise interesting dialogue, but I couldn’t help remarking on this:
Goff highlights a fascinating notion (I forgot from whom originally) that we actually are not unconscious when we’re sleeping. We just don’t retain a memory from our deep sleeping phases, because the memory systems of our brains are off. Consciousness experienced during sleeping does however seem to reside in our short-term memory during a few fleeting moments.
The main point of this exercise was really that consciousness is rife everywhere all the time. However, that alone doesn’t make for a conscious mind such as humans (and “lesser” beings) have. The information just dissipates and isn’t integrated in ways that would create coherent mental processes.
I lost my father to Alzheimer’s a few years ago, and I experienced the strange and heart-wrenching sense that he seemed to be there still, but in the end could no longer retain a functionally conscious mind. I slowly watched as his mind became shattered into ever further time slices until the person that he was, was no longer really there.
In fact, my own mind took a plunge into the deep fryer, as I already wrote, and I’m regularly faced with my on-board computer being glitchy and not up to many tasks I normally confidently rely on. You won’t believe how many times I had to spell-check this text.
Phrenotopian says
ever narrower time slices
SC (Salty Current) says
KG @ #56:
That’s one of those books that pops into my head from time to time.
On a recent thread unrelated to any of this I mentioned that I was reading Peter Godfrey-Smith’s new book Metazoa: Animal Life and the Birth of the Mind. I recommend it generally and it seems highly relevant to the discussion here.
KG says
Thanks for that reference, SC@59 – I’ll check it out.
Phrenotopian@57, consciousness is not an all-or-none, on-or-off phenomenon: there are various aspects to it, which may or may not co-occur. But there’s absolutely no reason I know of to think that it’s everywhere. It’s not a kind of stuff, or a bulk property of matter like mass or electric charge – Star Wars and His Dark Materials notwithstanding!
Sorry to hear about your father – mine was in the early stages of MID when he died, and I’m glad he did so before it progressed much further. I’ve no doubt you are right that your father was increasingly unable to maintain full consciousness. The book Supersizing the Mind I mentioned above uses the example of an Alzheimer’s sufferer externalising parts of their memory in order to cope better with the damage to the brain; and it’s often noted that some kinds of highly structured external stimulus – such as a familiar living environment, or music – can help sufferers retain, or partially revive, their faculties.
Susan Montgomery says
All this talk of consciousness and machines raises a question: If a machine became conscious, would we be capable of understanding it? Would a Rule 110 machine, for example, give rise to a comprehensible consciousness?
PaulBC says
John Morales@54
No, that wasn’t my point. In fact, I don’t think panpsychism is supported at all.
A Turing machine* is my go-to example for looking for consciousness in a counterintuitive place, because it strips away every mental crutch (network connectivity or simultaneity) that makes the existence of consciousness seem plausible. Maybe other people don’t see the distinction. I suspect many do. I.e., one would be more likely to believe an android resembling Data from Star Trek with a cranially-placed brain analog and the ability to speak was conscious than some system consisting of symbolic manipulations on static media that can be looked up in a table small enough to memorize.
*Or hardware that uses the same sequential addressing approach using a large but necessarily finite memory, assuming this distinction is significant, though I fail to see how.
Phrenotopian says
God dammit! I meant during dreaming! That was the whole point of that paragraph!!!
That’s it! My brain is clearly too fried still to write words down. Checking back in later.
PaulBC says
KG@56
I’m not entirely sure I agree, since I could remain conscious, though maybe not sane for long, if I were placed in a sensory deprivation tank and had only the contents of my brain to process (or conscious during a dream as you said later). A TM* programmed as an AI pre-populated with some internal evidence leading to self-recognition might be in a similar situation.
In this case there is an interaction between an information processor and its environment, though it’s a little hard to nail down. The “processor” is not the finite head, which is a very uninteresting piece of hardware doing only lookups from a small table. The computational mechanism largely exists as static symbols. There is no network of wires or dendrites connecting them (where we might naively localize “interaction”), but there is a dependency over the execution of the device**. The mechanism is never active all at once, but over a sufficiently long time interval there is activity and interdependency.
(I find it valuable to try to trick my intuition with scenarios like the above, but I get how it could be tiresome.)
I don’t think the lack of connectivity to the outside world matters that much, though you could easily augment the hardware with sensors (with suitable changes to the model, calling it a tape automaton, or 2-stack deterministic pushdown with an input stream or similar). Or, as I initially stated, the entire universe could be contained in the machine, but in that case, I agree that the machine as a whole is not a unified conscious entity.
I agree too, but as a non-philosopher, I feel free to use language somewhat loosely. If an architect says “Here’s the new building” while pointing to a scale model, does John react like Zoolander? “What is this, a center for ants?” In context, I was talking about an actual machine with a kind of serial processing and sequential access. The fact that its tape cannot literally be infinite seems like the least of my worries, since it is unlikely to be fast enough to run out of tape, and it is thus isomorphic to a system that had the infinite tape to begin with.
There also appears to be something significant about “actually doing something” that mathematicians especially gloss over. Suppose I had a program to simulate the creation of “the universe” (or some tractable classical variation), and compiled it into a starting state for rule 110. (This is within reach in practice, though not really worth doing.) Suppose I map cell states to the lower half of the plane in a grid as commonly depicted (rows and columns, row number increasing going down and corresponding to successive generations). We agree on the position of cell (0,0), and I can now refer to cell (i, j), giving large values for i and j in terms of some very fast-growing function like the Ackermann function. I’ve referred to something that “exists” and I know trivially that its value is either 0 or 1. It’s not very interesting, but it’s well defined and I can make up a new symbol for it.
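For modest i and j (nothing Ackermann-sized, of course), the value of cell (i, j) in that grid is straightforward to compute. A sketch, with the function name invented for illustration:

```python
def rule110_cell(start, i, j):
    """Value of cell (i, j) in the grid described above: row i is
    generation i of rule 110, starting from `start`, a collection of
    live column indices in row 0."""
    live = set(start)
    for _ in range(i):
        # A cell is dead in the next row exactly when its
        # (left, center, right) neighbourhood is 111, 100, or 000;
        # that is the rule 110 truth table.
        cols = range(min(live) - 1, max(live) + 2) if live else []
        live = {c for c in cols
                if (c - 1 in live, c in live, c + 1 in live)
                not in {(1, 1, 1), (1, 0, 0), (0, 0, 0)}}
    return int(j in live)

# From a single live cell at column 0, generation 1 is live at
# columns -1 and 0 (the pattern grows leftward).
print(rule110_cell([0], 1, -1), rule110_cell([0], 1, 1))   # 1 0
```

Referring to `rule110_cell(start, A(5, 5), A(5, 5))` is perfectly well defined; actually evaluating it is another matter entirely, which is the distinction being drawn.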
It seems clear to me that referring to this cell, making up a symbol, etc. does not turn me into a creator god that launched a new universe into existence (though I could imagine someone believing this if they like). Yet I do think that painstakingly constructing the hardware to calculate this value and letting it run for however long, supplying it with memory as needed, could conceivably lead to subjective consciousness experienced by inhabitants of that universe (and yes, I’m getting Greg Egan-y here, but I didn’t have to read his work to have this thought).
Maybe just because I’m at least half-accustomed to thinking like a mathematician, it is still not obvious to me at all what the distinction is, though it would be “absurd” to think there is not one.
*TM: i.e., an implementation of a computer that processes symbolic data using only sequential access, necessarily limited to a finite tape in order to embed it in feasibly constructible hardware.
**By which I mean an actual, physical device that “is” no more a TM than Magritte’s painting of a pipe “is” a pipe, but I will treacherously conflate the two.
PaulBC says
I wish I could find the quote, and it may just be internal department folklore, about a CS theory professor from a nearby university who worked on parallel algorithms in the early 90s (and may do so still). Quoting from memory, he apparently once said:
In this context, which I will defend as totally reasonable, you have a “result” if you can prove something about its asymptotic behavior. If you merely implemented something like it on a CM-2 (as existed then) a typical reaction would vary from “Well, yes, obviously you could implement it. That’s not a result.” to “It’s a shame, because a hypercube introduces logarithmic slowdown that completely negates the benefit of this approach.”
Different cultures. You don’t win the Boston marathon by using an electric scooter, though it might be a fine way to get around on another day. Personally, I got a little tired of the “real” result being the proof and the rest just a lot of tinkering, but it’s a matter of individual motivation, I guess. I do think the distinction between what actually “exists” and what mathematicians mean by “exists” is interesting and not always appreciated (especially by mathematicians!).
John Morales says
A rather more interesting take on self-awareness:
I Am a Strange Loop by Douglas Hofstadter.
(TLDR: self-awareness as a reflexive system)
PaulBC says
John Morales@66 Well, yeah. Self-reference seems like it ought to be part of it, and contrary to Hofstadter’s lament that the book “was perceived as a hodgepodge of neat things with no central theme,” I actually did get the point that GEB was all about self-reference and even “What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?” So did my friends in college at the time.
I still don’t get where the subjective experience of consciousness, or for that matter any subjective experience, is supposed to come from. I grant it may be a really stupid question, like asking “How does my brain turn the image on my retina right side up?” In that case, I understand enough about image processing to get that the position of the image on the sensors is irrelevant and the question shows a misunderstanding of what’s going on (like imagining a homunculus in a sperm cell, or believing my TV set is really a puppet show).
What I can state with confidence is that I do not understand the subjective experience of consciousness, though I can certainly understand how a self-referential system can manipulate information about itself, and I can work effectively within that framework (e.g. writing code that generates or modifies copies of itself–and by doing string substitutions, not with an OS cheat).
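A self-reproducing program of the string-substitution kind just mentioned can be written in two lines; the classic Python form:

```python
# A classic quine built purely by string substitution (no file I/O,
# no OS cheat): the data half is spliced into the code half via %r.
s = 's = %r\nquine = s %% s'
quine = s % s
# `quine` now holds exactly the two working lines above; executing
# that string rebuilds the same `s` and `quine`, a fixed point of
# the substitution.
```

The trick is that `%r` inserts the string's own quoted representation, so the program's data and its text coincide.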
I suspect it may have something to do with self-reference, but none of it suggests to me the necessity of consciousness. My evidence for the latter is personal, subjective observation, and none of the explanations I have seen really get me a lot further than I am already: “Yes, it’s there, and I suppose it probably has something to do with self-reference.”
Finally, this thread aside, I don’t think about it a lot. My comment @28 was intended to be dismissive. I get that Hofstadter and Dennett are pretty sure they have it nailed down. I just remain unsatisfied with their conclusion, possibly because I am a foolish person who wonders where the little people go when he turns off his TV. I suspect it is potentially comprehensible, just not something I comprehend.
It is also something I can leave unresolved and continue to live a very happy life.