I ain’t afraid of no “Great Filter”
Amateur cosmologists sure get themselves tangled up in a lot of bullshit, don’t they? I’m also not worried about Roko’s Basilisk.
As a fan of Isaac Arthur, who has done a lot of videos about ‘great filters’, Kardashev civilizations, and so on, I’m not sure he would really disagree with you about much of this. It is a lot of speculation and not very useful in that regard. I would not recommend his channel, as it sounds like you wouldn’t like it. Even I roll my eyes sometimes at the unbridled optimism about the human race.
That said, I think the question he likes to raise is not so much “why aren’t aliens visiting us?” as “why haven’t we spotted any evidence of advanced alien civilizations, given the age and vastness of the observable universe?” Your video makes a good point about the meter stick and the 100 coin flips, and Isaac would note we shouldn’t even assume the odds of any particular step are as high as 50/50.
Also, having read “Rare Earth” and similar books, I am fairly convinced the Great Filter is in our past, not in our future.
Going off on a tangent to ‘strong’ AI: it will arrive, not in 20 years but maybe in a hundred. As my home village has rock carvings that are 3,000 years old, that is a trivial waiting period. We just need to start thinking in time spans beyond individual human lives.
Once we have strong AI, we have ‘pilots’ that can travel to Alpha Centauri in just a few centuries.
Add 3D printers for infrastructure and DNA for a few million different organisms and we can spread a bit like the Polynesians did (it took them more than a millennium to cross the Pacific, but they got it done).
So on a realistic time span, we can indeed be optimistic.
The classics will help you. Stanislaw Lem addressed the ‘silent cosmos’ (silentium universi) many times. His last novel “Fiasco” (1985) should be required reading for those who are optimistic about contact.
“Solaris” (1961) has an entire sentient ocean which makes the idea of human civilisation the model of advanced consciousness rather silly.
And “His Master’s Voice” is the best novel about SETI I have ever found.
Lem was misogynistic, but otherwise nearly perfect.
James Fehlinger says
You mean — gasp! — that when Elon Musk says that a
“neural lace” (now where could he have gotten that term?
Not from Banks’ “Culture” novels, surely?) is right around
the corner, he is — how does one put this delicately? —
exhibiting the crack in his pot?
Say it isn’t so!
I too am no longer excited by speculation about space colonial empires, nor do I have any fear of space invaders.
I have watched a few of Isaac Arthur’s videos. They are entertaining sometimes and make some sense about the possibilities of industrial development of our solar system in the more immediate future. But it is mostly just speculation from a personally selected starting point; it does not reflect many cultural and historical interpretations, and it is very European-centered, as if that were the only way humans could advance.
The vast distances in space-time are always swept away by a wave of some technological magic. And just in terms of human history, there is no reflection on what an advanced civilization stemming from the ideals of India or China would look like; they are not even acknowledged.
The basic function of biology, which is to reproduce, is nowhere to be seen.
Rob Grigjanis says
birgerjohansson @3: I see nothing ‘optimistic’ about the idea of our species spreading across the galaxy. In our own biosphere we are a disease, and you want to propagate that disease beyond our solar system? I really don’t get it.
I doubt our AI brethren would permit us to behave like hairless baboons on their worlds (because, face it, they would be the ones calling the shots, like the Minds in Iain Banks’ Culture novels).
John Morales says
Teleological thinking is… well, at best, wishful.
Rob Grigjanis says
birgerjohansson @7: So why would they even want us on their worlds?
Ed Peters says
Unclefroggy #5 says: “The vast distances in space-time are always swept away by the wave of some technological magic.”
This is exactly right. Notice how steps 1-8 are biological. What a coincidence. Once we get big brains we’re just supposed to warp out of orbit? No. The great filter is not biological, but physical. Step 9 requires traversing the enormous distances while obeying the absolute speed limit – c.
Even the best case scenario requires enormous expenditure for no reasonable probability of getting to another star system, and all just to drop a few seeds. As far as we know, the odds of not colliding with something catastrophic along the way are infinitesimal. Space is anything but empty, and colliding with a grain of dust at .1c is pretty certain to leave a dust-sized hole in your space craft. Multiply that by the number of grains along the way and you are lucky if your craft has enough working parts to decelerate.
So physics is the problem, not biology. And those who sweep it away with techno-babble and talk of space seeds are just showing their ignorance by pretending their sci-fi fantasies are a possible reality. Simply put, they do not understand or appreciate the magnitude of the problem. It’s as if they believe that 4×10^13 km (Alpha Centauri) is only 8 orders of magnitude beyond 4×10^5 km (the Moon), and therefore within reach.
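Ed’s orders-of-magnitude point, and the dust-grain hazard, are easy to put rough numbers on. A back-of-the-envelope sketch (the 1-microgram grain mass is an illustrative assumption; the constants are standard):

```python
import math

MOON_KM = 3.84e5        # Earth-Moon distance, km
ALPHA_CEN_KM = 4.1e13   # distance to Alpha Centauri, km (~4.3 light-years)

# How many orders of magnitude separate the two trips?
orders = math.log10(ALPHA_CEN_KM / MOON_KM)
print(round(orders, 1))  # ~8.0

# Kinetic energy of a 1-microgram dust grain struck at 0.1c.
# (Classical KE is adequate here; the relativistic correction is ~1%.)
c = 3.0e8               # speed of light, m/s
m_grain = 1.0e-9        # 1 microgram, in kg (assumed grain size)
v = 0.1 * c
ke = 0.5 * m_grain * v**2
print(ke)               # ≈ 4.5e5 J, roughly 100 g of TNT per grain
```

So every microgram of dust along the way carries the punch of a small explosive charge, which is the point about decelerating with any working parts left.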
More generally, the problem with our species is: we think just because we can imagine it, we can do it. Save that crap for business guru motivational seminars. Some things just cannot be done. Get over it humans!
Well, bacteria have already won. Bacteria outnumber all other life on Earth, and wherever humans go, since they live on and in us, bacteria will end up thriving. Even if humans eventually die out, they will live on in any worlds we’ve inhabited. They meet all 9 points on the list. They don’t need to use tools, since they have us to use them.
Matt Cramp says
I think a lot of this is driven by the anxiety that Earth as a method of supporting human life has a time limit. One asteroid and the human race is extinct, and the answer that some people arrived at is that humans cannot be dependent on Earth ecosystems if we are to survive long-term.
Still, I’ve been suspicious of the Great Filter ever since I noticed it tended to show up in the same places as the Simulation Hypothesis, which drives me up the wall because it’s not as clever as it thinks it is.
@12 – Humans are not dependent on terrestrial ecosystems. We are exploiting them to death. It’s hubris to think that humans “master” the planet’s ecosystems for our own needs. We are little more than bacteria ripping through a finite supply of agar. When it’s gone, we’re done. Rogue asteroids should be the least of our concerns.
I loved Iain Banks’s works; alas, due to illness, he had to return home.
Still, the linear progression theory gets falsified by, oh, our eldest child, who learned to pull herself up and walk, never having learned to crawl. I literally had to teach her to crawl to an object to pull herself upright to walk.
Obviously, we’re superhuman or something, right? Wrong; it’s been observed and documented here and there, lost in the general noise.
Detecting alien civilizations is absurd to begin with. The inverse square law is our friend with regard to solar and cosmic radiation, but it also works against detecting a civilization of our own type: we started out noisier than hell, then rapidly went to tight microwave links, then fiber optics.
If you want species immortality, which I’ll argue is a questionable experiment, go for storage and reconstruction under specific conditions, on millions of spacecraft crawling their way to distant stars.
Even money, later human civilizations would seek to exterminate such vermin.
The reality remains: the inverse square law limits detection of communications.
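The inverse-square point can be made concrete. A sketch with illustrative numbers (the megawatt transmitter and Arecibo-class dish area are assumptions for scale, not measurements):

```python
import math

P_TX = 1.0e6            # W: a strong terrestrial broadcast transmitter (assumed)
LY_M = 9.461e15         # metres per light-year
d = 4.37 * LY_M         # distance to Alpha Centauri, m

# Isotropic broadcast: the power spreads over a sphere of radius d.
flux = P_TX / (4 * math.pi * d**2)   # W/m^2 at the receiver

# Power collected by a very large (~70,000 m^2, Arecibo-class) dish.
collected = flux * 7.0e4
print(flux)       # ≈ 5e-29 W/m^2
print(collected)  # ≈ 3e-24 W
```

Attowatts of attowatts: an ordinary broadcast, even at our nearest stellar neighbor, delivers a signal hopelessly buried in noise unless it is deliberately beamed.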
If any managed the magical trick of star travel, why in hell would someone sane want to visit this hopelessly primitive and paranoid planet?
Of all science fiction, as opposed to science friction films, Clara was our favorite.
I’d happily thank Clara.
Re: the coin flipping (20 minutes in), I read another example, or allegory.
What are the chances of flipping four heads in four coin flips? 1/16. What are the chances of four consecutive heads somewhere in twenty flips? If you scour the full binary tree, close to even odds (about 48%). Flip a few billion times and a run of thirty heads becomes near-certain; given enough opportunities, specified results approach inevitability. It’s the observer’s mistake to assume the result is the norm, or the desired outcome.
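The run-of-heads figures can be computed exactly with a small dynamic program over the current streak length (a sketch; `prob_run` is a name of my own):

```python
from fractions import Fraction

def prob_run(n_flips: int, run_len: int) -> Fraction:
    """Exact probability of at least one run of `run_len`
    consecutive heads in `n_flips` fair coin flips."""
    # state[k] = probability that no run has occurred yet and the
    # current streak of heads is exactly k (k = 0 .. run_len - 1)
    state = [Fraction(0)] * run_len
    state[0] = Fraction(1)
    hit = Fraction(0)          # probability the run has already happened
    half = Fraction(1, 2)
    for _ in range(n_flips):
        new = [Fraction(0)] * run_len
        for k, p in enumerate(state):
            new[0] += p * half             # tails: streak resets
            if k + 1 == run_len:
                hit += p * half            # heads: run completed
            else:
                new[k + 1] += p * half     # heads: streak grows
        state = new
    return hit

print(prob_run(4, 4))          # 1/16
print(float(prob_run(20, 4)))  # ≈ 0.478, just under even odds
```

The same recurrence shows why very long runs need exponentially many flips: a run of n heads takes on the order of 2^(n+1) flips to become likely, so a few billion flips buys you runs of about thirty, not thousands.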
Ed Peters (#10) – Yup. If there is a “wall” that stops aliens from flying ships around the universe, it’s technological (distance, energy, resources, etc.), not intellectual.
Rob Grigjanis @9
They do not need to allow us to follow them out; it is sufficient, from my point of view, that some sentient, intelligent entities emerge in the future and spread out, so their survival does not depend on Earth, or even the sun.
The amphioxus is not kept in aquaria everywhere; even though we are descended from its close relatives, it generally is not a clever conversationalist.
consciousness razor says
What if these sentient bots don’t want to be your slaves? They might not want to go on this long interstellar trip that you have planned for them. So then what?
Marcus Ranum says
Wouldn’t any self-replicating or self-repairing robot probe experience differential survival, and therefore evolve?
James Fehlinger says
Then you contact MIRI and demand your money back.
consciousness razor says
But I gave all my money to HAL 9000 quite a few years ago. I haven’t heard from him since, which frankly is a bit annoying…. It’s probably about time I write a strongly worded letter.
Consciousness razor @17
A human-level AI should obviously have the same rights as a biological mind with a biological body.
Travel should obviously be voluntary; it is just that the recruitment pool will be rather exclusive.
You think about robots and AIs in the traditional American way as either slaves or rivals. I think of them the way the Japanese do, as partners. They might not build Iain Banks’ “Culture” but good luck oppressing something that is to you as you are to a lemur.
(I am not Ray Kurzweil; I recognise the time scale is centuries and millennia.)
Marcus Ranum @18
Yes. And diversity is a good thing.
Conversely, who says they’re slaves or that biological humans are the ones asking them to go on the trip? If I were a sentient bot independent of any specific environmental needs, I might want to go off into space, possibly staying dormant much of the time, or possibly spending the trip having lots of deep bot thoughts. Alternatively, I might want to stay where I am, based on the assumption that I wouldn’t find anything that would interest me. But given enough bots, why wouldn’t some want to explore?
For that matter, you really don’t need sentience to function as a space probe. You need enough hardware to bootstrap a self-replicating system when you reach your destination (or else your options are limited to what you can pack along). The guidance system needs to be smart enough to measure and react to unknowns, but lots of living things do this and we do not consider them very intelligent.
And I know interstellar space is a harsh environment, etc., etc. But if you’re not in a hurry you can bring a small amount of hardware, just as much information as you need, and a lot of dumb shielding. So what if it takes 20000 years to go one lightyear?
The key technology needed here is neither AI nor major improvements to propulsion, but autonomous self-replication. I am not saying that’s easy, but there is certainly no theoretical obstacle.
birgerjohansson@2 I agree that “strong” AI will not arrive in 20 years, but why do I think that? Probably no very good reason except that it’s been a little under 40 years since I started to learn enough about real computers to get an idea of what it might actually take to get a “nickel-plated nincompoop” like the robot from Lost in Space. (That was when “Byte books” actually published some interesting, though now ancient, AI treatments that I read as a teen.)
And clearly there have been massive improvements in computer power since then, as well as some new ideas about AI (but really an enormous backlog of old ideas we just didn’t have the hardware to try out). So what’s the chance I’ll be waiting half the time I have already waited? It seems like it should be much longer.
On the other hand, that’s not much of an argument, and for all I know it could be less time or more time than 20 years. But to put it in perspective, a family member uses an automatic peritoneal dialysis machine based on a design from over 25 years ago (and it looks it!), in fact the work of the prolific Dean Kamen. We are nowhere near an implantable artificial kidney or a way to grow human kidneys outside human bodies. Will we have that in 20 years? I have no idea.
So yeah, a human brain equivalent or something surpassing one seems like a stretch. What can happen in 100 years is anybody’s guess.
And… uh filter schmilter. It is one prospective explanation for what we observe, but not one that provides any fruitful guidance on what to do next.
consciousness razor says
Then why will any of them want to do it? Of course, it’s possible that some would. But I mean, we’re talking about an incredibly enormous and expensive project that takes centuries or millennia. (We’ve never done anything on scales like that, more or less because people don’t live that long, and there’s no obvious reason why that would change.)
And that’s just for all the basic pieces to fall into place. Then, it would be lots more time for some possible recruits to travel … somewhere … while leaving the rest behind to do … something. Is there anything important at Alpha Centauri, for example? Doubtful. So why should they want to do that, and why should people put so much behind a vague hope that maybe some of them (once they exist) will be interested?
You said it matters that their survival doesn’t depend on the Earth or the Sun. Presumably, you’re worried about the Earth being toast in a billion years or so. But that sort of thing certainly isn’t specific to our solar system. Stars all over will go boom, and eventually there simply won’t be anywhere that’s safe, for humans or robots or anybody.
That’s not something I’m concerned about. I don’t want any sort of immortality, and I think things that last only a finite amount of time can still be good, meaningful, important, etc. (They can also be awful and pointless, obviously.) That includes human civilization and all of its products, and I have no trouble accepting that.
No I don’t.
Well, “rival” isn’t the right word for it, but that’s not a “partner” either. If all they did was send me to Madagascar, it wouldn’t be so bad. I could imagine much worse.
consciousness razor says
I know, but people like birgerjohansson seem to want that, because they think they’ll be happier if something sentient is still around in the very distant future, although humans themselves are not. If you think there’s a pressing need (and not just a desire) to leave the planet, because it will eventually be gone, then you would want to remedy that with something that isn’t merely a probe.
I’m not sure what “expensive” means in this context. Resource-intensive, sure, but so is nature. A rainforest would be “expensive,” but we don’t see it as such because it grows by itself (I agree a rainforest is priceless). Things are only expensive if they occur as part of an economy, where some sentient being has a choice to produce or not, has to be involved in the process consciously, and has limited resources.
Once there is autonomous self-replicating hardware able to tap off-world resources, the construction of interstellar probes is not something anyone pays for. It is simply one of the many products churned out the way living creatures churn out new cells willy-nilly, many of which have a very short existence and are reclaimed. There are many human reasons why it might not ever happen, including the “expense” of getting to that point, but it would be a natural outgrowth of engineering within the solar system, which would be an outgrowth of increasing demand. (None of which is inevitable–maybe our appetites have a limit after all–nor is our survival inevitable or even likely as far as I know.)
And yes, it takes time, and yes, the motivations are unclear. But again, if I think it sounds interesting to travel to another star system, and I know there are other people like me, the most reasonable assumption is that some sentient bots would also find the idea interesting. The idea that they would be “recruited” is silly, and assumes biological humans are in charge.
I think something sentient is around now in many other parts of the universe (if not our own galaxy) and will be in the future no matter what we do. So I agree, but I think what you mean is that his happiness hinges on that sentience being connected to earth somehow.
If the future sentience does not include our progeny (biological or otherwise) that’s not necessarily a bad thing, and probably a sign that the rest of the universe is better off without us. I have to admit it took years for me to come around to that view, but I now think it’s a pathology to believe that humans need to leave their special stamp on the universe. Subjectively, individual humans want to survive and want to be happy during their lives. Admittedly, it does make me happy to fantasize about future space travelers from earth, but that does not turn it into an imperative of any kind.
I don’t even like Carl Sagan’s formulation “We are a way for the universe to know itself.” Who says the universe wants or needs to know itself? Consciousness is of interest only to the conscious entities themselves.
consciousness razor says
Heh. Is that all you need? Why didn’t you just say so?
But seriously, that’s way easier said than done, and I figured you would understand what I meant by “expensive.”
Okay, but “in the future” (for some amount of time) doesn’t mean “forever.” I think some find it depressing or even frightening to think that there may ever be a time when there isn’t anybody anymore. I don’t think less of them for it, but honestly, they have a lot of other things to be depressed about, which are a lot more immediate. And we could actually do something about those things.
Just as a followup, I admit I have a very different outlook from what I had in my teens or even my 30s when it comes to pronouncements that “we will one day explore the stars.”
The fact is that human beings do not form a hive mind, let alone a hive mind that somehow spans centuries, so what is this “we” supposed to mean? Somebody out there may be “exploring the stars” right now. Somebody may do it in the future, maybe even somebody “from earth,” but that doesn’t mean I get to say “we.”
For example, I could probably get away with a statement like “We now have a proof of Fermat’s Last Theorem.” Andrew Wiles presented a proof of a conjecture that had been elusive for centuries (by human beings here on planet earth). It was peer reviewed by other mathematicians and generally accepted as correct by 1995.
Well, am I part of “we”? In what sense do I “have” the proof? If I download the PDF and print it, I have a copy. Now, suppose I try to read it. Without the foundational mathematics, I’m really no better off than I was in 1993, believing it was a plausible-sounding conjecture that might very well defy proof indefinitely. There is no way I will ever understand it (barring a sudden decision to spend years learning what I would need to know).
It also seems to me rather likely that there are other sentient beings in the universe (but maybe not the galaxy; I don’t have the numbers to fill in), that most past a certain point would have discovered the Pythagorean theorem, and at least a few would wonder about Fermat’s silly generalization, and some of them would develop the requisite math to resolve it. So I’m still really not that much better off. If I trust the peer review, I now feel confident that the claim is correct, so that’s something. But in a cosmic sense, nothing very big happened.
Something really big happened for Andrew Wiles, needless to say. I am not saying the proof is insignificant, just the “we”. It’s like “my team” won. Well, good for you? Did you throw the winning pass? (Uh, no. I fell asleep on the couch and found out the next morning.)
My point is that first off, I actually like to imagine sentient computers and interstellar probes. I think they sound very cool and are well within the realm of possibility. However, the question of whether “future generations” from earth (biological or otherwise) are involved in any of this becomes increasingly irrelevant. It does make me happy to think it could happen. It is not a moral imperative.
I believe, for example, that it is a moral imperative to help actual humans who are suffering now and to work for a better world, both in terms of human rights and the environment. I do not believe it is a moral imperative to create some kind of “failsafe” to ensure the existence of earth’s progeny, like we have in so many science fiction stories… whether it is an underground city or a probe sent off with human DNA or whatever.
I mean, I’d probably be in favor of those things too if it came to it, but the moral imperative is not to screw up that badly. Seriously, if we have reached the point where the best we can do is send a self-replicating probe to construct an ersatz earth around another star, I think it would be safe to say that nobody else in the universe is going to miss us, and we won’t be around to miss ourselves.
Well, it’s a given to me that there is not much point in “traveling to other stars,” given the abundance of resources in the solar system, so it does seem like an obvious priority to me to get the autonomous manufacturing going first. I can’t even really see the point of sending out probes, given the timelines involved.
With AI, the other direction of travel is inward: assuming there are far more compact forms of sentience than the human brain, we might not need to push around all this bulk material anyway.
I honestly doubt there is much point at all to “seeding” the galaxy with biological humans and I don’t see it as all that interesting. Maybe some super-intelligent AI will pick it up as a hobby project and “recruit” humans to donate their DNA (or just steal it).
@PaulBC, having been on every continent save Antarctica, I can say that social memories can be strong. That is something literally unknown on this continent.
Still, let’s look at reality. The first signal that could escape our atmosphere was Hitler’s address at the Olympics. Since then, we’ve shouted at the universe simply to get a signal into the local populace’s living rooms. Then, as cable and fiber took over mass carriage of signals, we went silent again.
So much for detection!
wzrd@32, “as cable and fiber took over mass carriage of signals, went silent again.
So much for detection!”
Your insight closely parallels the reasoning in Stanislaw Lem’s “Fiasco” (1985).
PaulBC@31 “with AI, the other direction of travel is inward,”
This possibility was considered independently by Feynman in “There’s Plenty of Room at the Bottom” and by Lem in “Summa Technologiae”.
consciousness razor @25.
I am sorry if I misunderstood you. Popular culture is so full of “AIs will kill us” that I tend to assume people have this attitude as a default.
Re. having a heritage of sorts. Objectively this is not necessary, just as it is not necessary to go on living. Most of us tend to be motivated by habit and instinct while checking out early is reserved for the depressed or terminally ill, but there is no objective purpose behind either choice.
As an aside, the late SF author Philip José Farmer always considered saving the environment more important than space travel. I hope his vision will triumph.
P. Z. Myers, what do you think of the idea of von Neumann machines? Personally I think the whole notion, as a way of arguing that ET can’t exist because von Neumann machines are not here, is really absurd. It makes a huge number of assumptions that are dubious beyond belief, one being that ET must expand, like European colonialists.
I also dislike the notion that ET, if he/she/it exists out there, must be technological. Why? ET could be animals, like Earth’s whales. Also, since some species on Earth have entered periods of stasis with little apparent change, why not an ET that voluntarily sticks to one particular level of technology for millions upon millions of years? I also think the barriers to travel between the stars are still seriously underestimated.
I think the issue is more that any ET we are likely to encounter would have to be technological. Certainly there could be many planets with no technological species at all. Could they be highly intelligent with rich cultures, but devote themselves to forms of art (e.g. music and storytelling) that do not require tools? Sure. Though I do think that once the opportunity exists to manipulate your environment, it creates such an advantage that it is hard to see how this situation could last indefinitely (not to put a precise time limit on it, other than “finite”).
Could there be natural non-technological space-faring creatures? Again, it’s been considered in science fiction, and it’s not completely impossible. It seems like a stretch to me, but it’s an interesting idea.
I agree that the naive “Why aren’t they here?” argument makes a lot of unfounded assumptions. Wikipedia lists 23 possible resolutions. https://en.wikipedia.org/wiki/Fermi_paradox