Neuralink and the delusional world of Muskians


I know, I’m getting a reputation as that guy who hates Elon Musk (I don’t, I hate hype), but his latest is just too much bullshit. He has bought a company called Neuralink, which has the goal of creating brain-machine interfaces (BMIs). OK so far. These interfaces are cool, interesting, and promising, and I’m all for more research in this field. But Musk gets involved, and suddenly his weird transhumanist-wannabe fanboys start hyperventilating. I double-dog dare you to read this puff piece, Neuralink and the Brain’s Magical Future. It begins roughly here, with the claim that Musk is going to build a Wizard Hat to make everyone super-smart:

Not only is Elon’s new venture—Neuralink—the same type of deal, but six weeks after first learning about the company, I’m convinced that it somehow manages to eclipse Tesla and SpaceX in both the boldness of its engineering undertaking and the grandeur of its mission. The other two companies aim to redefine what future humans will do—Neuralink wants to redefine what future humans will be.

The mind-bending bigness of Neuralink’s mission, combined with the labyrinth of impossible complexity that is the human brain, made this the hardest set of concepts yet to fully wrap my head around—but it also made it the most exhilarating when, with enough time spent zoomed on both ends, it all finally clicked. I feel like I took a time machine to the future, and I’m here to tell you that it’s even weirder than we expect.

But before I can bring you in the time machine to show you what I found, we need to get in our zoom machine—because as I learned the hard way, Elon’s wizard hat plans cannot be properly understood until your head’s in the right place.

I dared you to read it, because I’ll be surprised if anyone can plow through it all: it goes on for almost 40,000 words (I know, I pulled it into a text editor and confirmed it), and that doesn’t count all the crappy little cartoons scattered throughout it. When the author says you cannot properly understand it without putting your head in the right place, he means you have to start with sponges and be led step by step through a triumphalist version of 600 million years of evolutionary history, which is all about a progressive increase in the complexity of brain circuitry. It’s an extremely naive and reductionist perspective on neuroscience and intelligence: it presumes that all you have to do to make brains better is make them bigger and faster, that computers extend the “bigger” part but are limited by the speed of their interfaces, and that therefore all we have to do is improve the bandwidth and we’ll be able to battle the AIs that Musk thinks will someday threaten to rule the world.

All the verbiage is a gigantic distraction. It’s almost entirely irrelevant to the argument, which I just nailed down for you in a single sentence…without bogging you down in a hypothetical history of flatworms and a lot of simplistic neuroscience. The author summarizes Elon Musk’s glorious plan in yet another crude cartoon:

It is accompanied by much grandiloquent noise and promises of planetary revolutions, but what needs to be asked is “How much of this is real?” The answer is…pretty much none of it. We are currently in the little blue ball at the lower left labeled “starting point”, and Musk has bought a company that is doing tentative, exploratory research on building BMIs (I guess that this whole field is new enough that they are all, by default, “cutting edge”). Everything else in the diagram is complete fantasy. Elon Musk has bought a company and is cunningly trying to inflate its value by drowning the curious in glurge and techno-mysticism and by making shit up, which, because he has this mystique among young male engineers, will probably succeed in making him more money and fame without his actually doing anything in the top two thirds of that cartoon.

I do rather like how the third step is “BREAKTHROUGHS in bandwidth and implementation”. You could replace it with “And then a miracle occurs…”, and it would be just as meaningful.

Let’s add a little more reality here: Musk has a BS in physics and economics, and started a Ph.D. in engineering, which he dropped out of. He has no education at all in biology or neuroscience.

Another shot of reality: he’s buying this company in collaboration with Peter Thiel’s venture capital company. You remember Thiel, right? Wants to prolong the life of old rich people by transfusing them with the blood of the young? Libertarian acolyte of Ayn Rand who is now advising Trump on policy? If you think this is a recipe for a post-Singularity paradise, looking at the people backing it ought to tell you otherwise.

So why are these filthy rich people getting involved in this nonsense? Let’s ask Elon.

Fear and ignorance, like always.

They’ve imagined a huge, shadowy existential risk which does not exist yet — you might as well drive your decisions by the possible threat of invasion by Mole People from Alpha Centauri (oh, wait…they also fear aliens). They don’t know how AIs will develop or what they’ll do — nobody does — and they lack the competencies needed to guide the research or assess any risks, but they’ve got a plan for generating all the benefits. These guys are as terrifying to me as the Religious Right, and for all the same reasons.

They have fervent worshippers who will vomit up 40,000 words based on inspiration and wishful thinking, and then wallow about in the mess. It’s possibly the worst science writing I’ve encountered yet, and I’ve read a lot; but still, take a look at all the commenters who want it to be true, and who regard grade-school (and often incorrect) summaries of how brains evolved as informative.

Comments

  1. doubter says

    Wow. I probably own the same set of crumbling cyberpunk paperbacks that Musk does, but I always assumed they were fiction.

    I just hope that his excursions into neuro flapdoodle don’t interfere with the genuinely good stuff he’s doing (the cars and the battery/solar roof combo).

  2. stevewatson says

    Fact is, we do face a serious existential risk: resource depletion and environmental degradation (of which climate change is only one, albeit arguably the biggest, piece) could wipe out nine-tenths of us by famine, war, and general mayhem, and leave the survivors scratching for existence in a landscape out of every post-apocalypse movie ever made. And Musk’s batteries may be among the things that help us avoid that fate. AIs run amok, however, are way down the list of stuff worth worrying about.

  3. davidnangle says

    Imagine if eternal life were to be discovered in this country, and it wasn’t for everyone, but just the rich.

    Just imagine that happening right now. And think about who would live forever.

    Yeah.

  4. slithey tove (twas brillig (stevem)) says

    off topic (sideways)
    I want that neural interface to a GPS SatNav device. Hear me, Garmin? Why look at one’s phone to see how to get from A to B and how long it will take? This way one can just start thinking about it and >Bingo!< the way is known, with a reasonable estimate of the ETA.
    I got my money here waiting for it. hear me? I’m waiaiaiaiaiaiaiaiaitinggggggg…

  5. davidnangle says

    slithey, discard those meat-person notions of “travel” and “places.” Discard them NOW.

  6. gorobei says

    Oh great, another layer in the hi-tech, transhumanist, infinite-growth Ponzi scheme:

    1. Big sunk costs in cars that might turn a profit one day. Boutique shop is worth more than GM, because: growth!
    2. Big investment in batteries that might turn a profit one day. Finances unclear: Nevada State Treasurer Dan Schwartz calls it a “Ponzi scheme”
    3. Hyperloop: a vacuum tube roller coaster with all the benefits of a plane and the cost of a train. Just keeping the passengers under 4G will require hugely cheaper bridge and tunnel building technology. If you had either one of those, you would have a $1B product without even needing to build the damn tube.
    4. SpaceX: Thunderbirds are go! We’ll just increase Isp from 380 to 420, have parts go from +1000 degrees to absolute zero repeatedly, but be able to reuse them in 48 hours. Also, send people to Mars, so this is going to be 99.44% safe.

  7. brett says

    @4 Claire Simpson

    I was going to say myself that this feels like Musk’s latest distraction from the labor unrest at Tesla.

  8. Rich Woods says

    @gorobei #11:

    have parts go from +1000 degrees to absolute zero repeatedly

    I don’t think the rockets are up there long enough to radiate away much heat before re-entry. They’d need to be in an orbit which kept them in Earth’s shadow for them to even drop below -70C.

  9. gorobei says

    @richwoods #14:

    I admit to a bit of exaggeration on the lower bound, but the problem is not the skin temperature, it’s the engine: really hot bits with ablative cooling, radiative cooling, and cooling from preheating the liquid oxygen. It’s doable to build a reusable engine that doesn’t need a tear-down every 3 flights, but Musk’s numbers for combined efficiency/cost/reliability are just insanely optimistic.

    It’s pretty easy to launch one rocket every six months. Cycling one rocket engine in a static test stand 10 times in 20 days is a good way to see where theory and practice start to diverge.

  10. emergence says

    I hate how cool, interesting, plausible technologies like brain-machine interfaces get a bad rap because some of the people pushing for them have wild, implausible ideas about what they’ll be able to do. The same goes for a lot of other technology that gets misrepresented by hyperbolic futurists.

  11. Rich Woods says

    @gorobei #15:

    but Musk’s numbers for combined efficiency/cost/reliability are just insanely optimistic.

    I wouldn’t disagree with that. He’s bound to overstate things before reality creeps in and refines his numbers for him. There are far too many extremely wealthy people around at the moment who are making various technology claims based upon a mix of dangerous self-belief and calculated sales-spiel walkback. They’re probably better than a lottery ticket investment, but not always by much.

  12. emergence says

    gorobei @11

    However much Musk is mishandling those first two projects, making cars that run on something other than gasoline and being able to store energy collected using photovoltaics are important to stopping our reliance on fossil fuels. I just hope that Musk doesn’t end up making the technology look bad by managing his projects poorly.

  13. chrislawson says

    gorobei: Tesla has some very dodgy financials, but “Ponzi scheme” has a very specific meaning, and I don’t believe anyone has established that it applies to Tesla. Unfortunately “Ponzi” has become the go-to word for many finance writers whenever a scheme raises suspicions.

    More to the point, that quote from the Nevada Treasurer was directed at Jia Yueting, whose Chinese-backed venture hopes to compete with Tesla. He was not talking about Tesla itself.

  14. militantagnostic says

    Elon’s wizard hat plans cannot be properly understood until your head’s in the right place.

    I assume that “right place” for your head would be up your ass.

  15. KG says

    I need a brain-machine interface like I need a hole in the head!

    On top of being both a major barrier to entry and a major safety issue, invasive brain surgery is expensive and in limited supply. Elon talked about an eventual BMI-implantation process that could be automated: “The machine to accomplish this would need to be something like Lasik, an automated process—because otherwise you just get constrained by the limited number of neural surgeons, and the costs are very high. You’d need a Lasik-like machine ultimately to be able to do this at scale.”

    Making BMIs high-bandwidth alone would be a huge deal, as would developing a way to non-invasively implant devices. But doing both would start a revolution.

    How the fuck can you non-invasively implant a device in the brain? Implanting a device in the brain is invasive. Sometimes that’s worth the risk for medical reasons, and those occasions will likely become more frequent, but the risk is inherent.

    A whole-brain interface gives your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone who has a similar interface in their head.

    And – completely incidentally – gives Elon Musk and the NSA the ability to monitor and control your thoughts!

    I asked Elon a question that pops into everyone’s mind when they first hear about thought communication:

    “So, um, will everyone be able to know what I’m thinking?”

    He assured me they would not. “People won’t be able to read your thoughts—you would have to will it. If you don’t will it, it doesn’t happen. Just like if you don’t will your mouth to talk, it doesn’t talk.” Phew.

    Naturally, Elon Musk doesn’t want you thinking bad thoughts that might get in the way of his projects. For now, he is unfortunately limited to plausible misdirection and the use of pliant numpties in the media to try to ensure that you don’t think about how every advance in communication technology, whatever its benefits, has been listened in on, intercepted, bugged, hacked, etc., by corporations, criminals, and “security” services.

  16. KG says

    To be fair, Tim Urban does have something to say about possible downsides.

    The scary thing about wizard hats

    As always, when the Wizard Era rolls around, the dicks of the world will do their best to ruin everything.

    And this time, the stakes are extra high. Here are some things that could suck:

    Trolls can have an even fielder day. The troll-type personalities of the world have been having a field day ever since the internet came out. They literally can’t believe their luck. But with brain interfaces, they’ll have an even fielder day. Being more connected to each other means a lot of good things—like empathy going up as a result of more exposure to all kinds of people—but it also means a lot of bad things. Just like the internet. Bad guys will have more opportunity to spread hate or build hateful coalitions. The internet has been a godsend for ISIS, and a brain-connected world would be an even more helpful recruiting tool.

    Computers crash. And they have bugs. And normally that’s not the end of the world, because you can try restarting and if it’s really being a piece of shit, you can just get a new one. You can’t get a new head. There will have to be a way way higher number of precautions taken here.

    Computers can be hacked. Except this time they have access to your thoughts, sensory input, and memories. Bad times.

    Holy shit computers can be hacked. In the last item I was thinking about bad guys using hacking to steal information from my brain. But brain interfaces can also put information in. Meaning a clever hacker might be able to change your thoughts or your vote or your identity or make you want to do something terrible you normally wouldn’t ever consider. And you wouldn’t know it ever happened. You could feel strongly about voting for a candidate and a little part of you would wonder if someone manipulated your thoughts so you’d feel that way. The darkest possible scenario would be an ISIS-type organization actually influencing millions of people to join their cause by altering their thoughts. This is definitely the scariest paragraph in this post. Let’s get out of here.

    That’s it – out of 40,000 words. And note the misdirection again. ISIS are a nasty lot, but how does their reach, the resources at their disposal, their ability to influence what laws are passed, compare to that of Elon Musk – suck-up to Donald Trump – or the NSA?

  17. gorobei says

    chrislawson: thanks for the correction. I should have drilled into the actual article rather than make assumptions based on the Google result.

  18. cartomancer says

    Thing is, we already have a technology that could significantly improve the intelligence of the human species. It’s a non-invasive neural reconfiguration technique called “education”, and currently it is woefully under-used. Perhaps Mr. Musk would be better served funding more of that.

    When good quality education is freely available to everyone on the planet, then we can start to think about drilling holes in people and stuffing their brains with wires. Or, more likely, since we’ll all be so much more educated, we’ll give that one a miss.

  19. madtom1999 says

    This will not end well.
    Shakespeare wrote great stuff because writing was slow, which gave him time to think about what he was writing.
    The keyboard and the computer give us Twitter. When our thoughts come without thinking, we will be at war with everyone within seconds.

  20. lanir says

    This sounds like cyberpunk stuff alright. I was playing a cyberpunk-style RPG recently with some friends and it got me thinking about how the real world and the cyberpunk scenario split off and went in different directions.

    There seem to be only a rather small number of changes. Corporations in the real world prefer to be the power behind the throne rather than sit openly on the throne themselves. Corporations don’t hack things themselves, they prefer to have governments do it for them. So there is less crime and less sophisticated crime when it does happen because governments prefer to have a monopoly on sophisticated hacking. Oddly enough one could also make a reasonable case for real world privacy being less than in these dystopian cyberpunk fantasies where privacy is considered nonexistent.

  21. unclefrogy says

    until we learn how the brain thinks, fundamentally and in some detail, the idea of obtaining access to all data or all knowledge through a direct brain-machine interface will be a pipe dream.
    the likely direction such research, development, and implementation will go is brain control of prosthetic devices for mobility.
    speaking of pipe dreams, where has my bong gotten to?
    uncle frogy

  22. DanDare says

    @ cartomancer yep, education. I have always contended that it’s cheaper and more effective to have babies than to build human-like AIs.

    Regarding long life for the wealthy, there are lots of science fiction stories and fantasy ones on that topic. I like ‘Drunkard’s Walk’ by Frederik Pohl.

  23. says

    @25, cartomancer

    Thing is, we already have a technology that could significantly improve the intelligence of the human species. It’s a non-invasive neural reconfiguration technique called “education”, and currently it is woefully under-used.

    SO much this. People don’t seem to realize how much their own brains can be capable of. If only they viewed their brains the way body-builders view their bodies. Also, people tend to have a limited view of what “education” is, as if it has to be passive listening, rather than learning through thinking (with some helpful guidance on how to think) and doing.

    Speaking of lack of imagination, it was pointed out to me recently that the dangerous Artificial Intelligence Overlords might not even be recognized as such if they aren’t made out of the materials people expect them to be. For example, powerful organizations (such as corporations, or even cultures) operate on a set of rules/algorithms, so are they “robots”? Whatever the case, they are real powerful things that exist right now and surely they deserve consideration similar to the risks of bad AI. There’s also just as much promise that they can use their power for good, if we make them right.

  24. says

    So, MIRI folk, get on that. Even their research page says “For safety purposes, a mathematical equation defining general intelligence is more desirable than an impressive but poorly-understood code kludge.” Sounds good, but poorly understood, kludge-like organizational cultures are things that exist right now. That is something my website idea should be able to make somewhat transparent, but alas.