One way to be Less Wrong is to avoid faulty premises


While I was digging into the question of who this Gilbert Ling character was, I ran into lots of sources that didn’t make the final cut. Unfortunately, most of those sources were from fringe or unsavory places — I did check my collection of textbooks, too, and nowhere does he get any mention. So it’s down into the sewers after all! Like this article on Less Wrong.

The Association-Induction hypothesis formulated by Gilbert Ling is an alternate view of cell function, which suggests a distinct functional role of energy within the cell. I won’t review it in detail here, but you can find an easy to understand and comprehensive introduction to this hypothesis in the book “Cells, Gels and the Engines of Life” by Gerald H. Pollack. This idea has a long history with considerable experimental evidence, which is too extensive to review in this article.

No, it has a 70-year history centered entirely on the long-winded writings of a single crackpot. There is no experimental evidence for any of it beyond the willful distortions of one Gilbert Ling. Pollack is utterly batty, and not a credible source.

Worse still, this guy is using Ling’s theories as a starting point for discussing how we might use this information to increase IQ.

So this suggests a ‘systems biology’ approach to cognitive enhancement. It’s necessary to consider how metabolism is regulated, and what substrates it requires. To raise intelligence in a safe and effective way, all of these substrates must have increased availability to the neuron, in appropriate ratios.

I am always leery of drawing analogies between brains and computers but this approach to cognitive enhancement is very loosely analogous to over-clocking a CPU. Over-clocking requires raising both the clock rate, and the energy availability (voltage). In the case of the brain, the effective ‘clock rate’ is controlled by hormones (primarily triiodothyronine aka T3), and energy availability is provided by glucose and other nutrients.

Oh god. Nerds discussing overly simplistic analogies between brains and computers always makes me leery, too, so just stop already. Especially when your ‘over-clocking’ idea is built on a bogus model of cellular metabolism that has been known to be wrong for the entirety of its “long history”.

I know I started this by dissing the Less Wrong forum, but I will say that, to their credit, most of the commenters were tearing that article apart.

Comments

  1. Snarki, child of Loki says

    These SOOPER-JENIUZES need more energy for their overclocked brains?

    Fine. Trepanning + gasoline + ignite.
    I’m sure they’ll very quickly arrive at some new conclusions.

  2. ANB says

    Another way to “increase intelligence,” which is very doable and has almost “instant” results: provide undernourished populations with adequate, nutritious food. Remove toxins (lead, etc.) from their environment. Reduce unneeded stressors such as constant fear of violence (notably due to ubiquitous guns) and lack of opportunity (consistent areas of high unemployment, you know where), and provide free or low-cost medical care, etc.

    That alone (and many other things besides) would increase the I.Q. of many millions (billions if instituted worldwide) of people in relatively short order.

  3. wzrd1 says

    Wow, so I was overclocking my brain when my T3 levels were astronomical? Odd; despite glucose availability, thought occurred at the same rate then as it does now, when my T3 and T4 levels are survivable.
    I’ll not even go into the hypertension and tachycardia associated with elevated T3…

    Oh, I know! Small sample size, if one ignores every other Graves’ disease patient in the world.

  4. says

    If we actually manage to increase the “clock rate” of our neurons, will that really make us smarter? Or would it just make everyone think, talk, work and do stupid things faster?

    This all looks like non-sequiturs and bad analogies all the way down.

  5. birgerjohansson says

    Next stop: reviving the Nazi-era “Abwehrfermente” theory conceived by Emil Abderhalden, a crook who often stole ideas from students in 1920s and 1930s Germany.

  6. chrislawson says

    I’m not opposed to the broad analogy of the brain as a computer, since they are both essentially information processors. But the analogy only works superficially. As soon as you start talking about overclocking, a process that applies specifically to the design features of modern silicon chips, and pretending you can apply that to neurons…well, it’s about as sensible as noting that there are analogies between birds and planes and concluding that we should make jets with cloacas. (And no, T4 regulation is not even remotely analogous to CPU frequency management; this is downright stupid of them to say, and it would only make sense to someone desperate to believe simplistic scams.)

    Besides, overclocking is hardly an unvarnished benefit even when applied in its original setting, computer technology. I sure as hell don’t want my brain to suffer overheating, processing errors, or shortened lifespan.

  7. simplicio says

    Is there a more effective way to increase my IQ that is unsafe? Instead of overclocking my brain, could I use quantum clocking? What is Deepak Chopra’s opinion?

  8. birgerjohansson says

    chrislawson @ 6
    “Jets with cloacas” is the propulsive principle for at least one animal in Terry Pratchett’s Discworld novel Guards! Guards! So as long as you live on a flat world where magic works, Pollack will be at home.

  9. F.O. says

    I’m really… “meh” about Less Wrong.

    It’s the stupidest group of smart people I’ve ever seen; it regularly attracts crackpot ideas, genetic essentialism, “race realists”, singularity hype, shallow morals, and money worship, and in general it considers empirical verification optional.
    Their 80,000 Hours offshoot are outright climate deniers (just check their priorities page; it’s all AI, AI, AI and ZERO climate, WTF!?)

  10. call me mark says

    Isn’t “Less Wrong” where the idea of Roko’s Basilisk originated?

    I propose renaming the site to “More Wrong”.

  11. felixmagister says

    The Less Wrong school of thought reminds me of the Ancient Greek philosophers who decided that you could find the truth by pure reason, that it was foolish to try to find out what reality was like by actually looking at it, and who left Western thought in a rut it took a thousand years to properly get out of. Cleverness is nice, but it’s no substitute for accurate data.

  12. gjm11 says

    F.O., the 80,000 Hours priorities page begins with a list of 8 top-priority causes. #7 is climate change. There’s a link from there to a page about climate change, which says e.g. “Climate change is going to significantly and negatively impact the world” and “people are right to be angry that too little is being done”. I think terms like “outright climate deniers” should be reserved for, y’know, people who outright deny that climate change is a problem, rather than for people who say it’s a serious problem that more should be done about, but that someone looking to choose a career to optimize benefit to the world can probably find even higher-expected-impact things to work on.

    call-me-mark, “Roko’s basilisk” did originate on Less Wrong. When Roko posted his ideas there, his post got a lot of downvotes and a lot of disagreement, and the nearest thing to a leader the community has threw a hissy fit, in which he said (I paraphrase) “if anything like this were right then saying so would be incredibly stupid”, which I think is in fact correct, and for which he has quite reasonably been laughed at ever since. Most of the things commonly believed about the Roko’s Basilisk story are false: so far as I can tell, it is not true that Less Wrong participants in general ever found Roko’s speculation credible, it is not true that it gave anyone mental breakdowns or that Eliezer Yudkowsky claimed it had, and it is not true that anyone ever used the idea to support giving money for “friendly AI” work.

    felixmagister, it might well be true that people on Less Wrong are prone to trying to find out what-should-be-empirical truths by pure reason, but to whatever extent there’s a Less Wrong school of thought, it definitely comes down firmly against that. See e.g. https://www.lesswrong.com/s/6xgy8XYEisLk3tCjH/p/a7n8GdKiAZRX86T5A, which I’m pretty sure isn’t the clearest rejection of Ancient-Greek-style pure reason in the so-called “Sequences”, but it’ll do.

    (Dis)claimer: I am a fairly frequent Less Wrong participant. I am not an alt-righter or a billionaire fetishist, and I think it is very unlikely that we will all be killed by a superintelligent artificial intelligence in the foreseeable future. I agree that sometimes some people there say stupid things, but I don’t think the community is (or ever was) anywhere near as bad as some people like to portray it as being.

  13. John Morales says

    Mmm. gjm11, I had a look through the site, and to me it seems like fluff.

    (Also, it’s not the best name, since that which is less wrong is nonetheless wrong.)

  14. gjm11 says

    The proportion of fluff (and worse-than-fluff) varies, and so of course do people’s tastes :-). The name is deliberate, though: fallibility is not optional, so the most anyone can realistically aim for is to be less wrong. Dunno whether it’s actually a good name; for what little it’s worth, I quite like it. (Except that it is apt to be interpreted as “we are Less Wrong than you” rather than the actually-intended “let us try to make ourselves Less Wrong”.)

  15. John Morales says

    I put it to you that the most anyone can realistically aim for is to be least wrong, rather than merely less wrong. :)

    Anyway, do you care to share your opinion of the merit of the featured article?

  16. gjm11 says

    The “Thermodynamics of Intelligence and Cognitive Enhancement” one? Looks like junk to me. At least, I’ve seen nothing to make me disagree with PZ (and with e.g. the LW commenter called CellBioGuy) that Gilbert Ling doesn’t know what he’s talking about.

    As for the higher-level handwavy suggestion that maybe our brains are “designed” to work with an energy supply more limited than it need be these days, and that we could be cleverer if that weren’t so? I guess that could be true (though I am very much not an expert) but it seems like the poster is hoping that we could exploit this by designing suitable drugs, and I think that’s probably hopeless: I would expect the most plausible version of this story to be more like “it’s possible to make smarter brains that use more energy, but the way they’d do that involves being bigger, with all that that implies for other aspects of human anatomy, and if you want to get there you need to provide the human race with plentiful energy and an intelligence-rewarding environment for, optimistically, 100k years or so and let evolution do its thing”. Maybe with sufficiently intense selection you could do it faster (something something The Beak Of The Finch something), but applying intense selection pressure to human populations tends to be frowned on by institutional review boards. And that’s the optimistic version where it turns out that you can basically just scale human brains up a bit and get cleverer humans, which is by no means a given.