Experts try to be clever


One terrific chapter in Thinking Fast and Slow is 21, Intuitions vs Formulas. There Kahneman tells us a brutal, unsettling truth, which is that for certain purposes in certain situations, algorithms do better than expert judgement. Thick, detailed, rich experiential knowledge does worse than a boring quick little formula. A psychologist called Paul Meehl made this claim more than 50 years ago, and the research he inspired is still pouring out. Clinical psychologists don’t like the claim! (You can see the insurance people licking their chops, although Kahneman hasn’t mentioned that as far as I’ve read.)

This is about prediction, and for a moment I consoled myself with “oh well just for prediction…” but really, that won’t do. Prediction is the whole point of having a theory of mind, isn’t it.

It’s what people mean when they complain about “scientism,” I think – they’re resisting the horrible idea that a few questions could get a better handle on them than years of experience and conversation and deep thinking. We all want to be Isabel Archer, not a handful of ticked boxes.

Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces validity. Simple combinations of features are better. [p 224]

Isn’t that awful? That’s awful! All our beautiful complicatedness, tossed out the window because simple combinations of features are better. We’re not so complicated! The final scene in the movie, when the cops have shot us down, is no longer “I ain’t so tough,” it’s “I ain’t so complicated.”

Comments

  1. says

    In some cases that might well be what “scientism” refers to. Another meaning/use is referring to the conviction that the results of science at this time are already the unbiased, correct answers; something we all regularly encounter when talking about minority perspectives with The True Skeptics, and something that in the past has done harm to said minorities (and may well still be doing so now, but it’s easier to see this in hindsight than in the present).

  2. says

    Other than that… I selfishly wish people working in college admission would rely more on “algorithms” than on the self-promoting essay and the equally self-promoting interview. I suck at self-promotion, but I look like such an awesome potential student on paper! :-p

  3. Bjarte Foshaug says

    Isn’t that awful? That’s awful! All our beautiful complicatedness, tossed out the window because simple combinations of features are better. We’re not so complicated! The final scene in the movie, when the cops have shot us down, is no longer “I ain’t so tough,” it’s “I ain’t so complicated.”

    I think it’s mainly that simple models that only consider a few statistically reliable indicators provide fewer opportunities to get something wrong than complex models involving multiple variables that can interact in multiple ways. Even if every single assumption in a complex model can be defended as reasonable, the probability that something is wrong quickly adds up to make the whole model virtually useless.
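
    To put rough, made-up numbers on that compounding: even if every assumption is individually 90% likely to hold, a model that stacks enough of them together is more likely wrong than right. A minimal sketch (the 90% figure is purely illustrative, and it assumes the assumptions fail independently):

    ```python
    # Rough illustration of error compounding: the chance that *every*
    # assumption in a model holds shrinks fast as assumptions pile up.

    def chance_all_correct(p_each: float, n_assumptions: int) -> float:
        """Probability that every assumption holds, assuming independence."""
        return p_each ** n_assumptions

    for n in (1, 3, 5, 10, 20):
        print(f"{n:2d} assumptions at 90% each -> "
              f"{chance_all_correct(0.9, n):.0%} chance the whole model is sound")

    # Approximate output: 90%, 73%, 59%, 35%, 12%
    ```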

  4. Andrew B. says

    Scientism is just science that deflates a certain type of celebrated hubris. Nobody cares if scientists learn something new about pulsars or volcanoes, but if they learn that your funny feelings are not “a different way of knowing” but instead just a bad way of thinking, then “SCIENTISM!”

  5. Jason Dick says

    The problem with complexity is what’s sometimes known as overfitting: if you try to use too many parameters, chances are you’ll find some relationship, even if it’s just a chance relationship. This, to me, seems to be a large part of the driver behind a lot of the bogus evolutionary psychology research: when you look at lots of variables, and slice your data in lots of ways, chances are good that you’ll find something.

    We, as humans, naturally do this kind of thing with alarming regularity. We live in a complex world, and we see patterns all around us even when they are just chance arrangements that have no intrinsic pattern whatsoever. So it doesn’t really surprise me that even well-trained professionals will sometimes (or even frequently) fool themselves by drawing inferences that aren’t actually correct, especially in medicine where it’s so incredibly easy to get things wrong.
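
    A minimal sketch of that “slice the data enough ways and something turns up” effect, using nothing but random numbers (the sample sizes and cutoff below are invented purely for illustration):

    ```python
    # Pure noise: a random "outcome" and 200 random "traits" for 50 subjects.
    # By chance alone, a handful of traits will still look "significant".
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_variables = 50, 200

    outcome = rng.normal(size=n_subjects)
    predictors = rng.normal(size=(n_subjects, n_variables))

    # Correlation of each meaningless predictor with the meaningless outcome.
    correlations = np.array([
        np.corrcoef(predictors[:, j], outcome)[0, 1] for j in range(n_variables)
    ])

    # With n = 50, |r| > 0.28 is roughly the conventional p < 0.05 cutoff,
    # so about 5% of the 200 variables (~10) should clear it by luck alone.
    print(f"spuriously 'significant': {np.sum(np.abs(correlations) > 0.28)} of {n_variables}")
    ```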

  6. sc_770d159609e0f8deaa72849e3731a29d says

    There was a survey of doctors some years ago, after it was found that algorithms were better than clinical experience as a guide to diagnosis. The doctors surveyed all agreed that this was true in general, but that their own knowledge, acquired through years of experience, far outweighed the mere application of routine forms.

  7. Lyanna says

    Bjarte Foshaug is right, but also, “certain purposes in certain situations” is a very limited claim. What purposes? What situations?

    When I look at the world, I see simple formulas failing and expert judgments that are probably correct being ignored, because they’re too complex. This happens most strikingly in the realms of climate change and of economics.

  8. says

    Oh well that was my messy way of summarizing. Kahneman is clear about it.

    The next chapter is about his chief opponent on this subject, and how they eventually agreed to collaborate on figuring out where the boundaries are. The intuitions of experts are better for some things.

  9. Tim Harris says

    Yes, Kahneman is far, far better than Haidt: for one thing, he doesn’t have a sentimental moralism driving him…

  10. says

    Indeed. And Haidt’s moralism is not just sentimental, it’s peculiarly blind. The guys in India who had me over for dinner were really nice guys, therefore it’s fine that the women in the back room were in the back room.

    Plus he has that cloyingly folksy style that is apparently considered obligatory for crossover books, and Kahneman doesn’t. Not a trace of it.

  11. Emu Sam says

    That and the number of vehicular deaths are why Google cars should become available as quickly as possible. Is there an explanation for why people tend not to trust the algorithm? Why do so many airplane crashes happen because pilots decided to override autopilot, for example?

  12. chrislawson says

    Speaking as a medical doctor, I obviously have a vested interest in believing myself to be better than an algorithm. But there is increasing evidence that algorithms are getting smarter and smarter at diagnosis. Right now, computerised Pap smear programs are marginally better than human pathologists. It’s only a small margin, but we have the odd situation in Australia that human pathologists are covered by Medicare but computerised screening, even though it is about the same price, must be paid for privately. I think the base problem is that people still *want* to think there is a human in control of the process.

  13. chrislawson says

    EmuSam,

    I’m not an air safety expert by any means, but I believe that autopilot is generally only for the routine long-haul part of a flight and is turned off for take-off and landing. It’s highly unlikely that a pilot turning off autopilot will be putting the plane at risk, because it’s just not used at any risky time. There is a very strong argument IMHO that autopilot should run the whole flight, with a human pilot only as backup in case of system failure. I say this because I saw a paper that compared the performance of human pilots vs. autopilots in simulators running emergency scenarios, and the autopilots did consistently better at landing the plane safely. That was several years ago. I imagine the algorithms have become even smarter and faster since.

  14. chrislawson says

    Lyanna,

    I don’t think it’s simplicity vs. complexity at the root of opposition to global warming and certain economic principles, it’s sheer self-interest. There is nothing particularly complex about the theory of global warming, all the complexity is in specific predictions of future temperature changes and sea level rises…which is really no more than the difference between predicting the outcome of a specific dice roll (hard to know) vs. predicting the outcome of a week in Vegas (in the long run, the house always wins). Similarly with economics — it’s not simplicity that makes free-market theories proliferate, it’s greed. The very same people who worship simplistic free-market economics are often using humungously complex models to run their share-trading or their futures analysis. They’re quite happy to embrace complexity if they think they can turn a profit on it. (And frankly, the idea of using a few simple rules to mollify the worst excesses of a free market is not exactly hugely complicated either.)

  15. chrislawson says

    Bjarte,

    My father worked on a huge economic modelling program for the Victorian government in the 1980s. After a few years’ experience, he described the process to me as (i) the model gets almost all the predictions wrong, (ii) the economists go back to the original variables and weightings and tweak them so that the model outputs the correct data for the previous quarter, (iii) the new improved model is run to predict the next quarter’s economy, (i) the model gets almost all the predictions wrong…on infinite loop. (Actually, the loop turned out to be finite after all when the government pulled funding.)

    Still, I think we’re making a false distinction between simple and complex models. A model should be as simple as possible and no simpler (to paraphrase Einstein). So, sure, the more variables and relationships you build into a model, the more elements there are to get wrong, but it’s equally problematic to use a simple model when the phenomenon being studied is complex (with the exception of isolating one or two reliable predictive algorithms and accepting uncertainty in the rest of the phenomenon).
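
    For what it’s worth, that tweak-until-it-matches-last-quarter loop is the overfitting trap Jason Dick describes above, in miniature. A toy sketch with entirely invented data, just to show the shape of the failure:

    ```python
    # A flexible model can be tuned to match the past almost perfectly and
    # still forecast the next period badly; a simpler fit generalises better.
    import numpy as np

    rng = np.random.default_rng(1)
    quarters = np.arange(12, dtype=float)
    observed = 0.5 * quarters + rng.normal(scale=2.0, size=quarters.size)  # trend + noise

    simple = np.polyfit(quarters, observed, deg=1)    # two parameters
    flexible = np.polyfit(quarters, observed, deg=9)  # ten parameters, fits the noise too

    next_quarter = 12.0
    print("simple model's forecast:  ", np.polyval(simple, next_quarter))
    print("flexible model's forecast:", np.polyval(flexible, next_quarter))
    print("underlying trend value:   ", 0.5 * next_quarter)

    # The straight line stays near the underlying trend; the degree-9 fit hugs
    # the past data points and its forecast is typically far off.
    ```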

  16. Duke Eligor says

    I think it’s pretty obvious that valid, tested algorithms are more reliable. Though that’s kind of a tautology – reliable models are reliable. Still, it’s the reason we make these short-hands for thinking in the first place: we can get the right answer without doing all the grunt work and research necessary to construct the original model. I would only call it “scientism” to assume that because we have a model or an algorithm, it must be scientific (and therefore correct). But this only seems to happen in fields like economics. Maybe when we get out of the mindset that social sciences can somehow predict the future, that little problem will clear itself up.

    But anyhow, I know why a lot of people are scared at the idea that computerised algorithms and such might do better than human beings. It’s not because they’re luddites or something, but rather that we know in a capitalist system, working humans are as disposable as any industrial object or outdated technology (sometimes more disposable). We’ve been happy enough to replace blue collar types with machines up until now, but since white collar types are facing the same spectre of obsolescence, well, that’s a little uncomfortable for some people.

  17. says

    I didn’t mean I would call it “scientism” myself – I’m allergic to the word. But because I’m allergic to it, I always hope to find reasonable or at least understandable uses of it, and/or states of mind that might explain it. It seems undeniably disconcerting to learn that a rich knowledge of someone is less useful for prediction [of some things] than a simple algorithm. Disconcerting but also interesting.
