Dan Ariely’s work called into question


In my teaching work and on this blog, I have often referred favorably to the work of behavioral economist Dan Ariely who devised ingenious experiments to tease out human behaviors and motivations. (See here for the posts where I have discussed his work.) I have recommended his book Predictably Irrational which, as the title succinctly suggests, argued that while people are often irrational, their irrationality is not random. He has also given very popular TED talks.

A lot of his research dealt with the issue of honesty: what corners people are willing to cut, by how much, and how they view themselves. So I was disappointed to read that he may be the latest example of an academic who has been sloppy or worse in the way that he has conducted his research, throwing his work into doubt.

A landmark study that endorsed a simple way to curb cheating is going to be retracted nearly a decade later after a group of scientists found that it relied on faked data.

According to the 2012 paper, when people signed an honesty declaration at the beginning of a form, rather than the end, they were less likely to lie. A seemingly cheap and effective method to fight fraud, it was adopted by at least one insurance company, tested by government agencies around the world, and taught to corporate executives. It made a splash among academics, who cited it in their own research more than 400 times.

The paper also bolstered the reputations of two of its authors — Max Bazerman, a professor of business administration at Harvard Business School, and Dan Ariely, a psychologist and behavioral economist at Duke University — as leaders in the study of decision-making, irrationality, and unethical behavior. Ariely, a frequent TED Talk speaker and a Wall Street Journal advice columnist, cited the study in lectures and in his New York Times bestseller The (Honest) Truth About Dishonesty: How We Lie to Everyone — Especially Ourselves.

Years later, he and his coauthors found that follow-up experiments did not show the same reduction in dishonest behavior. But more recently, a group of outside sleuths scrutinized the original paper’s underlying data and stumbled upon a bigger problem: One of its main experiments was faked “beyond any shadow of a doubt,” three academics wrote in a post on their blog, Data Colada, on Tuesday.

The researchers who published the study all agree that its data appear to be fraudulent and have requested that the journal, the Proceedings of the National Academy of Sciences, retract it. But it’s still unclear who made up the data or why — and four of the five authors said they played no part in collecting the data for the test in question.

That leaves Ariely, who confirmed that he alone was in touch with the insurance company that ran the test with its customers and provided him with the data. But he insisted that he was innocent, implying it was the company that was responsible. “I can see why it is tempting to think that I had something to do with creating the data in a fraudulent way,” he told BuzzFeed News. “I can see why it would be tempting to jump to that conclusion, but I didn’t.”

He added, “If I knew that the data was fraudulent, I would have never posted it.”

But Ariely gave conflicting answers about the origins of the data file that was the basis for the analysis. Citing confidentiality agreements, he also declined to name the insurer that he partnered with. And he said that all his contacts at the insurer had left and that none of them remembered what happened, either.

That last paragraph looks particularly bad.

The Data Colada link that uncovered this is a good example of how much careful analysis is needed to detect irregularities. The authors strongly imply that the data Ariely reported were fabricated.

It is really important for academic researchers to be honest about their data and analyses. Academics do not routinely try to replicate other people’s work to see if there is any error or fraud. They tend to trust one another and build on one another’s work. Sloppiness or fraud throws a wrench into that process.

Comments

  1. consciousness razor says

    I don’t remember hearing about this study before. This one really has some layers to it.

    Plain, old-fashioned fake data is (finally) enough for the claims to be “called into question” and for the paper to actually be retracted…. Not when it couldn’t be replicated. Still not enough questions then.

    There’s also the idea that you can trust an insurance company. Let’s be honest: as assumptions go, that one’s awfully suspicious. And there’s the idea that people who are (allegedly) reporting their odometer readings would be a case which you can unproblematically use to extrapolate to all sorts of other situations. I mean, I think that’s pretty weird too. If you had to think of a prototypical or primordial example, I’m willing to bet that people reporting their odometer readings is not the first (or only) type of example that comes to mind. Yet people will still give it lots of weight that it really shouldn’t have and apply it to all sorts of very different cases, because it’s just so convenient to point at a study that might at least arguably be relevant somehow.

    Also, given the way they apparently did it, it’s not even like they’d know that people did actually sign at the beginning (or end) of the process, whenever the item was at the top (or bottom). Maybe that behavior is more common or maybe not…. All they’ve literally got is the location of the thing on the form. And look, I’ve filled out plenty of forms throughout my life in some other order, not always top to bottom. So, come on — have these people never filled out forms?

    It would be one thing if they were actually forcing the people to go through the process in a specific order. (Very impractical when we’re talking about reported odometer readings, but they didn’t need to pick odometer readings, did they? And of course, that may be how it was done in other situations, after people heard about the study and wanted to apply its “findings.” But not for the original study itself.)

    Just imagine a fairly normal/common scenario like this (assuming real data was collected obviously, instead of faked data):
    — car #1 is yours
    — car #2 is driven by your spouse, who may or may not be around (with their car) when you take the time to fill out your form
    — car #3 is driven by one of your children, who is attending a university in another city

    So when did you get all of those separate bits of information? And when did you write those things down on the form? And when did you sign the part about reporting the information honestly?

    Well, even in the best case, all sorts of stuff might have happened. There’s just no way to know. You gave the subjects a bunch of freedom to make a bunch of arbitrary and unknown choices, based on whatever their circumstances were like. So you shouldn’t ignore that fact later on, when you’re trying to interpret the information you gathered (or thought that somebody else had gathered for you). Of course, you wouldn’t need to be quite so lazy about it, but you’d need to do things differently, so that you know for a fact how the subjects were actually behaving. Otherwise, it sounds like it’s going to be a case of garbage-in/garbage-out.

  2. says

    Predictably Irrational which, as the title succinctly suggests, argued that while people are often irrational, their irrationality is not random

    “Surely you predicted how social pressure would cause you to detonate your career by falsifying your data?”

  3. jrkrideau says

    “Surely you predicted how social pressure would cause you to detonate your career by falsifying your data?”

    Worked for Sir Cyril Burt until he was long dead.

    There still is the assumption, at least to some extent, that the data is honest although the analysis may be wacky.

    The recent Elgasser (2020) paper and a lot of work and reporting by RetractionWatch, or some of Nick Brown’s, may be changing that attitude.
