A food science scam


Where’s the data on this sign’s effect on spelling?

Brian Wansink has a problem. First, he’s been jiggering his data until he gets a statistically significant result, which to me means that none of his conclusions are to be trusted. Then, he’s been reworking these thinly significant results into multiple papers, taking watery gruel and sliming the literature with more noise. And now he’s accumulating more retractions as his shoddy research practices are exposed.

I’m just increasingly appalled at the crap that is earning him tens of millions of dollars in research funds. It’s cartoonishly superficial. Let’s put goofy names on the food in school lunchrooms!

The most recent retraction — a rare move typically seen as a black mark on a scientist’s reputation — happened last Thursday, when JAMA Pediatrics pulled a similar study, also from 2012, titled “Can branding improve school lunches?”

Both studies claimed that children are more likely to choose fruits and vegetables when they’re jazzed up, such as when carrots are called “X-Ray Vision Carrots” and when apples have Sesame Street stickers. The underlying theory is that fun, descriptive branding will not only make an eater more aware of the food, but will “also raise one’s taste expectations,” as the scientists explained in one of the papers.

You know, I believe this actually does work — I have no doubt that creative labeling can draw the attention of kids (and adults!). But would it make a significant difference in kids’ eating habits? Don’t you suspect that there would be a bit of a backlash? Kids aren’t stupid. They’re going to see right through this game fairly quickly, and a trivial relabeling is going to have only a transient effect. And they’re paying 30,000 schools up to $2000 each to try out these labeling strategies! Is it worth it? I don’t know. And you still can’t trust Wansink’s work.

People are finding inconsistencies in the papers, statistical errors, and outright statistical abuse. What can you say about a paper that decides p=0.06 meets the criterion for significance, and, further, miscalculated the p value in the first place?
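To make that concrete, here’s a minimal sketch of the kind of check Wansink’s critics perform: recomputing a reported p-value from nothing more than the summary statistics in a paper’s table. Every number and variable name below is hypothetical; none of it comes from an actual Wansink paper.

```python
# Recompute a two-sample t-test from summary statistics (means, SDs,
# group sizes) and compare against a paper's claimed p-value.
# All numbers are invented for illustration -- they are NOT from any
# Wansink paper (whose raw data is, conveniently, "missing").
from scipy import stats

# Hypothetical summary stats: fraction of kids taking carrots
mean_branded, sd_branded, n_branded = 0.64, 0.48, 113
mean_control, sd_control, n_control = 0.49, 0.50, 109

t, p = stats.ttest_ind_from_stats(
    mean1=mean_branded, std1=sd_branded, nobs1=n_branded,
    mean2=mean_control, std2=sd_control, nobs2=n_control,
    equal_var=False,  # Welch's t-test
)
print(f"recomputed: t = {t:.2f}, p = {p:.3f}")
# If this doesn't match the reported p, either the summary statistics
# or the claimed p-value is wrong.
```

Tools such as statcheck automate this kind of recomputation across thousands of published papers.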

In a blog post, Nick Brown expressed concern about how the data had been crunched, and confusion about how exactly the experiment had worked. He noted that a bar graph looked much different in an earlier version. And, he pointed out, the scientists had said their findings could help “preliterate” children — which seemed odd, since the children in the study were ages 8 to 11.

In yet more scathing blog posts, Jordan Anaya and data scientist James Heathers pointed out mistakes and inconsistencies in the Preventive Medicine study, “Attractive names sustain increased vegetable intake in schools,” which claimed that elementary school students ate more carrots when the vegetables were dubbed “X-ray Vision Carrots.”

Worse… when those mistakes were pointed out, Wansink discovered that all the original data for those papers was ‘missing’. How convenient.

Wansink runs something called the “Food and Brand” lab. You can guess from just the name that he’s courting corporate support, and I suspect that’s a big part of the problem — this lab isn’t about science, it’s about reinforcing economic values for the benefit of its corporate collaborators.

Comments

  1. says

    In 1991 I was on a long transcontinental flight. The mom traveling with her 4-year-old son was exhausted and spent a lot of time sleeping, and her child spent a lot of time one row up, with me, which was O.K.; he was a bright and inquisitive child. The flight attendant served him a container of fruit juice. He looked at it suspiciously and asked “What’s this?” “Squished bug juice,” she replied with a brilliant smile. The boy’s countenance cleared, he smiled, and started to drink it.
    p=1.
    THE END
    Please send me money

  2. says

    When the YOBling was younger, I could get her to eat a lot of foods simply by calling them something weird/funny/gross and arranging them whimsically. That only lasts until about 7 or 8 yo in my experience.

  3. handsomemrtoad says

    RE: “JAMA Pediatrics pulled a similar study, also from 2012, titled ‘Can branding improve school lunches?’”

    The problem might be: maybe they didn’t make the branding irons hot enough.

  4. says

    Goddamned data mining is way too easy with modern statistical packages. Just try different tests until you ring the significance bell (and don’t worry if the test is invalid for the data and hypothesis you’re testing). My quantitative research professor used to rail against “push-button researchers” who insisted on results [period!] rather than valid results. Not all research papers are as obviously bogus as the one in question, where the data is “missing.” You have to look at the test applied and the rationale (if any!) for it.
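    To put a number on that, here’s a minimal simulation sketch (entirely hypothetical, not tied to any real study): generate pure noise, run a few tests and subgroup splits per dataset, and count how often at least one of them rings the significance bell.

```python
# Simulate "push-button" research: pure-noise data, several tests per
# dataset, keep whichever p-value is smallest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
hits = 0

for _ in range(n_experiments):
    a = rng.normal(size=40)  # "treatment": pure noise
    b = rng.normal(size=40)  # "control": pure noise
    pvals = [
        stats.ttest_ind(a, b).pvalue,            # parametric test
        stats.mannwhitneyu(a, b).pvalue,         # rank-based test
        stats.ttest_ind(a[:20], b[:20]).pvalue,  # convenient "subgroup"
        stats.ttest_ind(a[20:], b[20:]).pvalue,  # another "subgroup"
    ]
    if min(pvals) < 0.05:  # report the test that "worked"
        hits += 1

# With four shots per dataset, the realized false-positive rate lands
# well above the nominal 5%.
print(f"false-positive rate: {hits / n_experiments:.2f}")
```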

  5. Mark Dowd says

    “What can you say about a paper that decides p=0.06 meets the criterion for significance…”

    I would certainly like to know what you think should be said about that. I get that 0.05 is the convention, but it’s a completely arbitrary threshold. It’s not like that extra 0.01 makes the difference between “real” and “not real”.

  6. jrkrideau says

    @ 5
    I would certainly like to know what you think should be said about that? I get that 0.05 is the convention

    That’s the problem. If you are going to violate the convention, say why, and hopefully have a good explanation. Otherwise it looks more like p-hacking (NOT a good thing).

    A Bayesian would say that a Null Hypothesis Significance Test such as the above is almost never appropriate, but that is another matter — well, a different war.

    In this case, calling .06 significant is just one more red flag.