The Monkey’s Climate Agreement

This must have seemed like an excellent idea to Trump.

I am fighting every day for the great people of this country. Therefore, in order to fulfill my solemn duty to protect America and its citizens, the United States will withdraw from the Paris Climate Accord.

(APPLAUSE)

Thank you. Thank you.

But begin negotiations to re-enter either the Paris Accord or a really entirely new transaction on terms that are fair to the United States, its businesses, its workers, its people, its taxpayers. So we’re getting out. But we will start to negotiate, and we will see if we can make a deal that’s fair. And if we can, that’s great. And if we can’t, that’s fine.

It distracts from the ongoing Russia scandal, and it’s a move which will earn favour from many Republicans. But there’s also good reason to think it won’t have the effect Trump hopes for.

For one, the USA has been very successful at watering down past climate agreements.

When aggressively lobbying to weaken the Paris accord, U.S. negotiators usually argued that anything stronger would be blocked by the Republican-controlled House and Senate. And that was probably true. But some of the weakening — particularly those measures focused on equity between rich and poor nations — was pursued mainly out of habit, because looking after U.S. corporate interests is what the United States does in international negotiations.

Whatever the reasons, the end result was an agreement that has a decent temperature target, and an excruciatingly weak and half-assed plan for reaching it.

If the US withdraws from climate talks, as seems likely despite Trump’s “renegotiation” line, the US delegation won’t be at the table. And with China now in full support of taking action, India pushing for aggressive targets, and even Canada still willing to stick with the Paris agreement, there’s no one left to step on the brakes. Future climate change agreements will be more aggressive.

They might also carry penalties for non-signing nations. There are only three countries that didn’t sign the Paris agreement: Nicaragua didn’t sign because the agreement didn’t go far enough, Syria had been diplomatically isolated and wasn’t even invited to the table, and the US refused to even submit it for ratification by Congress. Yes, the US is a major player in world financial markets, but it’s dwarfed by the output of the rest of the world. If the globe agreed to impose a carbon tax on non-signing nations, the US could do little to push back.

Even if the rest of the world doesn’t have the appetite for that route, there are more creative kinds of penalties.

Calling the President’s decision “a mistake” for the US as well as the planet, [French President] Macron urged climate change scientists, engineers and entrepreneurs to go to France to continue their work. “They will find in France a second homeland,” Mr Macron said. “I call on them,” he added. “Come and work here with us, work together on concrete solutions for our climate, our environment. I can assure you, France will not give up the fight.”

Climate change has become the one thing the international community could reach a consensus on. Pulling out of the Paris agreement was like kicking a puppy; regardless of the intent or circumstances, it’s an action the world can unite against. It makes for a convenient excuse to isolate the US or play hardball, much more so than any boorish behaviour by Trump.

It also won’t stop the US from following the Paris agreement anyway.

Representatives of American cities, states and companies are preparing to submit a plan to the United Nations pledging to meet the United States’ greenhouse gas emissions targets under the Paris climate accord, despite President Trump’s decision to withdraw from the agreement.

The unnamed group — which, so far, includes 30 mayors, three governors, more than 80 university presidents and more than 100 businesses — is negotiating with the United Nations to have its submission accepted alongside contributions to the Paris climate deal by other nations.

“We’re going to do everything America would have done if it had stayed committed,” Michael Bloomberg, the former New York City mayor who is coordinating the effort, said in an interview. […]

“The electric jolt of the last 48 hours is accelerating this process that was already underway,” said Mr. Orr, who is now dean of the School of Public Policy at the University of Maryland. “It’s not just the volume of actors that is increasing, it’s that they are starting to coordinate in a much more integral way.”

Various US states, municipalities, universities, businesses, and even the military have been working towards cutting emissions for years without waiting for the federal government to get its act in order. A national policy would be more effective, but these piecemeal efforts have substantial force behind them and look to be gaining even more.

Finally, the boost this move earns from his supporters may get cancelled out by backlash from everyone else.

It’s also possible that Trump gave a win to his base on an issue they don’t care that much about while angering the opposition on an issue they do care about. Gallup and Pew Research Center polls indicate that global warming and fighting climate change have become higher priorities for Democrats over the past year. … As we wrote earlier, if Trump’s voters view the Paris withdrawal as an economic move, he’ll likely reap some political benefit from it. If, however, it’s viewed as mostly having to do with climate change, perhaps Trump won’t see much gain with his base. Jobs, the economy and health care rate as top issues for Republicans, but climate change and the environment do not, so it’s hard to know how Trump voters would weigh the president doing something they don’t like on an issue they care a lot about (the GOP health care bill) against him doing something they do like on an issue they don’t care much about (withdrawing from Paris).

This may have looked like an easy win for Trump, but the reality could be anything from a weak victory to a solid defeat. Time will tell, as it always does.

Journal Club 1: Gender Studies

Last time, I pointed out that within the Boghossian/Lindsay kerfuffle no-one was explaining how you could falsify gender studies. As I’ve read more and more of those criticisms, I’ve noticed another omission: what the heck is in a gender studies journal? The original paper only makes sense if it closely mirrors what you’d find in a relevant journal.

So let’s abuse my academic access to pop open the cover of one such journal.

Gender & Society, the official journal of Sociologists for Women in Society, is a top-ranked journal in sociology and women’s studies and publishes less than 10% of all papers submitted to it. Articles analyze gender and gendered processes in interactions, organizations, societies, and global and transnational spaces. The journal publishes empirical articles, along with reviews of books.

They also happened to be at the top of one list of gender studies journals. I’ll go with their latest issue as of this typing, volume 31 issue 3, dated June 2017.


Daryl Bem and the Replication Crisis

I’m disappointed I don’t see more recognition of this.

If one had to choose a single moment that set off the “replication crisis” in psychology—an event that nudged the discipline into its present and anarchic state, where even textbook findings have been cast in doubt—this might be it: the publication, in early 2011, of Daryl Bem’s experiments on second sight.

I’ve actually done a long blog post series on the topic, but in brief: Daryl Bem was convinced that precognition existed. To put these beliefs to the test, he had subjects try to predict an image that was randomly generated by a computer. Over eight experiments, he found that they could indeed do better than chance. You might think that Bem is a kook, and you’d be right.

But Bem is also a scientist.

Now he would return to JPSP [the Journal of Personality and Social Psychology] with the most amazing research he’d ever done—that anyone had ever done, perhaps. It would be the capstone to what had already been a historic 50-year career.

Having served for a time as an associate editor of JPSP, Bem knew his methods would be up to snuff. With about 100 subjects in each experiment, his sample sizes were large. He’d used only the most conventional statistical analyses. He’d double- and triple-checked to make sure there were no glitches in the randomization of his stimuli. Even with all that extra care, Bem would not have dared to send in such a controversial finding had he not been able to replicate the results in his lab, and replicate them again, and then replicate them five more times. His finished paper lists nine separate ministudies of ESP. Eight of those returned the same effect.

One way to attack an argument is to simply follow its logic. If you find it leads to an absurd conclusion, the argument must be flawed somewhere, even if you cannot point to the flaw. Bem had inadvertently discovered a “reductio ad absurdum” argument against contemporary scientific practice: if proper scientific procedure can prove ESP exists, proper scientific procedure must be broken.

Meanwhile, at the conference in Berlin, [E.J.] Wagenmakers finally managed to get through Bem’s paper. “I was shocked,” he says. “The paper made it clear that just by doing things the regular way, you could find just about anything.”

On the train back to Amsterdam, Wagenmakers drafted a rebuttal, to be published in JPSP alongside the original research. The problems he saw in Bem’s paper were not particular to paranormal research. “Something is deeply wrong with the way experimental psychologists design their studies and report their statistical results,” Wagenmakers wrote. “We hope the Bem article will become a signpost for change, a writing on the wall: Psychologists must change the way they analyze their data.”
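Wagenmakers’ complaint is easy to demonstrate. Below is a toy simulation (my own construction, not anything from the Slate piece) of one of those “regular ways” of doing things: optional stopping, where you peek at the data every ten subjects and declare victory the moment p < .05. The effect being studied is exactly zero, so the nominal false-positive rate should be 5%.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define PI 3.14159265358979323846

/* Standard normal deviate via the Box-Muller transform. */
static double gaussian(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    const int trials = 20000, max_n = 100, peek_every = 10;
    int false_positives = 0;
    srand(42);

    for (int t = 0; t < trials; t++) {
        double sum = 0.0;
        for (int n = 1; n <= max_n; n++) {
            sum += gaussian();           /* the null is true: no effect exists */
            if (n % peek_every == 0) {
                double z = (sum / n) * sqrt((double) n);
                if (fabs(z) > 1.96) {    /* "significant" at p < .05, two-sided */
                    false_positives++;
                    break;               /* stop collecting data and publish */
                }
            }
        }
    }

    printf("false-positive rate: %.1f%% (nominal: 5%%)\n",
           100.0 * false_positives / trials);
    return 0;
}

Run it and the false-positive rate comes out several times the nominal 5%, with no fraud anywhere, just peeking.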

Slate has a long read up on the current replication crisis, and how it links to Bem. It’s aimed at a lay audience and highly readable; I recommend giving it a click.

So You Wanna Falsify Gender Studies?

How would a skeptic determine whether or not an area of study was legit? The obvious route would be to study up on the core premises of that field, recording citations as you go; map out how they are connected to one another and supported by the evidence, looking for weak spots; then write a series of articles sharing those findings.

What they wouldn’t do is generate a fake paper purporting to be from that field of study but deliberately mangling the terminology, submit it to a low-ranked and obscure journal for peer review, have it rejected from that journal, then use the feedback to submit it to a second journal that was semi-shady and even more obscure, have it published, and parade that around as if it meant something.

Alas, it seems the Skeptic movement has no idea how basic skepticism works. Self-proclaimed “skeptics” Peter Boghossian and James Lindsay took the second route, and were cheered on by Michael Shermer, Richard Dawkins, Jerry Coyne, Steven Pinker, and other people calling themselves skeptics. A million other people have pointed and laughed at them, so I won’t bother joining in.

But no-one seems to have brought up the first route. Let’s do a sketch of actual skepticism, then, and see how well gender studies holds up.

What’s Claimed?

Right off the bat, we hit a problem: most researchers or advocates in gender studies do not have a consensus sex or gender model.

The Genderbread Person, version 3.3. From http://itspronouncedmetrosexual.com/2015/03/the-genderbread-person-v3/

This is one of the more popular explainers for gender floating around the web. Rather than focus on the details, however, I’d like you to note that this graphic is labeled “version 3.3”. In other words, Sam Killermann has tweaked and revised it three times over. It also conflicts with the Gender Unicorn, which takes a categorical approach to “biological sex” and adds “other genders,” and it no longer embraces the idea of a spectrum, contradicting a lot of other models. Confront Killermann on this, and I bet they’d shrug their shoulders and start crafting another model.

The model isn’t all that important. Instead, gender studies has reached a consensus on an axiom and a corollary: the two-sex, two-gender model is an oversimplification, and sex/gender are complicated. Hence why models of sex or gender continually fail: the complexity almost guarantees exceptions to your rules.

There’s a strong parallel here to agnostic atheism’s “lack of belief” posture, as this flips the burden of proof. Critiquing the consensus of gender studies means asserting a positive statement, that the binarist model is correct, while the defense merely needs to swat down those arguments without advancing any of its own.

Nothing Fails Like Binarism

A single counter-example is sufficient to refute a universal rule. To take a classic example, I can show “all swans are white” is a false statement by finding a single black swan. If someone came along and said “well yeah, but most swans are white, so we can still say that all swans are white,” you’d think of them as delusional or in denial.

Well, I can point to four people who do not fit into the two-sex, two-gender model. Ergo, that model cannot be true in all cases, and the critique of gender studies fails after a thirty-second Google search.

When most people are confronted with this, they invoke a three-sex model (male, female, and “other/defective”) but call it two-sex in order to preserve their delusion. That so few people notice the contradiction is a testament to how hard the binary model is hammered into us.

But Where’s the SCIENCE?!

Another popular dodge is to argue that merely saying you don’t fit into the binary isn’t enough; if it isn’t in peer-reviewed research, it can’t be true. This is no less silly. Do I need to publish a paper about the continent of Africa to say it exists? Or about my computer? Besides, peer review is no guarantee of truth; if you doubt me, browse Retraction Watch for a spell.

Once you’ve come back, go look at the peer-reviewed research which suggests gender is more complicated than a simple binary.

At times, the prevailing answers were almost as simple as Gray’s suggestion that the sexes come from different planets. At other times, and increasingly so today, the answers concerning the why of men’s and women’s experiences and actions have involved complex multifaceted frameworks.

Ashmore, Richard D., and Andrea D. Sewell. “Sex/Gender and the Individual.” In Advanced Personality, edited by David F. Barone, Michel Hersen, and Vincent B. Van Hasselt, 377–408. The Plenum Series in Social/Clinical Psychology. Springer US, 1998. doi:10.1007/978-1-4419-8580-4_16.

Correlational findings with the three scales (self-ratings) suggest that sex-specific behaviors tend to be mutually exclusive while male- and female-valued behaviors form a dualism and are actually positively rather than negatively correlated. Additional analyses showed that individuals with nontraditional sex role attitudes or personality trait organization (especially cross-sex typing) were somewhat less conventionally sex typed in their behaviors and interests than were those with traditional attitudes or sex-typed personality traits. However, these relationships tended to be small, suggesting a general independence of sex role traits, attitudes, and behaviors.

Orlofsky, Jacob L. “Relationship between Sex Role Attitudes and Personality Traits and the Sex Role Behavior Scale-1: A New Measure of Masculine and Feminine Role Behaviors and Interests.” Journal of Personality and Social Psychology 40, no. 5 (May 1981): 927–40.

Women’s scores on the BSRI-M and PAQ-M (masculine) scales have increased steadily over time (r’s = .74 and .43, respectively). Women’s BSRI-F and PAQ-F (feminine) scale scores do not correlate with year. Men’s BSRI-M scores show a weaker positive relationship with year of administration (r = .47). The effect size for sex differences on the BSRI-M has also changed over time, showing a significant decrease over the twenty-year period. The results suggest that cultural change and environment may affect individual personalities; these changes in BSRI and PAQ means demonstrate women’s increased endorsement of masculine-stereotyped traits and men’s continued nonendorsement of feminine-stereotyped traits.

Twenge, Jean M. “Changes in Masculine and Feminine Traits over Time: A Meta-Analysis.” Sex Roles 36, no. 5–6 (March 1, 1997): 305–25. doi:10.1007/BF02766650.

Male (n = 95) and female (n = 221) college students were given 2 measures of gender-related personality traits, the Bem Sex-Role Inventory (BSRI) and the Personal Attributes Questionnaire, and 3 measures of sex role attitudes. Correlations between the personality and the attitude measures were traced to responses to the pair of negatively correlated BSRI items, masculine and feminine, thus confirming a multifactorial approach to gender, as opposed to a unifactorial gender schema theory.

Spence, Janet T. “Gender-Related Traits and Gender Ideology: Evidence for a Multifactorial Theory.” Journal of Personality and Social Psychology 64, no. 4 (1993): 624.

Oh sorry, you didn’t know that gender studies has been a science for over four decades? You thought it was just an invention of Tumblr, rather than a mad scramble by scientists to catch up with philosophers? Tsk, that’s what you get for pretending to be a skeptic instead of doing your homework.

I Hate Reading

One final objection is that field-specific jargon is hard to understand. Boghossian and Lindsay seem to think it follows that the jargon is therefore meaningless bafflegab. I’d hate to see what they’d think of a modern physics paper; jargon offers precise definitions and less typing to communicate your ideas, and while it can quickly become opaque to lay people, it is a necessity for serious science.

But let’s roll with the punch, and look outside of journals for evidence that’s aimed at a lay reader.

In Sexing the Body: Gender Politics and the Construction of Sexuality, Fausto-Sterling attempts to answer two questions: How is knowledge about the body gendered? And, how gender and sexuality become somatic facts? In other words, she passionately and with impressive intellectual clarity demonstrates how in regards to human sexuality the social becomes material. She takes a broad, interdisciplinary perspective in examining this process of gender embodiment. Her goal is to demonstrate not only how the categories (men/women) humans use to describe other humans become embodied in those to whom they refer, but also how these categories are not reflected in reality. She argues that labeling someone a man or a woman is solely a social decision. «We may use scientific knowledge to help us make the decision, but only our beliefs about gender – not science – can define our sex» (p. 3) and consistently throughout the book she shows how gender beliefs affect what kinds of knowledge are produced about sex, sexual behaviors, and ultimately gender.

Gober, Greta. “Sexing the Body: Gender Politics and the Construction of Sexuality.” Humana.Mente Journal of Philosophical Studies 22 (2012): 175–187.

Making Sex is an ambitious investigation of Western scientific conceptions of sexual difference. A historian by profession, Laqueur locates the major conceptual divide in the late eighteenth century when, as he puts it, “a biology of cosmic hierarchy gave way to a biology of incommensurability, anchored in the body, in which the relationship of men to women, like that of apples to oranges, was not given as one of equality or inequality but rather of difference” (207). He claims that the ancients and their immediate heirs—unlike us—saw sexual difference as a set of relatively unimportant differences of degree within “the one-sex body.” According to this model, female sexual organs were perfectly homologous to male ones, only inside out; and bodily fluids—semen, blood, milk—were mostly “fungible” and composed of the same basic matter. The model didn’t imply equality; woman was a lesser man, just not a thing wholly different in kind.

Altman, Meryl, and Keith Nightenhelser. “Making Sex (Review).” Postmodern Culture 2, no. 3 (January 5, 1992). doi:10.1353/pmc.1992.0027.

In Delusions of Gender the psychologist Cordelia Fine exposes the bad science, the ridiculous arguments and the persistent biases that blind us to the ways we ourselves enforce the gender stereotypes we think we are trying to overcome. […]

Most studies about people’s ways of thinking and behaving find no differences between men and women, but these fail to spark the interest of publishers and languish in the file drawer. The oversimplified models of gender and genes that then prevail allow gender culture to be passed down from generation to generation, as though it were all in the genes. Gender, however, is in the mind, fixed in place by the way we store information.

Mental schema organise complex experiences into types of things so that we can process data efficiently, allowing us, for example, to recognise something as a chair without having to notice every detail. This efficiency comes at a cost, because when we automatically categorise experience we fail to question our assumptions. Fine draws together research that shows people who pride themselves on their lack of bias persist in making stereotypical associations just below the threshold of consciousness.

Everyone works together to reinforce social and cultural environments that soft-wire the circuits of the brain as male or female, so that we have no idea what men and women might become if we were truly free from bias.

Apter, Terri. “Delusions of Gender: The Real Science Behind Sex Differences by Cordelia Fine.” The Guardian, October 11, 2010, sec. Books.

Have At ‘r, “Skeptics”

You want to refute the field of gender studies? I’ve just sketched out the challenges you face on a philosophical level, and pointed you to the studies and books you need to refute. Have fun! If you need me I’ll be over here, laughing.

[HJH 2017-05-21: Added more links, minor grammar tweaks.]

[HJH 2017-05-22: Missed Steven Pinker’s Tweet. Also, this Skeptic fail may have gone mainstream:

Boghossian and Lindsay likely did damage to the cultural movements that they have helped to build, namely “new atheism” and the skeptic community. As far as I can tell, neither of them knows much about gender studies, despite their confident and even haughty claims about the deep theoretical flaws of that discipline. As a skeptic myself, I am cautious about the constellation of cognitive biases to which our evolved brains are perpetually susceptible, including motivated reasoning, confirmation bias, disconfirmation bias, overconfidence and belief perseverance. That is partly why, as a general rule, if one wants to criticize a topic X, one should at the very least know enough about X to convince true experts in the relevant field that one is competent about X. This gets at what Bryan Caplan calls the “ideological Turing test.” If you can’t pass this test, there’s a good chance you don’t know enough about the topic to offer a serious, one might even say cogent, critique.

Boghossian and Lindsay pretty clearly don’t pass that test. Their main claim to relevant knowledge in gender studies seems to be citations from Wikipedia and mockingly retweeting abstracts that they, as non-experts, find funny — which is rather like Sarah Palin’s mocking of scientists for studying fruit flies or claiming that Obamacare would entail “death panels.” This kind of unscholarly engagement has rather predictably led to a sizable backlash from serious scholars on social media who have noted that the skeptic community can sometimes be anything but skeptical about its own ignorance and ideological commitments.

When the scientists you claim to worship are saying your behavior is unscientific, maaaaybe you should take a hard look at yourself.]

Belling Sam Harris

I wrote off Sam Harris long ago, and currently ignore him as best as I can. Still, this seems worth the exception.

In this episode of the Waking Up podcast, Sam Harris speaks with Charles Murray about the controversy over his book The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump, universal basic income, and other topics.

For those unaware, Charles Murray co-wrote The Bell Curve, which carried this explosive claim among others:

There is a mean difference in black and white scores on mental tests, historically about one standard deviation in magnitude on IQ tests (IQ tests are normed so that the mean is 100 points and the standard deviation is 15). This difference is not the result of test bias, but reflects differences in cognitive functioning. The predictive validity of IQ scores for educational and socioeconomic outcomes is about the same for blacks and whites.

Alas, it was written with dubious sources, based on the notion that intelligence is genetically determined (I touch on the general case here), and supported by dubious organizations; even the way it was published was designed to frustrate critics.

The Bell Curve was not circulated in galleys before publication. The effect was, first, to increase the allure of the book (There must be something really hot in there!), and second, to ensure that no one inclined to be skeptical would be able to weigh in at the moment of publication. The people who had galley proofs were handpicked by Murray and his publisher. The ordinary routine of neutral reviewers having a month or two to go over the book with care did not occur. Another handpicked group was flown to Washington at the expense of the American Enterprise Institute and given a weekend-long personal briefing on the book’s contents by Murray himself (Herrnstein had died very recently), just before publication. The result was what you’d expect: The first wave of publicity was either credulous or angry, but short on evidence, because nobody had had time to digest and evaluate the book carefully. [..]

The debate on publication day was conducted in the mass media by people with no independent ability to assess the book. Over the next few months, intellectuals took some pretty good shots at it in smaller publications like the New Republic and the New York Review of Books. It wasn’t until late 1995 that the most damaging criticism of The Bell Curve began to appear, in tiny academic journals.

Entire books have been written debunking The Bell Curve.

Richard Herrnstein and Charles Murray argued that intelligence largely determined how well people did in life. The rich were rich mostly because they were smart, the poor were poor mostly because they were dumb, and middle Americans were middling mostly because they were of middling intelligence. This had long been so but was becoming even more so as new and inescapable economic forces such as global trade and technological development made intelligence more important than ever before. In a more open economy, people rose or sank to the levels largely fixed by their intelligence. Moreover, because intelligence is essentially innate, this expanding inequality cannot be stopped. It might be slowed by government meddling, but only by also doing injustice to the talented and damaging the national economy. Inequality is in these ways “natural,” inevitable, and probably desirable. [..]

Yet decades of social science research, and further research we will present here, dispute the claim that inequality is natural and increasing inequality is fated. Individual intelligence does not satisfactorily explain who ends up in which class; nor does it explain why people in different classes have such disparate standards of living.

So why was Sam Harris resurrecting this dead horse?

[9:35] HARRIS: The purpose of the podcast was to set the record straight, because I find the dishonesty and hypocrisy and moral cowardice of Murray’s critics shocking, and the fact that I was taken in by this defamation of him and effectively became part of a silent mob that was just watching what amounted to a modern witch-burning, that was intolerable to me. So it is with real pleasure (and some trepidation) that I bring you a very controversial conversation, on points about which there is virtually no scientific controversy. […]

[11:30] HARRIS: I’ve- since, in the intervening years, ventured into my own controversial areas as a speaker and writer and experienced many hysterical attacks against me in my work, and so I started thinking about your case a little – again without ever having read you – and I began to suspect that you were one of the canaries in the coal mine that I never recognized as such, and seeing your recent treatment at Middlebury, which many of our listeners will have heard about, where you were prevented from speaking and and your host was was physically attacked – I now believe that you are perhaps the intellectual who was treated most unfairly in my lifetime, and it’s, it’s just an amazing thing to be so slow to realize that. And at first I’d just like to apologize to you for having been so lazy and having been taken in to the degree that I was by the rumors and lies that have surrounded your work for the last 20 years, and so I just want to end- I want to thank you doubly for coming on the podcast to talk about these things.

Sigh.

Tell me, Robert Plomin, is intelligence hereditary?

Genes make a substantial difference, but they are not the whole story. They account for about half of all differences in intelligence among people, so half is not caused by genetic differences, which provides strong support for the importance of environmental factors. This estimate of 50 percent reflects the results of twin, adoption and DNA studies.

It’s deja-vu all over again; there are good reasons to think twin studies overstate inheritance, and adoption studies are not as environmentally pure as they’re thought to be.
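Where does that “50 percent” figure come from? The classic twin-study estimator is Falconer’s formula; the formula itself is textbook quantitative genetics, though the gloss below is mine:

\[ h^2 = 2\,( r_{MZ} - r_{DZ} ) \]

Here \(r_{MZ}\) and \(r_{DZ}\) are how strongly identical and fraternal twin pairs correlate on the trait. Identical twins share all their segregating genes, fraternal twins roughly half, so doubling the gap between the two correlations estimates the genetic share. But if identical twins are also treated more alike than fraternal twins (the “equal environments” assumption failing), the gap widens for non-genetic reasons, and the formula dutifully reports the excess as heritability.

As for DNA studies,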

The literature on candidate gene associations is full of reports that have not stood up to rigorous replication. This is the case both for straightforward main effects and for candidate gene-by-environment interactions (Duncan and Keller 2011). As a result, the psychiatric and behavior genetics literature has become confusing and it now seems likely that many of the published findings of the last decade are wrong or misleading and have not contributed to real advances in knowledge. The reasons for this are complex, but include the likelihood that effect sizes of individual polymorphisms are small, that studies have therefore been underpowered, and that multiple hypotheses and methods of analysis have been explored; these conditions will result in an unacceptably high proportion of false findings (Ioannidis 2005).[1]

Ah yes, the replication crisis. I know it well. Genetic studies can easily have millions of data points yet draw on fewer than a few hundred volunteers, which makes them particularly ripe for false correlations. The sketch below shows just how ripe.
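Here’s a back-of-the-envelope sketch in C. The numbers are mine and purely illustrative (a million variants tested, a hundred with a real effect, 20% statistical power, roughly what a small sample buys you), but the arithmetic is just the Ioannidis 2005 argument in code:

#include <stdio.h>

/* Illustrative numbers, not taken from any specific study. */
int main(void)
{
    double n_tests = 1e6;   /* genetic variants tested */
    double n_true  = 100;   /* variants with a real effect (assumed) */
    double alpha   = 0.05;  /* conventional significance threshold */
    double power   = 0.20;  /* typical of small, underpowered samples */

    double true_hits  = n_true * power;              /* 20 real discoveries */
    double false_hits = (n_tests - n_true) * alpha;  /* ~50,000 flukes */
    double ppv = true_hits / (true_hits + false_hits);

    printf("true positives:  %.0f\n", true_hits);
    printf("false positives: %.0f\n", false_hits);
    printf("chance a 'significant' hit is real: %.2f%%\n", ppv * 100);
    return 0;
}

Under those assumptions, only about 0.04% of the “significant” hits are real. But according to Angry White Men, Sam Harris was ignorant of all of the above.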

Harris didn’t bat an eye when Murray accused critics of race realism — or human biodiversity, or whatever the alt-right calls its racist junk science nowadays — of elitism and compared them to modern-day flat Earthers. As Murray put it: “But at this point, Sam, it’s almost as if we are in the opposite position of conventional wisdom versus elite wisdom that we were, say, when Columbus was gonna sail to America. … It’s the elites who are under the impression that, oh, IQ tests only measure what IQ tests measure, and nobody really is able to define intelligence, and this and that, they’re culturally biased, on and on and on and on. And all of these things are the equivalent of saying the Earth is flat.”

By now, I’m convinced he doesn’t want to hear the counter-arguments. He’d rather pretend to be rational and scientific, because then he can remain bigoted without fear of challenge.


[1] Hewitt, John K. “Editorial Policy on Candidate Gene Association and Candidate Gene-by-Environment Interaction Studies of Complex Traits.” Behavior Genetics 42, no. 1 (January 1, 2012): 1–2. doi:10.1007/s10519-011-9504-z.

Fake Journals, Too

Myers beat me to the punch with his post on fake peer reviewers, so I’ll zag and mention the other side of the fence.

The rapid rise of predatory journals—publications taking large fees without providing robust editorial or publishing services—has created what some have called an age of academic racketeering. Predatory journals recruit articles through aggressive marketing and spam emails, promising quick review and open access publication for a price. There is little if any quality control and virtually no transparency about processes and fees. Their motive is financial gain, and they are corrupting the communication of science. Their main victims are institutions and researchers in low and middle income countries, and the time has come to act rather than simply to decry them.

Clark, Jocalyn, and Richard Smith. “Firm Action Needed on Predatory Journals.” BMJ 350 (2015): h210.

How prevalent are these journals?

Over the studied period, predatory journals have rapidly increased their publication volumes from 53,000 in 2010 to an estimated 420,000 articles in 2014, published by around 8,000 active journals. Early on, publishers with more than 100 journals dominated the market, but since 2012 publishers in the 10–99 journal size category have captured the largest market share. The regional distribution of both the publisher’s country and authorship is highly skewed, in particular Asia and Africa contributed three quarters of authors. Authors paid an average article processing charge of 178 USD per article for articles typically published within 2 to 3 months of submission.

Shen, Cenyu, and Bo-Christer Björk. “‘Predatory’ open Access: A Longitudinal Study of Article Volumes and Market Characteristics.” BMC Medicine 13, no. 1 (2015): 230.

The rise of predatory journals is an unfortunate combination of the open-access model with the pressure to publish; young researchers desperate to get something on their CV are either attracted to them or naive about their existence.

One of our findings is that authors who publish in so called “predatory” journals have little to no history of previous publications and citations. This may indicate that they are young researchers, which is indeed supported by the author information. [..]

The demands stimulate a multiplying of new OA journals, particularly in developing countries. A low submission acceptance standard provides an opportunity for non-elite members of the scholarly community to survive in the “publish or perish” culture found in both the West and many developing countries. Most of the “predatory” journals initiated and operated in the developing countries charge a fee affordable to local submissions, enabling researchers to publish quickly. Publishing in such journals is much less costly than conducting expensive studies and attempting to publish without fees in a prestigious foreign non-OA journal. This is by no means only an open access problem, but is a prevalent dilemma in the current scholarly communication system.

Xia, Jingfeng, Jennifer L. Harmon, Kevin G. Connolly, Ryan M. Donnelly, Mary R. Anderson, and Heather A. Howard. “Who Publishes in ‘predatory’ Journals?” Journal of the Association for Information Science and Technology 66, no. 7 (2015): 1406–1417.

You might think there’s an easy solution to this: do extensive research on any journal interested in your paper, and be suspicious of any journal that approaches you or isn’t up-front about costs. You’d be wrong, though.

During the last 2 years, cyber criminals have started to imitate the names of reputable journals that publish only printed versions of articles. [..]

Unfortunately, such fake websites can be created by almost anyone who has even minimal knowledge of how to design a website, using open-source Content Management Systems (CMSs). However, we believe that the academic cyber criminals who are responsible for the propagation of hijacked journals are completely familiar with the academic rules of upgrading lecturers, qualifying Ph.D. candidates, and applying for admission to postgraduate programs or any professorship positions. These criminals may be ghost writers or they may be the experts who used to help scholars write and publish their research work before they decided to become full-scale “ghost publishers”. Whoever they are, it is apparent that they have the knowledge required to design a website and to hide their identities on the Internet. In addition, they definitely are familiar with authors’ behaviors, and they know that many authors are in urgent need of publishing a couple of “ISI papers” (i.e. articles published in journals that are indexed by Thomson Reuters/Institute for Scientific Information-ISI) within a limited time. Therefore, the new version of academic cyber criminals knows what to do and how to organize a completely fake conference or hijack a printed journal.

Jalalian, Mehrdad, and Hamidreza Mahboobi. “Hijacked Journals and Predatory Publishers: Is There a Need to Re-Think How to Assess the Quality of Academic Research?” Walailak Journal of Science and Technology (WJST) 11, no. 5 (2014): 389–394.

These “hijacked” journals are good enough to fool experienced researchers.

One of our students submitted a manuscript to the International Journal of Philosophy and Theology. This is a prestigious peer-reviewed journal, founded in 1938 by Jesuit Academics at the University of Louvain in Belgium. Initially a Dutch language journal, Bijdragen, it was internationalized in 2013 and is now published by Taylor and Francis. Within a few weeks our student received a message from the journal that his contribution had been reviewed and accepted: the topic was relevant, the methodology sound, and the relevant literature engaged. His manuscript could be published rather quickly. As soon as the publication had materialized, the student received an invoice of $200 to be paid to a bank account in Bangladesh. [..]

Our first student did not know — and neither did we — that there are in fact two journals with the same name International Journal of Philosophy and Theology. The [fake] one refers to a fancy website with an impressive name: American Research Institute for Policy Development. This organization publishes 52 journals in areas such as Arts, Humanities and Social Science, as well as Science and Technology. The journals have fancy names, often identifying an international scope.

Have, Henk ten, and Bert Gordijn. “Publication Ethics: Science versus Commerce.” Medicine, Health Care and Philosophy, April 11, 2017. doi:10.1007/s11019-017-9774-1.

The fact that I’ve made this post just by quoting scientific papers should tell you there’s extensive literature on faux literature, from people much more knowledgeable than I. Unfortunately, that also means none of it offers easy solutions or quick fixes. At the root of it all is the “publish or perish” model of science, and unfortunately that’s firmly embedded in modern scientific practice.

We’re overdue for a complete overhaul of how science is done.

Fame and Citations

Remember that “Rock Stars of Science” ad campaign? I thought it was dreadful. Science is supposed to be the pursuit of knowledge through experimentation and rigorous methodology. When you focus on the personalities behind the science, you push all that to the side and turn it into a purely creative task, mysterious and luck-dependent. You start to get situations like Lord Kelvin’s opinion of the age of the Earth.

The result of Kelvin’s assumptions about the deep interior of the Earth, without any sound evidence, was unfortunately quite significant. Because the timeframe he provided was far too brief to allow for known geological processes to produce the current topographical features of the Earth. Even worse, Kelvin then made significant attacks on the science of geology and its practitioners, but most of the geologists in that era were intimidated by Kelvin’s stature within the overall scientific community (Lewis, 2000). Kelvin was regarded as possibly the most well regarded and imposing scientific figure of the day (Lewis, 2000). […]

Physics was regarded as a more mature and noble field than geology (Hallam, 1989), which was still perceived as immature and without the (apparent) certainty provided by the more mathematically-oriented physics and chemistry. Kelvin derived his estimate from quantitative and repeatable measurements, physical principles of the known natural laws of the time, and elegant math (Dalrymple, 2004). That method, combined with his arguments about the uncertainty of geologic data analysis, provided Kelvin with a tremendous amount of swagger over his theory’s potential opponents. He was enthusiastic and persuasive, and was perhaps the leading scientific celebrity of his time, and this made him an exceptionally difficult opponent for Lyell and Darwin (Hallam, 1989); Darwin referred to Kelvin as his “sorest trouble” (Dalrymple, 2004; Lewis, 2000). The end result was that most scientists sought agreement rather than conflict with Kelvin (Lewis, 2000). Archibald Geikie (Hallam, 2009), James Croll, Lyell, and Samuel Haughton all adjusted their theories to make allowances for Kelvin. Additionally, P.G. Tait, T. Mellard Reade, Clarence King, and John Joly (Hallam, 1989) all reached conclusions concordant with Kelvin through their own methods. This is unfortunate and could be concluded as an effect of peer pressure biasing the scientific method, and perhaps a little bit of an inferiority complex on the part of the geologists in comparison with their 19th century physics peers.

“Rock Star science” harms productivity, too; one study found that when a “superstar” in a field dies, the output of their collaborators drops 5-8%. Instead, I prefer a “Wonder of Science” approach where cool facts are mixed with play and experimentation. When everyone has the tools to do science, anyone can pick up where someone else left off and we’re not stuck waiting for a “big name” to come along and save us.

When I entered the field of psychological science, what excited me was that, historically, the field was full of big thinkers—scholars like Sigmund Freud and Carl Rogers in psychotherapy, Edward Tolman and B. F. Skinner in learning, Herbert Simon and more recently Daniel Kahneman in cognition, and Abraham Maslow and David McClelland in personality. They represented psychological science in the large—a kind of “big psychology.” A concern I have developed over the years is that our field is moving toward a kind of psychological science in the small—a kind of “small psychology.” […]

For example, Sigmund Freud has an h index of 265, B. F. Skinner of 98, Herbert Simon of 163, and Daniel Kahneman of 123. Their total citations are prodigious, for example, 450,339 for Freud, 277,573 for Simon, and 254,326 for Kahneman. In today’s scientific climate, it may be challenging to be a “big psychological scientist,” but I believe big thinking pays off in the kind of impact (with accompanying citation statistics) that lasts over generations, not merely over the duration of one’s career or a part of one’s career. In the long run, the big thinkers are the ones who most create a lasting legacy.

That’s Robert J. Sternberg offering his counterpoint. Still,

in comments to us, some psychological scientists, including some from our book, challenged the criteria or the weighting of the criteria, which led us to wonder just how eminence, or performance at any level, should be judged. What is the future of such evaluations of scientific merit?

Don Foss and I then decided—regrettably, Susan Fiske was unavailable to participate at the time—to pursue this universally important issue by creating the present symposium for Perspectives on Psychological Science. We invited several distinguished psychological scientists who have worked on the problem of merit and eminence in psychological science and asked them each if they would write an essay for Perspectives.

The answer was “yes,” and so seven prominent male scientists weighed in on how we should judge the prominence of a scientist. The one woman allowed in, Alice H. Eagly, was graciously allowed to share a by-line with a male author so she could ask “where the women at?”

Yeeeeaaah. I’ll let Katie Corker tell the tale of Perspectives‘ second attempt.

The new call was issued in response to a chorus of nasty women and other dissidents who insisted that their viewpoints hadn’t been represented by the scholars in the original special issue. The new call explicitly invited these “diverse perspectives” to speak up (in 1,500 words or less****).

Each of the six of us independently rose to the challenge and submitted comments. None of us were particularly surprised to receive rejections – after all, getting rejected is just about the most ordinary thing that can happen to a practicing researcher. Word started to spread among the rejected, however, and we quickly discovered that many of the themes we had written about were shared across our pieces. That judgments of eminence were biased along predictable socio-demographic lines. That overemphasis on eminence creates perverse incentives. That a focus on communal goals and working in teams was woefully absent from judgments of eminence.

And so all six posted their opinions online, free for anyone to read. Simine Vazire, for instance, argues that

The drive for eminence is inherently at odds with scientific values, and insufficient attention to this problem is partly responsible for the recent crisis of confidence in psychology and other sciences. The replicability crisis has shown that a system without transparency doesn’t work. The lack of transparency in science is a direct consequence of the corrupting influence of eminence-seeking. If journals and societies are primarily motivated by boosting their impact, their most effective strategy will be to publish the sexiest findings by the most famous authors. Humans will always care about eminence. Scientific institutions and gatekeepers should be a bulwark against the corrupting influence of the drive for eminence, and help researchers maintain integrity and uphold scientific values in the face of internal and external pressures to compromise.

Alas, Perspectives on Psychological Science‘s mulligan has yet to be published. But it should be obvious that this argument strikes right to the heart of how science is done.

A Computer Scientist Reads EvoPsych, Part 4

[Part 3]

The programs comprising the human mind were designed by natural selection to solve the adaptive problems regularly faced by our hunter-gatherer ancestors—problems such as finding a mate, cooperating with others, hunting, gathering, protecting children, navigating, avoiding predators, avoiding exploitation, and so on. Knowing this allows evolutionary psychologists to approach the study of the mind like an engineer. You start by carefully specifying an adaptive information processing problem; then you do a task analysis of that problem. A task analysis consists of identifying what properties a program would have to have to solve that problem well. This approach allows you to generate hypotheses about the structure of the programs that comprise the mind, which can then be tested.[1]

Let’s try this approach. My task will be to calculate the inverse square root of a number, a common operation in computer graphics. The “inverse” part implies I’ll have to do a division at some point, and the “square root” implies either raising something to a power, finding the logarithm of the input, or invoking some sort of function that’ll return the square root. So I should expect a program which contains an inverse square root function to have something like this:

#include <math.h>   /* for sqrtf() */

float InverseSquareRoot( float x )
{
    return 1.0f / sqrtf( x );
}

So you could imagine my shock if I peered into a program and found this instead:

float FastInvSqrt( float x )
{
    long i;       /* assumes a 32-bit long, true on the platforms of the era */
    float x2, y;

    x2 = x * 0.5f;

    i = * ( long * ) &x;               /* reinterpret the float's bits as an integer */
    i = 0x5f3759df - ( i >> 1 );       /* the shift roughly halves the exponent; subtracting
                                          from the magic constant flips its sign, giving a
                                          crude first guess at x^-1/2 */
    y = * ( float * ) &i;              /* reinterpret the bits back into a float */

    y = y * ( 1.5f - ( x2 * y * y ) ); /* one step of Newton's Method refines the guess */

    return y;
}

Something like that snippet shipped in Quake III’s rendering code. It uses one step of Newton’s Method to find the zero of an equation derived from the input value, seeded by a guess that takes advantage of the structure of floating point numbers. It also breaks every one of the predictions my analysis made, not even including a division.
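For the record, here’s the derivation (mine, not from any paper). Computing the inverse square root of x means finding the zero of f(y) = 1/y² − x, and Newton’s Method says to iterate

\[ y_{n+1} \;=\; y_n - \frac{f(y_n)}{f'(y_n)} \;=\; y_n \left( \frac{3}{2} - \frac{x}{2}\, y_n^2 \right), \]

which is exactly the y = y * ( 1.5f - ( x2 * y * y ) ) line, with x2 holding x/2. Multiplies and subtractions only; the division my task analysis predicted never happens.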

The task analysis failed for a simple reason: nearly every problem has more than one approach to it. If we’re not aware of every alternative, our analysis can’t take all of them into account and we’ll probably be led astray. We’d expect convolutions to be slow for large kernels unless we were aware of the Fourier transform, we’d think it was impossible to keep concurrent operations from mucking up memory unless we knew we had hardware-level atomic operations, and if we thought of sorting purely in terms of comparing one value to another we’d miss out on the fastest sorting algorithm out there for fixed-size keys, radix sort.

Radix sort doesn’t get implemented very often because it either requires a tonne of memory or the overhead of doing a census makes it useless on small lists; there’s a sketch of it below. To put that more generally: during implementation, the context of execution matters more than the requirements of the task. The simplistic approach of Tooby and Cosmides does not take that into account.
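Here’s a minimal sketch of a least-significant-digit radix sort on 32-bit keys (my own illustration; the names are mine). Notice there isn’t a single key-to-key comparison anywhere, and notice the census pass, which is pure overhead on short lists:

#include <stdint.h>
#include <stdlib.h>

/* LSD radix sort on unsigned 32-bit keys, one byte per pass. */
static void radix_sort_u32(uint32_t *a, size_t n)
{
    uint32_t *scratch = malloc(n * sizeof *scratch);
    if (scratch == NULL)
        return;

    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[256] = {0};

        /* The census: how many keys carry each byte value? */
        for (size_t i = 0; i < n; i++)
            count[(a[i] >> shift) & 0xFF]++;

        /* Prefix sums turn the counts into output positions. */
        size_t pos = 0;
        for (int b = 0; b < 256; b++) {
            size_t c = count[b];
            count[b] = pos;
            pos += c;
        }

        /* Stable scatter into the other buffer. */
        for (size_t i = 0; i < n; i++)
            scratch[count[(a[i] >> shift) & 0xFF]++] = a[i];

        uint32_t *tmp = a;   /* swap buffer roles for the next pass */
        a = scratch;
        scratch = tmp;
    }

    /* Four passes is an even number of swaps, so the sorted keys
     * end up back in the caller's array; free the other buffer. */
    free(scratch);
}

Four linear passes, zero comparisons, but a 256-entry census plus a full scratch copy per pass: exactly the memory-versus-overhead trade-off that keeps it off the beaten path.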

We can throw them a lifeline, mind you. I formed a hypothesis about computing inverse square roots, refuted it, and now I’m wiser for it. Isn’t that still a net win for the process? Notice a key difference, though: we only became wiser because we could look at the source code. If FastInvSqrt() were instead a black box, the only way I could refute my analysis would be to propose the exact way the algorithm worked and then demonstrate that it consistently predicted the outputs much better. If I didn’t know the techniques used in FastInvSqrt() were possible, I’d never be able to refute it.

On the contrary, I might falsely conclude I was right. After all, the outputs of my analysis and FastInvSqrt() are very similar, so I could easily wave away the differences as due to a buggy square root function or a flaw in the division routine. This is especially dangerous with evolutionary algorithms, as we saw with Dr. Adrian Thompson’s work in an earlier installment, because the odds of us knowing every possible trick are slim.
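Just how similar are the outputs? A quick harness (mine; it assumes the FastInvSqrt() and InverseSquareRoot() definitions above) makes the point:

#include <math.h>
#include <stdio.h>

/* Compare the naive routine against FastInvSqrt() over a few
 * octaves.  One Newton step keeps the relative error under
 * roughly 0.2%, easily mistaken for a sloppy sqrt(). */
int main(void)
{
    for (float x = 0.5f; x <= 512.0f; x *= 4.0f) {
        float exact = 1.0f / sqrtf( x );
        float fast  = FastInvSqrt( x );
        printf("x = %7.2f   exact = %.6f   fast = %.6f   rel err = %+.5f\n",
               x, exact, fast, (fast - exact) / exact);
    }
    return 0;
}

Every line agrees to within a fraction of a percent. If all I had were these outputs, “my square root function is slightly buggy” would look like a far more plausible story than “someone is subtracting bit patterns from a magic constant.”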

In sum, this analysis method is primed to generate smug over-confidence in your theories.

Each organ in the body evolved to serve a function: The intestines digest, the heart pumps blood, and the liver detoxifies poisons. The brain’s evolved function is to extract information from the environment and use that information to generate behavior and regulate physiology. Hence, the brain is not just like a computer. It is a computer—that is, a physical system that was designed to process information. Its programs were designed not by an engineer, but by natural selection, a causal process that retains and discards design features based on how well they solved adaptive problems in past environments.[1]

And is my appendix’s function to randomly attempt to kill me? The only people I’ve seen push this sort of biological teleology are creationists, who propose an intelligent designer. Few people well-versed in biology would buy this line.

But getting back to my field, notice the odd dichotomy at play here: our brains are super-sophisticated computational devices, but not sophisticated enough to re-program themselves on-the-fly. Yet even the most primitive computers we’ve developed can modify the code they’re running, as they’re running it. Why isn’t that an option? Why can’t we be as much of a blank slate as forty-year-old computer chips?
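To make that concrete, here’s a toy stored-program machine (entirely my own sketch: a three-opcode von Neumann machine, not any real chip). Code and data sit in the same memory, so one instruction can rewrite another before it runs:

#include <stdio.h>

enum { HALT, PRINT, STORE };   /* each instruction: opcode + two arguments */

int main(void)
{
    /* The PRINT at cell 3 starts out ready to print 111, but the
     * STORE at cell 0 overwrites its argument (cell 4) first. */
    int mem[] = {
        STORE, 4, 222,    /* cells 0-2: mem[4] = 222 */
        PRINT, 111, 0,    /* cells 3-5: print mem[4]... which is now 222 */
        HALT,  0,   0,    /* cells 6-8: stop */
    };

    for (int pc = 0; mem[pc] != HALT; pc += 3) {
        switch (mem[pc]) {
        case PRINT: printf("%d\n", mem[pc + 1]); break;
        case STORE: mem[mem[pc + 1]] = mem[pc + 2]; break;
        }
    }
    return 0;   /* prints 222: the program modified itself mid-run */
}

Twenty-odd lines of C, and the machine happily rewrites its own program as it executes. Hardware has supported this since the 1940s, which is what makes “brains can’t re-program themselves” such a strange default assumption.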

It’s tempting to declare that we’re more primitive than they are, computationally, but there’s a fundamental problem here: algorithms are algorithms are algorithms. If you can compute, you’re a Turing machine of some sort. There is no such thing as a “primitive” computer; at best, you could argue that some computers have more limitations imposed on them than others.

Human beings can compute, as anyone who’s taken a math course can attest. Ergo, we must be something like a Turing machine. Is it possible that our computation is split up into programs, which themselves change only slowly? Sure, but that’s an extra limitation imposed on our computability. It should not be assumed a priori.

[Part 5]


[1] Tooby, John, and Leda Cosmides. “Conceptual Foundations of Evolutionary Psychology.” The Handbook of Evolutionary Psychology (2005): 5–67.

A Computer Scientist Reads EvoPsych, Part 3

[Part 2]

As a result of selection acting on information-behavior relationships, the human brain is predicted to be densely packed with programs that cause intricate relationships between information and behavior, including functionally specialized learning systems, domain-specialized rules of inference, default preferences that are adjusted by experience, complex decision rules, concepts that organize our experiences and databases of knowledge, and vast databases of acquired information stored in specialized memory systems—remembered episodes from our lives, encyclopedias of plant life and animal behavior, banks of information about other people’s proclivities and preferences, and so on. All of these programs and the databases they create can be called on in different combinations to elicit a dazzling variety of behavioral responses.[1]

“Program?” “Database?” What exactly do those mean? That might seem like a strange question to hear from a computer scientist, but my training makes me acutely aware of how flexible those terms can be.