The Two Cultures, as per Steven Pinker

As I mentioned before, C.P. Snow’s “Two Cultures” lecture is light on facts, which makes it easy to mould to your whims. Go back and re-read that old post, absorb C.P. Snow’s version of the Two Cultures, then compare it to Pinker’s summary:

A final alternative to Enlightenment humanism condemns its embrace of science. Following C.P. Snow, we can call it the Second Culture, the worldview of many literary intellectuals and cultural critics, as distinguished from the First Culture of science.[12] Snow decried the iron curtain between the two cultures and called for greater integration of science into intellectual life. It was not just that science was, “in its intellectual depth, complexity, and articulation, the most beautiful and wonderful collective work of the mind of man.” Knowledge of science, he argued, was a moral imperative, because it could alleviate suffering on a global scale …

[Pinker, Steven. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Penguin, 2018. Pg. 33-34]

C.P. Snow went out of his way to criticise scientists for failing to incorporate literature into their lives, and never ranked one culture as superior to the other. Nor did he label them “First Culture” or “Second Culture.” And it wasn’t increased knowledge of science in general that would remove suffering, it was the two cultures intermixing. Pinker is presenting a very different argument than C.P. Snow, at least on the face of it.

But hang on, there’s a footnote right in the middle of that passage….

[12] Snow never assigned an order to his Two Cultures, but subsequent usage has numbered them in that way; see, for example, Brockman 2003.

[Pg. 456]

How is it “following C.P. Snow” to call it the “Second Culture,” when you acknowledge C.P. Snow never called it that?! What’s worse, look at the page numbers: that acknowledgement comes a full four hundred pages after the misleading phrasing. How many people would bother to flip that far ahead, let alone make the connection to four hundred pages back? But all right, fine, maaaybe Steven Pinker is just going with the flow, and re-using a common distortion of C.P. Snow’s original argument. The proof should lie in that citation to Brockman [2003], which fortunately is available via Google Books. In fact, I can do you one better: John Brockman’s anthology was a mix of work published in Edge magazine and original essays, and the relevant parts just happen to be online.

Bravo, John! You are playing a vital role in moving the sciences beyond a defensive posture in response to turf attacks from the “postmodernists” and other leeches on the academies. You celebrate science and technology as our most pragmatic expressions of optimism.

I wonder, though, if it’s enough to merely point out how hopelessly lost those encrusted arts and humanities intellectuals have become in their petty arms race of cynicism. If we scientists and technologists are to be the new humanists, we must recognize that there are questions that must be addressed by any thinking person which do not lie within our established methods and dialogs. …

While “postmodern” academics and “Second Culture” celebrity figures are perhaps the most insufferable enemies of science, they are certainly not the most dangerous. Even as we are beginning to peer at biology’s deepest foundations for the first time, we find ourselves in a situation in which vast portions of the educated population have turned against the project of science in favor of pop alternatives usually billed as being more “spiritual.”

The term appears exactly once in that reference, which falls well short of demonstrating common usage. Even more damning is that Pinker’s citation references the 2003 edition of the book. There’s a 2008 version, and it doesn’t have a single reference to a “Second Culture.” I’ve done my own homework, and I can find a 2011 thesis which uses “Second Culture” that way, but it falsely attributes the label to Snow and never brings it up past the intro. There is an obscure 1993 book which Pinker missed, but thanks to book reviews I can tell it labels science as the “Second Culture,” contrary to how Pinker uses the term. Everything else I’ve found is a false positive, which means Pinker is promoting one mention in one essay by one author as sufficient to show a pattern.

And can I take a moment to call out the contrary labelling here: how, in any way, is science “First” relative to literature? Well before Philosophical Transactions began publishing, we’d already had the Ramayana, the Chu Ci anthology, the Epic of Gilgamesh, The Iliad, Beowulf, and on and on. Instead, Pinker and friends are invoking “Second” as in “Secondary,” lesser, inferior. Unlike de Beauvoir, though, they’re not doing it as a critique; they honestly believe in the superiority of science over literature.

Pinker didn’t invent this ranking, nor was he the first to lump all the humanities in with the literary elites. I think that honour belongs to John Brockman. Consider this essay of his; read very carefully, and you’ll see he’s a little confused on who’s in the non-scientific culture.

Ten years later, that fossil culture is in decline, replaced by the emergent “third culture” of the essay’s title, a reference to C. P. Snow’s celebrated division of the thinking world into two cultures—that of the literary intellectual and that of the scientist. …

In the twentieth century, a period of great scientific advancement, instead of having science and technology at the center of the intellectual world—of having a unity in which scholarship includes science and technology just as it includes literature and art—the official culture kicked them out. The traditional humanities scholar looked at science and technology as some sort of technical special product—the fine print. The elite universities nudged science out of the liberal arts undergraduate curriculum, and out of the minds of many young people, who abandoned true humanistic inquiry in their early twenties and turned themselves into the authoritarian voice of the establishment. …

And one is amazed that for others still mired in the old establishment culture, intellectual debate continues to center on such matters as who was or was not a Stalinist in 1937, or what the sleeping arrangements were for guests at a Bloomsbury weekend in the early part of the twentieth century. This is not to suggest that studying history is a waste of time. History illuminates our origins and keeps us from reinventing the wheel. But the question arises: history of what? Do we want the center of culture to be based on a closed system, a process of text in/text out, and no empirical contact with the world in between?

A fundamental distinction exists between the literature of science and those disciplines in which the writing is most often concerned with exegesis of some earlier writer. In too many university courses, most of the examination questions are about what one or another earlier authority thought. The subjects are self-referential. …

The essay itself is a type specimen of science cheer-leading, which sweeps all the problems of science under the carpet; try squaring “Science is nothing more nor less than the most reliable way of gaining knowledge about anything” with “Most Published Research Findings Are False,” then try finding a published literary critic doing literary criticism wrong. More importantly, Brockman’s December 2001 essay reads a lot like Pinker’s February 2018 book, right down to the “elite” and “authoritarian” “liberal arts” universities turning their back on science. Brockman was definitely ahead of his time, and while only three of his works show up in Pinker’s citation list, he’s clearly had a big influence.

This also means Pinker suffers from the same confusion as Brockman. Here are some of the people he considers part of the Second Culture:

It’s an oddball list. Karl Popper is a member, probably by accident. Adorno was actually an opponent of Heidegger and Popper’s views of science. Essayists (Wieseltier and Gopnik) rub shoulders with glaciologists (Carey, Jackson), sociologists (Bauman), and philosophers (Foucault, Derrida). It’s dominated by the bogey-people of the alt-right, none of whom can be classified as elite authors.

Stranger still, Thomas Kuhn isn’t on the list. Kuhn should have been: he argued that science doesn’t necessarily follow the strength of the evidence. By Kuhn’s heyday, many physicists thought that Arthur Eddington’s famous solar eclipse data fell short of proper science: the error bars were very large, the dataset was small, and some contrary data from another telescope was ignored. Nonetheless, scientists in Eddington’s own day took that same dataset as confirmatory. Why? They wanted General Relativity to be true, because it offered an explanation for why light seemed to have a fixed speed and why Mercury precessed the way it did. Kuhn called these “puzzles,” things which should be easily solvable via existing, familiar knowledge. Newtonian Mechanics violated the “easy” part of that contract, GR did not, so physicists abandoned ship even in the face of dodgy data. Utility was more important than truth-hood.

Conversely, remember the neutrinos that seemed to run faster than light? If science advanced by falsification, physicists should have abandoned General Relativity in droves; instead, they dismissed the finding and asked the scientists who ran the experiment to try again. In this case, they didn’t want GR to be false, so contrary evidence was rejected. That might seem like a cheap example, since the experimental equipment was shown to be the real problem, but consider that we already knew GR was false because it’s incompatible with Quantum Mechanics. The two theories can’t both be true at the same time, which means there’s a third theory out there which vaguely resembles both but has radically different axioms. Nonetheless, no physicist has stopped using GR or QM, because both are effective at solving puzzles. Utility again trumps truth-hood.

Kuhn argued that scientists proposed frameworks for understanding the world, “paradigms,” which don’t progress as we think they do. For instance, Newtonian Mechanics says the International Space Station is perpetually falling towards Earth, because the mass of both is generating attractive forces which cause a constant acceleration; General Relativity says the ISS is travelling in a straight line, but appears to orbit around the Earth because it is moving through a spacetime curved by the energy and mass of both objects. These two explanations are different on a fundamental level: you can’t transform one into the other without destroying some axioms. You’ve gotta choose one or the other, and why would you ever switch back? Kuhn even rejected the idea that the next paradigm is more “truthful” than another; again, utility trumps truth-hood.

Kuhn’s view is opposed to a lot of what Pinker is arguing for, and yet:

The most commonly assigned book on science in modern universities (aside from a popular biology textbook) is Thomas Kuhn’s The Structure of Scientific Revolutions. That 1962 classic is commonly interpreted as showing that science does not converge on the truth but merely busies itself with solving puzzles before flipping to some new paradigm which renders its previous theories obsolete, indeed, unintelligible. Though Kuhn himself later disavowed this nihilist interpretation, it has become the conventional wisdom within the Second Culture. [22]

[Enlightenment Now, pg. 400]

Weird, I can find no evidence Kuhn disavowed that interpretation in my source:

Bird, Alexander, “Thomas Kuhn”, The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2013/entries/thomas-kuhn/>

Still, Pinker is kind enough to source his claim, so let’s track it down…. Right, footnote [22] references Bird [2011], which I can find on page 500…

Bird, A. 2011. Thomas Kuhn. In E. N. Zalta, ed., Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/thomas-kuhn/.

He’s using the same source?! I mean, score another point for Kuhn, as he thought that people with different paradigms perceive the same data differently, but we’ve still got a puzzle here. I can’t be sure, but I have a theory for why Pinker swept Kuhn under the rug. From our source:

Feminists and social theorists (…) have argued that the fact that the evidence, or, in Kuhn’s case, the shared values of science, do not fix a single choice of theory, allows external factors to determine the final outcome (…). Furthermore, the fact that Kuhn identified values as what guide judgment opens up the possibility that scientists ought to employ different values, as has been argued by feminist and post-colonial writers (…).

Kuhn himself, however, showed only limited sympathy for such developments. In his “The Trouble with the Historical Philosophy of Science” (1992) Kuhn derides those who take the view that in the ‘negotiations’ that determine the accepted outcome of an experiment or its theoretical significance, all that counts are the interests and power relations among the participants. Kuhn targeted the proponents of the Strong Programme in the Sociology of Scientific Knowledge with such comments; and even if this is not entirely fair to the Strong Programme, it reflects Kuhn’s own view that the primary determinants of the outcome of a scientific episode are to be found within science.

Oh ho, Kuhn thought it was unlikely that sexism or racism could warp science! That makes him the enemy of Pinker’s enemies, and therefore his friend. Hence why Pinker finds it useful to bring up Kuhn, despite their contrary views of science, and for that matter why Pinker can look at Snow’s arguments and see his own: utility trumps truth-hood.

The Laziness of Steven Pinker

I know, I know, I should have promoted that OrbitCon talk on Steven Pinker before it aired. Ironically, I was a bit swamped developing material for it, most of which never made it to air. Don’t worry, I’ll be sharing the good bits via blog post. Amusingly, this first example isn’t from that material. I wound up reading a lot of Pinker, and developed a hunch I wasn’t able to track down before air time. In a stroke of luck, Siggy handed me the material I needed to properly follow up.

Enough suspense: what’s your opinion of self-plagiarism, or copying your own work without flagging what you’ve done?

… self-plagiarism does carry with it some level of dishonesty, at least in some situations. The problem is that, when an author, artist or other creator presents a new work, it’s generally expected to be all-new content, unless otherwise clearly stated. … with an academic paper, one is generally expected to showcase what they have learned most recently, meaning that self-plagiarism defeats the purpose of the paper or the assignment. On the other hand, in a creative environment, however, reusing old passages, especially in a limited manner, might be more about homage and maintaining consistency than plagiarism.

It’s a bit of a gray area, isn’t it? The US Office of Research Integrity declares it unethical, but also declares that self-plagiarism isn’t misconduct. Nonetheless it could be considered misconduct in an academic context, and the ORI themselves outline the case:

For example, in one editorial, Schein (2001) describes the results of a study he and a colleague carried out which found that 92 out of 660 studies taken from 3 major surgical journals were actual cases of redundant publication. The rate of duplication in the rest of the biomedical literature has been estimated to be between 10% to 20% (Jefferson, 1998), though one review of the literature suggests the more conservative figure of approximately 10% (Steneck, 2000). However, the true rate may depend on the discipline and even the journal and more recent studies in individual biomedical journals do show rates ranging from as low as just over 1% in one journal to as high as 28% in another (see Kim, Bae, Hahm, & Cho, 2014) The current situation has become serious enough that biomedical journal editors consider redundancy and duplication one of the top areas of concern (Wager, Fiack, Graf, Robinson, & Rowlands, 2009) and it is the second highest cause for articles to be retracted from the literature between the years 2007 and 2011 (Fang, Steen, & Casadevall, 2012).

But is it misconduct in the context of non-academic science writing? I’m not sure, but I think it’s fair to say self-plagiarism counts as lazy writing. Whatever the ethics, let’s examine an essay by Pinker that Edge published sometime before January 10th, 2017, and match it up against Chapter 2 of Enlightenment Now. I’ve checked the footnotes and preface of the latter, and failed to find any reference to that Edge essay, while the former does not say it’s excerpted from a forthcoming book. You’d have no idea one copy existed if you’d only read the other, so any matching passages count as self-plagiarism.

How many passages match? I’ll use the Edge essay as a base, and highlight exact duplicates in red, sections only present in Enlightenment Now in green, paraphrases in yellow, and essay-only text in black.

The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is inevitably dissipated as heat flows from the warmer to the cooler body. (As the musical team Flanders & Swann explained, “You can’t pass heat from the cooler to the hotter; Try it if you like but you far better notter.”) A cup of coffee, unless it is placed on a plugged-in hot plate, will cool down. When the coal feeding a steam engine is used up, the cooled-off steam on one side of the piston can no longer budge it because the warmed-up steam and air on the other side are pushing back just as hard.

Once it was appreciated that heat is not an invisible fluid but the energy in moving molecules, and that a difference in temperature between two bodies consists of a difference in the average speeds of those molecules, a more general, statistical version of the concept of entropy and the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system (in the original example involving heat, the possible speeds and positions of all the molecules in the two bodies). Of all these states, the ones that we find useful from a bird’s-eye view (such as one body being hotter than the other, which translates into the average speed of the molecules in one body being higher than the average speed in the other) make up a tiny sliver of the possibilities, while the disorderly or useless states (the ones without a temperature difference, in which the average speeds in the two bodies are the same) make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness —not because nature strives for disorder, but because there are so many more ways of being disorderly than of being orderly. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do. [Enlightenment Now adds five sentences here.]


I could carry on (and I have!), demonstrating that almost all of that essay reappears in Pinker’s book. Maybe half of the reappearance is verbatim. I figure he copy-pasted the contents of his January 2017 essay into the manuscript for his 2018 book, and expanded it to fill an entire chapter. Whether I’m right or wrong, I think the similarities make a damning case for intellectual laziness. It also sets a bad precedent: if Pinker can get this lazy with his non-academic writing, how lazy can he be with his academic work? I haven’t looked into that, and I’m curious if anyone else has.

Steven Pinker, “Historian”

It’s funny, if you look back over my blog posts on Steven Pinker, you’ll notice a progression.

Ignoring social justice concerns in biomedical research led to things like the Tuskegee experiment. The scientific establishment has since tried to correct that by making it a critical part. Pinker would be wise to study the history a bit more carefully, here.


Setting aside your ignorance of the evidence for undercounting in the FBI’s data, you can look at your own graph and see a decline?

When Sargon of Akkad tried and failed to discuss sexual assault statistics, he at least had the excuse of never having gotten a higher education, never studying up on the social sciences. I wonder what Steven Pinker’s excuse is.


Ooooh, I get it. This essay is just an excuse for Pinker to whine about progressives who want to improve other people’s lives. He thought he could hide his complaints behind science, to make them look more digestible to himself and others, but in reality just demonstrated he understands physics worse than most creationists. What a crank.

You’ll also notice a bit of a pattern, too, one that apparently carries on into Pinker’s book about the Enlightenment.

It is curious, then, to find Pinker breezily insisting that Enlightenment thinkers used reason to repudiate a belief in an anthropomorphic God and sought a “secular foundation for morality.” Locke clearly represents the opposite impulse (leaving aside the question of whether anyone in period believed in a strictly anthropomorphic deity).

So, too, Kant. While the Prussian philosopher certainly had little use for the traditional arguments for God’s existence – neither did the exceptionally pious Blaise Pascal, if it comes to that – this was because Kant regarded them as stretching reason beyond its proper limits. Nevertheless, practical reason requires belief in God, immortality and a post-mortem existence that offers some recompense for injustices suffered in the present world.

That’s from Peter Harrison, a professional historian. Even I was aware of this, though I am guilty of a lie of omission. I’ve brought up the “Cult of Reason” before, a pseudo-cult set up during the French Revolution that sought to tear down religion and instead worship logic and reason. What I didn’t mention was that it didn’t last long; Robespierre soon announced his “Cult of the Supreme Being,” which promoted Deism as the official religion of France, and had the leaders of the Cult of Reason put to death. Robespierre himself was executed shortly thereafter, for sounding too much like a dictator, and after a half-hearted attempt at democracy France finally settled on Napoleon Bonaparte, a dictator everyone could get behind. The shift to reason and objectivity I was hinting at back then was more gradual than I implied.

If we go back to the beginning of the scientific revolution – which Pinker routinely conflates with the Enlightenment – we find the seminal figure Francis Bacon observing that “the human intellect left to its own course is not to be trusted.” Following in his wake, leading experimentalists of the seventeenth century explicitly distinguished what they were doing from rational speculation, which they regarded as the primary source of error in the natural sciences.

In the next century, David Hume, prominent in the Scottish Enlightenment, famously observed that “reason alone can never produce any action … Reason is, and ought only to be the slave of the passions.” And the most celebrated work of Immanuel Kant, whom Pinker rightly regards as emblematic of the Enlightenment, is the Critique of Pure Reason. The clue is in the title.

Reason does figure centrally in discussions of the period, but primarily as an object of critique. Establishing what it was, and its intrinsic limits, was the main game. […]

To return to the general point, contra Pinker, many Enlightenment figures were not interested in undermining traditional religious ideas – God, the immortal soul, morality, the compatibility of faith and reason – but rather in providing them with a more secure foundation. Few would recognise his tendentious alignment of science with reason, his prioritization of scientific over all other forms of knowledge, and his positing of an opposition between science and religion.

I’m just skimming Harrison’s treatment (the rest of the article is worth a detour), but it really helps underscore how badly Pinker wants to re-write history. Here’s something the man himself committed to electrons:

More insidious than the ferreting out of ever more cryptic forms of racism and sexism is a demonization campaign that impugns science (together with the rest of the Enlightenment) for crimes that are as old as civilization, including racism, slavery, conquest, and genocide. […]

“Scientific racism,” the theory that races fall into a hierarchy of mental sophistication with Northern Europeans at the top, is a prime example. It was popular in the decades flanking the turn of the 20th century, apparently supported by craniometry and mental testing, before being discredited in the middle of the 20th century by better science and by the horrors of Nazism. Yet to pin ideological racism on science, in particular on the theory of evolution, is bad intellectual history. Racist beliefs have been omnipresent across history and regions of the world. Slavery has been practiced by every major civilization and was commonly rationalized by the belief that enslaved peoples were inherently suited to servitude, often by God’s design. Statements from ancient Greek and medieval Arab writers about the biological inferiority of Africans would curdle your blood, and Cicero’s opinion of Britons was not much more charitable.

More to the point, the intellectualized racism that infected the West in the 19th century was the brainchild not of science but of the humanities: history, philology, classics, and mythology.

As I’ve touched on, this is so far from reality it’s practically creationist. Let’s ignore the implication that no-one used science to promote racism past the 1950s, which ain’t so, and dig up more data points on the dark side of the Enlightenment.

… the Scottish philosopher David Hume would write: “I am apt to suspect the Negroes, and in general all other species of men to be naturally inferior to the whites. There never was any civilized nation of any other complection than white, nor even any individual eminent in action or speculation.” […]

Another two decades on, Immanuel Kant, considered by many to be the greatest philosopher of the modern period, would manage to let slip what is surely the greatest non-sequitur in the history of philosophy: describing a report of something seemingly intelligent that had once been said by an African, Kant dismisses it on the grounds that “this fellow was quite black from head to toe, a clear proof that what he said was stupid.” […]

Scholars have been aware for a long time of the curious paradox of Enlightenment thought, that the supposedly universal aspiration to liberty, equality and fraternity in fact only operated within a very circumscribed universe. Equality was only ever conceived as equality among people presumed in advance to be equal, and if some person or group fell by definition outside of the circle of equality, then it was no failure to live up to this political ideal to treat them as unequal.

It would take explicitly counter-Enlightenment thinkers in the 18th century, such as Johann Gottfried Herder, to formulate anti-racist views of human diversity. In response to Kant and other contemporaries who were positively obsessed with finding a scientific explanation for the causes of black skin, Herder pointed out that there is nothing inherently more in need of explanation here than in the case of white skin: it is an analytic mistake to presume that whiteness amounts to the default setting, so to speak, of the human species.


Indeed, connections between science and the slave trade ran deep during [Robert] Boyle’s time—all the way into the account books. Royal Society accounts for the 1680s and 1690s shows semi-regular dividends paid out on the Society’s holdings in Royal African Company stock. £21 here, £21 there, once a year, once every two years. Along with membership dues and the occasional book sale, these dividends supported the Royal Society during its early years.

Boyle’s early “experiments” with the inheritance of skin color set an agenda that the scientists of the Royal Society pursued through the decades. They debated the origins of blackness with rough disregard for the humanity of enslaved persons even as they used the Royal African’s Company’s dividends to build up the Royal Society as an institution. When it came to understanding skin color, Boyle used his wealth and position to help construct a science of race that, for centuries, was used to justify the enslavement of Africans and their descendants globally.


This timeline gives an overview of scientific racism throughout the world, placing the Eugenics Record Office within a broader historical framework extending from Enlightenment-Era Europe to present-day social thought.

All this is obvious via a glance at a history book, something which Pinker is apparently allergic to. I’ll give Harrison the final word:

If we put into the practice the counting and gathering of data that Pinker so enthusiastically recommends and apply them to his own book, the picture is revealing. Locke receives a meagre two mentions in passing. Voltaire clocks up a modest six references with Spinoza coming in at a dozen. Kant does best of all, with a grand total of twenty-five (including references). Astonishingly, Diderot rates only two mentions (again in passing) and D’Alembert does not trouble the scorers. Most of these mentions occur in long lists. Pinker refers to himself over 180 times. […]

… if Enlightenment Now is a model of what Pinker’s advice to humanities scholars looks like when put into practice, I’m happy to keep ignoring it.

Steven Pinker, Crank

At least he doesn’t start out that way.

The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. … Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

For a non-physicist, it’s a decent formulation. It needs more of a description of entropy, though. In computer science, we think of it as how much information is or could be packed into a space. If I have a typical six-sided die, I can send you a message by giving it to you in a specific configuration. If I just ask you to look at a specific side, there are only six unique states to send a message with; if I also ask you to look at the orientation of the other sides, I can bump that up to twenty-four. I can’t send any more information unless I increase the number of states, or get to send multiple dice or the same die multiple times. Compression is just transforming a low-entropy encoding into a high-entropy one, saving some time or space.
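
To put some numbers on that die example, here’s a quick Python sketch; the six- and twenty-four-state counts come straight from the paragraph above.

import math

# One die, reading only the top face: six distinguishable states.
print(math.log2(6))   # ≈ 2.58 bits per message

# Same die, but orientation counts too: 6 faces * 4 rotations = 24 states.
print(math.log2(24))  # ≈ 4.58 bits per message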

The physics version is closely related: how many ways can I shuffle the microscopic details of a system while preserving the macroscopic ones? If you’re looking at something small like a computer circuit, the answer is “not many.” The finely-ordered detail can’t be tweaked very much, and still result in a functional circuit. In contrast, the air above the circuit can be mixed up quite a bit and yet still look and act the same. Should a microscopic fluctuation happen, it’ll be far more harmful to the circuit than the air, so when they do inevitably happen the result is a gradual breaking up of the circuit. Its molecules will be slowly stripped off and brought into equilibrium with the air surrounding it, which also changes but less so.
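
All that counting has a compact form, Boltzmann’s entropy formula, where W is the number of microstates compatible with the macrostate and k_B is Boltzmann’s constant:

S = k_B \ln W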

Still with me? Good, because Pinker starts to drift off…

The Second Law of Thermodynamics is acknowledged in everyday life, in sayings such as “Ashes to ashes,” “Things fall apart,” “Rust never sleeps,” “Shit happens,” “You can’t unscramble an egg,” “What can go wrong will go wrong,” and (from the Texas lawmaker Sam Rayburn), “Any jackass can kick down a barn, but it takes a carpenter to build one.”

That’s not really the Second Law, though. Pinker himself acknowledges that it only applies to closed systems, but anyone who’s looked up at the Sun can attest that the Earth isn’t one. This comes up all the time in Creationist circles:

There is a mathematical correlation between entropy increase and an increase in disorder. The overall entropy of an isolated system can never decrease. However, the entropy of some parts of the system can spontaneously decrease at the expense of an even greater increase of other parts of the system. When heat flows spontaneously from a hot part of a system to a colder part of the system, the entropy of the hot area spontaneously decreases!

It’s bad enough that Pinker invokes a creationist-level understanding of physics, but he actually manages to make them look intelligent with:

To start with, the Second Law implies that misfortune may be no one’s fault. … Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail.

There is no “wrong” ordering of molecules in the air or a computer chip, only orderings that aren’t what human beings want. “Misfortune” is a human construct superimposed on the universe, to model the goal we strive for. It has no place in a physics classroom, and is completely unrelated to thermodynamics.

Poverty, too, needs no explanation. In a world governed by entropy and evolution, it is the default state of humankind. Matter does not just arrange itself into shelter or clothing, and living things do everything they can not to become our food. What needs to be explained is wealth. Yet most discussions of poverty consist of arguments about whom to blame for it.

Poverty is the inability to fulfill our basic needs. Is Pinker saying that, by default, human beings are incapable of meeting their basic needs, like food and shelter? Then he is effectively arguing we should have gone extinct and been replaced by a species which has no problems meeting its basic needs, like spiders or bacteria or ants. This of course ignores that economies are not closed systems, as the Sun helpfully dumps energy on us. Innovation increases efficiency and therefore entropy, which means that people who can’t gather their needs efficiently given what they have are living in a low-entropy state.

But I thought entropy only increased over time, according to the Second Law? By Pinker’s own logic, poverty should not be the default but the past, a state that we evolved out of!

More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff.

Ooooh, I get it. This essay is just an excuse for Pinker to whine about progressives who want to improve other people’s lives. He thought he could hide his complaints behind science, to make them look more digestible to himself and others, but in reality just demonstrated he understands physics worse than most creationists.

What a crank. And sadly, that seems to be the norm in Evolutionary Psychology.

No, that is not a Sokal hoax; that is a legitimate paper published by two leading Evolutionary Psychologists! There must be something about the field that breeds smug ignorance…

Steven Pinker and his Portable Goalposts

PZ Myers seems to have pissed off quite a few people, this time for taking Steven Pinker to task. His take is worth reading in full, but I’d like to add another angle. In the original interview, there’s a very telling passage:

Belluz: But as you mentioned, there’s been an uptick in war deaths driven by the staggeringly violent ongoing conflict in Syria. Does that not affect your thesis?

Pinker: No, it doesn’t affect the thesis because the rate of death in war is about 1.4 per 100,000 per year. That’s higher than it was at the low point in 2010. But it’s still a fraction of what it was in earlier years.

See the problem here? Pinker’s hypothesis is that over the span of centuries, violence will decrease. The recent spike in deaths may be the start of a reversal that proves Pinker wrong. But because his hypothesis covers such a wide timespan, we’re going to need fifty or more years’ worth of data to challenge it.

Texas Sharpshooter

Quick Note

I’m trying something new! This blog post is available in two places, both here and on a Jupyter notebook. Over there, you can tweak and execute my source code, using it as a sandbox for your own explorations. Over here, it’s just a boring ol’ webpage without any fancy features, albeit one that’s easier to read on the go. Choose your own adventure!

Oh also, CONTENT WARNING: I’ll briefly be discussing sexual assault statistics from the USA at the start, in an abstract sense.

Introduction

[5:08] Now this might seem pedantic to those not interested in athletics, but in the athletic world one percent is absolutely massive. Just take for example the 2016 Olympics. The difference between first and second place in the men’s 100-meter sprint was 0.8%.

I’ve covered this argument from Rationality Rules before, but time has made me realise my original presentation had a problem.

His name is Steven Pinker.


Forcible Rape, USA, Police Reports.

He looks at that graph, and sees a decline in violence. I look at that chart, and see an increase in violence. How can two people look at the same data, and come to contradictory conclusions?

Simple: we’ve got at least two separate mental models.

Finding the maximal likelihood, please wait ... done.
Running an MCMC sampler, please wait ... done.
Charting the results, please wait ...

The same chart as before, with three models overlaid.

All Pinker cares about is short-term trends here, as he’s focused on “The Great Decline” in crime since the 1990s. His mental model looks at the general trend over the last two decades of data, and discards the rest of the datapoints. It’s the model I’ve put in red.

I used two separate models in my blog post. The first is quite crude: is the last datapoint better than the first? This model is quite intuitive, as it amounts to “leave the place in better shape than when you arrived,” and it’s dead easy to calculate (see the sketch below). It discards all but two datapoints, though, which is worse than Pinker’s model. I’ve put this one in green.
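
As a sketch of just how cheap that green model is (the rates here are invented; the real data lives in the notebook):

import numpy as np

# Invented yearly rates, standing in for the real police-report data.
rates = np.array([24.5, 26.1, 27.0, 26.8, 25.9])

# "Leave the place in better shape than when you arrived":
# compare the last datapoint against the first, discard the rest.
print(rates[-1] < rates[0])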

The best model, in my opinion, wouldn’t discard any datapoints. It would also incorporate as much uncertainty as possible about the system. Unsurprisingly, given my blogging history, I consider Bayesian statistics to be the best way to represent uncertainty. A linear model is the best choice for general trends, so I went with a three-parameter likelihood and prior:

p( x,y | m,b,\log(\sigma) ) = e^{ -\frac 1 2 \big(\frac{y-k}{\sigma}\big)^2 }(\sigma \sqrt{2\pi})^{-1}, ~ k = x \cdot m + b

p( m,b,\log(\sigma) ) = \frac 1 \sigma (1 + m^2)^{-\frac 3 2}

This third model encompasses all possible trendlines you could draw on the graph, but it doesn’t hold them all to be equally likely. Since time is short, I used an MCMC sampler to randomly sample the resulting probability distribution, and charted that sample in blue. As you can imagine this requires a lot more calculation than the second model, but I can’t think of anything superior.
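
To make the blue model concrete, here’s a minimal sketch assuming the emcee sampler and invented data; the real code and the real rates live in the notebook.

import numpy as np
import emcee

# Invented stand-in for the police-report rates.
rng = np.random.default_rng(0)
x = np.arange(1973, 2013, dtype=float)
y = rng.normal(30.0, 5.0, x.size)

def log_prob(theta):
    m, b, log_sigma = theta
    sigma = np.exp(log_sigma)
    lp = -log_sigma - 1.5 * np.log1p(m ** 2)  # the prior above, in logs
    k = m * x + b                             # the trendline
    # Gaussian log-likelihood around the trendline (constant dropped).
    ll = -0.5 * np.sum(((y - k) / sigma) ** 2) - y.size * np.log(sigma)
    return lp + ll

ndim, nwalkers = 3, 32
p0 = rng.normal([0.0, 30.0, 1.0], 1e-3, (nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
trends = sampler.get_chain(discard=500, flat=True)
print(trends.mean(axis=0))  # posterior means of (m, b, log(sigma))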

Which model is best depends on the context. If you were arguing just over the rate of police-reported sexual assault from 1992 to 2012, Pinker’s model would be pretty good if incomplete. However, his whole schtick is that long-term trends show a decrease in violence, and when it comes to sexual violence in particular he’s the only one who dares to talk about this. He’s not being self-consistent, which is easier to see when you make your implicit mental models explicit.

Pointing at Variance Isn’t Enough

Let’s return to Rationality Rules’ latest transphobic video. In the citations, he explicitly references the men’s 100m sprint at the 2016 Olympics. That’s a terribly narrow window to view athletic performance through, so I tracked down the race times of all eight finalists on the IAAF’s website and tossed them into a spreadsheet.


Rio de Janeiro Olympic Games, finals
Athlete  Result  Delta
     bolt    9.81   0.00
   gatlin    9.89   0.08
de grasse    9.91   0.10
    blake    9.93   0.12
  simbine    9.94   0.13
    meite    9.96   0.15
   vicaut   10.04   0.23
  bromell   10.06   0.25

Here, we see exactly what Rationality Rules sees: Usain Bolt, the current world record holder, earned himself another Olympic gold medal in the 100m sprint. First and third place are separated by a tenth of a second, and the slowest person in the finals was a mere quarter of a second behind the fastest. That’s a small fraction of the time it takes to complete the event.

Race times in 2016, sorted by fastest time
Name             Min time         Mean             Median           Personal max-min
-----------------------------------------------------------------------------------------------------
gatlin                        9.8         9.95         9.94         0.39
bolt                         9.81         9.98        10.01         0.34
bromell                      9.84        10.00        10.01         0.30
vicaut                       9.86        10.01        10.02         0.33
simbine                      9.89        10.10        10.08         0.43
de grasse                    9.91        10.07        10.04         0.41
blake                        9.93        10.04         9.98         0.33
meite                        9.95        10.10        10.05         0.44

Here, we see what I see: the person who won Olympic gold that year didn’t have the fastest time. That honour goes to Justin Gatlin, who squeaked ahead of Bolt by a hundredth of a second.

Come to think of it, isn’t the fastest time a poor judge of how good an athlete is? Picture two sprinters, one with the faster average time and the other with the faster minimum time: the first athlete will win more races than the second. By that metric, Gatlin’s lead grows to three hundredths of a second.

The mean, alas, is easily tugged around by outliers. If someone had an exceptionally good or bad race, they could easily shift their overall mean a decent ways from where the mean of every other result lies. The median is a lot more resistant to the extremes, and thus a fairer measure of overall performance. By that metric, Bolt is now tied for third with Trayvon Bromell.

We could also judge how good an athlete is by how consistent they were in the given calendar year. By this metric, Bolt falls into fourth place behind Bromell, Jimmy Vicaut, and Yohan Blake. Even if you don’t agree with this metric, notice how everyone’s race times in 2016 vary between three and four tenths of a second. It’s hard to argue that a performance edge of a tenth of a second matters when even at the elite level sprinters’ times will vary by significantly more.
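
If you want to poke at those metrics yourself, here’s a sketch; the race times are invented to reproduce Bolt’s summary statistics from the table above, as the real IAAF data lives in the notebook.

import numpy as np

# Invented times matching Bolt's 2016 summary stats in the table above.
bolt = np.array([9.81, 9.88, 9.96, 10.01, 10.01, 10.06, 10.15])

print(bolt.min())                         # fastest time: 9.81
print(round(bolt.mean(), 2))              # mean: 9.98
print(np.median(bolt))                    # median: 10.01
print(round(bolt.max() - bolt.min(), 2))  # consistency (max - min): 0.34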

But let’s put on our Steven Pinker glasses. We don’t judge races by medians, we go by the fastest time. We don’t award records for the lowest average or most consistent performance, we go by the fastest time. Yes, Bolt didn’t have the fastest 100m time in 2016, but now we’re down to hundredths of a second; if anything, we’ve dug up more evidence that itty-bitty performance differences matter. If I’d just left things at that last paragraph, which is about as far as I progressed the argument last time, a Steven Pinker would likely have walked away even more convinced that Rationality Rules got it right.

I don’t have to leave things there, though. This time around, I’ll make my mental model as explicit as possible. Hopefully by fully arguing the case, instead of dumping out data and hoping you and I share the same mental model, I might manage to sway even a diehard skeptic. To further seal the deal, the Jupyter notebook will allow you to audit my thinking or even create your own model. No need to take my word.

I’m laying everything out in clear sight. I hope you’ll give it all a look before dismissing me.

Model Behaviour

Our choice of model will be guided by the assumptions we make about how athletes perform in the 100 metre sprint. If we’re going to do this properly, we have to lay out those assumptions as clearly as possible.

  1. The Best Athlete Is the One Who Wins the Most. Our first problem is to decide what we mean by “best,” when it comes to the 100 metre sprint. Rather than use any metric like the lowest possible time or the best overall performance, I’m going to settle on something I think we’ll both agree to: the athlete who wins the most races is the best. We’ll be pitting our models against each other as many times as possible via virtual races, and see who comes out on top.
  2. Pobody’s Nerfect. There is always going to be a spanner in the works. Maybe one athlete has a touch of the flu, maybe another is going through a bad breakup, maybe a third got a rock in their shoe. Even if we can control for all that, human beings are complex machines with many moving parts. Our performance will vary. This means we can’t use point estimates for our model, like the minimum or median race time, and instead must use a continuous statistical distribution. This assumption might seem like begging the question, as variance is central to my counter-argument, but note that I’m only asserting there’s some variance. I’m not saying how much variance there is. It could easily be so small as to be inconsequential, in the process creating strong evidence that Rationality Rules was right.
  3. Physics Always Wins. No human being can run at the speed of light. For that matter, nobody is going to break the sound barrier during the 100 metre sprint. This assumption places a hard constraint on our model, that there is a minimum time anyone could run the 100m. It rules out a number of potential candidates, like the Gaussian distribution, which allow negative times.
  4. It’s Easier To Move Slow Than To Move Fast. This is kind of related to the last one, but it’s worth stating explicitly. Kinetic energy is proportional to the square of the velocity (see the formula just below this list), so building up speed requires dumping an ever-increasing amount of energy into the system. Thus our model should have a bias towards slower times, giving it a lopsided look.
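
In symbols, that last assumption leans on the familiar kinetic energy formula, quadratic in velocity:

E_k = \frac 1 2 m v^2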

Based on all the above, I propose the Gamma distribution would make a suitable model.

\Gamma(x | \alpha, \beta ) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x}

(Be careful not to confuse the distribution with the function. I may need the Gamma function to calculate the Gamma distribution, but the Gamma function isn’t a valid probability distribution.)

Three versions of the Gamma Distribution.

It’s a remarkably flexible distribution, capable of duplicating both the Exponential and Gaussian distributions. That’s handy, as if one of our above assumptions is wrong the fitting process could still come up with a good fit. Note that the Gamma distribution has a finite bound at zero, which is equivalent to stating that negative values are impossible. The variance can be expanded or contracted arbitrarily, so it isn’t implicitly supporting my arguments. Best of all, we’re not restricted to anchor the distribution at zero. With a little tweak …

\Gamma(x | \alpha, \beta, b ) = \frac{\beta^\alpha}{\Gamma(\alpha)} \hat x^{\alpha-1} e^{-\beta \hat x}, ~ \hat x = x - b

… we can shift that zero mark wherever we wish. The b parameter sets the minimum value our model predicts, while α controls the underlying shape and β controls the scale or rate associated with this distribution. α ≤ 1 nets you something Exponential-like, and large values of α lead to something very Gaussian. Conveniently for me, SciPy already supports this three-parameter tweak.
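
Here’s a small sketch of that parameterization; the α, β, and b values are invented, though in the neighbourhood of the fits below.

import numpy as np
from scipy import stats

# SciPy's shifted Gamma: gamma(a=alpha, loc=b, scale=1/beta).
alpha, beta, b = 0.31, 1.72, 9.80  # illustrative values only

dist = stats.gamma(a=alpha, loc=b, scale=1.0 / beta)
times = np.linspace(9.82, 10.40, 5)
print(dist.pdf(times))                    # density at a few plausible race times
print(dist.rvs(size=3, random_state=42))  # three simulated race times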

My intuition is that the Gamma distribution on the left, with α > 1 but not too big, is the best model for athlete performance. That implies an athlete’s performance will hover around a specific value, and while they’re capable of faster times those are more difficult to pull off. The Exponential distribution, with α < 1, is most favourable to Rationality Rules, as it asserts the race time we’re most likely to observe is also the fastest time an athlete can do. We’ll never actually see that time, but what we observe will cluster around that minimum.

Running the Numbers

Enough chatter, let’s fit some models! For this one, my prior will be

p( \alpha, \beta, b ) = \begin{cases} 0, & \alpha \le 0 \\ 0, & \beta \le 0 \\ 0, & b \le 0 \\ 1, & \text{otherwise} \end{cases},

which is pretty light and only exists to filter out garbage values.
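
In code, that prior plus the shifted-Gamma likelihood boils down to something like this sketch; the notebook holds the real fitting and sampling machinery.

import numpy as np
from scipy import stats

def log_posterior(theta, times):
    alpha, beta, b = theta
    # The prior above: zero density for garbage values, flat otherwise.
    if alpha <= 0 or beta <= 0 or b <= 0:
        return -np.inf
    # Shifted-Gamma log-likelihood; logpdf returns -inf for times below b,
    # and the isfinite check guards against the boundary blowing up.
    ll = stats.gamma.logpdf(times, a=alpha, loc=b, scale=1.0 / beta).sum()
    return ll if np.isfinite(ll) else -np.inf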

Generating some models for 2016 race times (a few seconds each) ...
# name          	α               	β               	b               
gatlin          	0.288 (+0.112 -0.075)	1.973 (+0.765 -0.511)	9.798 (+0.002 -0.016)
bolt            	0.310 (+0.107 -0.083)	1.723 (+0.596 -0.459)	9.802 (+0.008 -0.025)
bromell         	0.339 (+0.115 -0.082)	1.677 (+0.570 -0.404)	9.836 (+0.004 -0.032)
vicaut          	0.332 (+0.066 -0.084)	1.576 (+0.315 -0.400)	9.856 (+0.004 -0.013)
simbine         	0.401 (+0.077 -0.068)	1.327 (+0.256 -0.226)	9.887 (+0.003 -0.018)
de grasse       	0.357 (+0.073 -0.082)	1.340 (+0.274 -0.307)	9.907 (+0.003 -0.022)
blake           	0.289 (+0.103 -0.085)	1.223 (+0.437 -0.361)	9.929 (+0.001 -0.008)
meite           	0.328 (+0.089 -0.067)	1.090 (+0.295 -0.222)	9.949 (+0.000 -0.003)
... done.

This text can’t change based on the results of the code, so this is only a guess, but I’m pretty sure you’re seeing a lot of α values less than one. That really had me worried when I first ran this model, as I was already conceding ground to Rationality Rules by focusing only on the 100 metre sprint, where even I think that physiology plays a significant role. I did a few trial runs with a prior that forced α > 1, but the resulting models would hug that threshold as tightly as possible. Comparing likelihoods, the α < 1 versions were always more likely than the α > 1 ones.

The fitting process was telling me my intuition was wrong, and the best model here is the one that most favours Rationality Rules. Look at the b values, too. There’s no way I could have sorted the models based on that parameter before I fit them; instead, I sorted them by each athlete’s minimum time. Sure enough, the model is hugging the fastest time each athlete posted that year, rather than a hypothetical minimum time they could achieve.


100 models of blake's 2016 race times.

Charting some of the models in the posterior drives this home. I’ve looked at a few by tweaking the “player” variable, as well as the output of multiple sample runs, and they’re all dominated by Exponential distributions.

Dang, we’ve tilted the playing field quite a ways in Rationality Rules’ favour.

Still, let’s simulate some races. For each race, I’ll pick a random trio of parameters from each model’s posterior and feed that into SciPy’s random number routines to generate a race time for each sprinter. Fastest time wins, and we tally up those wins to estimate the odds of any one sprinter coming in first.

Before running those simulations, though, we should make some predictions. Rationality Rules’ view is that (emphasis mine) …

[9:18] You see, I absolutely understand why we have and still do categorize sports based upon sex, as it’s simply the case that the vast majority of males have significant athletic advantages over females, but strictly speaking it’s not due to their sex. It’s due to factors that heavily correlate with their sex, such as height, width, heart size, lung size, bone density, muscle mass, muscle fiber type, hemoglobin, and so on. Or, in other words, sports are not segregated due to chromosomes, they’re segregated due to morphology.

[16:48] Which is to say that the attributes granted from male puberty that play a vital role in explosive events – such as height, width, limb length, and fast twitch muscle fibers – have not been shown to be sufficiently mitigated by HRT in trans women.

[19:07] In some events – such as long-distance running, in which hemoglobin and slow-twitch muscle fibers are vital – I think there’s a strong argument to say no, [transgender women who transitioned after puberty] don’t have an unfair advantage, as the primary attributes are sufficiently mitigated. But in most events, and especially those in which height, width, hip size, limb length, muscle mass, and muscle fiber type are the primary attributes – such as weightlifting, sprinting, hammer throw, javelin, netball, boxing, karate, basketball, rugby, judo, rowing, hockey, and many more – my answer is yes, most do have an unfair advantage.

… human morphology due to puberty is the primary determinant of race performance. Since our bodies change little after puberty, that implies your race performance should be both constant and consistent. The most extreme version of this argument states that the fastest person should win 100% of the time. I doubt Rationality Rules holds that view, but I am pretty confident he’d place the odds of the fastest person winning quite high.

The opposite view is that the winner is due to chance. Since there are eight athletes competing here, each would have a 12.5% chance of winning. I certainly don’t hold that view, but I do argue that chance plays a significant role in who wins. I thus want the odds of the fastest person winning to be somewhere above 12.5%, but not too much higher.
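
Here’s a stripped-down version of the simulation, seeded with the posterior means from the table above; the notebook’s version draws a fresh (α, β, b) triple from each athlete’s posterior every race, so its numbers will differ a little.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

# Posterior means from the fits above, as (alpha, beta, b).
models = {
    "gatlin":    (0.288, 1.973, 9.798),
    "bolt":      (0.310, 1.723, 9.802),
    "bromell":   (0.339, 1.677, 9.836),
    "vicaut":    (0.332, 1.576, 9.856),
    "simbine":   (0.401, 1.327, 9.887),
    "de grasse": (0.357, 1.340, 9.907),
    "blake":     (0.289, 1.223, 9.929),
    "meite":     (0.328, 1.090, 9.949),
}

wins = dict.fromkeys(models, 0)
for _ in range(15000):
    # Draw one race time per sprinter; fastest time takes the race.
    times = {name: stats.gamma.rvs(a, loc=b, scale=1.0 / beta, random_state=rng)
             for name, (a, beta, b) in models.items()}
    wins[min(times, key=times.get)] += 1

for name, count in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{name:<10} {count:>5} ({100 * count / 15000:.2f}%)")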

Simulating 15000 races, please wait ... done.

Number of wins during simulation
--------------------------------
gatlin                       5174 (34.49%)
bolt                         4611 (30.74%)
bromell                      2286 (15.24%)
vicaut                       1491 (9.94%)
simbine                       530 (3.53%)
de grasse                     513 (3.42%)
blake                         278 (1.85%)
meite                         117 (0.78%)

Whew! The fastest 100 metre sprinter of 2016 only had a one in three chance of winning Olympic gold. Of the eight athletes, three had odds better than chance of winning. Even with the field tilted in favour of Rationality Rules, this strongly hints that other factors are more determinative of performance than fixed physiology.

But let’s put our Steven Pinker glasses back on for a moment. Yes, the odds of the fastest 100 metre sprinter winning the 2016 Olympics are surprisingly low, but look at the spread between first and last place. What’s on my screen tells me that Gatlin is 40-50 times more likely to win Olympic gold than Ben Youssef Meite, which is a pretty substantial gap. Maybe we can rescue Rationality Rules?

In order for Meite to win, though, he didn’t just have to beat Gatlin. He had to also beat six other sprinters. If p_M represents the geometric-mean odds of Meite beating one sprinter, then his odds of beating all seven are p_M^7. The same rationale applies to Gatlin, of course, but because his geometric-mean odds of beating another racer are higher than p_M, raising them to the seventh power leaves a much greater number. With a little math, we can use the number of wins above to estimate how well the first-place finisher would fare against the last-place finisher in a one-on-one race.
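
Concretely, my estimate can be reconstructed like so: take the seventh root of each sprinter’s gold-medal odds to recover the geometric-mean odds of winning a single matchup, then normalize the pair. (This is a reconstruction; the real calculation is in the notebook.)

# Gold-medal odds for gatlin and meite from one simulation run.
p_gold_g, p_gold_m = 0.3449, 0.0078

# Seventh root: winning gold means beating seven other sprinters,
# so this recovers the geometric-mean odds of one head-to-head win.
p_g = p_gold_g ** (1 / 7)
p_m = p_gold_m ** (1 / 7)

# Normalize the pair to estimate the head-to-head win rate.
print(p_g / (p_g + p_m))  # ≈ 0.63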

In the above simulation, gatlin was 39.5 times more likely to win Olympic gold than meite.
But we estimate that if they were racing head-to-head, gatlin would win only 62.8% of the time.
 (For reference, their best race times in 2016 differed by 1.53%.)

For comparison, FiveThirtyEight gave roughly those odds for Hillary Clinton becoming the president of the USA in 2016. That’s not all that high, given how “massive” the difference is in their best race times that year.

This is just an estimate, though. Maybe if we pitted our models head-to-head, we’d get different results?

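A minimal sketch of the tournament, reusing the stand-in models from the first sketch:

```python
# Continues from the first sketch: assumes `models`, `sample_time`, and `rng`
# are already defined. Races every pair of athletes head to head 1875 times.
names = list(models)
n_sims = 1875

def head_to_head(a, b):
    """Fraction of simulated races in which athlete a beats athlete b."""
    beat = sum(sample_time(a) < sample_time(b) for _ in range(n_sims))
    return beat / n_sims

print(f"Wins when racing head to head ({n_sims} simulations each)")
print("-" * 46)
print("LOSER->   " + "".join(f"{n:>10}" for n in names[1:]))
for i, a in enumerate(names[:-1]):
    blanks = "          " * i   # blank cells at and below the diagonal
    row = "".join(f"{100 * head_to_head(a, b):>9.1f}%" for b in names[i + 1:])
    print(f"{a:<10}" + blanks + row)
```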
Wins when racing head to head (1875 simulations each)
----------------------------------------------
LOSER->       gatlin      bolt   bromell    vicaut   simbine de grasse     blake     meite
gatlin                   48.9%     52.1%     55.8%     56.4%     59.5%     63.5%     61.9%
bolt                               52.2%     57.9%     55.8%     57.9%     65.8%     60.2%
bromell                                      52.4%     55.3%     55.0%     65.2%     59.0%
vicaut                                                 51.7%     52.2%     59.8%     59.3%
simbine                                                          52.3%     57.7%     57.1%
de grasse                                                                  57.0%     54.7%
blake                                                                                47.2%
meite                                                                                     

The best winning percentage was 65.8% (therefore the worst losing percent was 34.2%).

Nope, it’s pretty much bang on! The columns of this chart represent the loser of the head-to-head, while the rows represent the winner. That number in the upper-right, then, represents the odds of Gatlin coming in first against Meite. When I run the numbers, I usually get a percentage that’s less than 5 percentage points off the earlier estimate. Since the odds of one person losing are the odds of the other person winning, you can flip around who won and lost by subtracting the odds from 100%. That explains why I only calculated fewer than half of the match-ups.

I don’t know what’s on your screen, but I typically get one or two match-ups that are below 50%. I’m again ordering the calculations by each athlete’s fastest time in 2016, so if an athlete’s win ratio were purely determined by that, every single value in this table would be at or above 50%. That’s usually the case, thanks to each model favouring the Exponential distribution, but sometimes the sprinter with the slower personal best still winds up with the better average time. As pointed out earlier, that translates into more wins for them.

Getting Physical

Even at this elite level, you can see the odds of someone winning a head-to-head race are not terribly high. A layperson can create that much bias in a coin toss, yet we still consider both outcomes of that toss to be equally likely.

This doesn’t really contradict Rationality Rules’ claim that fractions of a percent in performance matter, though. Each of these athletes differs in physiology, and while that may not have as much effect as we thought, it still has some effect. What we really need is a way to subtract out the effects due to morphology.

If you read that old blog post, you know what’s coming next.

[16:48] Which is to say that the attributes granted from male puberty that play a vital role in explosive events – such as height, width, limb length, and fast twitch muscle fibers – have not been shown to be sufficiently mitigated by HRT in trans women.

According to Rationality Rules, the physical traits that determine track performance are all set in place by puberty. Since puberty finishes around age 15, and human beings can easily live to 75, that implies those traits are fixed for most of our lifespan. In practice that’s not quite true, as (for instance) human beings lose a bit of height in old age, but here we’re only dealing with athletes in the prime of their careers. Every attribute Rationality Rules lists is effectively constant.

So to truly put RR’s claim to the test, we need to fit our model to different parts of the same athlete’s career, and compare those head-to-head results with the ones where we raced athletes against each other.

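A minimal sketch of how that table could be generated, assuming the race results sit in a hypothetical sprint_results.csv with “athlete,” “date,” and “time” columns:

```python
import pandas as pd

# A minimal sketch, assuming the results live in a CSV with one row per
# official race and (at least) "athlete", "date", and "time" columns.
# The file name and column names are assumptions for illustration.
races = pd.read_csv("sprint_results.csv", parse_dates=["date"])

summary = (
    races.groupby("athlete")["date"]
         .agg(**{"First Result": "min", "Latest Result": "max"})
         .reset_index()
         .rename(columns={"athlete": "Athlete"})
)
print(summary)
```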
     Athlete First Result Latest Result
0      blake   2005-07-13    2019-06-21
1       bolt   2007-07-18    2017-08-05
2    bromell   2012-04-06    2019-06-08
3  de grasse   2012-06-08    2019-06-20
4     gatlin   2000-05-13    2019-07-05
5      meite   2003-07-11    2018-06-16
6    simbine   2010-03-13    2019-06-20
7     vicaut   2008-07-05    2019-07-02

That dataset contains official IAAF times going back nearly two decades, in some cases, for those eight athletes. In the case of Bolt and Meite, it spans their entire sprinting careers.

Which athlete should we focus on? It’s tempting to go with Bolt, but he’s an outlier who broke the mathematical models used to predict sprint times. Gatlin would have been my second choice, but between his unusually long career and his history of doping, there’s a decent argument that he too is an outlier. Bromell seems free of any such issues, so I’ll go with him. Don’t agree? I made changing the athlete as simple as altering one variable, so you can pick whoever you like.

I’ll divide up these athletes’ careers by year, as their performance should be pretty constant over that timespan, and for this sport there are usually enough datapoints within a year to get a decent fit.

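A minimal sketch of the process, building on the stand-ins above; the moment-matched Gamma fit is a crude placeholder for the real fitting procedure:

```python
# Continues from the sketches above (`races` DataFrame, `rng`). Divide one
# athlete's career by year, skip years with fewer than three races, and fit
# a stand-in model to each year's results.
athlete = "bromell"                 # change this variable to test someone else

runs = races[races["athlete"] == athlete].copy()
runs["year"] = runs["date"].dt.year

yearly = {}
for year, grp in runs.groupby("year"):
    if len(grp) < 3:                # too few datapoints for a sensible fit
        continue
    t = grp["time"].to_numpy()
    b = t.min() - 0.01              # offset just under the year's best time
    excess = t - b
    mean, var = excess.mean(), excess.var()
    yearly[year] = (mean**2 / var, mean / var, b)   # moment-matched (alpha, beta, b)

def sample_year(year):
    """Draw one simulated race time from that year's model."""
    alpha, beta, b = yearly[year]
    return b + rng.gamma(alpha, 1.0 / beta)

# Race every pair of years head to head, exactly as we raced the athletes.
years, n_sims = sorted(yearly), 1875
for i, y1 in enumerate(years[:-1]):
    for y2 in years[i + 1:]:
        beat = sum(sample_year(y1) < sample_year(y2) for _ in range(n_sims))
        print(f"{y1} vs {y2}: {100 * beat / n_sims:.1f}% wins for {y1}")
```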
bromell vs. bromell, model building ...
year	α	β	b
2012	0.639 (+0.317 -0.219)	0.817 (+0.406 -0.280)	10.370 (+0.028 -0.415)
2013	0.662 (+0.157 -0.118)	1.090 (+0.258 -0.195)	9.970 (+0.018 -0.070)
2014	0.457 (+0.118 -0.070)	1.556 (+0.403 -0.238)	9.762 (+0.007 -0.035)
2015	0.312 (+0.069 -0.064)	2.082 (+0.459 -0.423)	9.758 (+0.002 -0.016)
2016	0.356 (+0.092 -0.104)	1.761 (+0.457 -0.513)	9.835 (+0.005 -0.037)
... done.

bromell vs. bromell, head to head (1875 simulations)
----------------------------------------------
LOSER->   2012   2013   2014   2015   2016
   2012         61.3%  67.4%  74.3%  71.0%
   2013                65.1%  70.7%  66.9%
   2014                       57.7%  48.7%
   2015                              40.2%
   2016                                   

The best winning percentage was 74.3% (therefore the worst losing percent was 25.7%).

Again, I have no idea what you’re seeing, but I’ve looked at a number of Bromell vs. Bromell runs, and every one I’ve done shows at least as much variation as, if not more than, runs that pit Bromell against other athletes. Bromell vs. Bromell shows even more variation in success than the coin-flip benchmark, giving us justification for saying Bromell has a significant advantage over Bromell.

I’ve also changed that variable myself, and seen the same pattern in other athletes. Worried that a lack of datapoints is causing the model to “fuzz out” and cover a wide range of values? I thought of that, and restricted the code to filter out years with fewer than three races. Honestly, I think that puts my conclusion on firmer ground.

Conclusion

Texas Sharpshooter Fallacy: Ignoring the difference while focusing on the similarities, thus coming to an inaccurate conclusion. Similar to the gambler’s fallacy, this is an example of inserting meaning into randomness.

Rationality Rules loves to point to sporting records and the outcomes of single races, as on the surface these seem to justify his assertion that differences in performance of fractions of a percent matter. In reality, he’s painting a bullseye around a very small subset of the data and ignoring the rest. When you include all the data, you find Rationality Rules has badly missed the mark. Physiology cannot be as determinative as Rationality Rules claims; other factors must be important enough to sometimes overrule it.

And, at long last, I can call bullshit on this (emphasis mine):

[17:50] It’s important to stress, by the way, that these are just my views. I’m not a biologist, physiologist, or statistician, though I have had people check this video who are.

Either Rationality Rules found a statistician who has no idea what variance is, which is like finding a computer scientist who doesn’t know Boolean logic, or he never actually consulted a statistician. Chalk up yet another lie in his column.

A Year-End Wrap Up

… You know, I’ve never actually done one? They feel a bit self-indulgent, but having looked at the data I think there’s an interesting pattern here. Tell me if you can spot it, based on the eleven posts that earned the most traffic in 2018:

[Read more…]

A Reminder About Sexual Assault

I think Garrett Epps nailed this.

The gendered subtext of this moment is, not to put too fine a point on it, war—war to the knife—over the future of women’s autonomy in American society. Shall women control their own reproduction, their health care, their contraception, their legal protection at work against discrimination and harassment, or shall we move backward to the chimera of past American greatness, when the role of women was—supposedly for biological reasons—subordinate to that of men?

That theme became apparent even before the 2016 election, when candidate Donald Trump promised to pick judges who would “automatically” overturn Roe v. Wade. The candidate was by his own admission a serial sexual harasser. On live national television, he then stalked, insulted, and physically menaced his female opponent—and he said, in an unguarded moment, that in his post-Roe future, women who choose abortion will face “some form of punishment.”

In context, Trump promised to restore the old system of dominion—by lawmakers, husbands, pastors, institutions, and judges—over women’s reproduction.

And as Epps points out, the subtext has now become text with the allegations of sexual assault by Brett Kavanaugh. There are plenty of other reasons to deny Kavanaugh a Supreme Court seat, mind you, but the Republican Party has descended so low that corruption and a dismissal of human rights mean nothing when they implicate their own (but everything when they implicate their opponents). Even Senator Susan Collins, considered to be on the liberal side of the Party, still ties herself in knots to defend Kavanaugh. These allegations of sexual assault might have been the last straw, though.

Of course, now that sexual assault is back in the news, all the old apologetics are being vomited up. “Why didn’t she speak up?” “Boys will be boys.” “You’re ruining his life!” “There’s no evidence.” “This can’t be a common thing.” “Just trust the system.” It’s all very tired, and has been written about countless times before.

For instance, here’s a sampling of my own writing:

Evidence-Based Feminism 2: Sexual assault and rape culture

Debunking Some Skeptic Myths About Sexual Assault

Index Post: Rape Myth Acceptance

Christina Hoff Sommers: Science Denialist?

A Statistical Analysis of a Sexual Assault Case

Men Under Construction

Sexual Assault As a Con Game

Consent on Campus

Colleges and Sexual Assault

Destruction of Justice

Sexual Assault as a Talking Point

“There are no perfect victims.”

False Rape Reports, In Perspective

Everyone Needs A Hobby

Steven Pinker and His Portable Goalposts

Perfect, In Theory

Holy Fuck, Carol Tavris

Recovered Memories and Sexual Assault

Talking Sexual Assault

The evidence around sexual assault is pretty clear, and even in Kavanaugh’s specific case there’s circumstantial evidence that makes the accusations plausible. If people are still promoting myths about it at this point, it’s because they want to.

[HJH 2018-09-17: Added a few more links. Props to Salty Current of the Political Madness thread for some of them.]

It Is Friday, After All

I was sitting down to write a weighty post about child separation, while reminding myself of another post I’d promised on the subject, and eyeing up which Steven Pinker post I should begin work on, all of which is happening as I’m juggling some complex physics and computational problems, and-

You know what? Here’s a video of someone dunking oranges in a fish tank, in an excellent demonstration of the scientific method. [Read more…]