[CONTENT WARNING: Mythicist Milwaukee]
This may be hard to believe, but I’m not about to talk about Bayesian modeling or CompSci. Nope, I got dragged into an argument over implicit bias with a science-loving “skeptic,” and a few people mobbed me over the “model minority.”
Asian-Americans, like Jews, are indeed a problem for the “social-justice” brigade. I mean, how on earth have both ethnic groups done so well in such a profoundly racist society? How have bigoted white people allowed these minorities to do so well — even to the point of earning more, on average, than whites? Asian-Americans, for example, have been subject to some of the most brutal oppression, racial hatred, and open discrimination over the years. In the late 19th century, when most worked as hard laborers, they were subject to lynchings and violence across the American West, and to laws that prohibited their employment. They were banned from immigrating to the U.S. in 1924. Japanese-American citizens were forced into internment camps during the Second World War, and subjected to hideous, racist propaganda after Pearl Harbor. Yet, today, Asian-Americans are among the most prosperous, well-educated, and successful ethnic groups in America. What gives?
What gives is simple demographics. Take it away, Jeff Guo of the Washington Post: [Read more…]
This article from Kiara Alfonseca of ProPublica got me thinking.
Fake hate crimes have a huge impact despite their rarity, said Ryan Lenz, senior investigative writer for the Southern Poverty Law Center Intelligence Project. “There aren’t many people claiming fake hate crimes, but when they do, they make massive headlines,” he said. It takes just one fake report, Lenz said, “to undermine the legitimacy of other hate crimes.”
My lizard brain could see the logic in this: learning one incident was a hoax opened up the possibility that others were hoaxes too, which was comforting if I thought the world was fundamentally moral. But with a half-second more thought, that view seemed ridiculous: even if the hoax rate in our sample jumps from 0% to 11%, we’ve still got good reason to think the true hoax rate is low.
With a bit more thought, I realized I had enough knowledge of probability to determine who was right.
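In fact, here’s a minimal sketch of the calculation I have in mind, assuming a flat prior and a made-up sample size; only the rough 11% hoax fraction comes from the discussion above.

```python
# A minimal sketch of the Bayesian update. The sample size and the flat
# Beta(1, 1) prior are my own assumptions for illustration; only the
# ~11% hoax fraction comes from the discussion above.
from scipy import stats

prior_a, prior_b = 1, 1   # flat Beta prior over the population hoax rate
n_reports = 45            # hypothetical number of investigated reports
n_hoaxes = 5              # about 11% of the sample turned out to be hoaxes

posterior = stats.beta(prior_a + n_hoaxes, prior_b + n_reports - n_hoaxes)

# The data pull us toward roughly 13%, but a high hoax rate stays implausible.
print(f"posterior mean hoax rate: {posterior.mean():.3f}")
print(f"P(hoax rate < 25%): {posterior.cdf(0.25):.3f}")
```

Swap in the real counts and the conclusion barely budges: one hoax does not license doubting all the rest.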
I keep an eye out for old criticisms of null hypothesis significance testing. There’s just something fascinating about reading this…
In this paper, I wish to examine a dogma of inferential procedure which, for psychologists at least, has attained the status of a religious conviction. The dogma to be scrutinized is the “null-hypothesis significance test” orthodoxy that passing statistical judgment on a scientific hypothesis by means of experimental observation is a decision procedure wherein one rejects or accepts a null hypothesis according to whether or not the value of a sample statistic yielded by an experiment falls within a certain predetermined “rejection region” of its possible values. The thesis to be advanced is that despite the awesome pre-eminence this method has attained in our experimental journals and textbooks of applied statistics, it is based upon a fundamental misunderstanding of the nature of rational inference, and is seldom if ever appropriate to the aims of scientific research. This is not a particularly original view—traditional null-hypothesis procedure has already been superceded in modern statistical theory by a variety of more satisfactory inferential techniques. But the perceptual defenses of psychologists are particularly efficient when dealing with matters of methodology, and so the statistical folkways of a more primitive past continue to dominate the local scene.[1]
… then realising it dates from 1960. So far I’ve spotted five waves of criticism: Jerzy Neyman and Egon Pearson head the first, dating from roughly 1928 to 1945; a number of authors such as the above-quoted Rozeboom formed a second wave between roughly 1960 and 1970; Jacob Cohen kicked off a third wave around 1990, which maybe lasted until his death in 1998; John Ioannidis spearheaded another wave in 2005, though it died out even more quickly; and finally there’s the “replication crisis” that kicked off in 2011 and is still ongoing as I type this.
I do like to search for papers outside of those waves, however, just to verify the partition. This one doesn’t qualify, but it’s pretty cool nonetheless.
Berkson, Joseph. “Tests of Significance Considered as Evidence.” Journal of the American Statistical Association, vol. 37, 1942, pp. 325–335. Reprinted in International Journal of Epidemiology, vol. 32, no. 5, 2003, p. 687.
For instance, they point to a specific example drawn from Ronald Fisher himself. Fisher delves into a chart of eye-facet frequency in Drosophila melanogaster at various temperatures and extracts some means. Conducting an ANOVA, Fisher states “deviations from linear regression are evidently larger than would be expected, if the regression were really linear, from the variations within the arrays,” then concludes “There can therefore be no question of the statistical significance of the deviations from the straight line.”
Berkson’s response is to graph the dataset.
The middle points look like outliers, but it’s pretty obvious we’re dealing with a linear relationship. That Fisher’s tests reject linearity is a blow against using them.
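Berkson’s graphical point is easy to demonstrate in miniature. The sketch below uses invented data, not Fisher’s actual Drosophila numbers: a trend with a bend far too small to see on a plot, plus lots of replicates per level. The kind of lack-of-fit ANOVA Fisher used still declares the deviation from linearity significant.

```python
# A lack-of-fit ANOVA run on synthetic, nearly-linear data. The numbers
# are invented; the point is that with enough replicates, the test flags
# "significant" non-linearity that no plot would ever reveal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
temps = np.arange(15, 31)                    # 16 "temperature" levels
reps = 200                                   # replicates per level

x = np.repeat(temps, reps)
bend = 0.005 * (temps - temps.mean())**2     # nearly invisible curvature
y = np.repeat(2.0 * temps + bend, reps) + rng.normal(0, 0.5, x.size)

# Fit the straight line.
slope, intercept = np.polyfit(x, y, 1)

# Partition residual variation into lack-of-fit and pure error.
group_means = np.array([y[x == t].mean() for t in temps])
ss_pure = sum(((y[x == t] - y[x == t].mean())**2).sum() for t in temps)
ss_lof = sum(reps * (group_means[i] - (slope * t + intercept))**2
             for i, t in enumerate(temps))

df_lof, df_pure = len(temps) - 2, x.size - len(temps)
F = (ss_lof / df_lof) / (ss_pure / df_pure)
p = stats.f.sf(F, df_lof, df_pure)
print(f"lack-of-fit F = {F:.1f}, p = {p:.1g}")   # "significant" non-linearity
```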
Jacob Cohen made a very strong argument against Fisherian frequentism in 1994, the “permanent illusion,” which he attributes to a paper by Gerd Gigerenzer in 1993.[3][4] I can’t find any evidence Gigerenzer actually named it that, but it doesn’t matter; Berkson scoops both of them by a whopping 51 years, then extends the argument.
Suppose I said, “Albinos are very rare in human populations, only one in fifty thousand. Therefore, if you have taken a random sample of 100 from a population and found in it an albino, the population is not human.” This is a similar argument but if it were given, I believe the rational retort would be, “If the population is not human, what is it?” A question would be asked that demands an affirmative answer. In the null hypothesis schema we are trying only to nullify something: “The null hypothesis is never proved or established but is possibly disproved in the course of experimentation.” But ordinarily evidence does not take this form. With the corpus delicti in front of you, you do not say, “Here is evidence against the hypothesis that no one is dead.” You say, “Evidently someone has been murdered.”[5]
This hints at Berkson’s way out of the p-value mess: ditch falsification and allow evidence in favour of hypotheses. They point to another example or two to shore up their case, but can’t extend this intuition to a mathematical description of how this would work with p-values. A pity, but it was for the best.
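It’s worth running the numbers on the albino example, because they show exactly what the p-value hides. The rival albinism rate below is my own invention for illustration.

```python
# The arithmetic behind Berkson's albino example. The p-value against
# "this population is human" is tiny, yet rejecting "human" is absurd,
# because no plausible rival hypothesis explains the data better. The
# rival albinism rate below is an invented placeholder.
human_rate = 1 / 50_000

# P(at least one albino in a sample of 100 | human population)
p_value = 1 - (1 - human_rate) ** 100
print(f"p-value against 'human': {p_value:.4f}")   # roughly 0.002

# NHST stops there. But a rejection only makes sense relative to some
# alternative. Even the best-case rival, a population with a 1-in-100
# albinism rate (the rate that maximises the likelihood of seeing
# exactly one albino), only beats "human" by a modest factor:
rival_rate = 1 / 100
lik_human = human_rate * (1 - human_rate) ** 99    # exactly one albino
lik_rival = rival_rate * (1 - rival_rate) ** 99
print(f"likelihood ratio (rival/human): {lik_rival / lik_human:.0f}")  # ~185
```

A likelihood ratio in the hundreds sounds impressive, until you weigh it against the prior odds that your sample wasn’t drawn from humans at all. That’s Berkson’s “if the population is not human, what is it?” retort in numerical form.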
[1] Rozeboom, William W. “The Fallacy of the Null-Hypothesis Significance Test.” Psychological Bulletin, vol. 57, no. 5, 1960, pp. 416–428.
[2] Berkson, Joseph. “Tests of Significance Considered as Evidence.” Journal of the American Statistical Association, vol. 37, 1942, pp. 325–335. Reprinted in International Journal of Epidemiology, vol. 32, no. 5, 2003, p. 687.
[3] Cohen, Jacob. “The Earth Is Round (p < .05).” American Psychologist, vol. 49, no. 12, 1994, pp. 997–1003.
[4] Gigerenzer, Gerd. “The Superego, the Ego, and the Id in Statistical Reasoning.” A Handbook for Data Analysis in the Behavioral Sciences: Methodological Issues, 1993, pp. 311–339.
[5] Berkson (1942), p. 326.
Time to do another deep dive on polling in the US. The first item comes via Steven Rosenfeld over at AlterNet. A number of polling companies have examined Trump’s standing in swing states, and compared it to how those states voted. Their finding? Voters there still like him more than the average American does, but less than they did when they voted for him. As NBC’s Chuck Todd, Mark Murray, and Carrie Dann put it,
In the Trump “Surge Counties” — think places like Carbon, Pa., which Trump won, 65%-31% (versus Mitt Romney’s 53%-45% margin) — 56% of residents approve of the president’s job performance. But in 2016, Trump won these “Surge Counties” by a combined 65%-29%. And in the “Flip Counties” — think places like Luzerne, Pa., which Obama carried 52%-47%, but which Trump won, 58%-39% — Trump’s job rating stands at just 44%. Trump won these “Flip Counties” by a combined 51%-43% margin a year ago.
So the sagging of support I mentioned a few months ago continues to happen. Rosenfeld also links to a few interviews with Trump voters, to get a more qualitative idea of where they’re at. There’s no real change there: they have a pessimistic view of what he’ll accomplish, but praise him as a disruptor in fairly irrational terms. Take Ellen Pieper.
Poll respondent Ellen Pieper is among those disapproving of the president’s performance so far. The independent from Waukee voted for Trump and said she still believes in his ideas and qualifications. It’s how he behaves that bothers her. “He’s trying to move the country in the right direction, but his personality is getting in the way,” she said, calling out his use of Twitter in particular. “He’s a bright man, and I believe he has great ideas for getting the country back on track, but his approach needs some polish.”
Still, Pieper says, she’d vote for him again today.
Rosenfeld also makes some interesting comparisons to Nixon, but you’ll have to click through for that.
The second item comes via G. Elliott Morris, who’s boosted some diagrams made by Ian McDonald as well as their own. [Read more…]
As I hoped, Marcus Ranum responded to my prior blog post. Most of it is specific to the DNC hack, but there are a few general arguments against Bayesian statistics. If those carry any weight, they collapse both of my blog posts by knocking out a core premise, so I should deal with the generalities before moving on to the specific critiques. [Read more…]
I think I did a good job of laying out the core hypotheses last time, save two: that the Iranian government did it, or that a disgruntled Democrat did. I think I can pick those up on the fly, so let’s skip ahead to step 2.
President Vladimir Putin says the Russian state has never been involved in hacking.
Speaking at a meeting with senior editors of leading international news agencies Thursday, Putin said that some individual “patriotic” hackers could mount some attacks amid the current cold spell in Russia’s relations with the West.
But he categorically insisted that “we don’t engage in that at the state level.”
Intelligence agency leaders repeated their determination Thursday that only “the senior most officials” in Russia could have authorized recent hacks into Democratic National Committee and Clinton officials’ emails during the presidential election. Director of National Intelligence James Clapper affirmed an Oct. 7 joint statement from 17 intelligence agencies that the Russian government directed the election interference…
I know, I know, these are starting to get passé. But this third event brings a little more information.
For the third time in a year and a half, the Advanced Laser Interferometer Gravitational Wave Observatory (LIGO) has detected gravitational waves. […]
This most recent event, which we detected on Jan. 4, 2017, is the most distant source we’ve observed so far. Because gravitational waves travel at the speed of light, when we look at very distant objects, we also look back in time. This most recent event is also the most ancient gravitational wave source we’ve detected so far, having occurred over two billion years ago. Back then, the universe itself was 20 percent smaller than it is today, and multicellular life had not yet arisen on Earth.
The mass of the final black hole left behind after this most recent collision is 50 times the mass of our sun. Prior to the first detected event, which weighed in at 60 times the mass of the sun, astronomers didn’t think such massive black holes could be formed in this way. While the second event was only 20 solar masses, detecting this additional very massive event suggests that such systems not only exist, but may be relatively common.
Thanks to this third event, astronomers can set a tighter upper bound on the mass of the graviton, the proposed name for any particle that carries the gravitational force. They also have some hints as to how these black holes form: the spin axes of the two black holes appear to be misaligned, which suggests they paired up well after forming, as opposed to starting off as binary stars in orbit. Finally, the absence of another signal tells us something important about intermediate-mass black holes, those thousands of times heavier than the Sun but lighter than millions of solar masses.
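Roughly, the graviton bound works like this (a back-of-the-envelope sketch, not the collaboration’s full analysis). If the graviton had a mass $m_g$, gravitational waves would obey the massive dispersion relation

$$E^2 = p^2 c^2 + m_g^2 c^4 \quad\Longrightarrow\quad \frac{v_g}{c} \approx 1 - \frac{1}{2}\left(\frac{m_g c^2}{h f}\right)^2,$$

so the low-frequency start of the chirp would lag behind the high-frequency end after billions of years in transit. No such distortion showed up, which caps $m_g$; if I’m reading the paper correctly, the new bound sits around $7.7 \times 10^{-23}\ \mathrm{eV}/c^2$.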
The paper reports a “survey of the universe for midsize-black-hole collisions up to 5 billion light years ago,” says Karan Jani, a former Georgia Tech Ph.D. physics student who participated in the study. That volume of space contains about 100 million galaxies the size of the Milky Way. Nowhere in that space did the study find a collision of midsize black holes.
“Clearly they are much, much rarer than low-mass black holes, three collisions of which LIGO has detected so far,” Jani says. Nevertheless, should a gravitational wave from two Goldilocks black holes colliding ever get detected, Jani adds, “we have all the tools to dissect the signal.”
If you want more info, Veritasium has a quick summary; if you want something meatier, the full paper has been published and the raw data has been released.
Otherwise, just be content that we’ve learned a little more about the world.
I’m a bit of an oddity on this network, as I’m pretty convinced Russia was behind the DNC email hack. I know both Mano Singham and Marcus Ranum suspect someone else is responsible, last I checked, and Myers might lean that way too. Looking around, though, I don’t think anyone here has made the case in favor of Russian hacking. I might as well use that as an excuse to walk everyone through applying Bayes’ Theorem in an informal setting.
(Spoiler alert: it’s the exact same method we’d use in a formal setting, but with more approximations and qualitative responses.)
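To make the mechanics concrete up front, here’s the odds-form update in miniature. Every number below is a placeholder chosen to show the machinery, not my actual assessment; that’s what the rest of this series is for.

```python
# A sketch of informal Bayesian updating over rival hypotheses. All
# priors and likelihood ratios are placeholder numbers for illustration.
priors = {"Russia": 0.25, "other state": 0.25,
          "independent hacker": 0.25, "insider leak": 0.25}

# For each piece of evidence, judge how strongly we'd expect to see it
# under each hypothesis. Qualitative judgments become rough numbers.
evidence = [
    {"Russia": 3.0, "other state": 1.5, "independent hacker": 0.8,
     "insider leak": 0.5},   # e.g. malware resembling known Russian toolkits
    {"Russia": 2.0, "other state": 1.0, "independent hacker": 1.0,
     "insider leak": 0.3},   # e.g. the timing of the releases
]

posterior = dict(priors)
for lik in evidence:
    posterior = {h: posterior[h] * lik[h] for h in posterior}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}  # renormalise

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h:>18}: {p:.2f}")
```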
Ask me to name the graph that annoys me the most, and I’ll point to this one.
Yes, Trump entered his presidency as the least-liked president in modern history, but he’s repeatedly interfered with Russia-related investigations and admitted he did it to save his own butt. That’s a Watergate-level scandal, yet his approval numbers have barely changed. He’s also pushed a much-hated healthcare reform bill, been defeated multiple times in court, tried to inch away from his wall pledge, and in general repeatedly angered his base. His net approval should be deeply underwater by now, but because the US is so polarized, many conservatives are clinging to him anyway.
A widely held tenet of the current conventional wisdom is that while President Trump might not be popular overall, he has a high floor on his support. Trump’s sizable and enthusiastic base — perhaps 35 to 40 percent of the country — won’t abandon him any time soon, the theory goes, and they don’t necessarily care about some of the controversies that the “mainstream media” treats as game-changing developments. […]
But the theory isn’t supported by the evidence. To the contrary, Trump’s base seems to be eroding. There’s been a considerable decline in the number of Americans who strongly approve of Trump, from a peak of around 30 percent in February to just 21 or 22 percent of the electorate now. (The decline in Trump’s strong approval ratings is larger than the overall decline in his approval ratings, in fact.) Far from having unconditional love from his base, Trump has already lost almost a third of his strong support. And voters who strongly disapprove of Trump outnumber those who strongly approve of him by about a 2-to-1 ratio, which could presage an “enthusiasm gap” that works against Trump at the midterms. The data suggests, in particular, that the GOP’s initial attempt (and failure) in March to pass its unpopular health care bill may have cost Trump with his core supporters.
At long last, Donald Trump’s base appears to be shrinking. This raises the chances of impeachment, and will put tremendous pressure on Republicans to abandon Trump to preserve their midterm majority. I’m pissed the cause appears to be health care, and not the shady Russian ties or bad behavior, but doing the right thing for the wrong reason is still doing the right thing. It also fits in nicely with current events.
According to the forecast released Wednesday by the nonpartisan Congressional Budget Office, 14 million fewer people would have health insurance next year under the Republican bill, increasing to a total of 19 million in 2020. By 2026, a total of 51 million people would be uninsured, roughly 23 million more than under Obamacare. That is roughly equivalent to the loss in coverage under the first version of the bill, which failed to pass the House of Representatives.
Much of the loss in coverage would be due to the Republican plan to shrink the eligibility for Medicaid; for many others—particularly those with preexisting conditions living in certain states—healthcare on the open marketplace would become unaffordable. Some of the loss would be due to individuals choosing not to get coverage.
The Republican bill, dubbed the American Health Care Act, would also raise insurance premiums by an average of 20 percent in 2018 compared with Obamacare, according to the CBO, and an additional 5 percent in 2019, before premiums start to drop.
So keep an eye on Montana’s special election (I’m writing this before results have come in); if the pattern repeats from previous special elections, Republicans will face a huge loss during the 2018 midterms, robbing Trump of much of his power and allowing the various investigations against him to pick up more steam.