Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy


Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy is a book about the societal impact of algorithms, written by Cathy O’Neil. It explores how various big data algorithms are increasingly used in ways that reinforce preexisting inequality.

You can consider this blog post a book review, except I won’t really structure it the way book reviews are usually written. Instead, I will summarize the main problems discussed in the book. If you find this topic interesting, you can get the book for more details.

Weapons of Math Destruction

Here is how the Wikipedia entry summarizes the main topic of this book:

We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.

But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.

These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health.

There are multiple problems with mathematical models that score people and sort them according to various criteria: they are opaque, unregulated, and difficult to contest. They are also scalable, which amplifies any inherent biases so that they affect ever larger populations. A single racist bank or insurance company employee can unfairly saddle hundreds of people of color with higher interest rates or insurance premiums. A single poorly designed algorithm can harm millions of people.

Algorithms are not necessarily more equal than humans.

People are biased. They are bad at evaluating strangers. Usually they don’t even notice their biases. For example, there have been experiments in which scientists created identical fake résumés with either male or female names, and guess what—the people making the hiring decisions liked the male names better. The same trend has been observed in experiments with white-sounding versus black-sounding names. Thus some people might imagine that an algorithm should be better at scoring us; after all, a computer evaluates everyone equally. Except when it doesn’t.

In her book, Cathy O’Neil gives multiple case studies where standardized testing was worse than actually interviewing people. Let’s imagine a dark-skinned person is looking for a job. They go to a job interview and get refused due to their skin color. Firstly, they can sue the business that refused to hire them. Secondly, they can keep looking for a job elsewhere until they are lucky enough to find another employer who doesn’t have racial prejudices.

Now let’s imagine that a person gets refused due to scoring poorly on some standardized personality questionnaire. Firstly, they won’t even find out why they were refused. The algorithm that scored their questionnaire is not transparent at all. Even the human resources people at that business probably have no idea why some person scored poorly and was rated as unfit for hiring. Secondly, if the test is standardized, then many businesses will use it, so the same person will get declined again and again, scoring poorly each time.

It is hard to say whether standardized tests are better than letting individual people make hiring/admission decisions. It depends on how a given standardized test was created, who made it, and how good or bad it is. Some standardized tests are much worse than others, and many of them serve to further reinforce preexisting inequality. Of course, it also depends on the people who make the hiring/admission decisions at a given institution (some of them are much more prejudiced than others).

Algorithms are opaque and can be used to unfairly evaluate millions of people.

Algorithms that automatically sort people into groups according to various criteria can ruin a person’s life even when the programmers who made the software had the best intentions. Plenty of programmers, banks, and insurance companies have attempted to create racially unbiased algorithms based upon statistics. People have already tried to go beyond “black = untrustworthy.” Unfortunately, it’s not that simple.

Let’s say you tried to make an unbiased mathematical model based solely upon statistics. The algorithm would probably find a correlation showing that people who earn little money are more likely to fail to pay back their loan on time. It would probably also find a correlation showing that people who live in certain poor neighborhoods are more likely to fail to pay on time. The end result: a wealthy white guy who lives in a rich neighborhood gets to borrow cheaply. Meanwhile, a black person who lives in a neighborhood that the algorithm has designated as “risky” can only get payday loans with ridiculously high interest rates.
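To make the mechanism concrete, here is a minimal Python sketch (my own illustration, not a model from the book; every name and number is invented) of how a scoring rule that never sees race can still reproduce a racial disparity once neighborhoods are segregated:

    # Purely illustrative: a "race-blind" pricing rule built from historical
    # default rates per neighborhood. All data is made up.
    history = [
        ("wealthy_suburb", False), ("wealthy_suburb", False),
        ("wealthy_suburb", False), ("wealthy_suburb", True),
        ("poor_neighborhood", True), ("poor_neighborhood", False),
        ("poor_neighborhood", True), ("poor_neighborhood", True),
    ]

    def default_rate(neighborhood):
        outcomes = [defaulted for hood, defaulted in history if hood == neighborhood]
        return sum(outcomes) / len(outcomes)

    def quoted_interest_rate(neighborhood, base=0.05):
        # The model only looks at geography, never at race...
        return base + 0.30 * default_rate(neighborhood)

    print(quoted_interest_rate("wealthy_suburb"))     # ~0.125
    print(quoted_interest_rate("poor_neighborhood"))  # ~0.275
    # ...but in a segregated city the zip code acts as a proxy for race,
    # so the "race-blind" rule still hands the poorer (and, statistically,
    # blacker) neighborhood the far more expensive loan.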

When computers sort people, said mathematical model tends to be opaque. Programmers who created it usually don’t want to disclose how exactly their black box works. After all, if people knew how exactly they are being evaluated, they would try to game the system. Or maybe programmers don’t want to disclose the inner workings of their software, because they know that they are selling snake oil.

Either way, if you go to a bank and speak face to face with an employee who refuses you because of your skin color, you can at least sue the bank. However, when your application is rejected by some mysterious algorithm, you don’t even know the reasons and cannot sue anybody. And it’s not just loans. Algorithms are also used for hiring people, deciding which students ought to be accepted at a university, calculating how much their insurance will cost, and so on.

Algorithms can reinforce preexisting inequality.

Have you noticed that in the USA wealthy people can borrow money at low interest rates, while poor people are denied normal loans and instead have access only to payday loans with sky-high interest rates? Have you noticed that car insurance costs less for a wealthy person than for a poor person? Such discrepancies further reinforce existing inequality.

And you don’t even need a racist programmer for things to go wrong. It’s very easy to unintentionally create a mathematical model that mistreats already poor customers. Here is a quote from the book about how car insurance rates are determined:

Leading insurers including Progressive, State Farm, and Travelers are already offering drivers a discount on their rates if they agree to share their driving data. A small telemetric unit in the car, a simple version of the black boxes in airplanes, logs the speed of the car and how the driver brakes and accelerates. A GPS monitor tracks the car’s movements.

. . . The individual driver comes into focus. Consider eighteen-year-olds. Traditionally they pay sky-high rates because their age group, statistically, indulges in more than its share of recklessness. But now, a high school senior who avoids jackrabbit starts, drives at a consistent pace under the speed limit, and eases to a stop at red lights might get a discounted rate. Insurance companies have long given an edge to young motorists who finish driver’s ed or make the honor roll. Those are proxies for responsible driving. But driving data is the real thing. That’s better, right?

There are a couple of problems. First, if the system attributes risk to geography, poor drivers lose out. They are more likely to drive in what insurers deem risky neighborhoods. Many also have long and irregular commutes, which translates into higher risk.

Fine, you might say. If poor neighborhoods are riskier, especially for auto theft, why should insurance companies ignore that information? And if longer commutes increase the chance of accidents, that’s something the insurers are entitled to consider. The judgment is still based on the driver’s behavior, not on extraneous details like her credit rating or the driving records of people her age. Many would consider that an improvement.

To a degree, it is. But consider a hypothetical driver who lives in a rough section of Newark, New Jersey, and must commute thirteen miles to a barista job at a Starbucks in the wealthy suburb of Montclair. Her schedule is chaotic and includes occasional clopenings. So she shuts the shop at 11, drives back to Newark, and returns before 5 a.m. To save ten minutes and $1.50 each way on the Garden State Parkway, she takes a shortcut, which leads her down a road lined with bars and strip joints.

A data-savvy insurer will note that cars traveling along that route in the wee hours have an increased risk of accidents. There are more than a few drunks on the road. And to be fair, our barista is adding a bit of risk by taking the shortcut and sharing the road with the people spilling out of the bars. One of them might hit her. But as far as the insurance company’s geo-tracker is concerned, not only is she mingling with drunks, she may be one.

In this way, even the models that track our personal behavior gain many of their insights, and assess risk, by comparing us to others. This time, instead of bucketing people who speak Arabic or Urdu, live in the same zip codes, or earn similar salaries, they assemble groups of us who act in similar ways. The prediction is that those who act alike will take on similar levels of risk. If you haven’t noticed, this is birds of a feather all over again, with many of the same injustices.

Cathy O’Neil doesn’t say it outright, but the chances are pretty high that the hypothetical barista happens to be black (since black people, on average, earn less and are forced to accept worse jobs). With all this data we still end up back where we started—if you are poor and black, your car insurance will cost more than if you are white and rich. And it’s not just that. Your loan will have a higher interest rate. Your job application will get denied by some mysterious algorithm. The unfairness gets perpetuated.
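Here is a rough sketch of that “birds of a feather” logic (my own illustration, not O’Neil’s; the telematics features and figures are invented): the model scores a driver by the outcomes of the drivers whose logged behavior looks most similar, so the careful barista inherits the claim history of the late-night crowd she statistically resembles.

    # Toy nearest-neighbor risk model over invented telematics data:
    # (late-night miles per week, share of miles in "risky" areas, had a claim?)
    import math

    drivers = [
        (2,  0.05, 0), (1,  0.02, 0), (3,  0.10, 0),   # daytime suburban commuters
        (25, 0.60, 1), (30, 0.55, 0), (28, 0.70, 1),   # late-night drivers
    ]

    def predicted_risk(late_night_miles, risky_share, k=3):
        def distance(d):
            # Scale the share so both features contribute comparably.
            return math.hypot(d[0] - late_night_miles, 30 * (d[1] - risky_share))
        nearest = sorted(drivers, key=distance)[:k]
        return sum(claim for _, _, claim in nearest) / k

    # The careful barista with a late-night commute through a "risky" route:
    print(predicted_risk(late_night_miles=24, risky_share=0.65))  # ~0.67
    # A driver with the same flawless record but a daytime suburban commute:
    print(predicted_risk(late_night_miles=2, risky_share=0.05))   # 0.0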

Faulty algorithms can create vicious feedback loops that result in self-fulfilling prophecies.

In 2013, William Heim, the police chief of Reading (a small city in Pennsylvania), invested in crime prediction software made by PredPol, a Big Data start-up. The program processed historical crime data and calculated, hour by hour, where crimes were most likely to occur. Police officers could view the program’s conclusions as a series of squares on a map. The idea was that if they spent more time patrolling these locations, there was a good chance they would discourage crime. Predictive programs like PredPol are common in US police departments. For example, New York City uses a similar program called CompStat.

In theory, the model is blind to race and ethnicity. PredPol doesn’t focus on the individual; instead, it targets geography, the key inputs being the type and location of each crime and when it occurred. At first glance, that might seem fair, and it seems useful for cops to spend more time in the high-risk zones.

The problem is that the model takes into account not only homicides and burglaries but also petty crimes. Serious violent crimes are usually reported to the police regardless of whether an officer was nearby when the crime happened. But the model also takes into consideration less serious crimes, including vagrancy, aggressive panhandling, and selling and consuming small quantities of drugs. Many of these “nuisance” crimes would go unrecorded if a cop weren’t there to see them. Here’s the problem:

Once the nuisance data flows into a predictive model, more police are drawn into those neighborhoods, where they’re more likely to arrest more people… This creates a pernicious feedback loop. The policing itself spawns new data, which justifies more policing. And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes. Most of them come from impoverished neighborhoods, and most are black or Hispanic. So even if a model is color blind, the result of it is anything but. In our largely segregated cities, geography is a highly effective proxy for race.

If the purpose of the models is to prevent serious crimes, you might ask why nuisance crimes are tracked at all. The answer is that the link between antisocial behavior and crime has been an article of faith since 1982, when a criminologist named George Kelling teamed up with a public policy expert, James Q. Wilson, to write a seminal article in the Atlantic Monthly on so-called broken-windows policing. The idea was that low-level crimes and misdemeanors created an atmosphere of disorder in a neighborhood. This scared law-abiding citizens away. The dark and empty streets they left behind were breeding grounds for serious crime. The antidote was for society to resist the spread of disorder. This included fixing broken windows, cleaning up graffiti-covered subway cars, and taking steps to discourage nuisance crimes.

Here we have it—big data is used to make the abuse of poor people sound “scientifically justified.” Wherever police go looking for crime, that’s where they will find it:

Just imagine if police enforced their zero-tolerance strategy in finance. They would arrest people for even the slightest infraction, whether it was chiseling investors on 401ks, providing misleading guidance, or committing petty frauds. Perhaps SWAT teams would descend on Greenwich, Connecticut. They’d go undercover in the taverns around Chicago’s Mercantile Exchange.
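The pernicious feedback loop quoted above is easy to reproduce in a toy simulation (my own sketch, not PredPol’s actual algorithm; all numbers are invented). Both neighborhoods have identical true rates of nuisance offenses, but offenses are only recorded where a patrol happens to be, and patrols are sent wherever the recorded numbers are highest:

    # Toy feedback-loop simulation, NOT PredPol's real method.
    true_rate = {"A": 10, "B": 10}   # nuisance offenses per week, identical by design
    recorded = {"A": 12, "B": 8}     # a small initial imbalance in the data

    for week in range(10):
        # Most patrols go to whichever neighborhood has more recorded crime.
        hot = "A" if recorded["A"] >= recorded["B"] else "B"
        patrol_share = {hot: 0.8, ("B" if hot == "A" else "A"): 0.2}
        for hood in ("A", "B"):
            # Only offenses witnessed by a patrol enter next week's data.
            recorded[hood] += true_rate[hood] * patrol_share[hood]

    print(recorded)  # {'A': 92.0, 'B': 28.0}
    # The initial gap of 4 recorded offenses has grown to 64, so the model keeps
    # sending patrols to A: the policing itself generates the data that justifies it.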

Once poor people of color are caught committing some petty crime, another mathematical model is used to sentence them to harsher punishments—the recidivism model used for sentencing guidelines.

In the USA, race has long been a factor in sentencing:

A University of Maryland study showed that in Harris County, which includes Houston, prosecutors were three times more likely to seek the death penalty for African Americans, and four times more likely for Hispanics, than for whites convicted of the same charges. That pattern isn’t unique to Texas. According to the American Civil Liberties Union, sentences imposed on black men in the federal system are nearly 20 percent longer than those for whites convicted of similar crimes. And though they make up only 13 percent of the population, blacks fill up 40 percent of America’s prison cells.

So you might think that computerized risk models fed by data would reduce the role of prejudice in sentencing and contribute to more even-handed treatment. With that hope, courts in twenty-four states have turned to so-called recidivism models. These help judges assess the danger posed by each convict. And by many measures they’re an improvement. They keep sentences more consistent and less likely to be swayed by the moods and biases of judges.

The question, however, is whether we’ve eliminated human bias or simply camouflaged it with technology. The new recidivism models are complicated and mathematical. But embedded within these models are a host of assumptions, some of them prejudicial…

One of the more popular models, known as LSI–R, or Level of Service Inventory–Revised, includes a lengthy questionnaire for the prisoner to fill out. One of the questions—“How many prior convictions have you had?”—is highly relevant to the risk of recidivism. Others are also clearly related: “What part did others play in the offense? What part did drugs and alcohol play?”

But as the questions continue, delving deeper into the person’s life, it’s easy to imagine how inmates from a privileged background would answer one way and those from tough inner-city streets another. Ask a criminal who grew up in comfortable suburbs about “the first time you were ever involved with the police,” and he might not have a single incident to report other than the one that brought him to prison. Young black males, by contrast, are likely to have been stopped by police dozens of times, even when they’ve done nothing wrong. A 2013 study by the New York Civil Liberties Union found that while black and Latino males between the ages of fourteen and twenty-four made up only 4.7 percent of the city’s population, they accounted for 40.6 percent of the stop-and-frisk checks by police. More than 90 percent of those stopped were innocent. Some of the others might have been drinking underage or carrying a joint. And unlike most rich kids, they got in trouble for it. So if early “involvement” with the police signals recidivism, poor people and racial minorities look far riskier.

The questions hardly stop there. Prisoners are also asked about whether their friends and relatives have criminal records. Again, ask that question to a convicted criminal raised in a middle-class neighborhood, and the chances are much greater that the answer will be no. The questionnaire does avoid asking about race, which is illegal. But with the wealth of detail each prisoner provides, that single illegal question is almost superfluous.

The LSI–R questionnaire has been given to thousands of inmates since its invention in 1995. Statisticians have used those results to devise a system in which answers highly correlated to recidivism weigh more heavily and count for more points. After answering the questionnaire, convicts are categorized as high, medium, and low risk on the basis of the number of points they accumulate. In some states, such as Rhode Island, these tests are used only to target those with high-risk scores for antirecidivism programs while incarcerated. But in others, including Idaho and Colorado, judges use the scores to guide their sentencing.

This is unjust. The questionnaire includes circumstances of a criminal’s birth and upbringing, including his or her family, neighborhood, and friends. These details should not be relevant to a criminal case or to the sentencing. Indeed, if a prosecutor attempted to tar a defendant by mentioning his brother’s criminal record or the high crime rate in his neighborhood, a decent defense attorney would roar, “Objection, Your Honor!” And a serious judge would sustain it. . . But even if we put aside, ever so briefly, the crucial issue of fairness, we find ourselves descending into a pernicious WMD feedback loop. A person who scores as “high risk” is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law. Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he’s surrounded by fellow criminals—which raises the likelihood that he’ll return to prison. He is finally released into the same poor neighborhood, this time with a criminal record, which makes it that much harder to find a job. If he commits another crime, the recidivism model can claim another success. But in fact the model itself contributes to a toxic cycle and helps to sustain it. . .

What’s more, for supposedly scientific systems, the recidivism models are logically flawed. The unquestioned assumption is that locking away “high-risk” prisoners for more time makes society safer. It is true, of course, that prisoners don’t commit crimes against society while behind bars. But is it possible that their time in prison has an effect on their behavior once they step out? Is there a chance that years in a brutal environment surrounded by felons might make them more likely, and not less, to commit another crime?
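For illustration, here is how a points-based instrument of this general kind works: answers that correlate with recidivism in historical data earn points, the points are summed, and the total maps to a risk band. This is emphatically not the real LSI–R scoring key (which is proprietary); every question, weight, and cutoff below is invented.

    # Generic points-based risk score in the style described above.
    # NOT the actual LSI-R weights or cutoffs; all numbers are invented.
    WEIGHTS = {
        "prior_convictions": 3,        # points per prior conviction
        "early_police_contacts": 2,    # points per police contact before age 18
        "friends_with_records": 2,     # points per friend/relative with a record
        "currently_unemployed": 4,     # flat penalty if unemployed
    }
    BANDS = [(20, "high"), (10, "medium"), (0, "low")]  # cutoffs, highest first

    def risk_band(answers):
        score = sum(WEIGHTS[item] * answers.get(item, 0) for item in WEIGHTS)
        return score, next(band for cutoff, band in BANDS if score >= cutoff)

    # Two defendants convicted of the same offense, different biographies:
    suburban   = {"prior_convictions": 1, "early_police_contacts": 0,
                  "friends_with_records": 0, "currently_unemployed": 0}
    inner_city = {"prior_convictions": 1, "early_police_contacts": 6,
                  "friends_with_records": 3, "currently_unemployed": 1}

    print(risk_band(suburban))    # (3, 'low')
    print(risk_band(inner_city))  # (25, 'high')
    # Nothing in the gap between the two scores is about the offense itself;
    # it comes entirely from circumstances of upbringing and neighborhood.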

Algorithms can create perverse incentives that result in poor outcomes.

A mathematical model usually cannot directly measure whatever it claims to measure; instead, it must rely on proxies. Optimizing for those proxies creates perverse incentives: people do all sorts of generally harmful things in an attempt to game the algorithm.

For example, consider the U.S. News college rankings, which have been extremely harmful to American colleges and universities. Measuring and scoring the excellence of universities is inherently impossible (how do you even quantify that?), so U.S. News simply picked a number of proxy metrics. Colleges responded by trying to improve each of the metrics that went into their score. For example, if a college increases tuition and uses the extra money to build a fancy gym with whirlpool baths, that increases its ranking. Hence many colleges have done various things that increased their score but were actually harmful to students.

Some universities decided to go even further and outright manipulated their score. For example, in a 2014 U.S. News ranking of global universities, the mathematics department at Saudi Arabia’s King Abdulaziz University landed in seventh place, right next to Harvard. The Saudi university contacted several mathematicians whose work was highly cited and offered them thousands of dollars to serve as adjunct faculty. These mathematicians would work three weeks a year in Saudi Arabia. The university would fly them there in business class and put them up at a five-star hotel. The deal also required that the Saudi university could claim the publications of their new adjunct faculty as its own. Since citations were one of the U.S. News algorithm’s primary inputs, King Abdulaziz University soared in the rankings. That’s how you game the system.
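The underlying mechanics are simple enough to sketch. The real U.S. News formula is more elaborate and its exact weights are not these (every metric, weight, and score below is invented), but any ranking built as a weighted sum of proxy metrics can be gamed by inflating a single input, exactly as in the citations example above:

    # Toy weighted-sum ranking; the metrics, weights, and scores are made up.
    WEIGHTS = {"citations": 0.4, "reputation": 0.4, "facilities": 0.2}

    universities = {
        "Alpha U": {"citations": 90, "reputation": 95, "facilities": 80},
        "Beta U":  {"citations": 85, "reputation": 90, "facilities": 85},
        "Gamma U": {"citations": 40, "reputation": 85, "facilities": 80},
    }

    def score(metrics):
        return sum(WEIGHTS[m] * metrics[m] for m in WEIGHTS)

    def ranking():
        return sorted(universities, key=lambda u: score(universities[u]), reverse=True)

    print(ranking())  # ['Alpha U', 'Beta U', 'Gamma U']

    # Gamma U "hires" highly cited adjuncts and claims their papers as its own,
    # inflating one input without improving its teaching in any way:
    universities["Gamma U"]["citations"] = 98
    print(ranking())  # ['Alpha U', 'Gamma U', 'Beta U'] -- it leapfrogs Beta U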

Scoring an individual person by analyzing the behavior of somebody else is unethical.

Theoretically, big data can lead to a situation where insurance companies or banks learn so much about us that they are able to pinpoint those who appear to be the riskiest customers and then either drive their rates to the stratosphere or, where legal, deny them coverage. I consider that unethical. The whole point of insurance is to balance risk, to smooth out life’s bumps. It would be better for everybody to pay the average rather than their anticipated costs in advance.

Moreover, there’s also the question of how ethical it is to judge an individual person based upon the average behavior of other people who happen to have something in common with them. After all, you aren’t dealing with “the average black person,” “the average woman,” “the average person who earns $XXXXX per year,” “the average person who lives in this neighborhood,” or “the average person with an erratic schedule of long commutes”; you are dealing with a unique individual human being. The inevitable end result of such a system is that some innocent person who isn’t guilty of anything, who hasn’t done anything bad, will get punished because some algorithm put them in the same bucket as other people who have engaged in bad or risky behavior.

Conclusions

Promising efficiency and fairness, mathematical models distort higher education, drive up debt, spur mass incarceration, pummel the poor at nearly every juncture, and undermine democracy. As Cathy O’Neil explains:

The problem is that they’re feeding on each other. Poor people are more likely to have bad credit and live in high-crime neighborhoods, surrounded by other poor people. Once the dark universe of WMDs digests that data, it showers them with predatory ads for subprime loans or for-profit schools. It sends more police to arrest them, and when they’re convicted it sentences them to longer terms. This data feeds into other WMDs, which score the same people as high risks or easy targets and proceed to block them from jobs, while jacking up their rates for mortgages, car loans, and every kind of insurance imaginable. This drives their credit rating down further, creating nothing less than a death spiral of modeling. Being poor in a world of WMDs is getting more and more dangerous and expensive.

There is a problem. How do you fix it? Well, you certainly cannot rely on corporations to fix their own flawed mathematical models. Justice and fairness might benefit society as a whole, but they do nothing for a corporation’s bottom line. In fact, entire business models, such as for-profit universities and payday loans, are built upon further abusing already marginalized groups of people. As long as profit is generated, corporations believe that their flawed mathematical models are working just fine.

The victims feel differently, but the greatest number of them—the hourly workers and unemployed, the people who have low credit scores—are poor. They are also powerless and voiceless. Many are disenfranchised politically.

Thus we can conclude that the problems won’t fix themselves. Cathy O’Neil proposes various solutions. Data scientists should audit algorithms and search for potential problems. Politicians should pass laws that forbid companies from evaluating their employees or customers based upon dubious data points. Society as a whole needs to demand transparency. For example, each person should have the right to receive an alert when a credit score is being used to judge or vet them; they should have access to the information being used to compute that score; and if it is incorrect, they should have the right to challenge and correct it. Moreover, mathematical models that have a significant impact on people’s lives (like their credit scores, e-scores, etc.) should be open and available to the public.
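O’Neil doesn’t spell out a specific auditing procedure in the passages quoted here, but as a sketch of where such an audit could start (my own illustration, with an invented decision log), one simple check is to compare a model’s approval rates across groups and flag disparate impact using the “four-fifths” rule of thumb from US employment-discrimination practice:

    # Minimal disparate-impact audit sketch; the decision log is invented.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, was_approved) pairs."""
        approved, totals = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {group: approved[group] / totals[group] for group in totals}

    def disparate_impact(decisions):
        rates = approval_rates(decisions)
        best = max(rates.values())
        # Four-fifths rule: flag any group approved at under 80% of the best rate.
        return {group: {"rate": round(rate, 2), "flagged": rate < 0.8 * best}
                for group, rate in rates.items()}

    # Hypothetical loan decisions logged from a black-box scoring model:
    log = ([("group_x", True)] * 70 + [("group_x", False)] * 30
           + [("group_y", True)] * 40 + [("group_y", False)] * 60)
    print(disparate_impact(log))
    # {'group_x': {'rate': 0.7, 'flagged': False},
    #  'group_y': {'rate': 0.4, 'flagged': True}}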

Comments

  1. Siggy says

    So, this is my area of profession, and while I immediately agree with the idea that big data can increase inequality, a few of the specific arguments used here are rather frustrating.

    First, there’s this narrative:

    Let’s imagine a dark-skinned person is looking for a job. They go to a job interview and get refused due to their skin color. Firstly, they can sue the business that refused to hire them.

    That doesn’t sound so likely to me. If an interviewer turns down a job candidate, more likely than not we have no idea why, and maybe even the interviewer doesn’t know why. And when it’s hard enough to prove prejudice to ourselves, just imagine trying to prove it in a court of law–if you can even afford a lawyer in the first place. Algorithms definitely have their problems, but this is just an absurdly rosy view of traditional methods.

    My other comment is that it seems there are two distinct problems with machine learning: 1) the algorithms might make more mistakes, and 2) they might be more accurate. The first problem is obvious, so let me explain the second one. Suppose you found an algorithm that perfectly predicted people’s healthcare expenses, and started using this to price health insurance. Well then, it’s like you might as well not have health insurance, because everyone’s paying the same amount either way. This is “fair” in the sense that everyone’s paying exactly the amount of burden they’re placing on society. But it’s “unfair” in that the amount of healthcare expenses people have is mostly beyond their control. I think it would be better if our algorithms were actually less accurate, and we just charged everyone the same price–modulo, I don’t know, smoking.
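    A tiny numerical sketch of that point, with entirely made-up expense figures:

        # Made-up annual healthcare expenses for five people.
        expenses = [500, 800, 1200, 3000, 40000]

        pooled_premium = sum(expenses) / len(expenses)  # everyone pays the average
        perfect_premiums = list(expenses)               # each pays exactly their own cost

        print(pooled_premium)     # 9100.0
        print(perfect_premiums)   # [500, 800, 1200, 3000, 40000]
        # Under the perfectly accurate predictor, the person facing the 40000 bill
        # is charged 40000 up front: no risk is shared, which is exactly the
        # "might as well not have insurance" outcome.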

    This is easiest to illustrate with health insurance, but it also applies to car insurance and several of the other cases discussed in the OP. But what’s frustrating is that the author comes so close, and then suddenly veers in the opposite direction. The discussion of car insurance ends with an example where the algorithm is making a mistake. And so the whole section appears to be demanding more accurate algorithms, when I think that high accuracy might actually be the thornier problem.

    The thing is, if it’s just about algorithms making mistakes, this is in principle a self-correcting problem. It’s in the interest of car insurance companies to not make mistakes, so that they might make more profit. But if the problem is about algorithms being too accurate, we need regulation.

  2. Andreas says

    Siggy @#1

    And when it’s hard enough to prove prejudice to ourselves, just imagine trying to prove it in a court of law–if you can even afford a lawyer in the first place. Algorithms definitely have their problems, but this is just an absurdly rosy view of traditional methods.

    My view isn’t rosy. I’m merely pointing out that one problem poses a relatively harder challenge than the other. Over the last century, countless lawyers, activists, and politicians have fought against various forms of bigotry in instances where it could be detected and proven. I never said that suing some company is easy, I merely said that it is at least possible. If a person is super lucky, they might have been able to collect some evidence, and they might find a black lawyer who will work on their case for free.

    The problem with algorithms is that detecting a racially bigoted mathematical model and proving that it disproportionately hurts some minority group is even harder than proving that some people have systematically made a series of racially biased decisions.

    But what’s frustrating is that the author comes so close, and then suddenly veers in the opposite direction. The discussion of car insurance ends with an example where the algorithm is making a mistake. And so the whole section appears to be demanding more accurate algorithms, when I think that high accuracy might actually be the thornier problem.

    No, Cathy O’Neil didn’t argue in favor of more accurate models. After the paragraphs I quoted about the car insurance, the next few paragraphs are the following:

    At some point, the trackers will likely become the norm. And consumers who want to handle insurance the old-fashioned way, withholding all but the essential from their insurers, will have to pay a premium, and probably a steep one. In the world of WMDs, privacy is increasingly a luxury that only the wealthy can afford.

    At the same time, surveillance will change the very nature of insurance. Insurance is an industry, traditionally, that draws on the majority of the community to respond to the needs of an unfortunate minority. In the villages we lived in centuries ago, families, religious groups, and neighbors helped look after each other when fire, accident, or illness struck. In the market economy, we outsource this care to insurance companies, which keep a portion of the money for themselves and call it profit.

    As insurance companies learn more about us, they’ll be able to pinpoint those who appear to be the riskiest customers and then either drive their rates to the stratosphere or, where legal, deny them coverage. This is a far cry from insurance’s original purpose, which is to help society balance its risk. In a targeted world, we no longer pay the average. Instead, we’re saddled with anticipated costs. Instead of smoothing out life’s bumps, insurance companies will demand payment for those bumps in advance. This undermines the point of insurance, and the hits will fall especially hard on those who can least afford them.

  3. Siggy says

    @Andreas #2,
    Assessing bias in the output of an algorithm should be about the same as assessing the bias in the output of a human-based system. You look at the results, do statistics. Of course, in human-based systems, you sometimes also have a paper trail, like when a hiring committee exchanges racist e-mails. So I think you are right that it’s easier to detect bigotry in a human-based system, at least for now.

    I’m glad Cathy O’Neil is aware of both sides of the problem (i.e. algorithms being too accurate, and also not being accurate enough). But I’m still side-eying the example of a good driver in a bad neighborhood being put back to back with a discussion of the dangers of increasingly accurate algorithms. It makes me think she isn’t distinguishing the two problems very well.

    Really, I think O’Neil chooses that example, because it’s just easier to sympathize with the good driver who is screwed by the algorithm. It’s harder to sympathize with all the bad drivers getting screwed. But they are nonetheless getting screwed, and increasingly accurate algorithms will hurt rather than help.

  4. sonofrojblake says

    “The same trend has been observed in experiments with white-sounding versus black-sounding names”

    Given how well known this problem is, I do wonder why anyone persists in giving their children “black-sounding” names. It seems to me equivalent to deliberately mutilating your own child, and what parent would… oh, wait.

  5. Andreas says

    Siggy @#3

    In her book Cathy O’Neil uses dozens of examples. There’s a limit on how many I can quote or mention. Personally, I got the impression that she seemed concerned with all the people who got screwed by algorithms, not just the “innocent” victims.

    That being said, people in general seem more concerned about somebody else’s problem when they deem the victim “innocent.” For example, somebody who is born with a medical condition that requires expensive treatment will get more sympathy than somebody who develops a medical problem as a result of smoking.

    This is why, when you are making an argument against mathematical models, your argument will be seen as more persuasive if you focus on the “innocent” victims. In a debate that would be the smarter strategy if your goal is to convince other people that algorithms cause problems.

    Of course, personally, I believe that people who engage in some risky behavior should not be punished for it by society. I think it is very wrong to start sorting victims into those who are “innocent” and those who “asked for it.” At least in general and most of the time.

  6. Andreas says

    sonofrojblake @#4

    Given how well known this problem is, I do wonder why anyone persists in giving their children “black-sounding” names. It seems to me equivalent to deliberately mutilating your own child, and what parent would… oh, wait.

    If a white person wants to name their child after their grandparent, that’s fine. If a person of color wants to do the same, then that’s comparable to genital mutilation? WTF?

    Ethnic groups care about their cultural heritage. Names are part of that heritage.

    I’m not well informed about names and black Americans, so I’ll switch to a different example, given how people with Chinese-sounding or Native American-sounding names face the same form of discrimination. People of these ethnicities care about their languages and culture. Hence they give said names to their children. And they should be free to do so without facing any discrimination as a result.

    My own legal name is a Latvian name. In Latvian, it is possible to have international names, like “Anna,” “Linda,” “Robert,” “Eduard.” Then there are also very Latvian names that don’t exist in other languages. When I moved to Germany, nobody could pronounce my name. They couldn’t even type it. At my university, several professors and administrative employees typed my name incorrectly in their computer system, which is why I routinely had to waste my time trying to get it fixed. That was annoying. 27 years ago my mother didn’t anticipate this problem; she just picked a Latvian name because she liked it and it was part of her cultural heritage. She wasn’t trying to mutilate me or deliberately harm my chances of finding employment.

    By the way, I do hate my legal name, but that is because it is a female name, and I am not a woman.

  7. sonofrojblake says

    If a person of color wants to do the same, then that’s comparable to genital mutilation? WTF?

    The analogy is a fairly simple one.

    There’s a tension (that I entirely understand) between the desire to honour one’s cultural heritage, and the observable fact that doing so will tangibly damage one’s own child.

    Now, you are of course free to honour your cultural heritage (up to a point, as long as that cultural heritage is sufficiently in line with the hosts’ cultural heritage – e.g. if you live in the UK and want to mutilate the genitals of your child, make sure it’s a boy, because nobody gives a shit about them. No mutilating girls though, oh no, that’s a criminal offence.)

    And I absolutely agree there should be no consequences for honouring your cultural heritage.

    But, as you rightly observe… there are such consequences. Complaining about it doesn’t make it go away.

    My point was, given the above facts, how to judge parents who persist in placing something as amorphous and intangible as “cultural heritage” over the observable facts of what will affect their child’s chances in life? Or, like your mother, perhaps simply don’t give that much thought to the name they saddle their children with, and pick something they think sounds nice? She wasn’t trying to harm you, any more than parents who circumcise children are trying to harm their kids (usually).

    My wife and I gave a LOT of thought to the name we gave our son. I’m white/het/cis/English-speaking so privileged in all sorts of ways he’s likely to be too, but even so there was a lot to think about. Nothing hard to pronounce. Nothing he’s going to have to spell for people (e.g. my best friend, who has gone through life having to say “Stephen, with a p h”, or my stepsister who has to specify “Ann, without an E”). Nothing that produces an unfortunate acronym or combination with his initials or surname (I’m looking at you, Mr. and Mrs. Sole, who decided on “Richard”). Nothing “wacky”. In short, we thought, long and hard, about what it might really be like to go through life with a particular name. Not everyone does.

    I’d also observe that adults are allowed to change their legal names, and if I thought for a second doing so would improve my chances of paid employment, I wouldn’t hesitate.

  8. John Morales says

    sonofrojblake, already way off topic, so:

    … the desire to honour one’s cultural heritage …

    Q: Whyever would one desire to “honour” it?

    A: Because it’s part of every culture that its adherents should “honour” it, and there are cultural penalties for breaching that cultural obligation, and it’s perverse to desire to be penalised.

    Anyway, it’s a weird desire, akin to worshipping something or someone.

    Pointless, so I certainly give it no respect, though I accept some people just go along.

    From the OP: “People are biased.”
    Cultural bias is a thing.

  9. Andreas says

    Sonofrojblake @#7

    Now, you are of course free to honour your cultural heritage (up to a point, as long as that cultural heritage is sufficiently in line with the hosts’ cultural heritage – e.g. if you live in the UK and want to mutilate the genitals of your child, make sure it’s a boy, because nobody gives a shit about them. No mutilating girls though, oh no, that’s a criminal offence.)

    Some “cultural heritage” ought to be outlawed. Genital mutilation of children (including circumcision of boys), arranged marriages that force 14-year-old girls to marry men in their forties, extreme modesty requirements that make it impossible for women to lead normal lives outside of their homes, and so on.

    These crimes should be clearly distinguished from cultural heritage that isn’t harmful in itself. Minority languages, ethnic costumes/jewelry/haircuts, ethnic names, and so on should be fully accepted.

    I strongly oppose victim blaming. The problems are racism and xenophobia. Blaming various ethnic groups for wanting to preserve their traditional heritage is wrong. The problem isn’t the existence of minority groups who have unusual-sounding names. The problem is that in our society we have white racist assholes who discriminate against people with unusual names.

    Or, like your mother, perhaps simply don’t give that much thought to the name they saddle their children with, and pick something they think sounds nice? She wasn’t trying to harm you, any more than parents who circumcise children are trying to harm their kids (usually).

    27 years ago my mother didn’t know a single word in English. She had no clue what constitutes an internationally recognizable name. You cannot blame a parent for not being fluent in English.

    I’d also observe that adults are allowed to change their legal names, and if I thought for a second doing so would improve my chances of paid employment, I wouldn’t hesitate.

    Yeah right. In some countries changing one’s first name is expensive and time consuming, while changing one’s last name is outright illegal unless you can prove to some judge that your family name is embarrassing. For example, if my last name were “Hitler,” a Latvian judge would allow me to change it. Since my last name is ordinary, it is completely illegal for me to change it, unless, of course, I get married, in which case I would be legally allowed to get my partner’s family name.

    Moreover, in Latvia I cannot even change my first name. I am legally female. The state has decided that therefore I must have a female name. Latvian law defines female names as ones that end with letters “a” or “e.” The name I prefer to use, Andreas, is classified as a male name in Latvia, thus I cannot get it. My country would allow me to change my first name to “Andrea,” but local bureaucrats won’t allow me to call myself “Andreas.”

    Anyway, even if some Chinese person changed their first name to “John” or “Jane,” they would still have an ethnic-sounding last name, and that would be sufficient to trigger bigotry from strangers who read their job applications.

    John Morales @#8

    Because it’s part of every culture that its adherents should “honour” it, and there are cultural penalties for breaching that cultural obligation, and it’s perverse to desire to be penalised.

    I strongly dislike it when nationalists force other people to “honor their cultural heritage.” For example, at school my Latvian teacher tried to indoctrinate us that we must love Latvian and cherish it and teach it to our children and whatnot.

    Personally, I couldn’t care less what language my biological ancestors spoke or what names they had. I consider my native language overall useless and nowadays I hardly ever use it. As you must have noticed, my blog is not in my native language. The name I use, Andreas, is a German name and isn’t used at all in Latvia. Personally, I have chosen to ignore whatever cultural heritage there might be.

    Nonetheless, some other people are sentimental and care about this stuff. And in my opinion, if somebody else wants to follow some ethnic customs, be it some names or whatever else, then they should have a right to do so. And the society shouldn’t penalize people for cherishing their cultural heritage. Of course, within limits, for example, if somebody tries to claim that child marriage is their cultural heritage, they ought to be incarcerated for pedophilia.

  10. lumipuna says

    “How many prior convictions have you had?”

    Huh? I thought a criminal record was invented so that authorities don’t have to take my own word on these things.

    Of course, at least in my country a record for very minor crimes will expire after a few years, and just being suspected of something doesn’t get you (I think) a record at all. That’s because someone has already made a judgement on how far your past crimes should be considered in whatever security assessments.

  11. sonofrojblake says

    Some “cultural heritage” ought to be outlawed. Genital mutilation of children (including circumcision of boys), arranged marriages that force 14-year-old girls to marry men in their forties, extreme modesty requirements that make it impossible for women to lead normal lives outside of their homes, and so on.

    Right with you. Up to the last one. You are on a VERY sticky wicket on that one. How do you outlaw modesty requirements? The French have tried, and been lambasted as interfering with the freedom of women to wear what they want (i.e. the freedom to ostentatiously flaunt the tools of their oppression by the patriarchy, and walk around a civilised country wearing a mask).

    These crimes should be clearly distinguished from cultural heritage that isn’t harmful in itself. Minority languages, ethnic costumes/jewelry/haircuts, ethnic names, and so on should be fully accepted.

    Well, there’s that word “should” again. I can only agree, they should. (Assuming the jewellery doesn’t require piercing or other body modification of children too young to consent).

    I strongly oppose victim blaming. The problems are racism and xenophobia. Blaming various ethnic groups for wanting to preserve their traditional heritage is wrong.

    I didn’t say I blame them. I’m baffled by them.

    The problem isn’t the existence of minority groups who have unusual-sounding names. The problem is that in our society we have white racist assholes who discriminate against people with unusual names.

    The problem is it’s really, really hard to prove they’re discriminating based on names and therefore do anything about it, while at the exact same time it’s really, really easy to name your son “Michael” instead of “DeShawn” or similar.

    This is not an issue unique to African Americans. According to a blog written by an ex-teacher in the UK, there’s a game: the top set/bottom set game. It can be played in any school, even ones with more or less exclusively white pupils. A teacher reads the given names of the children in the top set (i.e. those with the highest assessed ability), and the given names of the children in the bottom set (the worst). Other teachers have to guess which group is which, based only on the list of names. And they are always right. Top sets typically feature Davids, Michaels, Andrews, Rachels, Annes, Elizabeths. Bottom sets feature Kevins, Waynes, Darrens, Tracys, Sharons and Chardonnays. Top set kids’ parents name them after saints or apostles. Bottom set kids’ parents name them after people they’ve seen on the telly. And don’t think for a minute that those kids won’t face discrimination in later life. It doesn’t matter and nobody will suggest anything should be done about it, obviously, because they’re white, but it does happen.

    27 years ago my mother didn’t know a single word in English. She had no clue what constitutes an internationally recognizable name. You cannot blame a parent for not being fluent in English.

    I’m sorry I don’t get how that’s relevant.

    I’d also observe that adults are allowed to change their legal names, and if I thought for a second doing so would improve my chances of paid employment, I wouldn’t hesitate.

    Yeah right.

    Well, yes, where I live, that is right. I concede it had not occurred to me that these reasonable conditions do not pertain everywhere, and I apologise in particular because you have complained about this before elsewhere on FtB and I absolutely should have remembered that. I’m sorry.

    Latvian law defines female names as ones that end with letters “a” or “e.”

    I can’t be the first to observe that that isn’t just oppressive, it’s bloody stupid.

  12. Andreas says

    sonofrojblake @#11

    How do you outlaw modesty requirements? The French have tried, and been lambasted as interfering with the freedom of women to wear what they want (i.e. the freedom to ostentatiously flaunt the tools of their oppression by the patriarchy, and walk around a civilised country wearing a mask).

    I believe that people should be allowed to wear whatever clothes they like, but some cultural prescriptions are outright harmful. If a woman wants to wear a scarf, whatever, it doesn’t restrict her mobility or her ability to lead a normal life. But at some point modesty requirements turn into a huge problem. I have actually seen photos of women in burqas riding bicycles. I have no clue how they manage. Or how they swim in a swimming pool or at a beach. Some traditional clothing is objectively harmful. I know it is hard to just ban this crap (a ban may backfire and prevent women from leaving their homes at all), but a society should at least try to discourage this shit.

    I’m sorry I don’t get how that’s relevant.

    Due to not being fluent in English, my mother had no clue what constitutes an internationally recognizable name. You said that “my wife and I gave a LOT of thought to the name we gave our son.” You also stated that other parents should do the same. The problem is that my mother couldn’t possibly have thought about how the name she was giving me might later affect my life outside of Latvia, because she literally had no clue what names people have in other countries. You tried to anticipate possible problems your son might face throughout his life due to having some name. That’s a good thing to do, but it is only possible when you have certain knowledge.

    I can’t be the first to observe that that isn’t just oppressive, it’s bloody stupid.

    Yep. Pretty much everybody has noticed this. Except for Latvian nationalists who insist upon maintaining the purity of the language. In Latvian, traditionally, male names end with the letters “s” or “o,” and female names end with the letters “a” or “e.” Period. That’s how the language worked 200 years ago, and that’s how it must be preserved, according to nationalist assholes.

    In the Latvian press, people write about Donalds Džons Tramps (Donald John Trump) and Hilarija Daiena Rodema Klintone (Hillary Diane Rodham Clinton). Foreigners hate this. I also hate this. My name, Andreas, is a German name. Latvian bureaucrats wouldn’t allow even a man to have a Latvian passport with the name “Andreas”; instead it would be changed into “Andrēass.”

  13. lumipuna says

    I imagine a Black American family couldn’t escape most of the racism by just having white sounding names. That’d give some perspective.

    As for ethnic names, my mother does speak English since school age, but she only recently heard (from me) that some Finnish men’s names would be coded female in English and/or German (such as Kari, Toni, Mika, Aki). Just by coincidence, my name isn’t any of these, and also doesn’t contain uncommon letters or sound combinations.

  14. sonofrojblake says

    I imagine a Black American family couldn’t escape most of the racism by just having white sounding names

    Merely because you can’t solve ALL of a problem doesn’t mean you shouldn’t take steps to address SOME of it.

    If there’s a simple, straightforward and free thing you can do that will have a statistically proven positive effect, why on Bod’s earth would you not do it?

  15. lumipuna says

    I’m not saying it’s a meaningless step, but I suggest that from a Black perspective it probably seems relatively trivial. Ultimately, I guess people resist taking these steps because we’re tribal creatures. Why that is, I cannot explain.
