Data science has an increasing impact on our lives, and not always for the better. People speak of “Big Data”, and demand regulation, but they don’t really understand what that would look like. I work in one of the few areas where data science is regulated, so I want to discuss one particular regulation and its consequences.
So, it’s finally time for me to publicly admit… I work in the finance sector.
These regulations apply to many different financial trades, but for illustrative purposes, I’m going to talk about loans. The problem with taking out a loan is that you need to pay it back plus interest. The interest is needed to give lenders a return on their investment, and to offset the losses from other borrowers who don’t pay it off. Lenders can increase profit margins and/or lower interest rates if they can predict who won’t pay off their debt, and decline those people. Data science is used to help make those decline decisions.
The US imposes two major restrictions on this data science. First, there are anti-discrimination laws (a subject I might discuss at a later time) (ETA: it’s here). Second, an explanation must be provided to people who are declined.
Adverse Action Notices
If you ever apply for a loan and get declined, you will receive an adverse action notice. The notice will provide a variety of information intended to help you. Most relevant to data scientists, an adverse action notice must list the principal reasons why you were denied credit. In common practice, the notice will list the four most important reasons.
The solution draws on a subfield of AI known as explainable AI, or xAI for short. xAI is a collection of methods that purport to explain why AI or machine learning models behave the way they do.
In order to decide whether to approve or decline your loan application, a machine learning model will take a large number of inputs. Most inputs are pulled from your credit report. Of those many inputs, the lender needs to provide you with just four reasons in ranked order based on the specifics of your application (that is, it shouldn’t just be the same four reasons for everyone). So your application goes through a mathemagical xAI process that ranks each input in terms of how positively or negatively it affected your score. The four inputs with the most negative contribution are selected, converted to plain English, and then listed in your adverse action notice.
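For concreteness, here’s a toy sketch of that final selection step in Python. All of the feature names, contribution numbers, and reason-code wording are invented; the point is just the ranking logic, not any lender’s actual pipeline.

```python
# Toy sketch of picking the top adverse action reasons.
# Feature names, numbers, and reason wording are all made up.

# Hypothetical per-feature contributions to one applicant's score
# (negative = pushed the score down).
contributions = {
    "income": -0.8,
    "utilization": -0.5,
    "inquiries_last_6mo": -0.3,
    "months_since_delinquency": -0.2,
    "age_of_oldest_account": 0.4,
}

# Hypothetical mapping from model inputs to plain-English reason codes.
reason_codes = {
    "income": "Income insufficient for amount of credit requested",
    "utilization": "Proportion of balances to credit limits is too high",
    "inquiries_last_6mo": "Too many recent inquiries",
    "months_since_delinquency": "Delinquency on accounts",
    "age_of_oldest_account": "Length of credit history",
}

# Rank features from most negative to least negative and keep up to four.
ranked = sorted(contributions, key=contributions.get)
top_reasons = [reason_codes[f] for f in ranked[:4] if contributions[f] < 0]
print(top_reasons)
```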
Why explain?
What is the benefit of requiring lenders to explain their decisions to consumers? According to a lead data scientist at Equifax, one of the goals is
providing guidance on what areas he/she could work to improve to achieve a higher score […].
This seems a bit absurd when most of the biggest reasons you might be declined are things you can’t do anything to change. For example, one major reason to decline someone is that their income is too low. Not everyone can do something to improve their income; consider, for example, a retiree living on a fixed Social Security income. Another reason might be that you had a bankruptcy in the last 7 years. Don’t do that, says the adverse action notice. So helpful. And you may be particularly frustrated to learn that a major reason for getting declined is that you have too many credit inquiries (i.e., you’ve applied for credit too many times).
But surely at least sometimes the adverse action notice will list things that you can improve. Sometimes it will list things about you that you know to be false, giving you an opportunity to dispute those facts. And it’s probably a good thing to promulgate general knowledge about what will improve your credit.
The restrictions of explainability
Another major effect of the regulation is that it imposes restrictions on the model. The precise nature of these restrictions is very fuzzy, because the boundaries don’t really get decided until they land in front of a judge, and most companies prefer not to reach that point. Here are three main areas where explainability imposes restrictions:
- If an input gets selected as one of the top 4 reasons for decline, and produces an explanation that consumers will find counterintuitive (even if correct!), this might generate complaints that will attract regulatory attention. So, if any inputs pose that risk, those inputs may be blocked from going into the model in the first place.
- Companies may self-impose restrictions on the structure of the model itself. Many companies use a relatively simple model called the scorecard model. More adventurous companies are starting to use more complicated models (like XGBoost), which require more complicated xAI methods (like SHAP); there’s a sketch of that combination after this list. None of these techniques is new, they’re all old and established, but it’s a matter of persuading regulators of their adequacy.
- As I explained in an earlier post, every model suffers from two sources of error: bias and variance. Variance comes from noise in the data, and bias comes from inflexibility of the model. There is a bias-variance tradeoff, and the best model will try to balance the two. However, variance may cause a model to behave counterintuitively sometimes, and that might result in some counterintuitive explanations. So data scientists will tilt their models towards bias, possibly to the detriment of the model’s accuracy.
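To make the second bullet concrete, here’s a minimal sketch of the XGBoost-plus-SHAP combination on synthetic data. The features, labels, and coefficients are invented, and no lender’s actual pipeline looks like this; it just shows where the per-applicant contributions come from.

```python
# Hedged sketch: fit a gradient-boosted classifier on synthetic data,
# then use SHAP to attribute one applicant's score to its inputs.
# Feature names and data are invented for illustration only.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(50, 15, 1000),   # income ($k/year)
    rng.normal(10, 5, 1000),    # debt payments ($k/year)
    rng.poisson(2, 1000),       # recent credit inquiries
])
# Synthetic "defaulted" label loosely driven by the features.
y = (0.05 * X[:, 1] - 0.03 * X[:, 0] + 0.2 * X[:, 2]
     + rng.normal(0, 1, 1000)) > 0

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y.astype(int))

# TreeExplainer gives per-feature contributions (in log-odds) relative
# to a baseline, which is exactly the "Joe Schmo" issue discussed below.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contributions for one applicant
print(dict(zip(["income", "debt", "inquiries"], shap_values[0])))
```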
Are these restrictions good? Hard to say. They make the models perform worse, but at least some (most?) of that cost will be borne by the financial industry itself, and I think most people can live with that.
On the other hand, it sure seems like these restrictions are an unintended side effect. After all, if companies can get around the restrictions by just doing more advanced math (e.g. by using SHAP), then it’s hardly a restriction at all; it’s just a source of employment for data scientists and lawyers.
The mathematical dragon
So, I’m not showing the mathematics of xAI just for the heck of it (although that’s definitely the kind of thing I would do). I actually have a point to make about the philosophy of explanation. Selecting which of the input features caused a person to be declined is a problem known in philosophy as causal selection. Many philosophers believe that there is no objective basis for causal selection. That’s kind of a problem, because our goal is precisely to build a mathematical algorithm that does just that.
Because causal selection is so fraught, it seems to me that “explaining” an algorithm is… theoretically impossible. It feels like we’re being legally required to produce a mathematical dragon. Of course, regulators are committed to the belief that such dragons exist, and the US legal profession is culturally innumerate, so we just have to create the best simulacrum of a dragon that we can. My goal in this section is to show you the very simplest dragon, and reveal where the screws are hidden.
For simplicity, suppose that a model has just two inputs, and the adverse action notice needs to select exactly one of them as an explanation. (In practice, there might be tens or hundreds of inputs, and four need to be selected.) One input is income, the other is prior debt. The model follows this simple equation:
credit score = A*income + B*debt + C
If your credit score is greater than zero, then you get approved. If it’s less than zero, you are declined. Simple enough. But if you are declined, is it because your income is too low, or because you have too much debt?
We can think of it in terms of what the consumer needs to change to get approved. For instance, maybe they could get approved by increasing their income by $5k/year. Or maybe they could get approved by reducing their debt payments by $5k/year. But from this analysis, we really haven’t figured out which of the two inputs is more important. They’re both equal in the eyes of the math.
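To put toy numbers on that (all coefficients and figures invented, not from any real model):

```python
# Worked counterfactual with made-up coefficients, purely illustrative.
A, B, C = 0.4, -0.5, -10.0          # hypothetical model weights
income, debt = 30.0, 8.0            # applicant, in $k/year

score = A * income + B * debt + C   # 0.4*30 - 0.5*8 - 10 = -2.0  -> declined

# How much would each input have to move, on its own, to reach score = 0?
income_increase_needed = -score / A   # +5.0  ($5k/year more income)
debt_reduction_needed = score / B     # +4.0  ($4k/year less debt)
print(score, income_increase_needed, debt_reduction_needed)
```

Both moves flip the decision, which is the point: the counterfactual view alone doesn’t tell us which input to rank first.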
The simple method is to attribute part of your credit score to each feature: (A*income) is attributed to income, and (B*debt) is attributed to debt. If you prefer that the numbers add up to the credit score, you can use (A*income + C/2) and (B*debt + C/2), although that doesn’t affect the relative ranking of income and debt. The problem with this method is that A is positive and B is negative, so debt will always be selected as the primary reason you were declined, even if you have no debt whatsoever.
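Here’s that failure mode in code, with the same invented coefficients:

```python
# Naive attribution with the same hypothetical coefficients as above.
A, B, C = 0.4, -0.5, -10.0
income, debt = 30.0, 0.0            # applicant with zero debt

contributions = {"income": A * income,   # +12.0
                 "debt": B * debt}       #   0.0

# "Most negative contribution" picks debt (0.0 < +12.0),
# even though this applicant has no debt at all.
worst = min(contributions, key=contributions.get)
print(worst)  # -> "debt"
```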
The actual solution is to compare the applicant to some sort of base case. Suppose the base case is Joe Schmo, who has an income of $50k/year and makes debt payments of $10k/year. Then the part of the credit score attributed to income is A*(income - $50k/year), and the part attributed to debt is B*(debt - $10k/year).
Of course, the result of this explanation method depends on who you choose to be Joe Schmo. Is he just the average person? Or the average person who applies for loans? Or the average person who gets approved for loans? Or someone else entirely? There isn’t a right answer. But I will say that if the point of an explanation is to show the consumer how to improve their credit score, you generally want Joe Schmo to be above average, because the “explanation” is essentially advising the consumer on how to be more like Joe Schmo.
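And here’s the baseline-relative version, with an invented Joe Schmo:

```python
# Baseline-relative attribution: contributions are measured against a
# reference applicant ("Joe Schmo"). All numbers are invented.
A, B, C = 0.4, -0.5, -10.0
income, debt = 30.0, 8.0            # the declined applicant
ref_income, ref_debt = 50.0, 10.0   # Joe Schmo, the chosen base case

contributions = {
    "income": A * (income - ref_income),   # 0.4 * (30 - 50) = -8.0
    "debt":   B * (debt - ref_debt),       # -0.5 * (8 - 10) = +1.0
}

# Now income, not debt, gets flagged as the reason for decline,
# but only because of who we picked as Joe Schmo.
worst = min(contributions, key=contributions.get)
print(worst)  # -> "income"
```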
There’s another mathematical problem, which is that the real credit score function looks more like this:
credit score = f(A*income + B*debt + C)
where f is the logistic function. In this form, the approval threshold is at 0.5 instead of 0. This algorithm approves and declines precisely the same people, but some people in the industry (who most definitely have something to sell you) argue that it calls for a different explanation method. And it’s hard to argue that they’re wrong, exactly, because we’re all just building dragons.
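The equivalence claim is easy to check: the logistic function is strictly increasing and f(0) = 0.5, so f(score) > 0.5 exactly when score > 0. In code, with the same invented numbers:

```python
import math

def f(x):
    return 1.0 / (1.0 + math.exp(-x))   # the logistic function

A, B, C = 0.4, -0.5, -10.0
income, debt = 30.0, 8.0
score = A * income + B * debt + C

# f is strictly increasing with f(0) = 0.5, so the two decision
# rules agree for every applicant, not just this one.
print(score > 0, f(score) > 0.5)   # -> False False (declined either way)
```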
I could also get into all the complications involved in SHAP, but look at the time!
Conclusion
For the old fogeys who still use Facebook, there’s a neat toy you can play with! Next to an ad, click on the three dots and select “Why am I seeing this ad?” Facebook always tells me that it’s because I’m over 18 and live in the US. I’m sure that this explanation is accurate in some technical sense, but in a deeper sense it’s utterly unhelpful.
I’m not saying it’s bad that the finance industry is required to explain its decisions. I like that finance is so regulated; it makes me feel better about working here. I imagine that the adverse action reasons are a pretty small thing from the point of view of most consumers, but some may find them helpful, and I’m glad for that.
The regulations definitely have some strange consequences though. Model explanation is mathematically and philosophically fraught, and there’s probably not much we can do about that until policy-makers study math and data scientists study social science. I believe in the value of explaining AI decisions, but I’m not sure I actually believe in the explanations themselves.