As modern machine learning methods become more ubiquitous, increasing attention is being paid to understanding how these models work. Typically these questions come in one of two flavors:
1. In general, which variables are important for this model and which are less influential?
2. For a specific prediction, what factors contributed most heavily to the model’s conclusion?
## General Model Understanding
In question 1, we are trying to get a general understanding of the mechanisms behind the model. For example, suppose we have an algorithm to predict the value of a house, which looks at a dozen or so factors. An “explanation” might be something along the lines of:
“The primary factors are the square footage of the house and the wealth/income of the neighborhood at large. The condition of the house is also important. Other factors such as the number of bathrooms, size of the lot, and whether it has a garage are somewhat important. The rest of the variables have a relatively minor impact.”
Often, people wish to make statements such as “variable X is more important than variable Y.” One caveat to statements like these is that one needs to consider both the magnitude and the frequency of a variable’s impact. For example, imagine a world where half the houses have 2-car garages and half have 1-car garages, while 0.1% of the houses have luxury swimming pools. All else being equal, a house with a 2-car garage is worth \$20,000 more than its 1-car counterpart, but a luxury swimming pool adds \$150,000 to the value of the house. Which variable is more “important”? The swimming pool has a much higher magnitude of impact, but a much lower frequency (more precisely, the variable corresponding to the swimming pool has lower entropy). There are any number of ways to combine the two factors into a single number, but you will always lose something significant in doing so.
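One of those many possible combinations is the mean absolute contribution of each variable across the population. Here is a minimal sketch using the garage and pool numbers above (the population itself is simulated and entirely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: half the houses have a 2-car garage
# (worth $20,000 over a 1-car garage), and 0.1% have a luxury
# pool (worth $150,000). All numbers are illustrative.
has_big_garage = rng.random(n) < 0.5
has_pool = rng.random(n) < 0.001

garage_contrib = np.where(has_big_garage, 20_000, 0)
pool_contrib = np.where(has_pool, 150_000, 0)

# Fold magnitude and frequency into one number: the mean
# absolute contribution of each variable.
print(f"garage: mean |contribution| ~ ${garage_contrib.mean():,.0f}")
print(f"pool:   mean |contribution| ~ ${pool_contrib.mean():,.0f}")
```

By this particular measure the garage dominates (roughly \$10,000 vs. \$150 on average), even though any individual pool is worth far more — which is exactly the kind of information that gets flattened when you collapse importance to a single number.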
One way to get a feel for “how” a model is doing its reasoning is simply to see how its predictions change as you change the inputs. Tools such as Individual Conditional Expectation (ICE) plots examine precisely this. However, some care must be taken in interpreting them: varying one input while holding the others fixed can push the model into unrealistic regions of feature space, especially when inputs are correlated.
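The ICE computation itself is simple enough to sketch by hand: for each individual example, sweep one feature over a grid while holding that example’s other features fixed, and record the predictions. The sketch below uses a synthetic housing dataset and a scikit-learn random forest; the feature names and coefficients are illustrative assumptions, not real data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: price driven mostly by sqft, plus bathrooms and noise.
X = rng.uniform([500, 1], [5000, 4], size=(500, 2))  # [sqft, bathrooms]
y = 100 * X[:, 0] + 10_000 * X[:, 1] + rng.normal(0, 10_000, 500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# ICE curves for feature 0 (sqft): for each house, sweep sqft over a
# grid while holding that house's other features fixed.
grid = np.linspace(500, 5000, 20)
ice = np.empty((len(X), len(grid)))
for j, v in enumerate(grid):
    X_mod = X.copy()
    X_mod[:, 0] = v
    ice[:, j] = model.predict(X_mod)

# Each row of `ice` is one house's curve; averaging the curves
# column-wise gives the familiar partial dependence plot.
pdp = ice.mean(axis=0)
```

Recent versions of scikit-learn can also produce these directly via `sklearn.inspection.PartialDependenceDisplay` with `kind="individual"`. Note the caveat above: each sweep asks the model about houses (e.g., a 500 sq ft house with 4 bathrooms) that may never occur in the training data.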
## Explaining a Specific Prediction
In question 2, we are confronted with a prediction on a single example and want a “justification” for the model’s conclusion. Colloquially, we want to know “Why is that house so expensive?” (from the model’s point of view). The kinds of answers we are looking for might be “It’s a 5,000 sq ft mansion!” or “It’s in downtown Manhattan!”
Beyond a single headline reason, we might want something closer to what a real estate agent or professional appraiser would give. Typically, they start with a baseline estimate, such as “The average house price in the U.S. is \$225,000.” From there, they highlight the aspects of this particular house that make it different from the typical one: “Your town is a bit more expensive than other towns, so that makes the house worth 50K more. This house is smaller than average for your town, which makes it worth 25K less. But it has a relatively large lot (compared to comparably sized homes in your town), which makes it worth 10K more. It is slightly older, which makes it worth 7K less…” and so on.
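Structurally, the appraiser’s walkthrough is an additive decomposition: a baseline plus signed, per-feature adjustments that sum to the final estimate. A tiny sketch using the hypothetical numbers above:

```python
# The appraiser's explanation is a baseline plus signed, per-feature
# adjustments; the estimate is their sum. Numbers from the example above.
baseline = 225_000
adjustments = {
    "town":  +50_000,  # more expensive town
    "size":  -25_000,  # smaller than average for the town
    "lot":   +10_000,  # relatively large lot
    "age":    -7_000,  # slightly older
}
estimate = baseline + sum(adjustments.values())
print(estimate)  # 253000
```

This additive, baseline-plus-adjustments shape is exactly the form of explanation the methods below produce.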
As it happens, methods like SHAP, based on the Shapley value from cooperative game theory, can produce almost exactly this kind of analysis. XGBoost has SHAP integrated directly, making it possible to get these “prediction explanations” in just a few lines of code. More details to come in future posts! I’ll also be giving a talk on this subject at ODSC West in October 2019 in San Francisco!
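To make the Shapley idea concrete, here is a self-contained sketch of the exact (brute-force) Shapley computation for a toy pricing function. This is for illustration only: it is exponential in the number of features, and it is not the optimized Tree SHAP algorithm that XGBoost and the `shap` package actually implement. Features absent from a coalition are filled in from a reference (“typical house”) point, one common convention:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for `predict` at point `x`.

    Features outside a coalition are replaced by their `baseline`
    values. Cost grows exponentially with the number of features,
    so this is only feasible for toy models.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy pricing model with an interaction between sqft and condition.
def price(f):  # f = [sqft, condition]
    return 100 * f[0] + 20_000 * f[1] + 0.01 * f[0] * f[1]

x = [3_000, 2]      # the house we want to explain
base = [1_500, 1]   # a "typical" house as the reference point

phi = shapley_values(price, x, base)
# Efficiency property: the contributions sum exactly to
# (prediction for x) - (prediction for the baseline).
print(price(x) - price(base), sum(phi))
```

The efficiency property in the last comment is what makes the output read like the appraiser’s story: a baseline prediction plus per-feature contributions that add up to the model’s actual prediction.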