SHAP Foundations

Author

Kris Sankaran

Published

February 23, 2026

\[ \newcommand{\bs}[1]{\mathbf{#1}} \newcommand{\reals}{\mathbb{R}} \newcommand{\widebar}[1]{\overline{#1}} \newcommand{\E}{\mathbb{E}} \newcommand{\Earg}[1]{\mathbb{E}\left[{#1}\right]} \newcommand{\Esubarg}[2]{\mathbb{E}_{#1}\left[{#2}\right]} \]

Readings: 1 (required), 2 (optional), Code

Bullet items with \(^{\dagger}\) are not in the required reading, and so will not be tested.

Setup

Goal. Given a model \(f\) and a sample \(x \in \reals^{D}\), return a local feature attribution \(\varphi_d\left(x\right)\) that quantifies the contribution of feature \(d\) to the prediction \(f\left(x\right)\).

Requirements.

  • Local feature attributions. Unlike global variable importances, \(\varphi_d\left(x\right)\) are specific to sample \(x\). This matters when a stakeholder cares specifically about a particular prediction \(f(x)\) and wants an explanation for it.

  • Model agnostic. Attributions should be computable by querying \(f\) alone, without any assumptions about what kind of model it is – it could be a black box.

  • Principled. The attribution measure should be derivable from a clear set of mathematical axioms.

Approach.

SHAP values satisfy all three requirements. This handout focuses on their theoretical development. Practical computation comes next week. We proceed in these steps,

  1. Game theory definitions. SHAP is inspired by a classic result in \(n\)-player game theory. We’re not interested in game theory for its own sake, but this framing will help us see why there are several defensible ways to adapt it to machine learning.

  2. Machine learning analogy. We develop an analogy between the game theoretic definition and quantities of interest in local feature attribution.

  3. Feature removal. The most ambiguous part of the game theory \(\to\) ML analogy is how to implement “feature removal.” Different choices lead to different SHAP variants.

Local Feature Attributions

  1. High-stakes decisions. When an individual has a medical diagnosis made or an insurance claim denied, knowing the globally most important features isn’t enough. They deserve an explanation specific to their case.

    • A related use case is algorithmic recourse. What could a stakeholder change to reverse a decision? (e.g., which change to their resume would have gotten them a job interview?)
  2. Model debugging. A model might classify \(x\) as a husky because it had snow in the background (large \(\varphi_d(x)\) on pixels \(d\) in the snow region) rather than the dog itself. This means the model has learned a “shortcut” and won’t generalize well – a wolf in the snow might get misclassified as a husky (Ribeiro, Singh, and Guestrin 2016).

  3. Scientific discovery. In heterogeneous populations (e.g., different disease subtypes), a model might rely on different sets of features for each subpopulation. Local attributions can highlight these differences – e.g., identifying which features drive drug effectiveness in one subgroup vs. another.

    Exercise: Give one example (hypothetical, or from your own experience) where local feature attribution would be useful. How would it differ from global variable importance?

Game Theory Definitions

  1. SHAP is motivated by the credit assignment problem from game theory. Imagine agents \(\mathcal{D} = \{1, \dots, D\}\) playing a game in which any subset \(S\) earns profit \(v(S)\). How much of the total profit \(v(\mathcal{D})\) should agent \(d\) receive? This share is the Shapley value \(\varphi_{d}(v)\).

  2. Intuitively, agent \(d\)’s contribution depends on how much they add to each team \(S\). Define the marginal contributions of agent \(d\) to team \(S\) as, \[C\left(d \vert S\right) = v(S \cup \{d\}) - v(S)\]

  3. The Shapley value is a weighted average of these marginal contributions across all subsets \(S \subseteq{\mathcal{D} - \{d\}}\) excluding agent \(d\),

    \[ \varphi_d(v) = \sum_{S \subseteq \mathcal{D} - \{d\}} \frac{1}{D {D - 1 \choose \left|S\right|}} C(d \vert S) \tag{1}\] The summation runs over all subsets that don’t include agent \(d\) (if a subset had included agent \(d\), then the definition of the contribution \(C(d \vert S)\) of \(d\) to \(S\) wouldn’t make sense).

    Exercise: Express the following in terms of \(v\). Which of these could appear in the definition of the Shapley value?

    • \(C(1 \vert 2)\)
    • \(C(1 \vert \emptyset)\)
    • \(C(1 \vert 1)\)
    • \(C(3 \vert \{1, 2\})\)
    • \(C(\{1, 2\} \vert 3)\)
  4. The weights \(1/(D {D - 1 \choose \left|S\right|})\) sum to 1, making \(\varphi_{d}(v)\) a proper weighted average of marginal contributions. For any coalition size \(s\), there are \({D - 1 \choose s}\) subsets of that size, so the total weight is,

    \[\sum_{s=0}^{D-1} \binom{D-1}{s} \frac{1}{D\binom{D-1}{s}} = \sum_{s=0}^{D-1} \frac{1}{D} = 1.\]

  5. The Shapley value is the unique solution satisfying the axioms below, giving this approach a principled justification,

    • Efficiency. Shapley values sum to the total profit: \[ \sum_{d = 1}^{D} \varphi_{d}(v) = v\left(\mathcal{D}\right) - v\left(\emptyset\right) \] All profit is distributed and nothing is left over.

    • Monotonicity. If agent \(d\) contributes at least as much in game \(v_1\) as in \(v_2\) for every coalition¹, then \(\varphi_d(v_1) \geq \varphi_d(v_2)\).

    • Symmetry. Equal contributors receive equal credit. That is, if \(v(S \cup \{d\}) = v(S \cup \{d'\})\) for every \(S\), then \(\varphi_d(v) = \varphi_{d'}(v)\).

    • Missingness. An agent that never contributes receives nothing. That is, if \(v\left(S \cup \{d\}\right) = v\left(S\right)\) for every \(S\), then \(\varphi_d(v) = 0\).
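To make Equation 1 and the axioms concrete, here is a minimal sketch that computes Shapley values by brute-force subset enumeration. The three-player game \(v\) below is hypothetical, chosen so that agents 0 and 1 are interchangeable and agent 2 never contributes, which lets us check symmetry, missingness, and efficiency directly.

```python
from itertools import combinations
from math import comb

def shapley_values(v, D):
    """Exact Shapley values for a game v over players {0, ..., D-1},
    using the subset-sum formula (Equation 1)."""
    players = range(D)
    phi = []
    for d in players:
        others = [p for p in players if p != d]
        total = 0.0
        for size in range(D):
            for S in combinations(others, size):
                weight = 1.0 / (D * comb(D - 1, size))  # 1 / (D * C(D-1, |S|))
                total += weight * (v(set(S) | {d}) - v(set(S)))  # weighted C(d | S)
        phi.append(total)
    return phi

# Hypothetical 3-player game: players 0 and 1 contribute identically,
# player 2 never adds anything.
def v(S):
    return 2.0 * len(S & {0, 1})

phi = shapley_values(v, 3)
# Symmetry gives phi[0] == phi[1], missingness gives phi[2] == 0,
# and efficiency gives sum(phi) == v({0, 1, 2}) - v(set()).
```

The double loop over subsets already hints at the computational cost we return to at the end of these notes.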

Machine Learning Analogy

  1. The key insight is that we can map this game theoretic setup to the local feature attribution problem:

    • “profit \(v(\mathcal{D})\)” \(\to\) “prediction \(f(x)\).”
    • “agent \(d\)” \(\to\) “feature \(d\).”
    • “team \(S\)” \(\to\) “subset of features \(S \subseteq \mathcal{D}\).”

    Instead of distributing profit across agents, we attribute a prediction \(f(x)\) across features. We denote this attribution \(\varphi_d(f, x)\).

  2. For this to work, we need to define \(C(d \vert S)\) as the change in prediction when feature \(d\) is included vs. removed. Different answers to this feature removal question lead to different SHAP values. But once \(C\) is defined, we can substitute it into Equation 1 to get a local feature attribution for sample \(x\).

    Exercise: Pick one of the four axioms for game theoretic SHAP. What does it imply about \(\varphi_d(f, x)\)?

Deterministic Feature Removal

  1. There are three common approaches to feature removal: baseline, marginal, and conditional. We’ll review each in turn.

  2. Let \(x'\) denote a baseline value. For example, \(\mathbf{0}\in \reals^{D}\), or the sample mean \(\bar{x} \in \reals^{D}\). Define \[ v(S) = f(x_{S}, x'_{\bar{S}}) \] where \(x_{S}\) and \(x'_{\bar{S}}\) index the coordinates included in and excluded from \(S\). This uses the real feature values from sample \(x\) for coordinates in \(S\) and substitutes the baseline \(x'\) elsewhere.

    Exercise: Suppose that \(x\) is an image and that \(x'\) is the all zeros image. What would \(\left(x_{S}, x'_{\bar{S}}\right)\) look like?

  3. The marginal contributions become,

    \[ \begin{align*} C(d \vert S) &= v\left(S \cup \{d\}\right) - v(S)\\ &= f\left(x_{S \cup \{d\}}, x'_{\overline{S \cup \{d\}}}\right) - f\left(x_{S}, x'_{\bar{S}}\right) \end{align*} \] This is the change in prediction when feature \(d\) is included (left) vs. replaced by the baseline (right).

  4. The downside of this approach is that it depends on the choice of baseline \(x'\), and there is no obvious principled way to choose it.
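The baseline value function is simple enough to sketch directly. In this minimal example, the model \(f\), the sample \(x\), and the all-zeros baseline are all hypothetical choices for illustration.

```python
import numpy as np

def v_baseline(f, x, x_prime, S):
    """Baseline value function: coordinates in S come from x,
    all others from the baseline x_prime."""
    z = x_prime.copy()
    z[list(S)] = x[list(S)]
    return f(z)

# Hypothetical model and sample (D = 3).
f = lambda z: 2 * z[0] + z[1] * z[2]
x = np.array([1.0, 2.0, 3.0])
x_prime = np.zeros(3)  # all-zeros baseline

# One marginal contribution: C(0 | {}) = v({0}) - v({})
#                                      = f(1, 0, 0) - f(0, 0, 0) = 2
C_0_given_empty = v_baseline(f, x, x_prime, {0}) - v_baseline(f, x, x_prime, set())
```

Swapping `x_prime` for a different baseline (say, the column means of the training data) would change `C_0_given_empty`, which is exactly the baseline-dependence problem noted above.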

Sampling-based Feature Removal

  1. Both the marginal and conditional approaches replace the deterministic baseline with an expectation over randomly sampled coordinates. Let \(X_{\bar{S}}\) be a random vector of the features not in \(S\), drawn from the training distribution. The marginal approach defines, \[ v(S) = \Esubarg{p(X_{\bar{S}})}{f(x_{S}, X_{\bar{S}})} \tag{2}\]

  2. In practice, this expectation can be approximated by the training data, \(x_1, \dots, x_{N}\), \[ v(S) = \frac{1}{N} \sum_{i = 1}^{N} f(x_S, x_{i,\bar{S}}). \]

    The figure below gives a geometric representation of one term in the SHAP sum,

    Exercise: What would \(C\left(1 \vert 2\right)\) look like in this figure? What about \(C\left(1 \vert \emptyset\right)\)?

    Exercise: Would you expect \(\varphi_1(f, x)\) to be larger or smaller than \(\varphi_2(f, x)\) for the \(f\) and \(x\) shown? Explain your reasoning.
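The Monte Carlo estimator above can be sketched as follows. The model \(f\), sample \(x\), and training data here are made up for illustration; \(f\) is additive, so the marginal expectation is easy to verify by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def v_marginal(f, x, X_train, S):
    """Monte Carlo estimate of Equation 2: hold x_S fixed and average f
    over training values of the remaining coordinates."""
    Z = X_train.copy()
    Z[:, list(S)] = x[list(S)]  # overwrite the S-coordinates with x_S
    return f(Z).mean()

# Hypothetical additive model with standard normal training features.
f = lambda Z: Z[:, 0] + 10 * Z[:, 1]
X_train = rng.normal(size=(5000, 2))
x = np.array([1.0, 2.0])

v_1 = v_marginal(f, x, X_train, {0})  # approx. 1 + 10 * E[X_2], i.e. near 1
```

Because the second coordinate is resampled from its marginal, `v_1` ignores the observed value \(x_2 = 2\) entirely; that independence is what creates the extrapolation issue discussed next.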

  3. A potential downside of this approach is that, if features are correlated, the marginal approach may evaluate \(f\) at unrealistic input combinations. This is because \(X_{\bar{S}}\) is sampled independently from the observed \(x_{S}\).

    Exercise: Modify the previous figure to illustrate this extrapolation issue.

  4. The conditional approach addresses this by drawing \(X_{\bar{S}}\) from the distribution conditioned on the observed \(x_{S}\),

    \[ v(S) = \Esubarg{p\left(X_{\bar{S}} \vert X_{S} = x_{S}\right)}{f(x_{S}, X_{\bar{S}})}. \]

  5. Unlike the marginal approach, there is no generic estimator for this conditional expectation. Under some assumptions (e.g., multivariate Gaussianity), it might be available in closed form. Alternatively, one can train a surrogate model to approximate and sample from the required conditional distributions.
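As a sketch of the closed-form case, here is the standard Gaussian conditional-mean formula \(\mu_{\bar{S}} + \Sigma_{\bar{S}S}\Sigma_{SS}^{-1}\left(x_{S} - \mu_{S}\right)\), applied to a hypothetical bivariate example with correlation 0.8.

```python
import numpy as np

def gaussian_conditional_mean(mu, Sigma, S_idx, Sbar_idx, x_S):
    """E[X_Sbar | X_S = x_S] under a multivariate Gaussian with mean mu
    and covariance Sigma."""
    Sigma_bS = Sigma[np.ix_(Sbar_idx, S_idx)]   # cross-covariance block
    Sigma_SS = Sigma[np.ix_(S_idx, S_idx)]      # covariance of the observed block
    return mu[Sbar_idx] + Sigma_bS @ np.linalg.solve(Sigma_SS, x_S - mu[S_idx])

# Hypothetical bivariate Gaussian, unit variances, correlation 0.8.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])

# Conditioning on X_1 = 1 pulls the expected X_2 toward 0.8, not 0.
cond_mean = gaussian_conditional_mean(mu, Sigma, [0], [1], np.array([1.0]))
```

Plugging draws from this conditional (rather than the marginal) into \(f\) gives the conditional value function above; outside the Gaussian case, a surrogate model would play the role of this formula.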

Causal Intervention Perspective

  1. \(^\dagger\) Some researchers (Janzing, Minorics, and Bloebaum 2020) have argued that point (3) in the previous section is not actually a problem, and that, from a causality perspective, the marginal approach computes the more meaningful expectation.

  2. \(^\dagger\) To see this, we distinguish between observed data \(\tilde{x}\) and model inputs \(x\). An interventional conditional distribution sets the input coordinates \({S}\) to \(x_{S}\) while leaving \(X_{\bar{S}}\) undisturbed, written as \[\Earg{f(X_{S}, X_{\bar{S}}) \vert \text{do}(X_{S} = x_{S})}\] This differs from ordinary conditioning, which asks “given \(X_{S} = x_{S}\), what do we expect \(X_{\bar{S}}\)​ to look like?” In contrast, intervening asks “What if we force \(X_S = x_S\) and leave everything else untouched?”

    Exercise: Compare and contrast interventional conditional distributions with CP profiles.

  3. \(^\dagger\) Conditioning on \(X_S = x_S\) can change the distribution of \(X_{\bar{S}}\) in a way that creates nonzero attributions for any feature \(d\) that is correlated with \(X_S\), even if it isn’t in the model. Interventions avoid this because they leave \(X_{\bar{S}}\) unchanged.

  4. \(^\dagger\) To make this concrete, suppose \(X_1\) is a thermometer measurement, \(X_2\) is the actual temperature, and \(f(X_1, X_2) = X_2\) only depends on the actual temperature. Since the two variables are correlated, \(\Esubarg{p(X_2 \vert X_1 = x_1)}{X_2} \neq \Earg{X_2}\), so \(C(1 \vert \emptyset) \neq 0\) and we get a nonzero attribution for \(X_1\), even though it isn’t in the model².

  5. \(^\dagger\) It turns out that \[ \Esubarg{p(X_{\bar{S}})}{f(X_{S}, X_{\bar{S}}) \vert \text{do}(X_{S} = x_{S})} = \Esubarg{p(X_{\bar{S}})}{f(x_{S}, X_{\bar{S}})} \] The right hand side is exactly the marginal expectation from Equation 2. So, the marginal approach matches a causally meaningful quantity that reflects what it means to remove a feature’s influence.
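A small simulation makes the contrast visible for the thermometer example. The noise level and the windowing trick used to approximate conditioning are assumptions for illustration; the point is that the marginal version of \(C(1 \vert \emptyset)\) vanishes while the conditional version does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Thermometer example: X2 is the true temperature, X1 a noisy reading,
# and the model depends only on X2.
X2 = rng.normal(size=100_000)
X1 = X2 + 0.1 * rng.normal(size=100_000)
f = lambda x1, x2: x2

x1_obs = 2.0  # observed thermometer reading

# Marginal removal: X2 is resampled independently of x1_obs,
# so v({1}) and v({}) are the same average and C(1 | {}) = 0.
C_marginal = np.mean(f(x1_obs, X2)) - np.mean(f(X1, X2))

# Conditional removal: restrict to samples whose X1 is near x1_obs
# (a crude stand-in for sampling from p(X2 | X1 = x1_obs)).
near = np.abs(X1 - x1_obs) < 0.05
C_conditional = np.mean(f(x1_obs, X2[near])) - np.mean(f(X1, X2))
```

Here `C_conditional` lands near 2, crediting the thermometer even though \(f\) ignores it, while `C_marginal` is exactly zero, matching the causal argument above.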

Code Example

  1. Let’s apply the shap Python package to identify important variables in the adult dataset. Each \(x_i\) is a survey response for person \(i\), and \(y_i\) indicates whether they make more than $50K/year.

    import shap
    import matplotlib.pyplot as plt
    plt.rcParams['figure.autolayout'] = True
    
    X, y = shap.datasets.adult()
    X, y = X.iloc[:2000], y[:2000]   # subsample for speed
    X
    Age Workclass Education-Num Marital Status Occupation Relationship Race Sex Capital Gain Capital Loss Hours per week Country
    0 39.0 7 13.0 4 1 0 4 1 2174.0 0.0 40.0 39
    1 50.0 6 13.0 2 4 4 4 1 0.0 0.0 13.0 39
    2 38.0 4 9.0 0 6 0 4 1 0.0 0.0 40.0 39
    3 53.0 4 7.0 2 6 4 2 1 0.0 0.0 40.0 39
    4 28.0 4 13.0 2 10 5 2 0 0.0 0.0 40.0 5
    ... ... ... ... ... ... ... ... ... ... ... ... ...
    1995 44.0 4 10.0 0 7 0 4 1 0.0 0.0 50.0 39
    1996 49.0 4 9.0 2 12 4 4 1 0.0 0.0 60.0 39
    1997 75.0 6 14.0 3 10 0 4 0 0.0 0.0 50.0 39
    1998 37.0 4 13.0 2 12 4 4 1 0.0 0.0 55.0 39
    1999 51.0 7 9.0 0 3 1 4 1 0.0 0.0 38.0 39

    2000 rows × 12 columns

  2. We train a random forest model on these data using the sklearn package.

    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X, y)
    RandomForestClassifier()
  3. Here, we’re explaining the first 50 samples \(x_1, \dots, x_{50}\) using the marginal feature removal approach, as implemented by KernelExplainer (we’ll go over the exact computational algorithm next week). The \(N = 2000\) rows in X are used to estimate the marginal expectation in Equation 2. Notice that the explainer only needs access to the anonymous function predictor – it doesn’t require any knowledge of the type of model implemented within it.

    X_explain = X.iloc[:50]
    predictor = lambda X: model.predict_proba(X)[:, 1]
    
    #explainer  = shap.KernelExplainer(predictor, X) # if you want to run accurate version
    explainer  = shap.KernelExplainer(predictor, X.sample(100)) # if you want to run the fast version
    sv = explainer.shap_values(X_explain)

    Exercise: What is the dimension of sv?

  4. A waterfall plot shows \(\varphi_d(f, x)\) for each feature \(d\). By the efficiency axiom, the bars sum to the difference between the predicted response and the base value (the average prediction).

    exp_single = shap.Explanation(
     values = sv[0],
     base_values = explainer.expected_value,
     data = X_explain.iloc[0].values,
     feature_names = X_explain.columns.tolist(),
    )
    shap.plots.waterfall(exp_single)

  5. Here is an alternative visualization that is well-suited to showing attributions for multiple samples (lines) simultaneously. It is helpful for identifying clusters of samples with similar feature attributions.

    shap.decision_plot(explainer.expected_value, sv, X_explain)

    Exercise: Imagine explaining this visualization to a non-data scientist. Describe each component without jargon and summarize the main takeaways within the salary prediction context.

  6. Throughout these notes, we’ve assumed we can easily compute, \[ \varphi_d(v) = \sum_{S \subseteq \mathcal{D} - \{d\}} \frac{1}{D {D - 1 \choose \left|S\right|}} C(d \vert S) \] This is actually challenging to compute! The sum runs over exponentially many subsets just to explain a single sample. So far, we’ve swept this computational challenge under the rug; next week we’ll discuss practical strategies for computing SHAP values efficiently.
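A quick back-of-the-envelope count illustrates the problem, using \(D = 12\) to match the number of columns in the adult data above.

```python
from math import comb

D = 12  # number of features, matching the adult dataset above

# Each feature's Shapley value sums over all subsets of the other D - 1 features.
subsets_per_feature = 2 ** (D - 1)

# Each subset S needs two value-function calls, v(S + {d}) and v(S).
evals_per_feature = sum(2 * comb(D - 1, s) for s in range(D))

# Exactly explaining one sample therefore costs on the order of
# D * 2^(D - 1) model evaluations, and doubles with every added feature.
```

For images or text, where \(D\) is in the thousands, exact enumeration is hopeless, which is why the approximation algorithms covered next week are needed at all.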

References

Janzing, Dominik, Lenon Minorics, and Patrick Bloebaum. 2020. “Feature Relevance Quantification in Explainable AI: A Causal Problem.” Edited by Silvia Chiappa and Roberto Calandra, Proceedings of Machine Learning Research, 108: 2907–16. https://proceedings.mlr.press/v108/janzing20a.html.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. KDD ’16. ACM. https://doi.org/10.1145/2939672.2939778.

Footnotes

  1. That is, if \(v_1\left(S \cup \{d\}\right) - v_1(S) \geq v_2\left(S \cup \{d\}\right) - v_2(S)\).↩︎

  2. In full detail, \(C(1 \vert \emptyset) = v(\{1\}) - v(\emptyset) = \Esubarg{p(X_2 \vert X_1 = x_1)}{X_2} - \Esubarg{p(X_1, X_2)}{X_2} \neq 0\) and \(C(1 \vert 2) = v(\{1, 2\}) - v(\{2\}) = f(x_1, x_2) - \Esubarg{p(X_1 \vert X_2 = x_2)}{f(X_1, x_2)} = x_2 - x_2 = 0\) where in the last step we used the fact that \(f(X_1, x_2) = x_2\) deterministically. Therefore the terms don’t cancel and we obtain a nonzero Shapley value.↩︎