\[
\newcommand{\bs}[1]{\mathbf{#1}}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\widebar}[1]{\overline{#1}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\Earg}[1]{\mathbb{E}\left[{#1}\right]}
\newcommand{\Esubarg}[2]{\mathbb{E}_{#1}\left[{#2}\right]}
\]
Readings: 1 (required), 2 (optional), Code
Bullet items with \(^{\dagger}\) are not in the required reading, so not tested.
Setup
Goal. Given a model \(f\) and a sample \(x \in \reals^{D}\), return a local feature attribution \(\varphi_d\left(x\right)\) that quantifies the contribution of feature \(d\) to the prediction \(f\left(x\right)\).
Requirements.
Local feature attributions. Unlike global variable importances, \(\varphi_d\left(x\right)\) are specific to sample \(x\). This matters when a stakeholder cares about a particular prediction \(f(x)\) and wants an explanation for it.
Model agnostic. Attributions should be computable by querying \(f\) alone, without any assumptions about what kind of model it is – it could be a black box.
Principled. The attribution measure should be derivable from a clear set of mathematical axioms.
Approach.
SHAP values satisfy all three requirements. This handout focuses on their theoretical development. Practical computation comes next week. We proceed in these steps,
Game theory definitions. SHAP is inspired by a classic result from \(n\)-player game theory. We’re not interested in game theory for its own sake, but this framing will help us see why there are several defensible ways to adapt it to machine learning.
Machine learning analogy. We develop an analogy between the game theoretic definition and quantities of interest in local feature attribution.
Feature removal. The most ambiguous part of the game theory \(\to\) ML analogy is how to implement “feature removal.” Different choices lead to different SHAP variants.
Local Feature Attributions
High-stakes decisions. When an individual has a medical diagnosis made or an insurance claim denied, knowing the globally most important features isn’t enough. They deserve an explanation specific to their case.
- A related use case is algorithmic recourse. What could a stakeholder change to reverse a decision? (e.g., which change to their resume would have gotten them a job interview?)
Model debugging. A model might classify \(x\) as a husky because it had snow in the background (large \(\varphi_d(x)\) on pixels \(d\) in the snow region) rather than the dog itself. This means the model has learned a “shortcut” and won’t generalize well – a wolf in the snow might get misclassified as a husky (Ribeiro, Singh, and Guestrin 2016).

Scientific discovery. In heterogeneous populations (e.g., different disease subtypes), a model might rely on different sets of features for each subpopulation. Local attributions can highlight these differences – e.g., identifying which features drive drug effectiveness in one subgroup vs. another.
Exercise: Give one example (hypothetical, or from your own experience) where local feature attribution would be useful. How would it differ from global variable importance?
Game Theory Definitions
SHAP is motivated by the credit assignment problem from game theory. Imagine agents \(\mathcal{D} = \{1, \dots, D\}\) cooperating in a game where any team (subset) \(S \subseteq \mathcal{D}\) earns profit \(v(S)\). How much of the total profit \(v(\mathcal{D})\) should agent \(d\) receive? This share is the Shapley value \(\varphi_{d}(v)\).
Intuitively, agent \(d\)’s contribution depends on how much they add to each team \(S\). Define the marginal contribution of agent \(d\) to team \(S\) as, \[C\left(d \vert S\right) = v(S \cup \{d\}) - v(S)\]

The Shapley value is a weighted average of these marginal contributions across all subsets \(S \subseteq{\mathcal{D} - \{d\}}\) excluding agent \(d\),
\[
\varphi_d(v) = \sum_{S \subseteq \mathcal{D} - \{d\}} \frac{1}{D {D - 1 \choose \left|S\right|}} C(d \vert S)
\tag{1}\] The summation runs over all subsets that don’t include agent \(d\) (if \(S\) contained agent \(d\), the contribution \(C(d \vert S)\) of \(d\) to \(S\) wouldn’t make sense).
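For small games, Equation 1 can be computed directly by enumerating subsets. A minimal sketch in Python (the three-player game `v` is hypothetical, for illustration only; exact rational arithmetic avoids floating-point noise in the weights):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def shapley_values(v, D):
    """Exact Shapley values via Equation 1."""
    phis = []
    for d in range(D):
        others = [p for p in range(D) if p != d]
        phi = Fraction(0)
        for s in range(D):  # coalition sizes 0, ..., D - 1
            weight = Fraction(1, D * comb(D - 1, s))
            for S in combinations(others, s):
                S = set(S)
                phi += weight * (v(S | {d}) - v(S))  # weight * C(d | S)
        phis.append(float(phi))
    return phis

# Hypothetical game: profit 10 when agents 0 and 1 team up,
# plus 2 whenever agent 2 participates.
v = lambda S: 10 * ({0, 1} <= S) + 2 * (2 in S)
print(shapley_values(v, 3))  # → [5.0, 5.0, 2.0]
```

Agents 0 and 1 are symmetric, so they split the 10 evenly; agent 2 always adds exactly 2 and receives exactly that.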
Exercise: Express the following in terms of \(v\). Which of these could appear in the definition of the Shapley value?
- \(C(1 \vert 2)\)
- \(C(1 \vert \emptyset)\)
- \(C(1 \vert 1)\)
- \(C(3 \vert \{1, 2\})\)
- \(C(\{1, 2\} \vert 3)\)
The weights \(1/(D {D - 1 \choose \left|S\right|})\) sum to 1, making \(\varphi_{d}(v)\) a proper weighted average of marginal contributions. For any coalition size \(s\), there are \({D - 1 \choose s}\) subsets of that size, so the total weight is,
\[\sum_{s=0}^{D-1} \binom{D-1}{s} \frac{1}{D\binom{D-1}{s}} = \sum_{s=0}^{D-1} \frac{1}{D} = 1.\]
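This normalization is easy to verify numerically; a quick sketch using exact rational arithmetic (the choice \(D = 6\) is arbitrary):

```python
from fractions import Fraction
from math import comb

D = 6  # any number of agents works
# comb(D - 1, s) coalitions of size s, each weighted 1/(D * comb(D - 1, s))
total = sum(Fraction(comb(D - 1, s), D * comb(D - 1, s)) for s in range(D))
print(total)  # → 1
```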

The Shapley value is the unique solution satisfying the axioms below, giving this approach a principled justification,
Efficiency. Shapley values sum to the total profit: \[
\sum_{d = 1}^{D} \varphi_{d}(v) = v\left(\mathcal{D}\right) - v\left(\emptyset\right)
\] All profit is distributed and nothing is left over.
Monotonicity. If agent \(d\) contributes at least as much in game \(v_1\) as in \(v_2\) for every coalition, then \(\varphi_d(v_1)
\geq \varphi_d(v_2)\).
Symmetry. Equal contributors receive equal credit. That is, if \(v(S
\cup \{d\}) = v(S \cup \{d'\})\) for every \(S\), then \(\varphi_d(v) =
\varphi_{d'}(v)\).
Missingness. An agent that never contributes receives nothing. That is, if \(v\left(S \cup \{d\}\right) = v\left(S\right)\) for every \(S\), then \(\varphi_d(v) = 0\).
Machine Learning Analogy
The key insight is that we can map this game theoretic setup to the local feature attribution problem:
- “profit \(v(\mathcal{D})\)” \(\to\) “prediction \(f(x)\).”
- “agent \(d\)” \(\to\) “feature \(d\).”
- “team \(S\)” \(\to\) “subset of features \(S \subseteq \mathcal{D}\).”
Instead of distributing profit across agents, we attribute a prediction \(f(x)\) across features. We denote this attribution \(\varphi_d(f, x)\).

For this to work, we need to define \(C(d \vert S)\) as the change in prediction when feature \(d\) is included vs. removed. Different answers to this feature-removal question lead to different SHAP variants. But once \(C\) is defined, we can substitute it into Equation 1 to get a local feature attribution for sample \(x\).
Exercise: Pick one of the four axioms for game theoretic SHAP. What does it imply about \(\varphi_d(f, x)\)?
Deterministic Feature Removal
There are three common approaches to feature removal: baseline, marginal, and conditional. We’ll review each in turn.
Let \(x'\) denote a baseline value. For example, \(\mathbf{0}\in \reals^{D}\), or the sample mean \(\bar{x} \in
\reals^{D}\). Define \[
v(S) = f(x_{S}, x'_{\bar{S}})
\] where \(x_{S}\) denotes the coordinates of \(x\) included in \(S\) and \(x'_{\bar{S}}\) the baseline values for the excluded coordinates. This uses the real feature values from sample \(x\) for coordinates in \(S\) and substitutes the baseline \(x'\) elsewhere.

Exercise: Suppose that \(x\) is an image and that \(x'\) is the all zeros image. What would \(\left(x_{S}, x'_{\bar{S}}\right)\) look like?
The marginal contributions become,
\[
\begin{align*}
C(d \vert S) &= v\left(S \cup \{d\}\right) - v(S)\\
&= f\left(x_{S \cup \{d\}}, x'_{\overline{S \cup \{d\}}}\right) - f\left(x_{S}, x'_{\bar{S}}\right)
\end{align*}
\] This is the change in prediction when feature \(d\) is included (left) vs. replaced by the baseline (right).
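A minimal sketch of baseline removal, assuming a toy two-feature linear model and an all-zeros baseline (both hypothetical, for illustration only):

```python
def composite(x, baseline, S):
    """Form (x_S, x'_{S-bar}): x's values on coordinates in S, baseline elsewhere."""
    return [x[d] if d in S else baseline[d] for d in range(len(x))]

# Toy two-feature linear model and baseline (hypothetical).
f = lambda z: 2.0 * z[0] + 3.0 * z[1]
x = [1.0, 1.0]
x_prime = [0.0, 0.0]  # all-zeros baseline x'

v = lambda S: f(composite(x, x_prime, S))
# C(0 | empty set): feature 0 included vs. replaced by the baseline.
print(v({0}) - v(set()))  # → 2.0
```

Note that with every feature included, \(v(\mathcal{D}) = f(x)\), so the attributions distribute the actual prediction.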

The downside of this approach is that it depends on the choice of baseline \(x'\), and there is no obvious principled way to choose it.

Sampling-based Feature Removal
Both the marginal and conditional approaches replace the deterministic baseline with an expectation over randomly sampled coordinates. Let \(X_{\bar{S}}\) be a random vector of the features not in \(S\), drawn from the training distribution. The marginal approach defines, \[
v(S) = \Esubarg{p(X_{\bar{S}})}{f(x_{S}, X_{\bar{S}})}
\tag{2}\]
In practice, this expectation can be approximated using the training samples \(x_1, \dots, x_{N}\), \[
v(S) = \frac{1}{N} \sum_{i = 1}^{N} f(x_S, x_{i,\bar{S}}).
\] 
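This approximation can be sketched in plain Python (the model `f` and Gaussian training sample are hypothetical, for illustration only):

```python
import random

random.seed(0)

# Hypothetical model and Gaussian training sample, for illustration only.
f = lambda z: z[0] + 2.0 * z[1]
X_train = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(1000)]
x = [1.0, -1.0]

def v_marginal(S, x, X_train, f):
    """Approximate v(S) by averaging f over training rows whose
    coordinates in S are overwritten by x's values."""
    total = 0.0
    for row in X_train:
        z = [x[d] if d in S else row[d] for d in range(len(x))]
        total += f(z)
    return total / len(X_train)

# With every feature included, v(S) recovers f(x) exactly.
print(v_marginal({0, 1}, x, X_train, f))  # → -1.0
```

With \(S = \emptyset\), the estimate reduces to the average prediction over the training sample.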