Library companion to the paper *Efficient Shapley Performance Attribution for Least-Squares Regression* by Logan Bell, Nikhil Devanathan, and Stephen Boyd.
The results in the reference paper were generated with a more performant, but harder-to-use, implementation of the same algorithm. That benchmark code, along with the numerical experiments from the paper, can be found at cvxgrp/ls-spa-benchmark. We recommend caution when using the benchmark code.
To install this package, execute

```
pip install ls_spa
```

Import `ls_spa` by adding

```python
from ls_spa import ls_spa
```

to the top of your Python file.
`ls_spa` has the following dependencies:

- `numpy`
- `scipy`
- `pandas`

Optional dependencies are:

- `marimo` for using the demo notebook
- `matplotlib` for plotting in the demo notebook
We assume that you have imported `ls_spa` and that you have an `X_train` with shape `(N, p)`, an `X_test` with shape `(M, p)`, a `y_train` with shape `(N,)`, and a `y_test` with shape `(M,)` for positive integers `N`, `M`, and `p`. Then

```python
attrs = ls_spa(X_train, X_test, y_train, y_test).attribution
```

`attrs` will be a NumPy vector containing the Shapley values of your features.
The ls_spa
function computes Shapley values for the given data using
the LS-SPA method described in the companion paper. It takes arguments:
X_train
: Training feature matrix.X_test
: Testing feature matrix.y_train
: Training response vector.y_test
: Testing response vector.
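For intuition, here is a brute-force sketch of the quantity `ls_spa` approximates: for each permutation of the features, the lift in out-of-sample R-squared from adding each feature in turn, averaged over all permutations. This pure-NumPy version (function names are illustrative, and the R-squared definition assumes centered data as in the paper's preprocessing) is only feasible for a handful of features, which is exactly why LS-SPA exists.

```python
import itertools
import numpy as np

def r2_of_subset(S, X_train, X_test, y_train, y_test):
    """Out-of-sample R^2 of the least-squares fit on feature subset S."""
    S = list(S)
    if not S:
        return 0.0
    theta, *_ = np.linalg.lstsq(X_train[:, S], y_train, rcond=None)
    resid = y_test - X_test[:, S] @ theta
    return 1.0 - (resid @ resid) / (y_test @ y_test)

def exact_shapley(X_train, X_test, y_train, y_test):
    """Average each feature's R^2 lift over all feature permutations."""
    p = X_train.shape[1]
    attrs = np.zeros(p)
    perms = list(itertools.permutations(range(p)))
    for perm in perms:
        prev = 0.0
        for k, j in enumerate(perm):
            cur = r2_of_subset(perm[:k + 1], X_train, X_test, y_train, y_test)
            attrs[j] += cur - prev  # marginal lift attributed to feature j
            prev = cur
    return attrs / len(perms)
```

By construction the attributions telescope, so they sum to the out-of-sample R-squared of the full model.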
We present a complete Python script that uses LS-SPA to compute the Shapley attribution on the data from the toy example described in the companion paper.

```python
# Imports
import numpy as np
from ls_spa import ls_spa

# Data loading
data = np.load("./data/toy_data.npz")
X_train, X_test, y_train, y_test = (data[key] for key in
                                    ("X_train", "X_test", "y_train", "y_test"))

# Compute Shapley attribution with LS-SPA
results = ls_spa(X_train, X_test, y_train, y_test)

# Print attribution
print(results)
```
This example uses data from the `data` directory of this repository. The line `print(results)` prints a dashboard of information generated while computing the Shapley attribution, such as the attribution itself, the regression coefficients, the out-of-sample R-squared, and the estimated errors. To extract just the vector of Shapley values, use `results.attribution`. For more info, see the optional arguments.
In this demo, we walk through the process of computing Shapley values on the data for the toy example in the companion paper. We then use `ls_spa` to compute the Shapley attribution on the same data.
`ls_spa` takes the optional arguments:

- `reg`: Regularization parameter (default `0`).
- `method`: Permutation sampling method. Options are `'random'`, `'permutohedron'`, `'argsort'`, and `'exact'`. If `None`, `'argsort'` is used if the number of features is greater than 10; otherwise, `'exact'` is used.
- `batch_size`: Number of permutations in each batch (default `2**7`).
- `num_batches`: Maximum number of batches (default `2**7`).
- `tolerance`: Convergence tolerance for the Shapley values (default `1e-2`).
- `seed`: Seed for random number generation (default `42`).
- `return_history`: Flag determining whether to return the history of error estimates and attributions for each feature chain (default `False`).
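The interplay of `batch_size`, `num_batches`, and `tolerance` can be pictured as batched Monte Carlo estimation that halts early once an error estimate falls below the tolerance. The sketch below illustrates only that stopping pattern on a scalar mean; it is not `ls_spa`'s internal logic, and the error estimate (a 95% confidence half-width) is an assumption for illustration.

```python
import numpy as np

def batched_estimate(sample, batch_size=2**7, num_batches=2**7,
                     tolerance=1e-2, seed=42):
    """Draw samples in batches; stop early once a 95% confidence
    half-width on the running mean falls below `tolerance`."""
    rng = np.random.default_rng(seed)
    batches = []
    for _ in range(num_batches):
        batches.append(sample(rng, batch_size))
        draws = np.concatenate(batches)
        est = draws.mean()
        err = 1.96 * draws.std(ddof=1) / np.sqrt(draws.size)
        if err < tolerance:
            break  # converged before exhausting num_batches
    return est, err

# Estimate the mean of Uniform(0, 1); stops well before 2**7 batches.
est, err = batched_estimate(lambda rng, n: rng.uniform(size=n),
                            tolerance=2e-2)
```

A looser `tolerance` or larger `batch_size` trades accuracy for fewer batches, which mirrors how these arguments steer `ls_spa`'s sampling effort.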
`ls_spa` returns a `ShapleyResults` object. The `ShapleyResults` object has the fields:

- `attribution`: Array of Shapley values for each feature.
- `attribution_history`: Array of Shapley values for each iteration. `None` if `return_history=False` in the `ls_spa` call.
- `theta`: Array of regression coefficients.
- `overall_error`: Mean absolute error of the Shapley values.
- `error_history`: Array of mean absolute errors for each iteration. `None` if `return_history=False` in the `ls_spa` call.
- `attribution_errors`: Array of absolute errors for each feature.
- `r_squared`: Out-of-sample R-squared statistic of the regression.
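For reference, `theta` and `r_squared` correspond to the ordinary least-squares fit on all features scored on the test set. A minimal NumPy sketch of those two quantities (the R-squared definition here assumes centered responses, as in the paper's preprocessing; the function name is illustrative, not part of the `ls_spa` API):

```python
import numpy as np

def ols_theta_and_r2(X_train, X_test, y_train, y_test):
    """Fit least squares on the training data; score out-of-sample R^2."""
    theta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    resid = y_test - X_test @ theta
    r_squared = 1.0 - (resid @ resid) / (y_test @ y_test)
    return theta, r_squared
```

On noiseless linear data this recovers the true coefficients with an out-of-sample R-squared of 1.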
If you use this code for research, please cite the associated paper.

```bibtex
@article{Bell2024,
  title = {Efficient Shapley performance attribution for least-squares regression},
  volume = {34},
  ISSN = {1573-1375},
  url = {http://dx.doi.org/10.1007/s11222-024-10459-9},
  DOI = {10.1007/s11222-024-10459-9},
  number = {5},
  journal = {Statistics and Computing},
  publisher = {Springer Science and Business Media LLC},
  author = {Bell, Logan and Devanathan, Nikhil and Boyd, Stephen},
  year = {2024},
  month = jul
}
```