Issue
Hi everyone, I came across an example of how to use SHAP on an LSTM, "Time-step wise feature importance in deep learning using SHAP". I'm curious why the author chose to use
e = shap.DeepExplainer((regressor.layers[0].input,
                        regressor.layers[-1].output), data)
instead of just
e = shap.DeepExplainer(regressor, data)
I suspect the reason is important, but I cannot be sure. Can anyone shed some light on this?
Partial code below
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from keras.models import load_model
import shap

# load the trained LSTM model (X_train is assumed to be defined earlier)
regressor = load_model('lstm_stock.h5')
pred_x = regressor.predict_classes(X_train)

# pick 1000 random training samples: 500 as background data, 500 to explain
random_ind = np.random.choice(X_train.shape[0], 1000, replace=False)
print(random_ind)
data = X_train[random_ind[0:500]]
e = shap.DeepExplainer((regressor.layers[0].input,
                        regressor.layers[-1].output), data)
test1 = X_train[random_ind[500:1000]]
shap_val = e.shap_values(test1)
shap_val = np.array(shap_val)
...
Solution
The solution is quite simple. Let's look at the DeepExplainer documentation. This is its __init__ function:
__init__(model, data, session=None, learning_phase_flags=None)
Your confusion is about the first argument, model. According to the documentation, for TensorFlow, model is:
a pair of TensorFlow tensors (or a list and a tensor) that specifies the input and output of the model to be explained.
So that's it. The first argument is just a pair indicating the input and the output of the model. In this case:
(regressor.layers[0].input, regressor.layers[-1].output)
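As a side note not in the original answer: for a single-input, single-output Keras model, the same tensors are usually also available through the model's input and output attributes, so the sketch below (reusing the regressor, data and test1 objects from the question) should build an equivalent explainer. The equivalence here is my assumption, not something stated in the SHAP docs.
# Minimal sketch (assumption): for a single-input, single-output Keras model,
# regressor.input and regressor.output refer to the same tensors as
# regressor.layers[0].input and regressor.layers[-1].output.
e = shap.DeepExplainer((regressor.input, regressor.output), data)
shap_val = e.shap_values(test1)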
Update:
However, the example Front Page DeepExplainer MNIST Example shows this piece of code:
import shap
import numpy as np
# select a set of background examples to take an expectation over
background = x_train[np.random.choice(x_train.shape[0], 100, replace=False)]
# explain predictions of the model on three images
e = shap.DeepExplainer(model, background)
# ...or pass tensors directly
# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)
shap_values = e.shap_values(x_test[1:5])
From this it seems that, besides the pair of tensors, it is also possible to pass the model itself, just as in PyTorch. Indeed, in PyTorch you can pass an nn.Module object instead of the pair.
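For completeness, here is a minimal PyTorch sketch of the nn.Module form; the toy model, the background tensor and the test batch are made up purely for illustration.
import torch
import torch.nn as nn
import shap

# toy PyTorch model (hypothetical, for illustration only)
model = nn.Sequential(
    nn.Linear(10, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

background = torch.randn(100, 10)  # background samples for the expectation
test_batch = torch.randn(5, 10)    # samples to explain

# with PyTorch, the nn.Module itself is passed as the first argument
e = shap.DeepExplainer(model, background)
shap_values = e.shap_values(test_batch)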
My guess, then, is that passing the model directly is simply an undocumented feature for TensorFlow: using the model itself or the pair of tensors should be equivalent.
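To check the equivalence, one could run a minimal sketch like the one below. It uses a toy LSTM regressor and random data (not the model from the question), and it assumes a SHAP/TensorFlow combination in which DeepExplainer supports Keras LSTM models.
import numpy as np
import shap
from keras.models import Sequential
from keras.layers import LSTM, Dense

# toy LSTM regressor on random data (hypothetical, for illustration only)
np.random.seed(0)
X = np.random.rand(200, 10, 3)  # 200 samples, 10 time steps, 3 features
model = Sequential([LSTM(8, input_shape=(10, 3)), Dense(1)])
model.compile(optimizer='adam', loss='mse')

background = X[:100]
to_explain = X[100:110]

# form 1: pass the Keras model directly
e1 = shap.DeepExplainer(model, background)
sv1 = e1.shap_values(to_explain)

# form 2: pass the (input tensor, output tensor) pair
e2 = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output),
                        background)
sv2 = e2.shap_values(to_explain)

# if the two forms are indeed equivalent, the attributions should match
print(np.allclose(np.array(sv1), np.array(sv2)))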
Answered By - claudia