signxai.torch_signxai.methods_impl package
Subpackages
- signxai.torch_signxai.methods_impl.zennit_impl package
- Submodules
- signxai.torch_signxai.methods_impl.zennit_impl.analyzers module
AnalyzerBase, GradientAnalyzer, IntegratedGradientsAnalyzer, SmoothGradAnalyzer, GuidedBackpropAnalyzer, DeconvNetComposite, DeconvNetAnalyzer, GradCAMAnalyzer, LRPAnalyzer, GradientXSignAnalyzer, GradientXInputAnalyzer, VarGradAnalyzer, DeepTaylorAnalyzer, AdvancedLRPAnalyzer, LRPSequential, BoundedLRPAnalyzer, LRPStdxEpsilonAnalyzer, DeepLiftAnalyzer
- signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer module
- signxai.torch_signxai.methods_impl.zennit_impl.hooks module
LrpBaseHook, VarGradBaseAnalyzer, GammaHook, StdxEpsilonHook, FlatHook, EpsilonHook, SignEpsilonHook, LrpSignEpsilonMuHook, LrpSignEpsilonStdXHook, LrpSignEpsilonStdXMuHook, WSquareHook, SignHook, SignMuHook, VarGradAnalyzer, VarGradXInputAnalyzer, VarGradXSignAnalyzer, lrp_composite(), lrpsign_epsilon(), lrpsign_epsilon_mu(), lrpsign_epsilon_std_x(), lrpsign_epsilon_std_x_mu(), lrpz_epsilon(), w2lrp_epsilon(), w2lrp_stdx_epsilon(), lrpz_stdx_epsilon(), stdx_epsilon(), lrpz_sequential_composite_a(), lrpz_sequential_composite_b(), w2lrp_sequential_composite_a(), w2lrp_sequential_composite_b(), epsilon_composite(), zplus_composite(), zbox_composite(), wsquare_composite_standard(), sequential_composite(), alphabeta_composite(), flat_composite(), wsquare_composite(), gamma_composite(), sign_composite(), w2lrp_composite_a(), vargrad_analyzer(), vargrad_x_input_analyzer(), vargrad_x_sign_analyzer(), vargrad_x_input_x_sign_analyzer(), lrpsign_epsilon_stdx(), lrpsign_epsilon_stdx_mu()
- signxai.torch_signxai.methods_impl.zennit_impl.sign_rule module
- signxai.torch_signxai.methods_impl.zennit_impl.stdx_rule module
- Module contents
calculate_relevancemap(), GradientAnalyzer, SmoothGradAnalyzer, IntegratedGradientsAnalyzer, LRPAnalyzer, AdvancedLRPAnalyzer, LRPSequential, BoundedLRPAnalyzer, LRPStdxEpsilonAnalyzer, DeepLiftAnalyzer, GuidedBackpropAnalyzer, DeconvNetAnalyzer, GradCAMAnalyzer, GradientXSignAnalyzer, GradientXInputAnalyzer, VarGradAnalyzer, DeepTaylorAnalyzer, AnalyzerBase, StdxEpsilon, SIGNRule, SIGNmuRule
Submodules
signxai.torch_signxai.methods_impl.base module
Base gradient attribution methods for PyTorch.
- class signxai.torch_signxai.methods_impl.base.BaseGradient(model)[source]
Bases: object
Base gradient attribution method.
- class signxai.torch_signxai.methods_impl.base.InputXGradient(model)[source]
Bases: BaseGradient
Input times gradient attribution method.
- class signxai.torch_signxai.methods_impl.base.GradientXSign(model, mu: float = 0.0)[source]
Bases: BaseGradient
Gradient times sign attribution method.
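For orientation, a minimal usage sketch; it assumes these classes expose the same attribute(inputs, target=...) interface as the other attribution classes in this package, and the toy model is a placeholder:

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.base import BaseGradient, GradientXSign, InputXGradient

    # Toy classifier used only for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
    x = torch.randn(1, 3, 8, 8)

    # Assumed interface: attribute(inputs, target=...) as elsewhere in this package.
    grad_attr = BaseGradient(model).attribute(x, target=3)          # plain gradient
    ixg_attr = InputXGradient(model).attribute(x, target=3)         # gradient * input
    gxs_attr = GradientXSign(model, mu=0.0).attribute(x, target=3)  # gradient * sign(x - mu)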
signxai.torch_signxai.methods_impl.deconvnet module
PyTorch implementation of DeconvNet.
- class signxai.torch_signxai.methods_impl.deconvnet.DeconvNetReLU(*args, **kwargs)[source]
Bases: Function
DeconvNet ReLU activation.
This modified ReLU passes the gradient if the gradient from the next layer is positive, regardless of the input value.
- static forward(ctx, input_tensor)[source]
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
    @staticmethod
    def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
        pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See Combined or separate forward() and setup_context() for more details
Usage 2 (Separate forward and ctx):
    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any:
        pass

    @staticmethod
    def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
        pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward. See Extending torch.autograd for more details.
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or with ctx.save_for_forward() if they are intended to be used in jvp.
- static backward(ctx, grad_output)[source]
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
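For orientation, a self-contained sketch of a DeconvNet ReLU written as a custom autograd Function; this is an illustrative reconstruction of the rule described above, not necessarily the package's exact code:

    import torch

    class DeconvNetReLUSketch(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input_tensor):
            # Plain ReLU forward; nothing is saved because the backward
            # rule ignores the forward input entirely.
            return input_tensor.clamp(min=0)

        @staticmethod
        def backward(ctx, grad_output):
            # DeconvNet rule: pass the gradient wherever the incoming
            # gradient is positive, regardless of the forward input.
            return grad_output.clamp(min=0)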
- class signxai.torch_signxai.methods_impl.deconvnet.DeconvNetReLUModule(*args, **kwargs)[source]
Bases: Module
Module wrapper for the DeconvNetReLU function.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- signxai.torch_signxai.methods_impl.deconvnet.replace_relu_with_deconvnet_relu(model)[source]
Replace all ReLU activations with DeconvNetReLU.
- Parameters:
model – PyTorch model
- Returns:
Modified model with DeconvNet ReLU activations
- signxai.torch_signxai.methods_impl.deconvnet.build_deconvnet_model(model)[source]
Build a DeconvNet model by replacing ReLU activations.
- Parameters:
model – PyTorch model
- Returns:
DeconvNet model for backpropagation
- signxai.torch_signxai.methods_impl.deconvnet.deconvnet(model, input_tensor, target_class=None)[source]
Generate DeconvNet attribution map.
- Parameters:
model – PyTorch model
input_tensor – Input tensor (requires_grad=True)
target_class – Target class index (None for argmax)
- Returns:
Gradient attribution map
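A short usage sketch (the toy model and input shape are placeholders):

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.deconvnet import deconvnet

    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(8 * 8 * 8, 10))
    x = torch.randn(1, 3, 8, 8, requires_grad=True)  # requires_grad=True as documented

    attribution = deconvnet(model, x, target_class=None)  # None -> argmax class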
signxai.torch_signxai.methods_impl.grad_cam module
Unified PyTorch implementation of Grad-CAM combining the best features from both implementations.
- class signxai.torch_signxai.methods_impl.grad_cam.GradCAM(model, target_layer=None)[source]
Bases: object
Unified Grad-CAM implementation for PyTorch models.
Combines the automatic layer detection from gradcam.py with the TensorFlow-compatible behavior from grad_cam.py.
Grad-CAM uses the gradients of a target concept flowing into the final convolutional layer to produce a coarse localization map highlighting important regions in the image for prediction.
- __init__(model, target_layer=None)[source]
Initialize GradCAM.
- Parameters:
model – PyTorch model
target_layer – Target layer for Grad-CAM. If None, will try to automatically find the last convolutional layer.
- forward(x, target_class=None)[source]
Generate Grad-CAM attribution map using the TensorFlow-compatible approach.
- Parameters:
x – Input tensor
target_class – Target class index (None for argmax)
- Returns:
Grad-CAM attribution map
- attribute(inputs, target=None, resize_to_input=True)[source]
Generate Grad-CAM heatmap (compatible with gradcam.py interface).
- Parameters:
inputs – Input tensor
target – Target class index (None for argmax)
resize_to_input – Whether to resize heatmap to input size
- Returns:
Grad-CAM heatmap (same size as input if resize_to_input=True)
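A usage sketch for the class-based interface; the toy model is a placeholder, and auto-detection is assumed to pick its last convolutional layer:

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.grad_cam import GradCAM

    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    x = torch.randn(1, 3, 32, 32)

    cam = GradCAM(model, target_layer=None)  # None -> auto-detect the last conv layer
    heatmap = cam.attribute(x, target=5, resize_to_input=True)  # same spatial size as x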
- signxai.torch_signxai.methods_impl.grad_cam.calculate_grad_cam_relevancemap(model, input_tensor, target_layer=None, target_class=None, layer_name=None, **kwargs)[source]
Calculate Grad-CAM relevance map for images.
This function provides a convenient interface compatible with grad_cam.py.
- Parameters:
model – PyTorch model
input_tensor – Input tensor
target_layer – Target layer for Grad-CAM (None to auto-detect)
target_class – Target class index (None for argmax)
layer_name – Alternative name for target_layer (for compatibility)
**kwargs – Additional parameters (ignored)
- Returns:
Grad-CAM relevance map as numpy array
- signxai.torch_signxai.methods_impl.grad_cam.calculate_grad_cam_relevancemap_timeseries(model, input_tensor, target_layer=None, target_class=None)[source]
Calculate Grad-CAM relevance map for time series data.
This function provides compatibility with grad_cam.py’s timeseries function.
- Parameters:
model – PyTorch model
input_tensor – Input tensor (B, C, T)
target_layer – Target layer for Grad-CAM (None to auto-detect)
target_class – Target class index (None for argmax)
- Returns:
Grad-CAM relevance map as numpy array
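A usage sketch for the time-series variant (the 1D-convolutional model is a placeholder; the input layout is (B, C, T) as documented):

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.grad_cam import calculate_grad_cam_relevancemap_timeseries

    model = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2))
    signal = torch.randn(1, 1, 256)  # (B, C, T)

    relevance = calculate_grad_cam_relevancemap_timeseries(model, signal, target_class=None)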
- signxai.torch_signxai.methods_impl.grad_cam.find_target_layer(model)
signxai.torch_signxai.methods_impl.guided module
PyTorch implementation of Guided Backpropagation and DeconvNet methods.
- class signxai.torch_signxai.methods_impl.guided.GuidedBackpropReLU(*args, **kwargs)[source]
Bases: Function
Guided Backpropagation ReLU activation.
This modified ReLU only passes positive gradients during backpropagation. It combines the backpropagation rules of DeconvNet and vanilla backpropagation.
The TensorFlow reference implementation is:

    @tf.custom_gradient
    def guidedRelu(x):
        def grad(dy):
            return tf.cast(dy > 0, tf.float32) * tf.cast(x > 0, tf.float32) * dy
        return tf.nn.relu(x), grad
- static forward(ctx, input_tensor)[source]
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
    @staticmethod
    def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
        pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See Combined or separate forward() and setup_context() for more details
Usage 2 (Separate forward and ctx):
    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any:
        pass

    @staticmethod
    def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
        pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward. See Extending torch.autograd for more details.
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or with ctx.save_for_forward() if they are intended to be used in jvp.
- static backward(ctx, grad_output)[source]
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad, a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
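The PyTorch counterpart of the TensorFlow gradient quoted above can be sketched as follows; this is an illustrative reconstruction combining both masks, not necessarily the package's exact code:

    import torch

    class GuidedBackpropReLUSketch(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input_tensor):
            ctx.save_for_backward(input_tensor)  # needed for the (x > 0) mask
            return input_tensor.clamp(min=0)

        @staticmethod
        def backward(ctx, grad_output):
            (input_tensor,) = ctx.saved_tensors
            # Guided rule: gradient flows only where both the incoming
            # gradient (dy > 0) and the forward input (x > 0) are positive.
            return grad_output * (grad_output > 0) * (input_tensor > 0)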
- class signxai.torch_signxai.methods_impl.guided.GuidedBackpropReLUModule(*args, **kwargs)[source]
Bases: Module
Module wrapper for the GuidedBackpropReLU function.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- signxai.torch_signxai.methods_impl.guided.replace_relu_with_guided_relu(model)[source]
Replace all ReLU activations with GuidedBackpropReLU.
- Parameters:
model – PyTorch model
- Returns:
Modified model with guided ReLU activations
- signxai.torch_signxai.methods_impl.guided.build_guided_model(model)[source]
Build a guided backpropagation model by replacing ReLU activations.
- Parameters:
model – PyTorch model
- Returns:
Guided model for backpropagation
- signxai.torch_signxai.methods_impl.guided.guided_backprop(model, input_tensor, target_class=None)[source]
Generate guided backpropagation attribution map.
- Parameters:
model – PyTorch model
input_tensor – Input tensor
target_class – Target class index (None for argmax)
- Returns:
Gradient attribution map
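A usage sketch mirroring the deconvnet() example above (model and shapes are placeholders):

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.guided import guided_backprop

    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(8 * 8 * 8, 10))
    x = torch.randn(1, 3, 8, 8)

    attribution = guided_backprop(model, x, target_class=2)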
- class signxai.torch_signxai.methods_impl.guided.GuidedBackprop(model)[source]
Bases: object
Class-based implementation of Guided Backpropagation.
signxai.torch_signxai.methods_impl.integrated module
Integrated Gradients implementation and variants for PyTorch.
Implements the method described in “Axiomatic Attribution for Deep Networks” (https://arxiv.org/abs/1703.01365).
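For context, the computation can be summarized in a few lines; this standalone sketch follows the paper's Riemann approximation of the path integral and is not the package's exact code:

    import torch

    def integrated_gradients_sketch(model, x, target, baseline=None, steps=50):
        # Zero baseline by default, matching baseline_type="zero" documented below.
        baseline = torch.zeros_like(x) if baseline is None else baseline
        total_grads = torch.zeros_like(x)
        for alpha in torch.linspace(0.0, 1.0, steps):
            # Interpolate along the straight path from baseline to input.
            point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
            model(point)[:, target].sum().backward()
            total_grads += point.grad
        # Average path gradient, scaled by the input-baseline difference.
        return (x - baseline) * total_grads / steps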
- class signxai.torch_signxai.methods_impl.integrated.IntegratedGradients(model, steps: int = 50, baseline_type: str = 'zero')[source]
Bases: BaseGradient
Integrated Gradients attribution method.
- __init__(model, steps: int = 50, baseline_type: str = 'zero')[source]
Initialize with a PyTorch model.
- Parameters:
model – PyTorch model for which to calculate gradients
steps – Number of interpolation steps (default 50)
baseline_type – Type of baseline to use (default “zero”)
- attribute(inputs: Tensor, target: int | Tensor | None = None, baseline: Tensor | None = None, steps: int | None = None, baselines: Tensor | None = None) → Tensor[source]
Calculate integrated gradients attribution.
- Parameters:
inputs – Input tensor
target – Target class index or tensor (None uses argmax)
baseline – Baseline tensor (if None, created based on baseline_type)
baselines – Alternative spelling for baseline (for compatibility)
steps – Number of interpolation steps (if None, use self.steps)
- Returns:
Attribution tensor of the same shape as inputs
- class signxai.torch_signxai.methods_impl.integrated.IntegratedGradientsXInput(model, steps: int = 50, baseline_type: str = 'zero')[source]
Bases: IntegratedGradients
Integrated Gradients times Input attribution method.
- attribute(inputs: Tensor, target: int | Tensor | None = None, baseline: Tensor | None = None, steps: int | None = None, baselines: Tensor | None = None) → Tensor[source]
Calculate integrated gradients times input attribution.
- Parameters:
inputs – Input tensor
target – Target class index or tensor (None uses argmax)
baseline – Baseline tensor (if None, created based on baseline_type)
baselines – Alternative spelling for baseline (for compatibility)
steps – Number of interpolation steps (if None, use self.steps)
- Returns:
Attribution tensor of the same shape as inputs
- class signxai.torch_signxai.methods_impl.integrated.IntegratedGradientsXSign(model, steps: int = 50, baseline_type: str = 'zero', mu: float = 0.0)[source]
Bases: IntegratedGradients
Integrated Gradients times Sign attribution method.
- __init__(model, steps: int = 50, baseline_type: str = 'zero', mu: float = 0.0)[source]
Initialize with a PyTorch model.
- Parameters:
model – PyTorch model for which to calculate gradients
steps – Number of interpolation steps (default 50)
baseline_type – Type of baseline to use (default “zero”)
mu – Threshold for sign determination (default 0.0)
- attribute(inputs: Tensor, target: int | Tensor | None = None, baseline: Tensor | None = None, steps: int | None = None, baselines: Tensor | None = None) → Tensor[source]
Calculate integrated gradients times sign attribution.
- Parameters:
inputs – Input tensor
target – Target class index or tensor (None uses argmax)
baseline – Baseline tensor (if None, created based on baseline_type)
baselines – Alternative spelling for baseline (for compatibility)
steps – Number of interpolation steps (if None, use self.steps)
- Returns:
Attribution tensor of the same shape as inputs
- signxai.torch_signxai.methods_impl.integrated.integrated_gradients(model, inputs, target=None, baselines=None, steps=50)[source]
Calculate Integrated Gradients attribution (functional API).
- Parameters:
model – PyTorch model
inputs – Input tensor
target – Target class index (None for argmax)
baselines – Baseline tensor (if None, created with zeros)
steps – Number of integration steps
- Returns:
Attribution tensor of the same shape as inputs
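A usage sketch for the functional API (toy model and shapes are placeholders):

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.integrated import integrated_gradients

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
    x = torch.randn(1, 3, 8, 8)

    # baselines=None -> zeros; 50 steps is the documented default.
    attr = integrated_gradients(model, x, target=1, baselines=None, steps=50)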
signxai.torch_signxai.methods_impl.signed module
Implementation of SIGN thresholding methods for PyTorch.
- signxai.torch_signxai.methods_impl.signed.calculate_sign_mu(relevance_map, mu=0.0, vlow=-1, vhigh=1)[source]
Calculate binary sign-based relevance map to match TensorFlow behavior.
- Parameters:
relevance_map – Relevance map tensor or numpy array
mu – Threshold for considering a value positive/negative (default 0.0)
vlow – Value for elements below threshold (default -1)
vhigh – Value for elements at or above threshold (default 1)
- Returns:
Sign-based relevance map with TensorFlow-compatible behavior
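A small worked example of the documented behavior with the default arguments (values at or above mu map to vhigh, values below to vlow):

    import numpy as np

    from signxai.torch_signxai.methods_impl.signed import calculate_sign_mu

    relevance = np.array([-0.5, -0.01, 0.0, 0.2])
    signed = calculate_sign_mu(relevance, mu=0.0, vlow=-1, vhigh=1)
    # Per the docstring above, this should yield [-1, -1, 1, 1]:
    # -0.5 and -0.01 fall below mu, while 0.0 and 0.2 are at or above it.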
signxai.torch_signxai.methods_impl.smoothgrad module
PyTorch implementation of SmoothGrad.
- class signxai.torch_signxai.methods_impl.smoothgrad.SmoothGrad(model, num_samples=16, noise_scale=1.0)[source]
Bases: object
SmoothGrad attribution method.
Implements SmoothGrad as described in the original paper: “SmoothGrad: removing noise by adding noise” https://arxiv.org/abs/1706.03825
- __init__(model, num_samples=16, noise_scale=1.0)[source]
Initialize SmoothGrad.
- Parameters:
model – PyTorch model
num_samples – Number of noisy samples to use (matches TF default 16)
noise_scale – Standard deviation of noise to add (matches TF behavior, default 1.0)
- attribute(inputs, target=None, num_samples=None, noise_scale=None)[source]
Calculate SmoothGrad attribution.
- Parameters:
inputs – Input tensor
target – Target class index (None for argmax)
num_samples – Override the number of samples (optional)
noise_scale – Override the noise scale (optional)
- Returns:
Attribution tensor of the same shape as inputs
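A usage sketch; the defaults (16 samples, noise scale 1.0) mirror the TensorFlow implementation, and the toy model is a placeholder:

    import torch
    import torch.nn as nn

    from signxai.torch_signxai.methods_impl.smoothgrad import SmoothGrad

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
    x = torch.randn(1, 3, 8, 8)

    sg = SmoothGrad(model, num_samples=16, noise_scale=1.0)
    attr = sg.attribute(x, target=None)  # None -> explain the argmax class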
- class signxai.torch_signxai.methods_impl.smoothgrad.SmoothGradXInput(model, num_samples=16, noise_scale=1.0)[source]
Bases: SmoothGrad
SmoothGrad × Input attribution method.
Implements SmoothGrad multiplied by the input, which can produce more visually appealing attributions by focusing on the important input features.
- attribute(inputs, target=None, num_samples=None, noise_scale=None)[source]
Calculate SmoothGrad × Input attribution.
- Parameters:
inputs – Input tensor
target – Target class index (None for argmax)
num_samples – Override the number of samples (optional)
noise_scale – Override the noise scale (optional)
- Returns:
Attribution tensor of the same shape as inputs
- class signxai.torch_signxai.methods_impl.smoothgrad.SmoothGradXSign(model, num_samples=16, noise_scale=1.0, mu=0.0)[source]
Bases: SmoothGrad
SmoothGrad × Sign attribution method.
Implements SmoothGrad multiplied by the sign of (input - threshold), which can emphasize both positive and negative contributions.
- __init__(model, num_samples=16, noise_scale=1.0, mu=0.0)[source]
Initialize SmoothGradXSign.
- Parameters:
model – PyTorch model
num_samples – Number of noisy samples to use (matches TF default 16)
noise_scale – Standard deviation of noise to add (matches TF behavior, default 1.0)
mu – Threshold value for the sign function
- attribute(inputs, target=None, num_samples=None, noise_scale=None, mu=None)[source]
Calculate SmoothGrad × Sign attribution.
- Parameters:
inputs – Input tensor
target – Target class index (None for argmax)
num_samples – Override the number of samples (optional)
noise_scale – Override the noise scale (optional)
mu – Override the threshold value (optional)
- Returns:
Attribution tensor of the same shape as inputs
- signxai.torch_signxai.methods_impl.smoothgrad.smoothgrad(model, inputs, target=None, num_samples=16, noise_scale=1.0)[source]
Calculate SmoothGrad attribution (functional API).
- Parameters:
model – PyTorch model
inputs – Input tensor
target – Target class index (None for argmax)
num_samples – Number of noisy samples to use
noise_scale – Standard deviation of noise to add
- Returns:
Attribution tensor of the same shape as inputs
- signxai.torch_signxai.methods_impl.smoothgrad.smoothgrad_x_input(model, inputs, target=None, num_samples=16, noise_scale=1.0)[source]
Calculate SmoothGrad × Input attribution (functional API).
- Parameters:
model – PyTorch model
inputs – Input tensor
target – Target class index (None for argmax)
num_samples – Number of noisy samples to use
noise_scale – Standard deviation of noise to add
- Returns:
Attribution tensor of the same shape as inputs
- signxai.torch_signxai.methods_impl.smoothgrad.smoothgrad_x_sign(model, inputs, target=None, num_samples=16, noise_scale=1.0, mu=0.0)[source]
Calculate SmoothGrad × Sign attribution (functional API).
- Parameters:
model – PyTorch model
inputs – Input tensor
target – Target class index (None for argmax)
num_samples – Number of noisy samples to use
noise_scale – Standard deviation of noise to add
mu – Threshold value for the sign function
- Returns:
Attribution tensor of the same shape as inputs
signxai.torch_signxai.methods_impl.vargrad module
VarGrad implementation and variants for PyTorch.
- class signxai.torch_signxai.methods_impl.vargrad.VarGrad(model, noise_scale: float = 1.0, num_samples: int = 16)[source]
Bases: BaseGradient
VarGrad attribution method.
- __init__(model, noise_scale: float = 1.0, num_samples: int = 16)[source]
Initialize with a PyTorch model.
- Parameters:
model – PyTorch model for which to calculate gradients
noise_scale – Standard deviation of noise to add (matches TF behavior, default 1.0)
num_samples – Number of samples to average (matches TF default 16)
- attribute(inputs: Tensor, target: int | Tensor | None = None, noise_scale: float | None = None, num_samples: int | None = None) → Tensor[source]
Calculate VarGrad attribution.
- Parameters:
inputs – Input tensor
target – Target class index or tensor (None uses argmax)
noise_scale – Standard deviation of noise to add (if None, use self.noise_scale)
num_samples – Number of samples to average (if None, use self.num_samples)
- Returns:
Attribution tensor of the same shape as inputs
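VarGrad replaces SmoothGrad's mean over noisy gradients with their variance; a standalone sketch of that idea (not the package's exact code):

    import torch

    def vargrad_sketch(model, x, target, num_samples=16, noise_scale=1.0):
        grads = []
        for _ in range(num_samples):
            # Perturb the input with Gaussian noise and collect the gradient.
            noisy = (x + noise_scale * torch.randn_like(x)).detach().requires_grad_(True)
            model(noisy)[:, target].sum().backward()
            grads.append(noisy.grad)
        # Elementwise variance of the gradients across the noisy samples.
        return torch.stack(grads).var(dim=0)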
- class signxai.torch_signxai.methods_impl.vargrad.VarGradXInput(model, noise_scale: float = 1.0, num_samples: int = 16)[source]
Bases: VarGrad
VarGrad times Input attribution method.
- attribute(inputs: Tensor, target: int | Tensor | None = None, noise_scale: float | None = None, num_samples: int | None = None) → Tensor[source]
Calculate VarGrad times input attribution.
- Parameters:
inputs – Input tensor
target – Target class index or tensor (None uses argmax)
noise_scale – Standard deviation of noise to add (if None, use self.noise_scale)
num_samples – Number of samples to average (if None, use self.num_samples)
- Returns:
Attribution tensor of the same shape as inputs
- class signxai.torch_signxai.methods_impl.vargrad.VarGradXSign(model, noise_scale: float = 1.0, num_samples: int = 16, mu: float = 0.0)[source]
Bases: VarGrad
VarGrad times Sign attribution method.
- __init__(model, noise_scale: float = 1.0, num_samples: int = 16, mu: float = 0.0)[source]
Initialize with a PyTorch model.
- Parameters:
model – PyTorch model for which to calculate gradients
noise_scale – Standard deviation of noise to add (default 1.0)
num_samples – Number of samples to average (default 16)
mu – Threshold for sign determination (default 0.0)
- attribute(inputs: Tensor, target: int | Tensor | None = None, noise_scale: float | None = None, num_samples: int | None = None) → Tensor[source]
Calculate VarGrad times sign attribution.
- Parameters:
inputs – Input tensor
target – Target class index or tensor (None uses argmax)
noise_scale – Standard deviation of noise to add (if None, use self.noise_scale)
num_samples – Number of samples to average (if None, use self.num_samples)
- Returns:
Attribution tensor of the same shape as inputs