signxai.torch_signxai.methods_impl package

Submodules

signxai.torch_signxai.methods_impl.base module

Base gradient attribution methods for PyTorch.

class signxai.torch_signxai.methods_impl.base.BaseGradient(model)[source]

Bases: object

Base gradient attribution method.

__init__(model)[source]

Initialize with a PyTorch model.

Parameters:

model – PyTorch model for which to calculate gradients

attribute(inputs: Tensor, target: int | Tensor | None = None) → Tensor[source]

Calculate gradient attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

Returns:

Gradient tensor of the same shape as inputs
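
A minimal usage sketch of the interface above; the toy model, input shape, and target index are illustrative assumptions, not part of the API:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.base import BaseGradient

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
x = torch.randn(1, 3, 32, 32)

explainer = BaseGradient(model)
attribution = explainer.attribute(x, target=5)  # target=None would use the argmax class
assert attribution.shape == x.shape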

class signxai.torch_signxai.methods_impl.base.InputXGradient(model)[source]

Bases: BaseGradient

Input times gradient attribution method.

attribute(inputs: Tensor, target: int | Tensor | None = None) → Tensor[source]

Calculate input times gradient attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

Returns:

Attribution tensor of the same shape as inputs

class signxai.torch_signxai.methods_impl.base.GradientXSign(model, mu: float = 0.0)[source]

Bases: BaseGradient

Gradient times sign attribution method.

__init__(model, mu: float = 0.0)[source]

Initialize with a PyTorch model and threshold.

Parameters:
  • model – PyTorch model for which to calculate gradients

  • mu – Threshold for sign determination (default 0.0)

attribute(inputs: Tensor, target: int | Tensor | None = None) → Tensor[source]

Calculate gradient times sign attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

Returns:

Attribution tensor of the same shape as inputs
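
Both variants share the BaseGradient interface; a brief sketch contrasting them (same illustrative toy model):

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.base import InputXGradient, GradientXSign

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)

ixg = InputXGradient(model).attribute(x, target=5)         # gradient * input
gxs = GradientXSign(model, mu=0.0).attribute(x, target=5)  # gradient * sign(input - mu)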

signxai.torch_signxai.methods_impl.deconvnet module

PyTorch implementation of DeconvNet.

class signxai.torch_signxai.methods_impl.deconvnet.DeconvNetReLU(*args, **kwargs)[source]

Bases: Function

DeconvNet ReLU activation.

This modified ReLU passes the gradient if the gradient from the next layer is positive, regardless of the input value.

static forward(ctx, input_tensor)[source]

Apply a standard ReLU to input_tensor in the forward pass; the modified DeconvNet behavior lives entirely in backward().

static backward(ctx, grad_output)[source]

Backpropagate with the DeconvNet rule: pass grad_output through wherever it is positive, regardless of the sign of the forward input (equivalently, apply a ReLU to the gradient itself).

class signxai.torch_signxai.methods_impl.deconvnet.DeconvNetReLUModule(*args, **kwargs)[source]

Bases: Module

Module wrapper for the DeconvNetReLU function.

forward(x)[source]

Apply the DeconvNetReLU function to x.

signxai.torch_signxai.methods_impl.deconvnet.replace_relu_with_deconvnet_relu(model)[source]

Replace all ReLU activations with DeconvNetReLU.

Parameters:

model – PyTorch model

Returns:

Modified model with DeconvNet ReLU activations

signxai.torch_signxai.methods_impl.deconvnet.build_deconvnet_model(model)[source]

Build a DeconvNet model by replacing ReLU activations.

Parameters:

model – PyTorch model

Returns:

DeconvNet model for backpropagation

signxai.torch_signxai.methods_impl.deconvnet.deconvnet(model, input_tensor, target_class=None)[source]

Generate DeconvNet attribution map.

Parameters:
  • model – PyTorch model

  • input_tensor – Input tensor (requires_grad=True)

  • target_class – Target class index (None for argmax)

Returns:

Gradient attribution map
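
A usage sketch under an assumed toy ConvNet; only the deconvnet signature comes from this module:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.deconvnet import deconvnet

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(1, 3, 32, 32, requires_grad=True)  # requires_grad as documented
relevance = deconvnet(model, x, target_class=2)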

signxai.torch_signxai.methods_impl.grad_cam module

Unified PyTorch implementation of Grad-CAM, combining features of the package's two earlier implementations (gradcam.py and grad_cam.py).

class signxai.torch_signxai.methods_impl.grad_cam.GradCAM(model, target_layer=None)[source]

Bases: object

Unified Grad-CAM implementation for PyTorch models.

Combines the automatic layer detection from gradcam.py with the TensorFlow-compatible behavior from grad_cam.py.

Grad-CAM uses the gradients of a target concept flowing into the final convolutional layer to produce a coarse localization map highlighting important regions in the image for prediction.

__init__(model, target_layer=None)[source]

Initialize GradCAM.

Parameters:
  • model – PyTorch model

  • target_layer – Target layer for Grad-CAM. If None, will try to automatically find the last convolutional layer.

forward(x, target_class=None)[source]

Generate Grad-CAM attribution map using the TensorFlow-compatible approach.

Parameters:
  • x – Input tensor

  • target_class – Target class index (None for argmax)

Returns:

Grad-CAM attribution map

attribute(inputs, target=None, resize_to_input=True)[source]

Generate Grad-CAM heatmap (compatible with gradcam.py interface).

Parameters:
  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • resize_to_input – Whether to resize heatmap to input size

Returns:

Grad-CAM heatmap (same size as input if resize_to_input=True)
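
A sketch of the class-based interface, relying on automatic target-layer detection (the toy ConvNet is illustrative):

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.grad_cam import GradCAM

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(1, 3, 64, 64)

cam = GradCAM(model)  # target_layer=None: the last conv layer is auto-detected
heatmap = cam.attribute(x, target=3, resize_to_input=True)  # matches x's spatial size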

signxai.torch_signxai.methods_impl.grad_cam.calculate_grad_cam_relevancemap(model, input_tensor, target_layer=None, target_class=None, layer_name=None, **kwargs)[source]

Calculate Grad-CAM relevance map for images.

This function provides a convenient interface compatible with grad_cam.py.

Parameters:
  • model – PyTorch model

  • input_tensor – Input tensor

  • target_layer – Target layer for Grad-CAM (None to auto-detect)

  • target_class – Target class index (None for argmax)

  • layer_name – Alternative name for target_layer (for compatibility)

  • **kwargs – Additional parameters (ignored)

Returns:

Grad-CAM relevance map as numpy array
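
A sketch of the functional wrapper; the toy model is an assumption, and per the docstring the result is a numpy array:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.grad_cam import calculate_grad_cam_relevancemap

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(1, 3, 64, 64)
relevance = calculate_grad_cam_relevancemap(model, x, target_class=None)  # argmax class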

signxai.torch_signxai.methods_impl.grad_cam.calculate_grad_cam_relevancemap_timeseries(model, input_tensor, target_layer=None, target_class=None)[source]

Calculate Grad-CAM relevance map for time series data.

This function provides compatibility with grad_cam.py’s timeseries function.

Parameters:
  • model – PyTorch model

  • input_tensor – Input tensor (B, C, T)

  • target_layer – Target layer for Grad-CAM (None to auto-detect)

  • target_class – Target class index (None for argmax)

Returns:

Grad-CAM relevance map as numpy array
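
The time-series variant expects (B, C, T) input; a sketch with an assumed toy 1-D ConvNet:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.grad_cam import calculate_grad_cam_relevancemap_timeseries

model = nn.Sequential(
    nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2),
)
signal = torch.randn(1, 1, 128)  # (B, C, T) as documented
relevance = calculate_grad_cam_relevancemap_timeseries(model, signal)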

signxai.torch_signxai.methods_impl.grad_cam.find_target_layer(model)

Locate the last convolutional layer of model, used as the default Grad-CAM target when target_layer is None.

signxai.torch_signxai.methods_impl.guided module

PyTorch implementation of Guided Backpropagation and DeconvNet methods.

class signxai.torch_signxai.methods_impl.guided.GuidedBackpropReLU(*args, **kwargs)[source]

Bases: Function

Guided Backpropagation ReLU activation.

This modified ReLU only passes positive gradients during backpropagation. It combines the backpropagation rules of DeconvNet and vanilla backpropagation.

The equivalent TensorFlow implementation is:

@tf.custom_gradient
def guidedRelu(x):
    def grad(dy):
        return tf.cast(dy > 0, tf.float32) * tf.cast(x > 0, tf.float32) * dy
    return tf.nn.relu(x), grad

static forward(ctx, input_tensor)[source]

Apply a standard ReLU to input_tensor and save the input so the backward pass can mask the gradient by the input's sign.

static backward(ctx, grad_output)[source]

Backpropagate with the guided rule: pass grad_output only where both the forward input and grad_output are positive, combining the DeconvNet and vanilla backpropagation masks.

class signxai.torch_signxai.methods_impl.guided.GuidedBackpropReLUModule(*args, **kwargs)[source]

Bases: Module

Module wrapper for the GuidedBackpropReLU function.

forward(x)[source]

Apply the GuidedBackpropReLU function to x.

signxai.torch_signxai.methods_impl.guided.replace_relu_with_guided_relu(model)[source]

Replace all ReLU activations with GuidedBackpropReLU.

Parameters:

model – PyTorch model

Returns:

Modified model with guided ReLU activations

signxai.torch_signxai.methods_impl.guided.build_guided_model(model)[source]

Build a guided backpropagation model by replacing ReLU activations.

Parameters:

model – PyTorch model

Returns:

Guided model for backpropagation

signxai.torch_signxai.methods_impl.guided.guided_backprop(model, input_tensor, target_class=None)[source]

Generate guided backpropagation attribution map.

Parameters:
  • model – PyTorch model

  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

Returns:

Gradient attribution map
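
A functional usage sketch (toy ConvNet assumed):

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.guided import guided_backprop

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(1, 3, 32, 32)
relevance = guided_backprop(model, x, target_class=1)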

class signxai.torch_signxai.methods_impl.guided.GuidedBackprop(model)[source]

Bases: object

Class-based implementation of Guided Backpropagation.

__init__(model)[source]

Initialize Guided Backpropagation with the model.

Parameters:

model – PyTorch model

attribute(inputs, target=None)[source]

Calculate attribution using Guided Backpropagation.

Parameters:
  • inputs – Input tensor

  • target – Target class index (None for argmax)

Returns:

Attribution tensor of the same shape as inputs

class signxai.torch_signxai.methods_impl.guided.DeconvNet(model)[source]

Bases: object

Class-based implementation of DeconvNet.

__init__(model)[source]

Initialize DeconvNet with the model.

Parameters:

model – PyTorch model

attribute(inputs, target=None)[source]

Calculate attribution using DeconvNet.

Parameters:
  • inputs – Input tensor

  • target – Target class index (None for argmax)

Returns:

Attribution tensor of the same shape as inputs
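
The class-based interfaces mirror the functional one; a sketch with the same assumed toy ConvNet:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.guided import GuidedBackprop, DeconvNet

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(1, 3, 32, 32)

gb = GuidedBackprop(model).attribute(x, target=1)
dn = DeconvNet(model).attribute(x, target=1)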

signxai.torch_signxai.methods_impl.integrated module

Integrated Gradients implementation and variants for PyTorch.

Implements the method described in “Axiomatic Attribution for Deep Networks” (https://arxiv.org/abs/1703.01365).

class signxai.torch_signxai.methods_impl.integrated.IntegratedGradients(model, steps: int = 50, baseline_type: str = 'zero')[source]

Bases: BaseGradient

Integrated Gradients attribution method.

__init__(model, steps: int = 50, baseline_type: str = 'zero')[source]

Initialize with a PyTorch model.

Parameters:
  • model – PyTorch model for which to calculate gradients

  • steps – Number of interpolation steps (default 50)

  • baseline_type – Type of baseline to use (default “zero”)

attribute(inputs: Tensor, target: int | Tensor | None = None, baseline: Tensor | None = None, steps: int | None = None, baselines: Tensor | None = None) → Tensor[source]

Calculate integrated gradients attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

  • baseline – Baseline tensor (if None, created based on baseline_type)

  • baselines – Alias for baseline (kept for compatibility)

  • steps – Number of interpolation steps (if None, use self.steps)

Returns:

Attribution tensor of the same shape as inputs
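
A usage sketch with an explicit baseline and per-call step override; the toy model and shapes are assumptions:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.integrated import IntegratedGradients

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)

ig = IntegratedGradients(model, steps=50, baseline_type="zero")
attribution = ig.attribute(x, target=7, baseline=torch.zeros_like(x), steps=25)  # per-call override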

class signxai.torch_signxai.methods_impl.integrated.IntegratedGradientsXInput(model, steps: int = 50, baseline_type: str = 'zero')[source]

Bases: IntegratedGradients

Integrated Gradients times Input attribution method.

attribute(inputs: Tensor, target: int | Tensor | None = None, baseline: Tensor | None = None, steps: int | None = None, baselines: Tensor | None = None) → Tensor[source]

Calculate integrated gradients times input attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

  • baseline – Baseline tensor (if None, created based on baseline_type)

  • baselines – Alias for baseline (kept for compatibility)

  • steps – Number of interpolation steps (if None, use self.steps)

Returns:

Attribution tensor of the same shape as inputs

class signxai.torch_signxai.methods_impl.integrated.IntegratedGradientsXSign(model, steps: int = 50, baseline_type: str = 'zero', mu: float = 0.0)[source]

Bases: IntegratedGradients

Integrated Gradients times Sign attribution method.

__init__(model, steps: int = 50, baseline_type: str = 'zero', mu: float = 0.0)[source]

Initialize with a PyTorch model.

Parameters:
  • model – PyTorch model for which to calculate gradients

  • steps – Number of interpolation steps (default 50)

  • baseline_type – Type of baseline to use (default “zero”)

  • mu – Threshold for sign determination (default 0.0)

attribute(inputs: Tensor, target: int | Tensor | None = None, baseline: Tensor | None = None, steps: int | None = None, baselines: Tensor | None = None) → Tensor[source]

Calculate integrated gradients times sign attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

  • baseline – Baseline tensor (if None, created based on baseline_type)

  • baselines – Alias for baseline (kept for compatibility)

  • steps – Number of interpolation steps (if None, use self.steps)

Returns:

Attribution tensor of the same shape as inputs

signxai.torch_signxai.methods_impl.integrated.integrated_gradients(model, inputs, target=None, baselines=None, steps=50)[source]

Calculate Integrated Gradients attribution (functional API).

Parameters:
  • model – PyTorch model

  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • baselines – Baseline tensor (if None, created with zeros)

  • steps – Number of integration steps

Returns:

Attribution tensor of the same shape as inputs
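
A one-call sketch of the functional API (toy model assumed):

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.integrated import integrated_gradients

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)
attribution = integrated_gradients(model, x, target=7, steps=50)  # zero baseline by default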

signxai.torch_signxai.methods_impl.signed module

Implementation of SIGN thresholding methods for PyTorch.

signxai.torch_signxai.methods_impl.signed.calculate_sign_mu(relevance_map, mu=0.0, vlow=-1, vhigh=1)[source]

Calculate binary sign-based relevance map to match TensorFlow behavior.

Parameters:
  • relevance_map – Relevance map tensor or numpy array

  • mu – Threshold for considering a value positive/negative (default 0.0)

  • vlow – Value for elements below threshold (default -1)

  • vhigh – Value for elements at or above threshold (default 1)

Returns:

Sign-based relevance map with TensorFlow-compatible behavior
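
A small sketch of the thresholding semantics; the expected values follow from the vlow/vhigh description above:

import torch
from signxai.torch_signxai.methods_impl.signed import calculate_sign_mu

relevance = torch.tensor([-0.5, 0.05, 0.3])
signed = calculate_sign_mu(relevance, mu=0.1)
# Elements below mu map to vlow (-1), elements at or above mu to vhigh (+1),
# so the expected result is tensor([-1., -1., 1.]).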

signxai.torch_signxai.methods_impl.smoothgrad module

PyTorch implementation of SmoothGrad.

class signxai.torch_signxai.methods_impl.smoothgrad.SmoothGrad(model, num_samples=16, noise_scale=1.0)[source]

Bases: object

SmoothGrad attribution method.

Implements SmoothGrad as described in the original paper: “SmoothGrad: removing noise by adding noise” https://arxiv.org/abs/1706.03825

__init__(model, num_samples=16, noise_scale=1.0)[source]

Initialize SmoothGrad.

Parameters:
  • model – PyTorch model

  • num_samples – Number of noisy samples to use (matches TF default 16)

  • noise_scale – Standard deviation of noise to add (matches TF behavior, default 1.0)

attribute(inputs, target=None, num_samples=None, noise_scale=None)[source]

Calculate SmoothGrad attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • num_samples – Override the number of samples (optional)

  • noise_scale – Override the noise scale (optional)

Returns:

Attribution tensor of the same shape as inputs
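
A usage sketch showing the per-call override; the toy model is illustrative:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.smoothgrad import SmoothGrad

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)

sg = SmoothGrad(model, num_samples=16, noise_scale=1.0)
attribution = sg.attribute(x, target=0, num_samples=32)  # per-call override of num_samples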

class signxai.torch_signxai.methods_impl.smoothgrad.SmoothGradXInput(model, num_samples=16, noise_scale=1.0)[source]

Bases: SmoothGrad

SmoothGrad × Input attribution method.

Implements SmoothGrad multiplied by the input, which can produce more visually appealing attributions by focusing on the important input features.

attribute(inputs, target=None, num_samples=None, noise_scale=None)[source]

Calculate SmoothGrad × Input attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • num_samples – Override the number of samples (optional)

  • noise_scale – Override the noise scale (optional)

Returns:

Attribution tensor of the same shape as inputs

class signxai.torch_signxai.methods_impl.smoothgrad.SmoothGradXSign(model, num_samples=16, noise_scale=1.0, mu=0.0)[source]

Bases: SmoothGrad

SmoothGrad × Sign attribution method.

Implements SmoothGrad multiplied by the sign of (input - threshold), which can emphasize both positive and negative contributions.

__init__(model, num_samples=16, noise_scale=1.0, mu=0.0)[source]

Initialize SmoothGradXSign.

Parameters:
  • model – PyTorch model

  • num_samples – Number of noisy samples to use (matches TF default 16)

  • noise_scale – Standard deviation of noise to add (matches TF behavior, default 1.0)

  • mu – Threshold value for the sign function

attribute(inputs, target=None, num_samples=None, noise_scale=None, mu=None)[source]

Calculate SmoothGrad × Sign attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • num_samples – Override the number of samples (optional)

  • noise_scale – Override the noise scale (optional)

  • mu – Override the threshold value (optional)

Returns:

Attribution tensor of the same shape as inputs

signxai.torch_signxai.methods_impl.smoothgrad.smoothgrad(model, inputs, target=None, num_samples=16, noise_scale=1.0)[source]

Calculate SmoothGrad attribution (functional API).

Parameters:
  • model – PyTorch model

  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • num_samples – Number of noisy samples to use

  • noise_scale – Standard deviation of noise to add

Returns:

Attribution tensor of the same shape as inputs

signxai.torch_signxai.methods_impl.smoothgrad.smoothgrad_x_input(model, inputs, target=None, num_samples=16, noise_scale=1.0)[source]

Calculate SmoothGrad × Input attribution (functional API).

Parameters:
  • model – PyTorch model

  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • num_samples – Number of noisy samples to use

  • noise_scale – Standard deviation of noise to add

Returns:

Attribution tensor of the same shape as inputs

signxai.torch_signxai.methods_impl.smoothgrad.smoothgrad_x_sign(model, inputs, target=None, num_samples=16, noise_scale=1.0, mu=0.0)[source]

Calculate SmoothGrad × Sign attribution (functional API).

Parameters:
  • model – PyTorch model

  • inputs – Input tensor

  • target – Target class index (None for argmax)

  • num_samples – Number of noisy samples to use

  • noise_scale – Standard deviation of noise to add

  • mu – Threshold value for the sign function

Returns:

Attribution tensor of the same shape as inputs
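
A sketch exercising the three functional variants side by side (toy model assumed):

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.smoothgrad import (
    smoothgrad, smoothgrad_x_input, smoothgrad_x_sign,
)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)

a = smoothgrad(model, x, target=0)
b = smoothgrad_x_input(model, x, target=0)
c = smoothgrad_x_sign(model, x, target=0, mu=0.0)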

signxai.torch_signxai.methods_impl.vargrad module

VarGrad implementation and variants for PyTorch.

class signxai.torch_signxai.methods_impl.vargrad.VarGrad(model, noise_scale: float = 1.0, num_samples: int = 16)[source]

Bases: BaseGradient

VarGrad attribution method.

__init__(model, noise_scale: float = 1.0, num_samples: int = 16)[source]

Initialize with a PyTorch model.

Parameters:
  • model – PyTorch model for which to calculate gradients

  • noise_scale – Standard deviation of noise to add (matches TF behavior, default 1.0)

  • num_samples – Number of samples to average (matches TF default 16)

attribute(inputs: Tensor, target: int | Tensor | None = None, noise_scale: float | None = None, num_samples: int | None = None) → Tensor[source]

Calculate VarGrad attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

  • noise_scale – Standard deviation of noise to add (if None, use self.noise_scale)

  • num_samples – Number of samples to average (if None, use self.num_samples)

Returns:

Attribution tensor of the same shape as inputs

class signxai.torch_signxai.methods_impl.vargrad.VarGradXInput(model, noise_scale: float = 1.0, num_samples: int = 16)[source]

Bases: VarGrad

VarGrad times Input attribution method.

attribute(inputs: Tensor, target: int | Tensor | None = None, noise_scale: float | None = None, num_samples: int | None = None) → Tensor[source]

Calculate VarGrad times input attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

  • noise_scale – Standard deviation of noise to add (if None, use self.noise_scale)

  • num_samples – Number of samples to average (if None, use self.num_samples)

Returns:

Attribution tensor of the same shape as inputs

class signxai.torch_signxai.methods_impl.vargrad.VarGradXSign(model, noise_scale: float = 1.0, num_samples: int = 16, mu: float = 0.0)[source]

Bases: VarGrad

VarGrad times Sign attribution method.

__init__(model, noise_scale: float = 1.0, num_samples: int = 16, mu: float = 0.0)[source]

Initialize with a PyTorch model.

Parameters:
  • model – PyTorch model for which to calculate gradients

  • noise_scale – Standard deviation of noise to add (default 1.0)

  • num_samples – Number of samples to average (default 16)

  • mu – Threshold for sign determination (default 0.0)

attribute(inputs: Tensor, target: int | Tensor | None = None, noise_scale: float | None = None, num_samples: int | None = None) → Tensor[source]

Calculate VarGrad times sign attribution.

Parameters:
  • inputs – Input tensor

  • target – Target class index or tensor (None uses argmax)

  • noise_scale – Standard deviation of noise to add (if None, use self.noise_scale)

  • num_samples – Number of samples to average (if None, use self.num_samples)

Returns:

Attribution tensor of the same shape as inputs
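
A usage sketch of the class-based variants; the toy model and target are illustrative:

import torch
import torch.nn as nn
from signxai.torch_signxai.methods_impl.vargrad import VarGrad, VarGradXSign

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(1, 3, 32, 32)

vg = VarGrad(model, noise_scale=1.0, num_samples=16).attribute(x, target=0)
vs = VarGradXSign(model, mu=0.0).attribute(x, target=0)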
