signxai.torch_signxai.methods_impl.zennit_impl package

Submodules

signxai.torch_signxai.methods_impl.zennit_impl.analyzers module

Zennit-based analyzers for PyTorch explanation methods.

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.AnalyzerBase(model: Module)[source]

Bases: ABC

Base class for all analyzers.

__init__(model: Module)[source]

Initialize AnalyzerBase.

Parameters:

model – PyTorch model

abstract analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array
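
All analyzers follow the same two-step pattern: construct with a model, then call analyze. A minimal usage sketch (the toy model, input, and target index below are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import GradientAnalyzer

# Placeholder model: any nn.Module producing class logits works.
model = torch.nn.Sequential(torch.nn.Linear(10, 5))
model.eval()

analyzer = GradientAnalyzer(model)
attribution = analyzer.analyze(torch.randn(1, 10), target_class=2)
# attribution is a numpy array with the same shape as the input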

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradientAnalyzer(model: Module)[source]

Bases: AnalyzerBase

Vanilla gradients analyzer.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Calculate gradient of model output with respect to input.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

Returns:

Gradient with respect to input as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.IntegratedGradientsAnalyzer(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]

Bases: AnalyzerBase

Integrated Gradients analyzer implemented with a basic integration loop rather than Zennit's built-in integrated gradients.

__init__(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]

Initialize IntegratedGradientsAnalyzer.

Parameters:
  • model – PyTorch model

  • steps – Number of integration steps. Default: 50.

  • baseline_type – Type of baseline to integrate from. Default: "zero".

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array
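
A usage sketch based on the signature above (model and input are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import IntegratedGradientsAnalyzer

model = torch.nn.Sequential(torch.nn.Linear(10, 5))
model.eval()

# 100 integration steps along the path from a zero baseline to the input.
ig = IntegratedGradientsAnalyzer(model, steps=100, baseline_type="zero")
attribution = ig.analyze(torch.randn(1, 10), target_class=0)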

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.SmoothGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]

Bases: AnalyzerBase

SmoothGrad analyzer.

__init__(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]

Initialize SmoothGradAnalyzer.

Parameters:
  • model – PyTorch model

  • noise_level – Level of Gaussian noise added to the input. Default: 0.2.

  • num_samples – Number of noisy samples to average over. Default: 50.

  • stdev_spread – Optional explicit standard-deviation spread for the noise

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array
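
A usage sketch based on the signature above (model and input are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import SmoothGradAnalyzer

model = torch.nn.Sequential(torch.nn.Linear(10, 5))
model.eval()

# Average gradients over 25 noisy copies of the input.
sg = SmoothGradAnalyzer(model, noise_level=0.1, num_samples=25)
attribution = sg.analyze(torch.randn(1, 10), target_class=0)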

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GuidedBackpropAnalyzer(model: Module)[source]

Bases: AnalyzerBase

Guided Backpropagation analyzer using Zennit’s composite.

__init__(model: Module)[source]

Initialize GuidedBackpropAnalyzer.

Parameters:

model – PyTorch model

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeconvNetComposite[source]

Bases: Composite

Composite for DeconvNet, wrapping Zennit's built-in DeconvNet composite.

__init__()[source]
class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeconvNetAnalyzer(model: Module)[source]

Bases: AnalyzerBase

DeconvNet Explanation Method using Zennit.

__init__(model: Module)[source]

Initialize DeconvNetAnalyzer.

Parameters:

model – PyTorch model

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradCAMAnalyzer(model: Module, target_layer: Module | None = None)[source]

Bases: AnalyzerBase

Grad-CAM analyzer.

__init__(model: Module, target_layer: Module | None = None)[source]

Initialize GradCAMAnalyzer.

Parameters:
  • model – PyTorch model

  • target_layer – Optional layer whose activations and gradients drive Grad-CAM

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array
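
A usage sketch with a toy CNN (model, shapes, and layer choice are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import GradCAMAnalyzer

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 4),
)
model.eval()

# Localize on the convolutional layer's activations.
cam = GradCAMAnalyzer(model, target_layer=model[0])
heatmap = cam.analyze(torch.randn(1, 3, 32, 32), target_class=1)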

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.LRPAnalyzer(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]

Bases: AnalyzerBase

Layer-wise Relevance Propagation (LRP) analyzer using Zennit.

__init__(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]

Initialize LRPAnalyzer.

Parameters:
  • model – PyTorch model

  • rule_name – Name of the LRP rule to apply. Default: "epsilon".

  • epsilon – Stabilizer for the epsilon rule. Default: 1e-06.

  • alpha – Alpha parameter for the alpha-beta rule. Default: 1.0.

  • beta – Beta parameter for the alpha-beta rule. Default: 0.0.

  • **rule_kwargs – Additional rule-specific arguments

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array
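
A usage sketch with the default epsilon rule (model and input are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import LRPAnalyzer

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
model.eval()

lrp = LRPAnalyzer(model, rule_name="epsilon", epsilon=1e-6)
relevance = lrp.analyze(torch.randn(1, 16), target_class=1)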

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradientXSignAnalyzer(model: Module, mu: float = 0.0)[source]

Bases: AnalyzerBase

Gradient × Sign analyzer.

__init__(model: Module, mu: float = 0.0)[source]

Initialize GradientXSignAnalyzer.

Parameters:
  • model – PyTorch model

  • mu – Threshold parameter for the sign function. Default: 0.0.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Calculate gradient × sign of model output with respect to input.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • mu – Threshold parameter for sign function

Returns:

Gradient × sign with respect to input as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradientXInputAnalyzer(model: Module)[source]

Bases: AnalyzerBase

Gradient × Input analyzer.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Calculate gradient × input of model output with respect to input.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

Returns:

Gradient × input with respect to input as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.VarGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]

Bases: AnalyzerBase

VarGrad analyzer.

__init__(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]

Initialize VarGradAnalyzer.

Parameters:
  • model – PyTorch model

  • noise_level – Level of Gaussian noise added to the input. Default: 0.2.

  • num_samples – Number of noisy samples used to compute the variance. Default: 50.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeepTaylorAnalyzer(model: Module, epsilon: float = 1e-06)[source]

Bases: AnalyzerBase

Deep Taylor analyzer.

__init__(model: Module, epsilon: float = 1e-06)[source]

Initialize DeepTaylorAnalyzer.

Parameters:
  • model – PyTorch model

  • epsilon – Stabilizing epsilon. Default: 1e-06.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Deep Taylor decomposition (simplified version using LRP-like approach).

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.AdvancedLRPAnalyzer(model: Module, variant: str = 'epsilon', **kwargs)[source]

Bases: AnalyzerBase

Advanced Layer-wise Relevance Propagation (LRP) analyzer with multiple rule variants.

__init__(model: Module, variant: str = 'epsilon', **kwargs)[source]

Initialize AdvancedLRPAnalyzer.

Parameters:
  • model – PyTorch model

  • variant – Name of the LRP rule variant. Default: "epsilon".

  • **kwargs – Variant-specific parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.LRPSequential(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]

Bases: AnalyzerBase

Sequential LRP with different rules for different parts of the network. This implementation matches the TensorFlow LRPSequentialComposite variants, which apply different rules to different layers in the network.

__init__(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]

Initialize LRPSequential.

Parameters:
  • model – PyTorch model

  • first_layer_rule_name – Rule for the first layer. Default: "zbox".

  • middle_layer_rule_name – Rule for the middle layers. Default: "alphabeta".

  • last_layer_rule_name – Rule for the last layer. Default: "epsilon".

  • variant – Optional preset composite variant

  • **kwargs – Additional rule parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using LRP with the configured rule variant.

Parameters:
  • input_tensor – Input tensor to analyze

  • target_class – Target class for attribution

  • **kwargs – Additional parameters

Returns:

Attribution map as numpy array
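
A usage sketch with the documented default rule assignment (model and input are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import LRPSequential

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
model.eval()

# ZBox on the first layer, alpha-beta in the middle, epsilon on the last layer.
seq = LRPSequential(
    model,
    first_layer_rule_name="zbox",
    middle_layer_rule_name="alphabeta",
    last_layer_rule_name="epsilon",
)
relevance = seq.analyze(torch.randn(1, 16), target_class=2)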

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.BoundedLRPAnalyzer(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]

Bases: AnalyzerBase

LRP analyzer that enforces input bounds with ZBox rule at the first layer and applies specified rules elsewhere.

__init__(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]

Initialize BoundedLRPAnalyzer.

Parameters:
  • model – PyTorch model

  • low – Lower input bound for the ZBox rule. Default: 0.0.

  • high – Upper input bound for the ZBox rule. Default: 1.0.

  • rule_name – Rule applied to the remaining layers. Default: "epsilon".

  • **kwargs – Additional rule parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.LRPStdxEpsilonAnalyzer(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]

Bases: AnalyzerBase

LRP analyzer that uses the standard deviation based epsilon rule.

This analyzer implements the StdxEpsilon rule where the epsilon value for stabilization is based on a factor of the standard deviation of the input.

__init__(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]

Initialize LRPStdxEpsilonAnalyzer.

Parameters:
  • model (nn.Module) – PyTorch model to analyze.

  • stdfactor (float, optional) – Factor to multiply standard deviation by. Default: 0.25.

  • bias (bool, optional) – Whether to include bias in computation. Default: True.

  • **kwargs – Additional keyword arguments.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using StdxEpsilon rule.

Parameters:
  • input_tensor (torch.Tensor) – Input tensor to analyze.

  • target_class (Optional[Union[int, torch.Tensor]], optional) – Target class. Default: None (uses argmax).

  • **kwargs – Additional keyword arguments.

Returns:

Attribution map.

Return type:

np.ndarray
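
A usage sketch (model and input are placeholders); the effective stabilizer is epsilon = std(input) * stdfactor:

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import LRPStdxEpsilonAnalyzer

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
model.eval()

analyzer = LRPStdxEpsilonAnalyzer(model, stdfactor=0.25)
relevance = analyzer.analyze(torch.randn(1, 16), target_class=0)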

class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeepLiftAnalyzer(model: Module, baseline_type: str = 'zero', **kwargs)[source]

Bases: AnalyzerBase

DeepLIFT implementation designed to match TensorFlow's.

This implementation follows the DeepLIFT algorithm from “Learning Important Features Through Propagating Activation Differences” (Shrikumar et al.) and is designed to be compatible with the TensorFlow implementation in iNNvestigate.

It uses the Rescale rule from the paper and implements a modified backward pass that considers the difference between activations and reference activations.

__init__(model: Module, baseline_type: str = 'zero', **kwargs)[source]

Initialize DeepLiftAnalyzer.

Parameters:
  • model – PyTorch model to analyze

  • baseline_type – Type of baseline to use (“zero”, “black”, “white”, “gaussian”)

  • **kwargs – Additional parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using DeepLift approach.

Parameters:
  • input_tensor – Input tensor to analyze

  • target_class – Target class for attribution

  • **kwargs – Additional parameters

Returns:

Attribution map as numpy array
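
A usage sketch (model and input are placeholders):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import DeepLiftAnalyzer

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
model.eval()

# Attributions are computed relative to a zero baseline; "black", "white",
# and "gaussian" are the other documented options.
dl = DeepLiftAnalyzer(model, baseline_type="zero")
attribution = dl.analyze(torch.randn(1, 16), target_class=0)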

signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer module

Direct hook registration analyzer that bypasses Zennit’s composite system.

class signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer.DirectStdxEpsilonHook(stdfactor: float = 1.0, layer_name: str = '')[source]

Bases: object

Direct hook implementation that bypasses Zennit’s registration system.

__init__(stdfactor: float = 1.0, layer_name: str = '')[source]
__call__(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Direct hook execution without Zennit interference.

class signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer.DirectLRPStdxEpsilonAnalyzer(model: Module, stdfactor: float = 1.0, **kwargs)[source]

Bases: AnalyzerBase

LRP StdX analyzer using direct hook registration to bypass Zennit’s override system.

__init__(model: Module, stdfactor: float = 1.0, **kwargs)[source]

Initialize DirectLRPStdxEpsilonAnalyzer.

Parameters:
  • model – PyTorch model

  • stdfactor – Factor multiplied with the input standard deviation to obtain epsilon. Default: 1.0.

  • **kwargs – Additional parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using direct hook registration.

signxai.torch_signxai.methods_impl.zennit_impl.hooks module

Fixed and cleaned TensorFlow-exact implementations of LRP methods for PyTorch.

This module contains sophisticated hook implementations that achieve high correlation with TensorFlow iNNvestigate results by implementing the exact mathematical formulations.

Key improvements:
  • GammaHook for proper LRP Gamma methods (fixes correlation ~0.37)

  • StdxEpsilonHook for StdX methods (fixes correlation as low as 0.030)

  • FlatHook for LRP Flat methods (fixes negative correlation -0.389)

  • Enhanced LRP SIGN methods with proper TF-exact implementations (fixes correlation 0.033)

  • Removed backward-compatibility code for cleaner organization

  • All implementations now target 100% working methods with high correlation to TensorFlow

Fixed methods summary:
  • lrp_gamma: Uses GammaHook with the sophisticated four-combination TF algorithm

  • lrp_flat: Uses FlatHook with enhanced SafeDivide operations

  • lrpsign_sequential_composite_a: Uses a layered SIGN -> AlphaBeta -> Epsilon approach

  • All stdx methods: Use StdxEpsilonHook with the proper TF standard-deviation calculation

  • All methods with stdfactor > 0: Now use TF-exact epsilon = std(input) * stdfactor

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpBaseHook(is_input_layer: bool = False)[source]

Bases: Hook

Base class for TF-exact LRP hooks. It handles the common logic of storing input/output tensors and computing the gradient-like operation.

__init__(is_input_layer: bool = False)[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Stores input and output tensors for the backward pass.

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Main backward hook logic to be implemented by subclasses.

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradBaseAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]

Bases: object

Base class for VarGrad methods, handling noise generation and gradient accumulation.

__init__(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]
analyze(input_tensor: Tensor, target_class: int | None = None, **kwargs) ndarray[source]
class signxai.torch_signxai.methods_impl.zennit_impl.hooks.GammaHook(gamma: float = 0.5, bias: bool = True)[source]

Bases: Hook

Corrected Gamma hook that exactly matches TensorFlow iNNvestigate’s GammaRule.

TensorFlow GammaRule algorithm:

  1. Separate positive and negative weights.

  2. Create positive-only inputs (ins_pos = ins * (ins > 0)).

  3. Compute four combinations:

    • Zs_pos = positive_weights * positive_inputs

    • Zs_act = all_weights * all_inputs

    • Zs_pos_act = all_weights * positive_inputs

    • Zs_act_pos = positive_weights * all_inputs

  4. Apply gamma weighting: gamma * activator_relevances - all_relevances.

__init__(gamma: float = 0.5, bias: bool = True)[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Hook applied during forward-pass

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Implement TensorFlow iNNvestigate’s exact GammaRule mathematical formulation.
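
The four pre-activation combinations can be sketched for a single linear layer as follows (illustrative only; the pairing into activator and all relevances follows the hook's source):

import torch

x = torch.randn(1, 8)               # layer input
W = torch.randn(4, 8)               # layer weight (bias omitted)
W_pos = W.clamp(min=0)              # positive weights
x_pos = x.clamp(min=0)              # positive inputs

Zs_pos = x_pos @ W_pos.t()          # positive_weights * positive_inputs
Zs_act = x @ W.t()                  # all_weights * all_inputs
Zs_pos_act = x_pos @ W.t()          # all_weights * positive_inputs
Zs_act_pos = x @ W_pos.t()          # positive_weights * all_inputs

gamma = 0.5
# Final weighting (schematic): gamma * activator_relevances - all_relevances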

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.StdxEpsilonHook(stdfactor: float = 0.25, bias: bool = True, use_global_std: bool = False)[source]

Bases: Hook

Enhanced TensorFlow-exact StdxEpsilon hook that matches iNNvestigate’s StdxEpsilonRule.

Key features:

  1. Dynamic epsilon = std(input) * stdfactor (TF-compatible calculation)

  2. TensorFlow-compatible sign handling for epsilon

  3. Proper relevance conservation

  4. Improved numerical stability

__init__(stdfactor: float = 0.25, bias: bool = True, use_global_std: bool = False)[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Hook applied during forward-pass

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Implement TensorFlow iNNvestigate’s exact StdxEpsilonRule mathematical formulation.
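
The dynamic stabilizer can be sketched as follows (illustrative; the hook computes it from the layer input during the backward pass):

import torch

def stdx_epsilon(layer_input: torch.Tensor, stdfactor: float = 0.25) -> float:
    # epsilon scales with the standard deviation of the layer's input,
    # matching the documented epsilon = std(input) * stdfactor.
    return layer_input.std().item() * stdfactor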

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.FlatHook(epsilon: float = 1e-06)[source]

Bases: Hook

Custom Flat hook that exactly matches iNNvestigate’s FlatRule implementation.

From iNNvestigate: FlatRule sets all weights to one and all biases to zero, then uses SafeDivide operations for relevance redistribution.

CRITICAL FIX: Handles numerical instability when flat outputs are near zero.

__init__(epsilon: float = 1e-06)[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Hook applied during forward-pass

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Implement iNNvestigate’s FlatRule backward pass logic. This matches the mathematical operations in iNNvestigate’s explain_hook method.

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.EpsilonHook(epsilon: float = 1e-07, is_input_layer: bool = False)[source]

Bases: LrpBaseHook

Standard TF-exact Epsilon hook.

__init__(epsilon: float = 1e-07, is_input_layer: bool = False)[source]
backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Main backward hook logic to be implemented by subclasses.

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.SignEpsilonHook(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, input_layer_rule: str = 'sign', is_input_layer: bool = False)[source]

Bases: LrpBaseHook

A unified hook for all lrp.sign_epsilon variants. It handles standard epsilon, StdX epsilon, and SIGN or SIGN-mu on the input layer.

__init__(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, input_layer_rule: str = 'sign', is_input_layer: bool = False)[source]
backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Main backward hook logic to be implemented by subclasses.

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpSignEpsilonMuHook(epsilon: float = 0.0, mu: float = 0.0, is_input_layer: bool = False)[source]

Bases: SignEpsilonHook

Hook for LRP SIGN epsilon with mu parameter.

__init__(epsilon: float = 0.0, mu: float = 0.0, is_input_layer: bool = False)[source]
signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpSignEpsilonStdXHook

alias of SignEpsilonHook

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpSignEpsilonStdXMuHook(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, is_input_layer: bool = False)[source]

Bases: SignEpsilonHook

Hook for LRP SIGN epsilon with StdX and mu parameters.

__init__(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, is_input_layer: bool = False)[source]
class signxai.torch_signxai.methods_impl.zennit_impl.hooks.WSquareHook(epsilon: float = 1e-06)[source]

Bases: Hook

iNNvestigate-compatible W^2 hook.

__init__(epsilon: float = 1e-06)[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Hook applied during forward-pass

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Hook applied during backward-pass

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.SignHook[source]

Bases: Hook

Corrected SIGN hook.

__init__()[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Hook applied during forward-pass

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Hook applied during backward-pass

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.SignMuHook(mu: float = 0.0)[source]

Bases: Hook

Corrected SIGN-mu hook.

__init__(mu: float = 0.0)[source]
forward(module: Module, inputs: tuple, outputs: Any) Any[source]

Hook applied during forward-pass

backward(module: Module, grad_input: tuple, grad_output: tuple) tuple[source]

Hook applied during backward-pass

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]

Bases: VarGradBaseAnalyzer

Standard VarGrad.

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradXInputAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]

Bases: VarGradBaseAnalyzer

VarGrad * Input.

class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradXSignAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]

Bases: VarGradBaseAnalyzer

VarGrad * sign(Input).

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrp_composite(first_layer_rule: Hook | type, default_rule: Hook | type, last_layer_rule: Hook | type | None = None, first_layer_params: dict = {}, default_params: dict = {}, last_layer_params: dict = {}) Callable[source]

A generic factory to create complex LRP composites.

This function can create composites for rules like LRP-Z, W^2-LRP, and Sequential Composites by specifying different rules and parameters for the first, last, and default layers.

Parameters:
  • first_layer_rule – The zennit.rule class for the first layer (e.g., ZPlus, WSquare).

  • default_rule – The zennit.rule class for all other layers (e.g., Epsilon, AlphaBeta).

  • last_layer_rule – Optional rule for the last layers (e.g., for sequential composites).

  • ...params – Dictionaries of parameters for each rule.

Returns:

A Zennit Composite instance configured with the specified rules.
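
A usage sketch, assuming standard Zennit rule classes are passed in (the rule choice and parameters here are illustrative):

from zennit.rules import Epsilon, WSquare
from signxai.torch_signxai.methods_impl.zennit_impl.hooks import lrp_composite

# W^2 rule on the first layer, epsilon rule everywhere else.
composite = lrp_composite(
    first_layer_rule=WSquare,
    default_rule=Epsilon,
    default_params={"epsilon": 0.1},
)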

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon(epsilon: float = 0.0, stdfactor: float = 0.0, **kwargs) Callable[source]

Creates a composite for lrp.sign_epsilon variants.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_mu(epsilon: float = 0.0, mu: float = 0.0, **kwargs) Callable[source]

Creates a composite for LRP SIGN epsilon mu.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_std_x(epsilon: float = 0.0, stdfactor: float = 0.0, **kwargs) Callable[source]

Creates a composite for LRP SIGN epsilon with StdX.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_std_x_mu(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, **kwargs) Callable[source]

Creates a composite for LRP SIGN epsilon with StdX and mu.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_epsilon(epsilon: float = 0.1) Composite[source]

Creates a composite for LRP-Z + Epsilon.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_epsilon(epsilon: float = 0.1) Composite[source]

Creates a composite for W^2-LRP + Epsilon.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_stdx_epsilon(epsilon: float = 0.1, stdfactor: float = 0.0) Composite[source]

Creates a composite for W^2-LRP + StdX Epsilon.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_stdx_epsilon(epsilon: float = 0.1, stdfactor: float = 0.0) Composite[source]

Creates a composite for LRP-Z + StdX Epsilon.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.stdx_epsilon(epsilon: float = 0.1, stdfactor: float = 0.25) Callable[source]

Creates a composite for StdX Epsilon using StdxEpsilonHook.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_sequential_composite_a(epsilon: float = 0.1) Composite[source]

Creates a composite for LRP-Z + Sequential Composite A.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_sequential_composite_b(epsilon: float = 0.1) Composite[source]

Creates a composite for LRP-Z + Sequential Composite B.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_sequential_composite_a(epsilon: float = 0.1) Composite[source]

Creates a composite for W^2-LRP + Sequential Composite A.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_sequential_composite_b(epsilon: float = 0.1) Composite[source]

Creates a composite for W^2-LRP + Sequential Composite B.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.epsilon_composite(epsilon: float = 0.1) Composite[source]

Creates a standard epsilon composite.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.zplus_composite() Composite[source]

Creates ZPlus composite.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.zbox_composite(low: float = -1.0, high: float = 1.0) Composite[source]

Creates ZBox composite.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.wsquare_composite_standard() Composite[source]

Creates standard WSquare composite.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.sequential_composite(epsilon: float = 0.1, alpha: float = 2.0, beta: float = 1.0) Composite[source]

Creates sequential composite with proper layer assignment.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.alphabeta_composite(alpha: float = 2.0, beta: float = 1.0) Composite[source]

Creates alpha-beta composite.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.flat_composite() Composite[source]

Creates flat composite using FlatHook.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.wsquare_composite() Composite[source]

Creates WSquare composite using WSquareHook for improved correlation.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.gamma_composite(gamma: float = 0.25) Composite[source]

Creates gamma composite using GammaHook.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.sign_composite() Composite[source]

Creates SIGN composite.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_composite_a(epsilon: float = 0.1) Composite[source]

Creates W^2-LRP composite A.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_analyzer(model: Module, **kwargs) VarGradAnalyzer[source]

Creates a TF-exact VarGrad analyzer.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_x_input_analyzer(model: Module, **kwargs) VarGradXInputAnalyzer[source]

Creates a TF-exact VarGrad x Input analyzer.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_x_sign_analyzer(model: Module, **kwargs) VarGradXSignAnalyzer[source]

Creates a TF-exact VarGrad x Sign analyzer.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_x_input_x_sign_analyzer(model: Module, **kwargs) VarGradXSignAnalyzer[source]

Creates a TF-exact VarGrad x Input x Sign analyzer.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_stdx(epsilon: float = 0.0, stdfactor: float = 0.0, **kwargs) Callable

Creates a composite for LRP SIGN epsilon with StdX.

signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_stdx_mu(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, **kwargs) Callable

Creates a composite for LRP SIGN epsilon with StdX and mu.

signxai.torch_signxai.methods_impl.zennit_impl.sign_rule module

SIGN and SIGNmu rule implementations for Zennit and PyTorch. These custom rules implement the SIGN and SIGNmu rules from TensorFlow iNNvestigate.

class signxai.torch_signxai.methods_impl.zennit_impl.sign_rule.SIGNRule(bias=True)[source]

Bases: BasicHook

SIGN rule from the TensorFlow implementation. This rule uses the sign of the input to propagate relevance.

Parameters:

bias (bool, optional) – Whether to include bias in the computation. Default: True.

__init__(bias=True)[source]

Initialize SIGN rule.

Parameters:

bias (bool, optional) – Whether to include bias in the computation. Default: True.

forward(module, input_tensor, output_tensor)[source]

Store input and output tensors for the backward pass.

Parameters:
  • module (nn.Module) – PyTorch module for which this rule is being applied.

  • input_tensor (Tensor) – Input tensor to the module.

  • output_tensor (Tensor) – Output tensor from the module.

Returns:

The output tensor and the backward function.

Return type:

Tuple[Tensor, callable]

class signxai.torch_signxai.methods_impl.zennit_impl.sign_rule.SIGNmuRule(mu=0.0, bias=True)[source]

Bases: BasicHook

SIGNmu rule from the TensorFlow implementation. This rule uses a threshold mu to determine the sign of the input for relevance propagation.

Parameters:
  • mu (float, optional) – Threshold for SIGN function. Values >= mu will get +1, values < mu will get -1. Default: 0.0.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

__init__(mu=0.0, bias=True)[source]

Initialize SIGNmu rule.

Parameters:
  • mu (float, optional) – Threshold for SIGN function. Values >= mu will get +1, values < mu will get -1. Default: 0.0.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

forward(module, input_tensor, output_tensor)[source]

Store input and output tensors for the backward pass.

Parameters:
  • module (nn.Module) – PyTorch module for which this rule is being applied.

  • input_tensor (Tensor) – Input tensor to the module.

  • output_tensor (Tensor) – Output tensor from the module.

Returns:

The output tensor and the backward function.

Return type:

Tuple[Tensor, callable]

signxai.torch_signxai.methods_impl.zennit_impl.stdx_rule module

StdxEpsilon rule implementation for Zennit and PyTorch. This custom rule implements the StdxEpsilonRule from TensorFlow iNNvestigate.

class signxai.torch_signxai.methods_impl.zennit_impl.stdx_rule.StdxEpsilon(stdfactor=0.25, bias=True)[source]

Bases: Epsilon

StdxEpsilon rule from the TensorFlow iNNvestigate implementation. This rule is similar to the Epsilon rule but uses a multiple of the standard deviation of the input as the epsilon for stabilization.

Parameters:
  • stdfactor (float, optional) – Factor to multiply the standard deviation by. Default: 0.25.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

__init__(stdfactor=0.25, bias=True)[source]

Initialize StdxEpsilon rule with the standard deviation factor.

Parameters:
  • stdfactor (float, optional) – Factor to multiply the standard deviation by. Default: 0.25.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

gradient_mapper(input_tensor, output_gradient)[source]

Custom gradient mapper that calculates epsilon based on input standard deviation. Matches TensorFlow’s StdxEpsilonRule implementation exactly.

Parameters:
  • input_tensor (torch.Tensor) – Input tensor to the layer.

  • output_gradient (torch.Tensor) – Gradient from the next layer.

Returns:

Modified gradient based on StdxEpsilon rule.

Return type:

torch.Tensor

copy()[source]

Return a copy of this hook that preserves our custom attributes.
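
Since StdxEpsilon subclasses Zennit's Epsilon rule, it can be mapped onto layers with a standard Zennit composite. A minimal sketch, assuming Zennit's NameMapComposite and Gradient attributor (model and layer names are placeholders):

import torch
from zennit.attribution import Gradient
from zennit.composites import NameMapComposite
from signxai.torch_signxai.methods_impl.zennit_impl.stdx_rule import StdxEpsilon

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
model.eval()

# Map the StdxEpsilon rule onto both Linear layers by name ("0" and "2").
composite = NameMapComposite(name_map=[(["0", "2"], StdxEpsilon(stdfactor=0.25))])
x = torch.randn(1, 8, requires_grad=True)
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(x, torch.eye(2)[[1]])  # relevance for class 1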

Module contents

Zennit-based implementation details for PyTorch XAI methods. This subpackage relies on the Zennit library.

signxai.torch_signxai.methods_impl.zennit_impl.calculate_relevancemap(model: Module, input_tensor: Tensor, method: str, target_class: int | Tensor | None = None, neuron_selection: int | Tensor | None = None, **kwargs: Any) ndarray[source]

Calculates a relevance map for a given input using Zennit-based methods.

Parameters:
  • model – PyTorch model

  • input_tensor – Input tensor

  • method – Name of the explanation method to apply

  • target_class – Target class index (None for argmax)

  • neuron_selection – Alternative way to specify the target neuron/class

  • **kwargs – Additional arguments forwarded to the selected analyzer

Returns:

Relevance map as numpy array
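
A minimal call sketch (the method name and shapes are placeholders; see the analyzer classes above for method-specific kwargs):

import torch
from signxai.torch_signxai.methods_impl.zennit_impl import calculate_relevancemap

model = torch.nn.Sequential(torch.nn.Linear(10, 3))
model.eval()

relevance = calculate_relevancemap(model, torch.randn(1, 10), method="gradient", target_class=0)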

class signxai.torch_signxai.methods_impl.zennit_impl.GradientAnalyzer(model: Module)[source]

Bases: AnalyzerBase

Vanilla gradients analyzer.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Calculate gradient of model output with respect to input.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

Returns:

Gradient with respect to input as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.SmoothGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]

Bases: AnalyzerBase

SmoothGrad analyzer.

__init__(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]

Initialize SmoothGradAnalyzer.

Parameters:
  • model – PyTorch model

  • noise_level – Level of Gaussian noise added to the input. Default: 0.2.

  • num_samples – Number of noisy samples to average over. Default: 50.

  • stdev_spread – Optional explicit standard-deviation spread for the noise

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.IntegratedGradientsAnalyzer(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]

Bases: AnalyzerBase

Integrated Gradients analyzer implemented with a basic integration loop rather than Zennit's built-in integrated gradients.

__init__(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]

Initialize IntegratedGradientsAnalyzer.

Parameters:
  • model – PyTorch model

  • steps – Number of integration steps. Default: 50.

  • baseline_type – Type of baseline to integrate from. Default: "zero".

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.LRPAnalyzer(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]

Bases: AnalyzerBase

Layer-wise Relevance Propagation (LRP) analyzer using Zennit.

__init__(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]

Initialize LRPAnalyzer.

Parameters:
  • model – PyTorch model

  • rule_name – Name of the LRP rule to apply. Default: "epsilon".

  • epsilon – Stabilizer for the epsilon rule. Default: 1e-06.

  • alpha – Alpha parameter for the alpha-beta rule. Default: 1.0.

  • beta – Beta parameter for the alpha-beta rule. Default: 0.0.

  • **rule_kwargs – Additional rule-specific arguments

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.AdvancedLRPAnalyzer(model: Module, variant: str = 'epsilon', **kwargs)[source]

Bases: AnalyzerBase

Advanced Layer-wise Relevance Propagation (LRP) analyzer with multiple rule variants.

__init__(model: Module, variant: str = 'epsilon', **kwargs)[source]

Initialize AdvancedLRPAnalyzer.

Parameters:
  • model – PyTorch model

  • variant – Name of the LRP rule variant. Default: "epsilon".

  • **kwargs – Variant-specific parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.LRPSequential(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]

Bases: AnalyzerBase

Sequential LRP with different rules for different parts of the network. This implementation matches the TensorFlow LRPSequentialComposite variants, which apply different rules to different layers in the network.

__init__(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]

Initialize LRPSequential.

Parameters:
  • model – PyTorch model

  • first_layer_rule_name – Rule for the first layer. Default: "zbox".

  • middle_layer_rule_name – Rule for the middle layers. Default: "alphabeta".

  • last_layer_rule_name – Rule for the last layer. Default: "epsilon".

  • variant – Optional preset composite variant

  • **kwargs – Additional rule parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using LRP with the configured rule variant.

Parameters:
  • input_tensor – Input tensor to analyze

  • target_class – Target class for attribution

  • **kwargs – Additional parameters

Returns:

Attribution map as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.BoundedLRPAnalyzer(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]

Bases: AnalyzerBase

LRP analyzer that enforces input bounds with ZBox rule at the first layer and applies specified rules elsewhere.

__init__(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]

Initialize BoundedLRPAnalyzer.

Parameters:
  • model – PyTorch model

  • low – Lower input bound for the ZBox rule. Default: 0.0.

  • high – Upper input bound for the ZBox rule. Default: 1.0.

  • rule_name – Rule applied to the remaining layers. Default: "epsilon".

  • **kwargs – Additional rule parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.LRPStdxEpsilonAnalyzer(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]

Bases: AnalyzerBase

LRP analyzer that uses the standard deviation based epsilon rule.

This analyzer implements the StdxEpsilon rule where the epsilon value for stabilization is based on a factor of the standard deviation of the input.

__init__(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]

Initialize LRPStdxEpsilonAnalyzer.

Parameters:
  • model (nn.Module) – PyTorch model to analyze.

  • stdfactor (float, optional) – Factor to multiply standard deviation by. Default: 0.25.

  • bias (bool, optional) – Whether to include bias in computation. Default: True.

  • **kwargs – Additional keyword arguments.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using StdxEpsilon rule.

Parameters:
  • input_tensor (torch.Tensor) – Input tensor to analyze.

  • target_class (Optional[Union[int, torch.Tensor]], optional) – Target class. Default: None (uses argmax).

  • **kwargs – Additional keyword arguments.

Returns:

Attribution map.

Return type:

np.ndarray

class signxai.torch_signxai.methods_impl.zennit_impl.DeepLiftAnalyzer(model: Module, baseline_type: str = 'zero', **kwargs)[source]

Bases: AnalyzerBase

DeepLIFT implementation designed to match TensorFlow's.

This implementation follows the DeepLIFT algorithm from “Learning Important Features Through Propagating Activation Differences” (Shrikumar et al.) and is designed to be compatible with the TensorFlow implementation in iNNvestigate.

It uses the Rescale rule from the paper and implements a modified backward pass that considers the difference between activations and reference activations.

__init__(model: Module, baseline_type: str = 'zero', **kwargs)[source]

Initialize DeepLiftAnalyzer.

Parameters:
  • model – PyTorch model to analyze

  • baseline_type – Type of baseline to use (“zero”, “black”, “white”, “gaussian”)

  • **kwargs – Additional parameters

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input using DeepLift approach.

Parameters:
  • input_tensor – Input tensor to analyze

  • target_class – Target class for attribution

  • **kwargs – Additional parameters

Returns:

Attribution map as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.GuidedBackpropAnalyzer(model: Module)[source]

Bases: AnalyzerBase

Guided Backpropagation analyzer using Zennit’s composite.

__init__(model: Module)[source]

Initialize GuidedBackpropAnalyzer.

Parameters:

model – PyTorch model

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.DeconvNetAnalyzer(model: Module)[source]

Bases: AnalyzerBase

DeconvNet Explanation Method using Zennit.

__init__(model: Module)[source]

Initialize DeconvNetAnalyzer.

Parameters:

model – PyTorch model

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.GradCAMAnalyzer(model: Module, target_layer: Module | None = None)[source]

Bases: AnalyzerBase

Grad-CAM analyzer.

__init__(model: Module, target_layer: Module | None = None)[source]

Initialize GradCAMAnalyzer.

Parameters:
  • model – PyTorch model

  • target_layer – Optional layer whose activations and gradients drive Grad-CAM

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.GradientXSignAnalyzer(model: Module, mu: float = 0.0)[source]

Bases: AnalyzerBase

Gradient × Sign analyzer.

__init__(model: Module, mu: float = 0.0)[source]

Initialize GradientXSignAnalyzer.

Parameters:
  • model – PyTorch model

  • mu – Threshold parameter for the sign function. Default: 0.0.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Calculate gradient × sign of model output with respect to input.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • mu – Threshold parameter for sign function

Returns:

Gradient × sign with respect to input as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.GradientXInputAnalyzer(model: Module)[source]

Bases: AnalyzerBase

Gradient × Input analyzer.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Calculate gradient × input of model output with respect to input.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

Returns:

Gradient × input with respect to input as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.VarGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]

Bases: AnalyzerBase

VarGrad analyzer.

__init__(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]

Initialize VarGradAnalyzer.

Parameters:
  • model – PyTorch model

  • noise_level – Level of Gaussian noise added to the input. Default: 0.2.

  • num_samples – Number of noisy samples used to compute the variance. Default: 50.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.DeepTaylorAnalyzer(model: Module, epsilon: float = 1e-06)[source]

Bases: AnalyzerBase

Deep Taylor analyzer.

__init__(model: Module, epsilon: float = 1e-06)[source]

Initialize DeepTaylorAnalyzer.

Parameters:
  • model – PyTorch model

  • epsilon – Stabilizing epsilon. Default: 1e-06.

analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Deep Taylor decomposition (simplified version using LRP-like approach).

class signxai.torch_signxai.methods_impl.zennit_impl.AnalyzerBase(model: Module)[source]

Bases: ABC

Base class for all analyzers.

__init__(model: Module)[source]

Initialize AnalyzerBase.

Parameters:

model – PyTorch model

abstract analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]

Analyze input tensor and return attribution.

Parameters:
  • input_tensor – Input tensor

  • target_class – Target class index (None for argmax)

  • **kwargs – Additional arguments for specific analyzers

Returns:

Attribution as numpy array

class signxai.torch_signxai.methods_impl.zennit_impl.StdxEpsilon(stdfactor=0.25, bias=True)[source]

Bases: Epsilon

StdxEpsilon rule from the TensorFlow iNNvestigate implementation. This rule is similar to the Epsilon rule but uses a multiple of the standard deviation of the input as the epsilon for stabilization.

Parameters:
  • stdfactor (float, optional) – Factor to multiply the standard deviation by. Default: 0.25.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

__init__(stdfactor=0.25, bias=True)[source]

Initialize StdxEpsilon rule with the standard deviation factor.

Parameters:
  • stdfactor (float, optional) – Factor to multiply the standard deviation by. Default: 0.25.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

gradient_mapper(input_tensor, output_gradient)[source]

Custom gradient mapper that calculates epsilon based on input standard deviation. Matches TensorFlow’s StdxEpsilonRule implementation exactly.

Parameters:
  • input_tensor (torch.Tensor) – Input tensor to the layer.

  • output_gradient (torch.Tensor) – Gradient from the next layer.

Returns:

Modified gradient based on StdxEpsilon rule.

Return type:

torch.Tensor

copy()[source]

Return a copy of this hook that preserves our custom attributes.

class signxai.torch_signxai.methods_impl.zennit_impl.SIGNRule(bias=True)[source]

Bases: BasicHook

SIGN rule from the TensorFlow implementation. This rule uses the sign of the input to propagate relevance.

Parameters:

bias (bool, optional) – Whether to include bias in the computation. Default: True.

__init__(bias=True)[source]

Initialize SIGN rule.

Parameters:

bias (bool, optional) – Whether to include bias in the computation. Default: True.

forward(module, input_tensor, output_tensor)[source]

Store input and output tensors for the backward pass.

Parameters:
  • module (nn.Module) – PyTorch module for which this rule is being applied.

  • input_tensor (Tensor) – Input tensor to the module.

  • output_tensor (Tensor) – Output tensor from the module.

Returns:

The output tensor and the backward function.

Return type:

Tuple[Tensor, callable]

class signxai.torch_signxai.methods_impl.zennit_impl.SIGNmuRule(mu=0.0, bias=True)[source]

Bases: BasicHook

SIGNmu rule from the TensorFlow implementation. This rule uses a threshold mu to determine the sign of the input for relevance propagation.

Parameters:
  • mu (float, optional) – Threshold for SIGN function. Values >= mu will get +1, values < mu will get -1. Default: 0.0.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

__init__(mu=0.0, bias=True)[source]

Initialize SIGNmu rule.

Parameters:
  • mu (float, optional) – Threshold for SIGN function. Values >= mu will get +1, values < mu will get -1. Default: 0.0.

  • bias (bool, optional) – Whether to include bias in the computation. Default: True.

forward(module, input_tensor, output_tensor)[source]

Store input and output tensors for the backward pass.

Parameters:
  • module (nn.Module) – PyTorch module for which this rule is being applied.

  • input_tensor (Tensor) – Input tensor to the module.

  • output_tensor (Tensor) – Output tensor from the module.

Returns:

The output tensor and the backward function.

Return type:

Tuple[Tensor, callable]