signxai.torch_signxai.methods_impl.zennit_impl package
Submodules
signxai.torch_signxai.methods_impl.zennit_impl.analyzers module
Zennit-based analyzers for PyTorch explanation methods.
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.AnalyzerBase(model: Module)[source]
Bases: ABC
Base class for all analyzers.
- abstract analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradientAnalyzer(model: Module)[source]
Bases: AnalyzerBase
Vanilla gradients analyzer.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Calculate gradient of model output with respect to input.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
- Returns:
Gradient with respect to input as numpy array
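A minimal usage sketch (the tiny stand-in model and tensor shape are illustrative assumptions, not part of the API):

    import torch
    import torch.nn as nn
    from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import GradientAnalyzer

    # Tiny stand-in classifier; any PyTorch module with one score per class works.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
    x = torch.randn(1, 3, 8, 8)

    analyzer = GradientAnalyzer(model)
    attribution = analyzer.analyze(x, target_class=3)  # numpy array shaped like x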
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.IntegratedGradientsAnalyzer(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]
Bases: AnalyzerBase
Integrated gradients analyzer implemented with a basic interpolation loop rather than Zennit's direct IG.
- __init__(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]
Initialize IntegratedGradientsAnalyzer.
- Parameters:
model – PyTorch model
steps – Number of integration steps
baseline_type – Type of baseline used as the integration start point (default: "zero")
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
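The class docstring notes a basic interpolation loop rather than Zennit's built-in IG; a conceptual sketch of that computation (not the library's exact code) is:

    import torch

    def integrated_gradients(model, x, target, steps=50, baseline=None):
        # Average gradients along the straight path from baseline to x,
        # then scale by (x - baseline) -- the standard IG approximation.
        baseline = torch.zeros_like(x) if baseline is None else baseline
        grads = []
        for alpha in torch.linspace(0.0, 1.0, steps):
            point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
            score = model(point)[0, target]
            grads.append(torch.autograd.grad(score, point)[0])
        return (x - baseline) * torch.stack(grads).mean(dim=0)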
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.SmoothGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]
Bases: AnalyzerBase
SmoothGrad analyzer.
- __init__(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]
Initialize SmoothGradAnalyzer.
- Parameters:
model – PyTorch model
noise_level – Relative scale of the Gaussian noise added to the input
num_samples – Number of noisy samples averaged over
stdev_spread – Optional alternative specification of the noise spread
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
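SmoothGrad averages vanilla gradients over noisy copies of the input; a sketch of the idea (the relative noise scaling is an assumption about the implementation):

    import torch

    def smoothgrad(model, x, target, noise_level=0.2, num_samples=50):
        # Noise scale taken relative to the input's value range, as is common.
        stdev = noise_level * (x.max() - x.min())
        total = torch.zeros_like(x)
        for _ in range(num_samples):
            noisy = (x + stdev * torch.randn_like(x)).detach().requires_grad_(True)
            score = model(noisy)[0, target]
            total = total + torch.autograd.grad(score, noisy)[0]
        return total / num_samples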
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GuidedBackpropAnalyzer(model: Module)[source]
Bases: AnalyzerBase
Guided Backpropagation analyzer using Zennit's composite.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeconvNetComposite[source]
Bases: Composite
DeconvNet composite using Zennit's built-in DeconvNet composite.
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeconvNetAnalyzer(model: Module)[source]
Bases: AnalyzerBase
DeconvNet explanation method using Zennit.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradCAMAnalyzer(model: Module, target_layer: Module | None = None)[source]
Bases: AnalyzerBase
Grad-CAM analyzer.
- __init__(model: Module, target_layer: Module | None = None)[source]
Initialize GradCAMAnalyzer.
- Parameters:
model – PyTorch model
target_layer – Layer whose activations and gradients are used for Grad-CAM (None to select one automatically)
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
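A usage sketch with an explicit target layer (the convolutional stand-in model is an illustrative assumption):

    import torch
    import torch.nn as nn
    from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import GradCAMAnalyzer

    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    ).eval()

    # Point Grad-CAM at the conv layer; pass target_layer=None to let the analyzer choose.
    analyzer = GradCAMAnalyzer(model, target_layer=model[0])
    cam = analyzer.analyze(torch.randn(1, 3, 32, 32), target_class=5)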
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.LRPAnalyzer(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]
Bases: AnalyzerBase
Layer-wise Relevance Propagation (LRP) analyzer using Zennit.
- __init__(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]
Initialize LRPAnalyzer.
- Parameters:
model – PyTorch model
rule_name – Name of the LRP rule to apply (default: "epsilon")
epsilon – Stabilization term for the epsilon rule
alpha – Alpha parameter for alpha-beta rules
beta – Beta parameter for alpha-beta rules
**rule_kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
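Rule selection happens in the constructor; a brief usage sketch with the documented epsilon default (the stand-in model is illustrative):

    import torch
    import torch.nn as nn
    from signxai.torch_signxai.methods_impl.zennit_impl.analyzers import LRPAnalyzer

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
    x = torch.randn(1, 3, 8, 8)

    eps_lrp = LRPAnalyzer(model, rule_name="epsilon", epsilon=1e-6)
    relevance = eps_lrp.analyze(x, target_class=0)  # numpy array shaped like x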
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradientXSignAnalyzer(model: Module, mu: float = 0.0)[source]
Bases: AnalyzerBase
Gradient × Sign analyzer.
- __init__(model: Module, mu: float = 0.0)[source]
Initialize GradientXSignAnalyzer.
- Parameters:
model – PyTorch model
mu – Threshold parameter for the sign function
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Calculate gradient × sign of model output with respect to input.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
mu – Threshold parameter for sign function
- Returns:
Gradient × sign with respect to input as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.GradientXInputAnalyzer(model: Module)[source]
Bases: AnalyzerBase
Gradient × Input analyzer.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Calculate gradient × input of model output with respect to input.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
- Returns:
Gradient × input with respect to input as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.VarGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]
Bases: AnalyzerBase
VarGrad analyzer.
- __init__(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]
Initialize VarGradAnalyzer.
- Parameters:
model – PyTorch model
noise_level – Scale of the Gaussian noise added to the input
num_samples – Number of noisy samples used to estimate the gradient variance
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeepTaylorAnalyzer(model: Module, epsilon: float = 1e-06)[source]
Bases: AnalyzerBase
Deep Taylor analyzer.
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.AdvancedLRPAnalyzer(model: Module, variant: str = 'epsilon', **kwargs)[source]
Bases: AnalyzerBase
Advanced Layer-wise Relevance Propagation (LRP) analyzer with multiple rule variants.
- __init__(model: Module, variant: str = 'epsilon', **kwargs)[source]
Initialize AdvancedLRPAnalyzer.
- Parameters:
model – PyTorch model
variant – Name of the LRP rule variant (default: "epsilon")
**kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.LRPSequential(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]
Bases: AnalyzerBase
Sequential LRP with different rules for different parts of the network. This implementation matches the TensorFlow LRPSequentialComposite variants, which apply different rules to different layers in the network.
- __init__(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]
Initialize LRPSequential.
- Parameters:
model – PyTorch model
first_layer_rule_name – Rule applied to the first layer (default: "zbox")
middle_layer_rule_name – Rule applied to the middle layers (default: "alphabeta")
last_layer_rule_name – Rule applied to the last layers (default: "epsilon")
variant – Optional preset selecting a TensorFlow-compatible rule combination
**kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input using LRP with the configured rule variant.
- Parameters:
input_tensor – Input tensor to analyze
target_class – Target class for attribution
**kwargs – Additional parameters
- Returns:
Attribution map as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.BoundedLRPAnalyzer(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]
Bases: AnalyzerBase
LRP analyzer that enforces input bounds with the ZBox rule at the first layer and applies the specified rules elsewhere.
- __init__(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]
Initialize BoundedLRPAnalyzer.
- Parameters:
model – PyTorch model
low – Lower input bound for the ZBox rule
high – Upper input bound for the ZBox rule
rule_name – Rule applied to the remaining layers (default: "epsilon")
**kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.LRPStdxEpsilonAnalyzer(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]
Bases: AnalyzerBase
LRP analyzer that uses the standard-deviation-based epsilon rule.
This analyzer implements the StdxEpsilon rule where the epsilon value for stabilization is based on a factor of the standard deviation of the input.
- __init__(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]
Initialize LRPStdxEpsilonAnalyzer.
- Parameters:
model – PyTorch model
stdfactor – Factor of the input's standard deviation used as epsilon (default: 0.25)
bias – Whether to include bias in the computation (default: True)
**kwargs – Additional parameters
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input using StdxEpsilon rule.
- Parameters:
input_tensor (torch.Tensor) – Input tensor to analyze.
target_class (Optional[Union[int, torch.Tensor]], optional) – Target class. Default: None (uses argmax).
**kwargs – Additional keyword arguments.
- Returns:
Attribution map.
- Return type:
np.ndarray
- class signxai.torch_signxai.methods_impl.zennit_impl.analyzers.DeepLiftAnalyzer(model: Module, baseline_type: str = 'zero', **kwargs)[source]
Bases: AnalyzerBase
DeepLift implementation designed to match TensorFlow's implementation.
This implementation follows the DeepLIFT algorithm from “Learning Important Features Through Propagating Activation Differences” (Shrikumar et al.) and is designed to be compatible with TensorFlow's implementation in iNNvestigate.
It uses the Rescale rule from the paper and implements a modified backward pass that considers the difference between activations and reference activations.
- __init__(model: Module, baseline_type: str = 'zero', **kwargs)[source]
Initialize DeepLiftAnalyzer.
- Parameters:
model – PyTorch model to analyze
baseline_type – Type of baseline to use (“zero”, “black”, “white”, “gaussian”)
**kwargs – Additional parameters
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input using DeepLift approach.
- Parameters:
input_tensor – Input tensor to analyze
target_class – Target class for attribution
**kwargs – Additional parameters
- Returns:
Attribution map as numpy array
signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer module
Direct hook registration analyzer that bypasses Zennit’s composite system.
- class signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer.DirectStdxEpsilonHook(stdfactor: float = 1.0, layer_name: str = '')[source]
Bases: object
Direct hook implementation that bypasses Zennit's registration system.
- class signxai.torch_signxai.methods_impl.zennit_impl.direct_hook_analyzer.DirectLRPStdxEpsilonAnalyzer(model: Module, stdfactor: float = 1.0, **kwargs)[source]
Bases: AnalyzerBase
LRP StdX analyzer using direct hook registration to bypass Zennit's override system.
signxai.torch_signxai.methods_impl.zennit_impl.hooks module
Fixed and cleaned TensorFlow-exact implementations of LRP methods for PyTorch.
This module contains sophisticated hook implementations that achieve high correlation with TensorFlow iNNvestigate results by implementing the exact mathematical formulations.
Key improvements:
- GammaHook for proper LRP Gamma methods (fixes correlation ~0.37)
- StdxEpsilonHook for StdX methods (fixes correlation as low as 0.030)
- FlatHook for LRP Flat methods (fixes negative correlation -0.389)
- Enhanced LRP Sign methods with proper TF-exact implementations (fixes correlation 0.033)
- Removed backward-compatibility code for cleaner organization
- All implementations now target 100% working methods with high correlation to TensorFlow
Fixed methods summary:
- lrp_gamma: uses GammaHook with the sophisticated four-combination TF algorithm
- lrp_flat: uses FlatHook with enhanced SafeDivide operations
- lrpsign_sequential_composite_a: uses a layered SIGN -> AlphaBeta -> Epsilon approach
- All stdx methods: use StdxEpsilonHook with the proper TF standard deviation calculation
- All methods with stdfactor > 0: now use TF-exact epsilon = std(input) * stdfactor
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpBaseHook(is_input_layer: bool = False)[source]
Bases: Hook
Base class for TF-exact LRP hooks. It handles the common logic of storing input/output tensors and computing the gradient-like operation.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradBaseAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]
Bases: object
Base class for VarGrad methods, handling noise generation and gradient accumulation.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.GammaHook(gamma: float = 0.5, bias: bool = True)[source]
Bases: Hook
Corrected Gamma hook that exactly matches TensorFlow iNNvestigate's GammaRule.
TensorFlow GammaRule algorithm:
1. Separate positive and negative weights.
2. Create positive-only inputs (ins_pos = ins * (ins > 0)).
3. Compute four combinations:
   - Zs_pos = positive_weights * positive_inputs
   - Zs_act = all_weights * all_inputs
   - Zs_pos_act = all_weights * positive_inputs
   - Zs_act_pos = positive_weights * all_inputs
4. Apply gamma weighting: gamma * activator_relevances - all_relevances.
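For a single linear layer, the four pre-activation combinations from step 3 can be written out directly (a sketch of the listed quantities only, not the hook's full relevance redistribution):

    import torch

    def gamma_combinations(w, x):
        # The four Zs terms from the GammaRule description, for y = x @ w.T.
        w_pos = w.clamp(min=0)        # positive weights only
        x_pos = x.clamp(min=0)        # positive-only inputs: ins * (ins > 0)
        zs_pos = x_pos @ w_pos.T      # positive_weights * positive_inputs
        zs_act = x @ w.T              # all_weights * all_inputs
        zs_pos_act = x_pos @ w.T      # all_weights * positive_inputs
        zs_act_pos = x @ w_pos.T      # positive_weights * all_inputs
        return zs_pos, zs_act, zs_pos_act, zs_act_pos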
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.StdxEpsilonHook(stdfactor: float = 0.25, bias: bool = True, use_global_std: bool = False)[source]
Bases: Hook
Enhanced TensorFlow-exact StdxEpsilon hook that matches iNNvestigate's StdxEpsilonRule.
Key features:
1. Dynamic epsilon = std(input) * stdfactor (TF-compatible calculation)
2. TensorFlow-compatible sign handling for epsilon
3. Proper relevance conservation
4. Improved numerical stability
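The dynamic epsilon from feature 1 is a one-line computation; a sketch of the sign-matched stabilization (an illustration of the formula, not the hook's code):

    import torch

    def stdx_stabilize(z, input_tensor, stdfactor=0.25):
        # Epsilon scales with the input's standard deviation (TF-style),
        # and is added with the sign of the pre-activation z.
        eps = input_tensor.std() * stdfactor
        sign = torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
        return z + eps * sign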
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.FlatHook(epsilon: float = 1e-06)[source]
Bases: Hook
Custom Flat hook that exactly matches iNNvestigate's FlatRule implementation.
From iNNvestigate: FlatRule sets all weights to ones and no biases, then uses SafeDivide operations for relevance redistribution.
CRITICAL FIX: Handles numerical instability when flat outputs are near zero.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.EpsilonHook(epsilon: float = 1e-07, is_input_layer: bool = False)[source]
Bases: LrpBaseHook
Standard TF-exact Epsilon hook.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.SignEpsilonHook(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, input_layer_rule: str = 'sign', is_input_layer: bool = False)[source]
Bases: LrpBaseHook
A unified hook for all lrp.sign_epsilon variants. It handles standard epsilon, StdX epsilon, and SIGN or SIGN-mu on the input layer.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpSignEpsilonMuHook(epsilon: float = 0.0, mu: float = 0.0, is_input_layer: bool = False)[source]
Bases: SignEpsilonHook
Hook for LRP SIGN epsilon with mu parameter.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpSignEpsilonStdXHook
alias of
SignEpsilonHook
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.LrpSignEpsilonStdXMuHook(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, is_input_layer: bool = False)[source]
Bases: SignEpsilonHook
Hook for LRP SIGN epsilon with StdX and mu parameters.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.WSquareHook(epsilon: float = 1e-06)[source]
Bases: Hook
iNNvestigate-compatible W^2 hook.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.SignHook[source]
Bases: Hook
Corrected SIGN hook.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.SignMuHook(mu: float = 0.0)[source]
Bases: Hook
Corrected SIGN-mu hook.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]
Bases: VarGradBaseAnalyzer
Standard VarGrad.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradXInputAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]
Bases: VarGradBaseAnalyzer
VarGrad * Input.
- class signxai.torch_signxai.methods_impl.zennit_impl.hooks.VarGradXSignAnalyzer(model: Module, noise_scale: float = 0.2, augment_by_n: int = 50)[source]
Bases: VarGradBaseAnalyzer
VarGrad * sign(Input).
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrp_composite(first_layer_rule: Hook | type, default_rule: Hook | type, last_layer_rule: Hook | type | None = None, first_layer_params: dict = {}, default_params: dict = {}, last_layer_params: dict = {}) Callable[source]
A generic factory to create complex LRP composites.
This function can create composites for rules like LRP-Z, W^2-LRP, and Sequential Composites by specifying different rules and parameters for the first, last, and default layers.
- Parameters:
first_layer_rule – The zennit.rule class for the first layer (e.g., ZPlus, WSquare).
default_rule – The zennit.rule class for all other layers (e.g., Epsilon, AlphaBeta).
last_layer_rule – Optional rule for the last layers (e.g., for sequential composites).
first_layer_params / default_params / last_layer_params – Dictionaries of parameters for the corresponding rules.
- Returns:
A Zennit Composite instance configured with the specified rules.
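A sketch of building a composite with this factory (the rule classes come from zennit.rules; the parameter values are illustrative):

    from zennit.rules import Epsilon, WSquare
    from signxai.torch_signxai.methods_impl.zennit_impl.hooks import lrp_composite

    # W^2 rule on the first layer, epsilon-stabilized LRP everywhere else.
    composite = lrp_composite(
        first_layer_rule=WSquare,
        default_rule=Epsilon,
        default_params={"epsilon": 0.1},
    )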
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon(epsilon: float = 0.0, stdfactor: float = 0.0, **kwargs) Callable[source]
Creates a composite for lrp.sign_epsilon variants.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_mu(epsilon: float = 0.0, mu: float = 0.0, **kwargs) Callable[source]
Creates a composite for LRP SIGN epsilon mu.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_std_x(epsilon: float = 0.0, stdfactor: float = 0.0, **kwargs) Callable[source]
Creates a composite for LRP SIGN epsilon with StdX.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpsign_epsilon_std_x_mu(epsilon: float = 0.0, stdfactor: float = 0.0, mu: float = 0.0, **kwargs) Callable[source]
Creates a composite for LRP SIGN epsilon with StdX and mu.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_epsilon(epsilon: float = 0.1) Composite[source]
Creates a composite for LRP-Z + Epsilon.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_epsilon(epsilon: float = 0.1) Composite[source]
Creates a composite for W^2-LRP + Epsilon.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_stdx_epsilon(epsilon: float = 0.1, stdfactor: float = 0.0) Composite[source]
Creates a composite for W^2-LRP + StdX Epsilon.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_stdx_epsilon(epsilon: float = 0.1, stdfactor: float = 0.0) Composite[source]
Creates a composite for LRP-Z + StdX Epsilon.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.stdx_epsilon(epsilon: float = 0.1, stdfactor: float = 0.25) Callable[source]
Creates a composite for StdX Epsilon using StdxEpsilonHook.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_sequential_composite_a(epsilon: float = 0.1) Composite[source]
Creates a composite for LRP-Z + Sequential Composite A.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.lrpz_sequential_composite_b(epsilon: float = 0.1) Composite[source]
Creates a composite for LRP-Z + Sequential Composite B.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_sequential_composite_a(epsilon: float = 0.1) Composite[source]
Creates a composite for W^2-LRP + Sequential Composite A.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_sequential_composite_b(epsilon: float = 0.1) Composite[source]
Creates a composite for W^2-LRP + Sequential Composite B.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.epsilon_composite(epsilon: float = 0.1) Composite[source]
Creates a standard epsilon composite.
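These composite factories plug into Zennit's standard attribution flow; a sketch (the stand-in model and one-hot output seed are illustrative):

    import torch
    import torch.nn as nn
    from zennit.attribution import Gradient
    from signxai.torch_signxai.methods_impl.zennit_impl.hooks import epsilon_composite

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
    x = torch.randn(1, 3, 8, 8)

    composite = epsilon_composite(epsilon=0.1)
    with Gradient(model=model, composite=composite) as attributor:
        # The second argument seeds the backward pass with a one-hot output.
        output, relevance = attributor(x, torch.eye(10)[[4]])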
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.zplus_composite() Composite[source]
Creates ZPlus composite.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.zbox_composite(low: float = -1.0, high: float = 1.0) Composite[source]
Creates ZBox composite.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.wsquare_composite_standard() Composite[source]
Creates standard WSquare composite.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.sequential_composite(epsilon: float = 0.1, alpha: float = 2.0, beta: float = 1.0) Composite[source]
Creates sequential composite with proper layer assignment.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.alphabeta_composite(alpha: float = 2.0, beta: float = 1.0) Composite[source]
Creates alpha-beta composite.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.flat_composite() Composite[source]
Creates flat composite using FlatHook.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.wsquare_composite() Composite[source]
Creates WSquare composite using WSquareHook for improved correlation.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.gamma_composite(gamma: float = 0.25) Composite[source]
Creates gamma composite using GammaHook.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.sign_composite() Composite[source]
Creates SIGN composite.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.w2lrp_composite_a(epsilon: float = 0.1) Composite[source]
Creates W^2-LRP composite A.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_analyzer(model: Module, **kwargs) VarGradAnalyzer[source]
Creates a TF-exact VarGrad analyzer.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_x_input_analyzer(model: Module, **kwargs) VarGradXInputAnalyzer[source]
Creates a TF-exact VarGrad x Input analyzer.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_x_sign_analyzer(model: Module, **kwargs) VarGradXSignAnalyzer[source]
Creates a TF-exact VarGrad x Sign analyzer.
- signxai.torch_signxai.methods_impl.zennit_impl.hooks.vargrad_x_input_x_sign_analyzer(model: Module, **kwargs) VarGradXSignAnalyzer[source]
Creates a TF-exact VarGrad x Input x Sign analyzer.
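A usage sketch for these factory functions (keyword names follow the VarGrad constructors above; the analyze() call is an assumption that these analyzers mirror the interface of the analyzers module):

    import torch
    import torch.nn as nn
    from signxai.torch_signxai.methods_impl.zennit_impl.hooks import vargrad_analyzer

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
    analyzer = vargrad_analyzer(model, noise_scale=0.2, augment_by_n=50)
    # Assumed entry point, mirroring the analyzers module.
    variance_map = analyzer.analyze(torch.randn(1, 3, 8, 8), target_class=2)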
signxai.torch_signxai.methods_impl.zennit_impl.sign_rule module
SIGN and SIGNmu rule implementations for Zennit and PyTorch. These custom rules implement the SIGN and SIGNmu rules from TensorFlow iNNvestigate.
- class signxai.torch_signxai.methods_impl.zennit_impl.sign_rule.SIGNRule(bias=True)[source]
Bases: BasicHook
SIGN rule from the TensorFlow implementation. This rule uses the sign of the input to propagate relevance.
- Parameters:
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- __init__(bias=True)[source]
Initialize SIGN rule.
- Parameters:
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- forward(module, input_tensor, output_tensor)[source]
Store input and output tensors for the backward pass.
- Parameters:
module (nn.Module) – PyTorch module for which this rule is being applied.
input_tensor (Tensor) – Input tensor to the module.
output_tensor (Tensor) – Output tensor from the module.
- Returns:
The output tensor and the backward function.
- Return type:
Tuple[Tensor, callable]
- class signxai.torch_signxai.methods_impl.zennit_impl.sign_rule.SIGNmuRule(mu=0.0, bias=True)[source]
Bases: BasicHook
SIGNmu rule from the TensorFlow implementation. This rule uses a threshold mu to determine the sign of the input for relevance propagation.
- Parameters:
mu (float, optional) – Threshold for determining the sign of the input. Default: 0.0.
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- forward(module, input_tensor, output_tensor)[source]
Store input and output tensors for the backward pass.
- Parameters:
module (nn.Module) – PyTorch module for which this rule is being applied.
input_tensor (Tensor) – Input tensor to the module.
output_tensor (Tensor) – Output tensor from the module.
- Returns:
The output tensor and the backward function.
- Return type:
Tuple[Tensor, callable]
signxai.torch_signxai.methods_impl.zennit_impl.stdx_rule module
StdxEpsilon rule implementation for Zennit and PyTorch. This custom rule implements the StdxEpsilonRule from TensorFlow iNNvestigate.
- class signxai.torch_signxai.methods_impl.zennit_impl.stdx_rule.StdxEpsilon(stdfactor=0.25, bias=True)[source]
Bases: Epsilon
StdxEpsilon rule from the TensorFlow iNNvestigate implementation. This rule is similar to the Epsilon rule but uses a multiple of the standard deviation of the input as epsilon for stabilization.
- Parameters:
stdfactor (float, optional) – Factor of the input's standard deviation used as epsilon. Default: 0.25.
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- __init__(stdfactor=0.25, bias=True)[source]
Initialize StdxEpsilon rule with the standard deviation factor.
- gradient_mapper(input_tensor, output_gradient)[source]
Custom gradient mapper that calculates epsilon based on input standard deviation. Matches TensorFlow’s StdxEpsilonRule implementation exactly.
- Parameters:
input_tensor (torch.Tensor) – Input tensor to the layer.
output_gradient (torch.Tensor) – Gradient from the next layer.
- Returns:
Modified gradient based on StdxEpsilon rule.
- Return type:
torch.Tensor
Module contents
Zennit-based implementation details for PyTorch XAI methods. This subpackage relies on the Zennit library.
- signxai.torch_signxai.methods_impl.zennit_impl.calculate_relevancemap(model: Module, input_tensor: Tensor, method: str, target_class: int | Tensor | None = None, neuron_selection: int | Tensor | None = None, **kwargs: Any) ndarray[source]
Calculates a relevance map for a given input using Zennit-based methods.
- Parameters:
model – PyTorch model
input_tensor – Input tensor
method – Name of the explanation method to dispatch to
target_class – Target class index (None for argmax)
neuron_selection – Alternative way to specify the target neuron
**kwargs – Additional method-specific arguments
- Returns:
Attribution as numpy array
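A usage sketch of this entry point (the method string "gradient" is an assumed name corresponding to GradientAnalyzer; see the analyzers listed below):

    import torch
    import torch.nn as nn
    from signxai.torch_signxai.methods_impl.zennit_impl import calculate_relevancemap

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
    x = torch.randn(1, 3, 8, 8)

    # "gradient" is an assumed method name; the dispatcher maps it to an analyzer.
    relevance = calculate_relevancemap(model, x, method="gradient", target_class=1)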
- class signxai.torch_signxai.methods_impl.zennit_impl.GradientAnalyzer(model: Module)[source]
Bases: AnalyzerBase
Vanilla gradients analyzer.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Calculate gradient of model output with respect to input.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
- Returns:
Gradient with respect to input as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.SmoothGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]
Bases: AnalyzerBase
SmoothGrad analyzer.
- __init__(model: Module, noise_level: float = 0.2, num_samples: int = 50, stdev_spread=None)[source]
Initialize SmoothGradAnalyzer.
- Parameters:
model – PyTorch model
noise_level – Relative scale of the Gaussian noise added to the input
num_samples – Number of noisy samples averaged over
stdev_spread – Optional alternative specification of the noise spread
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.IntegratedGradientsAnalyzer(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]
Bases: AnalyzerBase
Integrated gradients analyzer implemented with a basic interpolation loop rather than Zennit's direct IG.
- __init__(model: Module, steps: int = 50, baseline_type: str = 'zero')[source]
Initialize IntegratedGradientsAnalyzer.
- Parameters:
model – PyTorch model
steps – Number of integration steps
baseline_type – Type of baseline used as the integration start point (default: "zero")
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.LRPAnalyzer(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]
Bases: AnalyzerBase
Layer-wise Relevance Propagation (LRP) analyzer using Zennit.
- __init__(model: Module, rule_name: str = 'epsilon', epsilon: float = 1e-06, alpha: float = 1.0, beta: float = 0.0, **rule_kwargs)[source]
Initialize LRPAnalyzer.
- Parameters:
model – PyTorch model
rule_name – Name of the LRP rule to apply (default: "epsilon")
epsilon – Stabilization term for the epsilon rule
alpha – Alpha parameter for alpha-beta rules
beta – Beta parameter for alpha-beta rules
**rule_kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.AdvancedLRPAnalyzer(model: Module, variant: str = 'epsilon', **kwargs)[source]
Bases: AnalyzerBase
Advanced Layer-wise Relevance Propagation (LRP) analyzer with multiple rule variants.
- __init__(model: Module, variant: str = 'epsilon', **kwargs)[source]
Initialize AdvancedLRPAnalyzer.
- Parameters:
model – PyTorch model
variant – Name of the LRP rule variant (default: "epsilon")
**kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.LRPSequential(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]
Bases: AnalyzerBase
Sequential LRP with different rules for different parts of the network. This implementation matches the TensorFlow LRPSequentialComposite variants, which apply different rules to different layers in the network.
- __init__(model: Module, first_layer_rule_name: str = 'zbox', middle_layer_rule_name: str = 'alphabeta', last_layer_rule_name: str = 'epsilon', variant: str | None = None, **kwargs)[source]
Initialize LRPSequential.
- Parameters:
model – PyTorch model
first_layer_rule_name – Rule applied to the first layer (default: "zbox")
middle_layer_rule_name – Rule applied to the middle layers (default: "alphabeta")
last_layer_rule_name – Rule applied to the last layers (default: "epsilon")
variant – Optional preset selecting a TensorFlow-compatible rule combination
**kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input using LRP with the configured rule variant.
- Parameters:
input_tensor – Input tensor to analyze
target_class – Target class for attribution
**kwargs – Additional parameters
- Returns:
Attribution map as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.BoundedLRPAnalyzer(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]
Bases: AnalyzerBase
LRP analyzer that enforces input bounds with the ZBox rule at the first layer and applies the specified rules elsewhere.
- __init__(model: Module, low: float = 0.0, high: float = 1.0, rule_name: str = 'epsilon', **kwargs)[source]
Initialize BoundedLRPAnalyzer.
- Parameters:
model – PyTorch model
low – Lower input bound for the ZBox rule
high – Upper input bound for the ZBox rule
rule_name – Rule applied to the remaining layers (default: "epsilon")
**kwargs – Additional rule-specific arguments
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.LRPStdxEpsilonAnalyzer(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]
Bases: AnalyzerBase
LRP analyzer that uses the standard-deviation-based epsilon rule.
This analyzer implements the StdxEpsilon rule where the epsilon value for stabilization is based on a factor of the standard deviation of the input.
- __init__(model: Module, stdfactor: float = 0.25, bias: bool = True, **kwargs)[source]
Initialize LRPStdxEpsilonAnalyzer.
- Parameters:
model – PyTorch model
stdfactor – Factor of the input's standard deviation used as epsilon (default: 0.25)
bias – Whether to include bias in the computation (default: True)
**kwargs – Additional parameters
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input using StdxEpsilon rule.
- Parameters:
input_tensor (torch.Tensor) – Input tensor to analyze.
target_class (Optional[Union[int, torch.Tensor]], optional) – Target class. Default: None (uses argmax).
**kwargs – Additional keyword arguments.
- Returns:
Attribution map.
- Return type:
np.ndarray
- class signxai.torch_signxai.methods_impl.zennit_impl.DeepLiftAnalyzer(model: Module, baseline_type: str = 'zero', **kwargs)[source]
Bases: AnalyzerBase
DeepLift implementation designed to match TensorFlow's implementation.
This implementation follows the DeepLIFT algorithm from “Learning Important Features Through Propagating Activation Differences” (Shrikumar et al.) and is designed to be compatible with TensorFlow's implementation in iNNvestigate.
It uses the Rescale rule from the paper and implements a modified backward pass that considers the difference between activations and reference activations.
- __init__(model: Module, baseline_type: str = 'zero', **kwargs)[source]
Initialize DeepLiftAnalyzer.
- Parameters:
model – PyTorch model to analyze
baseline_type – Type of baseline to use (“zero”, “black”, “white”, “gaussian”)
**kwargs – Additional parameters
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input using DeepLift approach.
- Parameters:
input_tensor – Input tensor to analyze
target_class – Target class for attribution
**kwargs – Additional parameters
- Returns:
Attribution map as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.GuidedBackpropAnalyzer(model: Module)[source]
Bases: AnalyzerBase
Guided Backpropagation analyzer using Zennit's composite.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.DeconvNetAnalyzer(model: Module)[source]
Bases: AnalyzerBase
DeconvNet explanation method using Zennit.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.GradCAMAnalyzer(model: Module, target_layer: Module | None = None)[source]
Bases: AnalyzerBase
Grad-CAM analyzer.
- __init__(model: Module, target_layer: Module | None = None)[source]
Initialize GradCAMAnalyzer.
- Parameters:
model – PyTorch model
target_layer – Layer whose activations and gradients are used for Grad-CAM (None to select one automatically)
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.GradientXSignAnalyzer(model: Module, mu: float = 0.0)[source]
Bases: AnalyzerBase
Gradient × Sign analyzer.
- __init__(model: Module, mu: float = 0.0)[source]
Initialize GradientXSignAnalyzer.
- Parameters:
model – PyTorch model
mu – Threshold parameter for the sign function
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Calculate gradient × sign of model output with respect to input.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
mu – Threshold parameter for sign function
- Returns:
Gradient × sign with respect to input as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.GradientXInputAnalyzer(model: Module)[source]
Bases: AnalyzerBase
Gradient × Input analyzer.
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Calculate gradient × input of model output with respect to input.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
- Returns:
Gradient × input with respect to input as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.VarGradAnalyzer(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]
Bases: AnalyzerBase
VarGrad analyzer.
- __init__(model: Module, noise_level: float = 0.2, num_samples: int = 50)[source]
Initialize VarGradAnalyzer.
- Parameters:
model – PyTorch model
noise_level – Scale of the Gaussian noise added to the input
num_samples – Number of noisy samples used to estimate the gradient variance
- analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.DeepTaylorAnalyzer(model: Module, epsilon: float = 1e-06)[source]
Bases: AnalyzerBase
Deep Taylor analyzer.
- class signxai.torch_signxai.methods_impl.zennit_impl.AnalyzerBase(model: Module)[source]
Bases: ABC
Base class for all analyzers.
- abstract analyze(input_tensor: Tensor, target_class: int | Tensor | None = None, **kwargs) ndarray[source]
Analyze input tensor and return attribution.
- Parameters:
input_tensor – Input tensor
target_class – Target class index (None for argmax)
**kwargs – Additional arguments for specific analyzers
- Returns:
Attribution as numpy array
- class signxai.torch_signxai.methods_impl.zennit_impl.StdxEpsilon(stdfactor=0.25, bias=True)[source]
Bases: Epsilon
StdxEpsilon rule from the TensorFlow iNNvestigate implementation. This rule is similar to the Epsilon rule but uses a multiple of the standard deviation of the input as epsilon for stabilization.
- Parameters:
stdfactor (float, optional) – Factor of the input's standard deviation used as epsilon. Default: 0.25.
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- __init__(stdfactor=0.25, bias=True)[source]
Initialize StdxEpsilon rule with the standard deviation factor.
- gradient_mapper(input_tensor, output_gradient)[source]
Custom gradient mapper that calculates epsilon based on input standard deviation. Matches TensorFlow’s StdxEpsilonRule implementation exactly.
- Parameters:
input_tensor (torch.Tensor) – Input tensor to the layer.
output_gradient (torch.Tensor) – Gradient from the next layer.
- Returns:
Modified gradient based on StdxEpsilon rule.
- Return type:
torch.Tensor
- class signxai.torch_signxai.methods_impl.zennit_impl.SIGNRule(bias=True)[source]
Bases: BasicHook
SIGN rule from the TensorFlow implementation. This rule uses the sign of the input to propagate relevance.
- Parameters:
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- __init__(bias=True)[source]
Initialize SIGN rule.
- Parameters:
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- forward(module, input_tensor, output_tensor)[source]
Store input and output tensors for the backward pass.
- Parameters:
module (nn.Module) – PyTorch module for which this rule is being applied.
input_tensor (Tensor) – Input tensor to the module.
output_tensor (Tensor) – Output tensor from the module.
- Returns:
The output tensor and the backward function.
- Return type:
Tuple[Tensor, callable]
- class signxai.torch_signxai.methods_impl.zennit_impl.SIGNmuRule(mu=0.0, bias=True)[source]
Bases: BasicHook
SIGNmu rule from the TensorFlow implementation. This rule uses a threshold mu to determine the sign of the input for relevance propagation.
- Parameters:
mu (float, optional) – Threshold for determining the sign of the input. Default: 0.0.
bias (bool, optional) – Whether to include bias in the computation. Default: True.
- forward(module, input_tensor, output_tensor)[source]
Store input and output tensors for the backward pass.
- Parameters:
module (nn.Module) – PyTorch module for which this rule is being applied.
input_tensor (Tensor) – Input tensor to the module.
output_tensor (Tensor) – Output tensor from the module.
- Returns:
The output tensor and the backward function.
- Return type:
Tuple[Tensor, callable]