signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based package
Submodules
signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer module
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRP(model, *args, **kwargs)[source]
Bases: ReverseAnalyzerBase
Base class for LRP-based model analyzers.
- Parameters:
model – A Keras model.
rule – a string, a Rule object, lists thereof, or a list of conditions [(Condition, Rule), … ]
input_layer_rule – either a Rule object or a tuple (low, high) giving the min/max pixel values of the inputs
bn_layer_rule – either a Rule object or None; None means a dedicated BN rule will be applied.
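To make the redistribution performed by these analyzers concrete, the following is a minimal illustrative sketch (not the library implementation) of the basic LRP-Z rule for a single dense layer: each contribution z_ij = x_i * w_ij receives the share of the output relevance proportional to its part of the pre-activation z_j.

```python
# Illustrative sketch of the LRP-Z redistribution for one dense layer.
# This is NOT the library code; it only demonstrates the rule itself.
import numpy as np

def lrp_z_dense(x, w, relevance_out):
    """Redistribute output relevance to the inputs with the Z-rule."""
    z = x[:, None] * w                       # contributions z_ij
    z_sum = z.sum(axis=0)                    # pre-activations z_j
    z_sum[z_sum == 0] = 1e-12                # guard against division by zero
    return (z * (relevance_out / z_sum)[None, :]).sum(axis=1)

x = np.array([1.0, 2.0])
w = np.array([[1.0, 0.0],
              [0.0, 1.0]])
r_in = lrp_z_dense(x, w, relevance_out=np.array([1.0, 1.0]))
# Relevance is conserved: r_in sums to the total output relevance.
```

Note the conservation property: the input relevances sum to the relevance injected at the output, which is the defining invariant of LRP decompositions.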
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZ(model, *args, **kwargs)[source]
Bases: _LRPFixedParams
LRP analyzer that uses the LRP-Z rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZIgnoreBias(model, *args, **kwargs)[source]
Bases: _LRPFixedParams
LRP analyzer that uses the LRP-Z-ignore-bias rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPEpsilon(model, epsilon=1e-07, bias=True, *args, **kwargs)[source]
Bases: _LRPFixedParams
LRP analyzer that uses the LRP-Epsilon rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPEpsilonIgnoreBias(model, epsilon=1e-07, *args, **kwargs)[source]
Bases: LRPEpsilon
LRP analyzer that uses the LRP-Epsilon-ignore-bias rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPStdxEpsilon(model, epsilon=1e-07, stdfactor=0.25, bias=True, *args, **kwargs)[source]
Bases: _LRPFixedParams
LRP analyzer that uses the Std(x) LRP-Epsilon rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPWSquare(model, *args, **kwargs)[source]
Bases: _LRPFixedParams
LRP analyzer that uses the DeepTaylor W**2 rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPFlat(model, *args, **kwargs)[source]
Bases: _LRPFixedParams
LRP analyzer that uses the LRP-Flat rule.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlphaBeta(model, alpha=None, beta=None, bias=True, *args, **kwargs)[source]
Bases: LRP
Base class for LRP-AlphaBeta analyzers.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha2Beta1(model, *args, **kwargs)[source]
Bases: _LRPAlphaBetaFixedParams
LRP analyzer that uses the LRP-alpha-beta rule with alpha=2, beta=1.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha2Beta1IgnoreBias(model, *args, **kwargs)[source]
Bases: _LRPAlphaBetaFixedParams
LRP analyzer that uses the LRP-alpha-beta-ignore-bias rule with alpha=2, beta=1.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha1Beta0(model, *args, **kwargs)[source]
Bases: _LRPAlphaBetaFixedParams
LRP analyzer that uses the LRP-alpha-beta rule with alpha=1, beta=0.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha1Beta0IgnoreBias(model, *args, **kwargs)[source]
Bases: _LRPAlphaBetaFixedParams
LRP analyzer that uses the LRP-alpha-beta-ignore-bias rule with alpha=1, beta=0.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZPlus(model, *args, **kwargs)[source]
Bases: LRPAlpha1Beta0IgnoreBias
LRP analyzer that uses the LRP-alpha-beta rule with alpha=1, beta=0.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZPlusFast(model, *args, **kwargs)[source]
Bases: _LRPFixedParams
The ZPlus rule is a special case of the AlphaBetaRule with alpha=1, beta=0, and assumes inputs x >= 0.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPGamma(model, *args, gamma=0.5, bias=True, **kwargs)[source]
Bases: LRP
Base class for LRP-Gamma analyzers.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeA(model, epsilon=0.1, *args, **kwargs)[source]
Bases: _LRPFixedParams
Special LRP configuration for ConvNets.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeB(model, epsilon=0.1, *args, **kwargs)[source]
Bases: _LRPFixedParams
Special LRP configuration for ConvNets.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeAFlat(model, *args, **kwargs)[source]
Bases: LRPSequentialCompositeA
Special LRP configuration for ConvNets.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeBFlat(model, *args, **kwargs)[source]
Bases: LRPSequentialCompositeB
Special LRP configuration for ConvNets.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPRuleUntilIndex(model, *args, **kwargs)[source]
Bases: object
Relatively dynamic rule wrapper.
Applies the rule specified by until_index_rule to all layers up to and including the layer with the specified index (counted in the direction input → output). The specified LRP configuration is applied to all other layers.
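The layer-selection logic described above can be sketched as follows. This is a hypothetical simplification (function and rule names are illustrative, not the library's API), showing only how rules would be assigned by layer index:

```python
# Hypothetical sketch of LRPRuleUntilIndex-style rule assignment:
# layers 0..until_index (input -> output) get `until_index_rule`,
# all remaining layers get the default LRP configuration.
def assign_rules(num_layers, until_index, until_index_rule, default_rule):
    return [until_index_rule if i <= until_index else default_rule
            for i in range(num_layers)]

rules = assign_rules(5, until_index=1,
                     until_index_rule="Flat", default_rule="Epsilon")
# -> ['Flat', 'Flat', 'Epsilon', 'Epsilon', 'Epsilon']
```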
signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base module
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.BatchNormalizationReverseRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
Special BN handler that applies the Z-Rule.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AddReverseRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
Special Add layer handler that applies the Z-Rule.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AveragePoolingReverseRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
Special AveragePooling handler that applies the Z-Rule.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
Basic LRP decomposition rule.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZIgnoreBiasRule(*args, **kwargs)[source]
Bases: ZRule
Basic LRP decomposition rule, ignoring the bias neuron.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.EpsilonRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
Similar to ZRule, with the addition of a numerical stabilizer term epsilon in the decomposition function's denominator. The sign of epsilon depends on the sign of the output activation; 0 is considered positive, i.e., sign(0) = 1.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
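The epsilon stabilizer described for EpsilonRule can be sketched in a few lines. This is an illustrative snippet, not the library code; it only demonstrates the sign convention, where epsilon is added with the sign of the output activation and sign(0) is taken as +1:

```python
# Sketch of the epsilon stabilizer: the denominator z is pushed away
# from zero in the direction of its own sign, with sign(0) = +1.
import numpy as np

def stabilize(z, epsilon=1e-7):
    sign = np.where(z >= 0, 1.0, -1.0)   # 0 is treated as positive
    return z + sign * epsilon

z = np.array([2.0, -3.0, 0.0])
zs = stabilize(z, epsilon=0.1)
# -> array([ 2.1, -3.1,  0.1])
```

Because the stabilizer always moves the denominator away from zero (never across it), the subsequent division in the decomposition cannot blow up.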
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.EpsilonIgnoreBiasRule(*args, **kwargs)[source]
Bases: EpsilonRule
Same as EpsilonRule but ignores the bias.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.SIGNRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.SIGNmuRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.WSquareRule(layer, *args, **kwargs)[source]
Bases: ReplacementLayer
W**2 rule from Deep Taylor Decomposition.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.FlatRule(layer, *args, **kwargs)[source]
Bases: WSquareRule
Same as the W**2 rule but sets all weights to ones.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaRule(layer, *args, alpha=None, beta=None, bias=True, copy_weights=False, **kwargs)[source]
Bases: ReplacementLayer
This decomposition rule handles positive forward activations (w * x > 0) and negative forward activations (w * x < 0) independently, reducing the risk of zero divisions considerably. In fact, a division by zero can only occur if the activation has either no positive or no negative parts at all. Corresponding parameterizations of this rule implement methods such as Excitation Backpropagation (alpha=1, beta=0), subject to alpha - beta = 1 (under the current parameterization scheme) with alpha >= 1 and beta >= 0.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
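The alpha-beta decomposition described above can be sketched for a single dense layer. This is an illustrative snippet, not the library implementation: positive and negative contributions are normalized separately and then mixed as alpha * R+ - beta * R-, so that with alpha - beta = 1 the total relevance is conserved.

```python
# Sketch of the LRP alpha-beta rule for one dense layer: split the
# contributions z_ij = x_i * w_ij into positive and negative parts,
# redistribute each separately, and combine with weights alpha/beta.
import numpy as np

def lrp_alpha_beta_dense(x, w, relevance_out, alpha=2.0, beta=1.0):
    z = x[:, None] * w
    zp = np.maximum(z, 0.0)                  # positive contributions
    zn = np.minimum(z, 0.0)                  # negative contributions
    zp_sum = zp.sum(axis=0) + 1e-12          # stabilized denominators
    zn_sum = zn.sum(axis=0) - 1e-12
    rp = (zp * (relevance_out / zp_sum)[None, :]).sum(axis=1)
    rn = (zn * (relevance_out / zn_sum)[None, :]).sum(axis=1)
    return alpha * rp - beta * rn

x = np.array([1.0, -1.0])
w = np.array([[1.0],
              [1.0]])
r = lrp_alpha_beta_dense(x, w, np.array([1.0]), alpha=2.0, beta=1.0)
# With alpha - beta = 1, r sums to the injected output relevance.
```

Since positive and negative parts are divided only by sums of same-signed terms, the denominators vanish only when one of the two parts is entirely absent, which is the reduced failure mode noted in the class description.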
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaIgnoreBiasRule(*args, **kwargs)[source]
Bases: AlphaBetaRule
Same as AlphaBetaRule but ignores biases.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha2Beta1Rule(*args, **kwargs)[source]
Bases: AlphaBetaRule
AlphaBetaRule with alpha=2, beta=1.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha2Beta1IgnoreBiasRule(*args, **kwargs)[source]
Bases: AlphaBetaRule
AlphaBetaRule with alpha=2, beta=1, ignoring biases.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha1Beta0Rule(*args, **kwargs)[source]
Bases: AlphaBetaRule
AlphaBetaRule with alpha=1, beta=0.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha1Beta0IgnoreBiasRule(*args, **kwargs)[source]
Bases: AlphaBetaRule
AlphaBetaRule with alpha=1, beta=0, ignoring biases.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaXRule(layer, *args, alpha=None, beta=None, bias=True, copy_weights=False, **kwargs)[source]
Bases: ReplacementLayer
Advanced AlphaBeta variant as proposed by Alexander Binder.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX1000Rule(*args, **kwargs)[source]
Bases: AlphaBetaXRule
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX1010Rule(*args, **kwargs)[source]
Bases: AlphaBetaXRule
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX1001Rule(*args, **kwargs)[source]
Bases: AlphaBetaXRule
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX2m100Rule(*args, **kwargs)[source]
Bases: AlphaBetaXRule
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZPlusRule(*args, **kwargs)[source]
Bases: Alpha1Beta0IgnoreBiasRule
The ZPlus rule is a special case of the AlphaBetaRule with alpha=1, beta=0, which assumes inputs x >= 0 and ignores the bias. CAUTION: results differ from alpha=1, beta=0 if the inputs are not strictly >= 0.
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZPlusFastRule(layer, *args, copy_weights=False, **kwargs)[source]
Bases: ReplacementLayer
The ZPlus rule is a special case of the AlphaBetaRule with alpha=1, beta=0, and assumes inputs x >= 0.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
- class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.BoundedRule(layer, *args, copy_weights=False, **kwargs)[source]
Bases: ReplacementLayer
Z_B rule from the Deep Taylor Decomposition.
- wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]
Hook that wraps and applies the layer function, e.g., by defining a GradientTape. It should contain a call to self._neuron_select and may define wrappers around the layer call. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
neuron_selection – neuron_selection parameter (see try_apply)
stop_mapping_at_layers – None, or the stop_mapping_at_layers parameter (see try_apply)
r_init – reverse initialization value with which the explanation is initialized (i.e., head_mapping)
- Returns:
the output of the layer function, plus any wrappers that were defined and are needed in explain_hook
- explain_hook(ins, reversed_outs, args)[source]
Hook that computes the explanations; the core XAI functionality. To be extended for specific XAI methods.
- Parameters:
ins – input(s) of this layer
args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)
reversed_outs – the backpropagated explanation(s) of the child layers, or None if this is the last layer
- Returns:
the explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each)
signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.utils module
- signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.utils.assert_lrp_epsilon_param(epsilon, caller)[source]
Function for asserting the epsilon parameter passed to constructors inheriting from EpsilonRule and LRPEpsilon. The following condition cannot be met:
epsilon > 1
- Parameters:
epsilon – the epsilon parameter.
caller – the class instance calling this assertion function
- signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.utils.assert_infer_lrp_alpha_beta_param(alpha, beta, caller)[source]
Function for asserting the alpha and beta parameters passed to constructors inheriting from AlphaBetaRule and LRPAlphaBeta.
Since alpha and beta are constrained by alpha - beta = 1, it is sufficient to pass only one of the two parameters to the corresponding class constructor. This method raises an assertion error if both are None, or if the following conditions cannot be met:
alpha >= 1, beta >= 0, alpha - beta = 1
- Parameters:
alpha – the alpha parameter.
beta – the beta parameter
caller – the class instance calling this assertion function
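The inference behavior described above can be sketched as follows. This is a hypothetical simplification (the function name is illustrative, not the library's API): given only one of alpha/beta, the other follows from the constraint alpha - beta = 1, and the stated conditions are asserted.

```python
# Hypothetical sketch of alpha/beta inference under alpha - beta = 1.
def infer_alpha_beta(alpha=None, beta=None):
    assert alpha is not None or beta is not None, "pass alpha or beta"
    if alpha is None:
        alpha = beta + 1          # infer alpha from beta
    if beta is None:
        beta = alpha - 1          # infer beta from alpha
    assert alpha >= 1 and beta >= 0 and alpha - beta == 1
    return alpha, beta

infer_alpha_beta(alpha=2)   # -> (2, 1)
infer_alpha_beta(beta=0)    # -> (1, 0)
```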