signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based package

Submodules

signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer module

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRP(model, *args, **kwargs)[source]

Bases: ReverseAnalyzerBase

Base class for LRP-based model analyzers

Parameters:
  • model – A Keras model.

  • rule – a rule, which can be a string or a Rule object, lists thereof, or a list of conditions [(Condition, Rule), … ]

  • input_layer_rule – either a Rule object or a tuple (low, high) giving the min/max pixel values of the inputs

  • bn_layer_rule – either a Rule object or None; None means the dedicated BN rule will be applied
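
For orientation, the basic Z-rule redistribution that these analyzers build on can be sketched for a single dense layer as follows (an illustrative pure-Python sketch, not this module's implementation):

```python
# Illustrative sketch of the basic LRP-Z rule for one dense layer (not this
# module's implementation): R_i = sum_j (x_i * w_ij / z_j) * R_j,
# where z_j = sum_i x_i * w_ij + b_j. No stabilizer is used here; see the
# epsilon-based analyzers for a stabilized denominator.

def lrp_z(x, w, b, relevance_out):
    """Redistribute output relevance to the inputs with the Z-rule."""
    n_in, n_out = len(w), len(w[0])
    # Forward pre-activations z_j.
    z = [sum(x[i] * w[i][j] for i in range(n_in)) + b[j] for j in range(n_out)]
    # Backward pass: each input receives relevance proportional to its
    # contribution x_i * w_ij to z_j.
    return [
        sum(x[i] * w[i][j] / z[j] * relevance_out[j] for j in range(n_out))
        for i in range(n_in)
    ]
```

With a zero bias, the redistributed relevance sums to the output relevance (conservation).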

__init__(model, *args, **kwargs)[source]
create_rule_mapping(layer)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZ(model, *args, **kwargs)[source]

Bases: _LRPFixedParams

LRP-analyzer that uses the LRP-Z rule

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZIgnoreBias(model, *args, **kwargs)[source]

Bases: _LRPFixedParams

LRP-analyzer that uses the LRP-Z-ignore-bias rule

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPEpsilon(model, epsilon=1e-07, bias=True, *args, **kwargs)[source]

Bases: _LRPFixedParams

LRP-analyzer that uses the LRP-Epsilon rule

__init__(model, epsilon=1e-07, bias=True, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPEpsilonIgnoreBias(model, epsilon=1e-07, *args, **kwargs)[source]

Bases: LRPEpsilon

LRP-analyzer that uses the LRP-Epsilon-ignore-bias rule

__init__(model, epsilon=1e-07, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPStdxEpsilon(model, epsilon=1e-07, stdfactor=0.25, bias=True, *args, **kwargs)[source]

Bases: _LRPFixedParams

LRP-analyzer that uses the Std(x) LRP-Epsilon rule
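
A plausible reading of the Std(x) variant, stated here as an assumption rather than verified behavior of this module, is that the stabilizer scales with the standard deviation of the layer input times stdfactor:

```python
import math

# Assumed behavior (not verified against the implementation): the epsilon
# stabilizer is scaled by stdfactor times the standard deviation of the
# layer input, instead of being a fixed constant.

def stdx_epsilon(x, stdfactor=0.25):
    mean = sum(x) / len(x)
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    return stdfactor * std
```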

__init__(model, epsilon=1e-07, stdfactor=0.25, bias=True, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPWSquare(model, *args, **kwargs)[source]

Bases: _LRPFixedParams

LRP-analyzer that uses the DeepTaylor W**2 rule

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPFlat(model, *args, **kwargs)[source]

Bases: _LRPFixedParams

LRP-analyzer that uses the LRP-Flat rule

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlphaBeta(model, alpha=None, beta=None, bias=True, *args, **kwargs)[source]

Bases: LRP

Base class for LRP AlphaBeta

__init__(model, alpha=None, beta=None, bias=True, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha2Beta1(model, *args, **kwargs)[source]

Bases: _LRPAlphaBetaFixedParams

LRP-analyzer that uses the LRP-alpha-beta rule with alpha=2, beta=1

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha2Beta1IgnoreBias(model, *args, **kwargs)[source]

Bases: _LRPAlphaBetaFixedParams

LRP-analyzer that uses the LRP-alpha-beta-ignore-bias rule with alpha=2, beta=1

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha1Beta0(model, *args, **kwargs)[source]

Bases: _LRPAlphaBetaFixedParams

LRP-analyzer that uses the LRP-alpha-beta rule with alpha=1, beta=0

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPAlpha1Beta0IgnoreBias(model, *args, **kwargs)[source]

Bases: _LRPAlphaBetaFixedParams

LRP-analyzer that uses the LRP-alpha-beta-ignore-bias rule with alpha=1, beta=0

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZPlus(model, *args, **kwargs)[source]

Bases: LRPAlpha1Beta0IgnoreBias

LRP-analyzer that uses the LRP-alpha-beta rule with alpha=1, beta=0, ignoring the bias

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPZPlusFast(model, *args, **kwargs)[source]

Bases: _LRPFixedParams

The ZPlus rule is a special case of the AlphaBetaRule for alpha=1, beta=0 and assumes inputs x >= 0.

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPGamma(model, *args, gamma=0.5, bias=True, **kwargs)[source]

Bases: LRP

Base class for LRP Gamma
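
The LRP-gamma rule from the LRP literature, which this class presumably follows (an assumption; the module's exact variant may differ), favors positive contributions by reinforcing positive weights:

```python
# Hedged sketch of the standard LRP-gamma redistribution from the LRP
# literature (assumption: this module's variant may differ): weights are
# modified to w + gamma * max(w, 0), which favors positive contributions.

def lrp_gamma(x, w, relevance_out, gamma=0.5, eps=1e-12):
    n_in, n_out = len(w), len(w[0])

    def wg(i, j):
        # Gamma-modified weight: positive weights are amplified by (1 + gamma).
        return w[i][j] + gamma * max(w[i][j], 0.0)

    z = [sum(x[i] * wg(i, j) for i in range(n_in)) + eps for j in range(n_out)]
    return [
        sum(x[i] * wg(i, j) / z[j] * relevance_out[j] for j in range(n_out))
        for i in range(n_in)
    ]
```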

__init__(model, *args, gamma=0.5, bias=True, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeA(model, epsilon=0.1, *args, **kwargs)[source]

Bases: _LRPFixedParams

Special LRP-configuration for ConvNets

__init__(model, epsilon=0.1, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeB(model, epsilon=0.1, *args, **kwargs)[source]

Bases: _LRPFixedParams

Special LRP-configuration for ConvNets

__init__(model, epsilon=0.1, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeAFlat(model, *args, **kwargs)[source]

Bases: LRPSequentialCompositeA

Special LRP-configuration for ConvNets

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPSequentialCompositeBFlat(model, *args, **kwargs)[source]

Bases: LRPSequentialCompositeB

Special LRP-configuration for ConvNets

__init__(model, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_analyzer.LRPRuleUntilIndex(model, *args, **kwargs)[source]

Bases: object

Relatively dynamic rule wrapper

Applies the rule specified by until_index_rule to all layers up to and including the layer with the specified index (counted in the direction input → output).

For all other layers, the specified LRP-configuration is applied.
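
The index-based selection described above can be sketched as follows (hypothetical helper, illustrative only; names are not this class's actual attributes):

```python
# Hypothetical sketch of the index-based rule selection (illustrative only).

def select_rules(n_layers, until_index, until_index_rule, default_rule):
    """Rule per layer: `until_index_rule` up to and including `until_index`
    (counted input -> output), `default_rule` for the remaining layers."""
    return [
        until_index_rule if i <= until_index else default_rule
        for i in range(n_layers)
    ]
```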

__init__(model, *args, **kwargs)[source]
analyze(*args, **kwargs)[source]

signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base module

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.BatchNormalizationReverseRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

Special BN handler that applies the Z-Rule

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AddReverseRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

Special Add layer handler that applies the Z-Rule

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AveragePoolingReverseRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

Special AveragePooling handler that applies the Z-Rule

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZIgnoreBiasRule(*args, **kwargs)[source]

Bases: ZRule

Basic LRP decomposition rule, ignoring the bias neuron

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.EpsilonRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

Similar to ZRule. The only difference is the addition of a numerical stabilizer term epsilon to the decomposition function’s denominator. The sign of epsilon depends on the sign of the output activation; 0 is considered positive, i.e., sign(0) = 1.
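
The stabilizer described above can be sketched as follows (illustrative only, not this module's implementation):

```python
# Illustrative sketch of the epsilon stabilizer: the denominator z is shifted
# away from zero by epsilon in the direction of its own sign, with sign(0) = 1.

def stabilize(z, epsilon=1e-7):
    sign = 1.0 if z >= 0 else -1.0  # 0 counts as positive
    return z + sign * epsilon
```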

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.EpsilonIgnoreBiasRule(*args, **kwargs)[source]

Bases: EpsilonRule

Same as EpsilonRule but ignores the bias.

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.SIGNRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.SIGNmuRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.WSquareRule(layer, *args, **kwargs)[source]

Bases: ReplacementLayer

W**2 rule from Deep Taylor Decomposition
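
The W**2 redistribution distributes relevance in proportion to squared weights, so the input values themselves play no role; an illustrative sketch (not the library code):

```python
# Illustrative sketch of the W**2 rule for one dense layer (not the library
# code): R_i = sum_j (w_ij**2 / sum_k w_kj**2) * R_j.

def w_square(w, relevance_out):
    n_in, n_out = len(w), len(w[0])
    denom = [sum(w[i][j] ** 2 for i in range(n_in)) for j in range(n_out)]
    return [
        sum(w[i][j] ** 2 / denom[j] * relevance_out[j] for j in range(n_out))
        for i in range(n_in)
    ]
```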

__init__(layer, *args, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.FlatRule(layer, *args, **kwargs)[source]

Bases: WSquareRule

Same as W**2 rule but sets all weights to ones.

__init__(layer, *args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaRule(layer, *args, alpha=None, beta=None, bias=True, copy_weights=False, **kwargs)[source]

Bases: ReplacementLayer

This decomposition rule handles the positive forward activations (x * w > 0) and the negative forward activations (x * w < 0) independently, considerably reducing the risk of zero divisions. In fact, the only case where divisions by zero can happen is if there are either no positive or no negative parts to the activation at all. Corresponding parameterizations of this rule implement methods such as Excitation Backpropagation with alpha=1, beta=0, subject to alpha - beta = 1 (under the current parameterization scheme) and alpha >= 1, beta >= 0.
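
The split described above can be sketched for a single dense layer (illustrative pure Python, not the library code):

```python
# Illustrative sketch of the alpha-beta split for one dense layer (not the
# library code): positive contributions (x*w > 0) and negative contributions
# (x*w < 0) are redistributed separately and recombined as
# alpha * R_plus - beta * R_minus, with alpha - beta = 1 for conservation.

def alpha_beta(x, w, relevance_out, alpha=2.0, beta=1.0, eps=1e-12):
    n_in, n_out = len(w), len(w[0])
    z_pos = [sum(max(x[i] * w[i][j], 0.0) for i in range(n_in)) + eps
             for j in range(n_out)]
    z_neg = [sum(min(x[i] * w[i][j], 0.0) for i in range(n_in)) - eps
             for j in range(n_out)]
    r_in = []
    for i in range(n_in):
        r_pos = sum(max(x[i] * w[i][j], 0.0) / z_pos[j] * relevance_out[j]
                    for j in range(n_out))
        r_neg = sum(min(x[i] * w[i][j], 0.0) / z_neg[j] * relevance_out[j]
                    for j in range(n_out))
        r_in.append(alpha * r_pos - beta * r_neg)
    return r_in
```

With alpha=1, beta=0 this reduces to redistributing only the positive contributions (Excitation Backpropagation, as noted above).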

__init__(layer, *args, alpha=None, beta=None, bias=True, copy_weights=False, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaIgnoreBiasRule(*args, **kwargs)[source]

Bases: AlphaBetaRule

Same as AlphaBetaRule but ignores biases.

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha2Beta1Rule(*args, **kwargs)[source]

Bases: AlphaBetaRule

AlphaBetaRule with alpha=2, beta=1

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha2Beta1IgnoreBiasRule(*args, **kwargs)[source]

Bases: AlphaBetaRule

AlphaBetaRule with alpha=2, beta=1 and ignores biases

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha1Beta0Rule(*args, **kwargs)[source]

Bases: AlphaBetaRule

AlphaBetaRule with alpha=1, beta=0

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.Alpha1Beta0IgnoreBiasRule(*args, **kwargs)[source]

Bases: AlphaBetaRule

AlphaBetaRule with alpha=1, beta=0 and ignores biases

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaXRule(layer, *args, alpha=None, beta=None, bias=True, copy_weights=False, **kwargs)[source]

Bases: ReplacementLayer

An advanced AlphaBeta variant, as proposed by Alexander Binder.

__init__(layer, *args, alpha=None, beta=None, bias=True, copy_weights=False, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX1000Rule(*args, **kwargs)[source]

Bases: AlphaBetaXRule

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX1010Rule(*args, **kwargs)[source]

Bases: AlphaBetaXRule

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX1001Rule(*args, **kwargs)[source]

Bases: AlphaBetaXRule

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.AlphaBetaX2m100Rule(*args, **kwargs)[source]

Bases: AlphaBetaXRule

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZPlusRule(*args, **kwargs)[source]

Bases: Alpha1Beta0IgnoreBiasRule

The ZPlus rule is a special case of the AlphaBetaRule for alpha=1, beta=0, which assumes inputs x >= 0 and ignores the bias. CAUTION: results differ from alpha=1, beta=0 if the inputs are not strictly >= 0.

__init__(*args, **kwargs)[source]
class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.ZPlusFastRule(layer, *args, copy_weights=False, **kwargs)[source]

Bases: ReplacementLayer

The ZPlus rule is a special case of the AlphaBetaRule for alpha=1, beta=0 and assumes inputs x >= 0.
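
Under the assumption x >= 0, the Z+ redistribution uses only the positive weights; an illustrative sketch (not the library code):

```python
# Illustrative sketch of the Z+ redistribution (not the library code), valid
# under the assumption that all inputs satisfy x >= 0: only positive weights
# contribute, R_i = sum_j x_i * max(w_ij, 0) / z_j * R_j.

def z_plus(x, w, relevance_out, eps=1e-12):
    n_in, n_out = len(w), len(w[0])
    z = [sum(x[i] * max(w[i][j], 0.0) for i in range(n_in)) + eps
         for j in range(n_out)]
    return [
        sum(x[i] * max(w[i][j], 0.0) / z[j] * relevance_out[j]
            for j in range(n_out))
        for i in range(n_in)
    ]
```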

__init__(layer, *args, copy_weights=False, **kwargs)[source]
apply(ins, neuron_selection)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

class signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.relevance_rule_base.BoundedRule(layer, *args, copy_weights=False, **kwargs)[source]

Bases: ReplacementLayer

Z_B rule from the Deep Taylor Decomposition
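
The Z_B rule incorporates the input bounds [low, high] of the data domain; an illustrative sketch for a single dense layer (not the library code):

```python
# Illustrative sketch of the Z_B rule for one dense layer on a bounded input
# domain [low, high] (not the library code): each contribution is
# x_i * w_ij - low * w_ij^+ - high * w_ij^-, which accounts for the input
# bounds at the first layer.

def z_b(x, w, relevance_out, low, high, eps=1e-12):
    n_in, n_out = len(w), len(w[0])

    def contrib(i, j):
        w_pos, w_neg = max(w[i][j], 0.0), min(w[i][j], 0.0)
        return x[i] * w[i][j] - low * w_pos - high * w_neg

    z = [sum(contrib(i, j) for i in range(n_in)) + eps for j in range(n_out)]
    return [
        sum(contrib(i, j) / z[j] * relevance_out[j] for j in range(n_out))
        for i in range(n_in)
    ]
```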

__init__(layer, *args, copy_weights=False, **kwargs)[source]
wrap_hook(ins, neuron_selection, stop_mapping_at_layers, r_init)[source]

Hook that wraps and applies the layer function, e.g., by defining a GradientTape.

  • should contain a call to self._neuron_select

  • may define any wrappers around the layer function

Parameters:
  • ins – input(s) of this layer

  • neuron_selection – neuron_selection parameter (see try_apply)

  • stop_mapping_at_layers – None or the stop_mapping_at_layers parameter (see try_apply)

  • r_init – reverse initialization value; the value with which the explanation is initialized (i.e., head_mapping)

Returns:
  The output of the layer function plus any wrappers that were defined and are needed in explain_hook.

To be extended for specific XAI methods.

explain_hook(ins, reversed_outs, args)[source]

Hook that computes the explanations; core XAI functionality.

Parameters:
  • ins – input(s) of this layer

  • args – outputs of wrap_hook (any parameters that may be needed to compute the explanation)

  • reversed_outs – either the backpropagated explanation(s) of the child layers, or None if this is the last layer

Returns:
  The explanation, or a tensor of multiple explanations if the layer has multiple inputs (one for each).

To be extended for specific XAI methods.

signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.utils module

signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.utils.assert_lrp_epsilon_param(epsilon, caller)[source]

Function for asserting the epsilon parameter choice passed to constructors inheriting from EpsilonRule and LRPEpsilon. The following condition cannot be met:

epsilon > 1

Parameters:
  • epsilon – the epsilon parameter.

  • caller – the class instance calling this assertion function

signxai.tf_signxai.methods_impl.innvestigate.analyzer.relevance_based.utils.assert_infer_lrp_alpha_beta_param(alpha, beta, caller)[source]

Function for asserting parameter choices for alpha and beta passed to constructors inheriting from AlphaBetaRule and LRPAlphaBeta.

Since alpha and beta are constrained by alpha - beta = 1, it is sufficient for only one of the parameters to be passed to a corresponding class constructor. This method raises an assertion error if both are None or if the following conditions cannot be met:

alpha >= 1, beta >= 0, alpha - beta = 1

Parameters:
  • alpha – the alpha parameter.

  • beta – the beta parameter

  • caller – the class instance calling this assertion function
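
The inference can be sketched as follows (hypothetical helper, illustrative only; not this module's actual function):

```python
# Sketch of the inference described above (hypothetical helper): given one of
# alpha/beta, the other follows from the constraint alpha - beta = 1; passing
# neither is an error.

def infer_alpha_beta(alpha=None, beta=None):
    if alpha is None and beta is None:
        raise ValueError("either alpha or beta must be given")
    if alpha is None:
        alpha = beta + 1
    if beta is None:
        beta = alpha - 1
    # Enforce the documented conditions.
    assert alpha >= 1 and beta >= 0 and abs(alpha - beta - 1) < 1e-9
    return alpha, beta
```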

Module contents