Sensitivity Analysis

Modules

Prosper_nn provides implementations for specialized time series forecasting neural networks and related utility functions.

Copyright (C) 2022 Nico Beck, Julia Schemm, Henning Frechen, Jacob Fidorra, Denni Schmidt, Sai Kiran Srivatsav Gollapalli

This file is part of Prosper_nn.

Prosper_nn is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

prosper_nn.utils.sensitivity_analysis.analyse_temporal_sensitivity(model: Module, data: Tensor | Tuple[Tensor, ...], task_nodes: List[int], n_future_steps: int, past_horizon: int, n_features: int, features: List[str] | None = None, xlabel: str | None = 'Forecast Step', ylabel: str | None = 'Features', title: str | None = None, top_k: int = 1, save_at: str | None = None) → None

Function to analyze the influence of the present on future variables in HCNNs and other recurrent-type models. For each task variable, the sensitivity of all features is plotted for different time steps in the future. The batchsize of the data must be 1. A usage sketch follows the parameter list below.

Parameters:
  • model (torch.nn.Module) – The PyTorch model for whose output the sensitivity analysis is done. The model should be an ensemble with output shape = (n_models, past_horizon + forecast_horizon, batchsize=1, n_features_Y).

  • data (torch.Tensor or tuple of torch.Tensors) – The data input for the model for which the sensitivity analysis is performed. For example, for an ensemble of HCNNs this might be a tensor, and for an ensemble of ECNNs a tuple of tensors (in the right order to pass to the ECNN). Each input tensor for the model is assumed to have shape=(n_batches, past_horizon, batchsize, n_features). If your model needs input of a different shape, you might have to adapt the code.

  • task_nodes (List[int]) – The indices of the (target) variables for which the temporal analysis should be done.

  • n_future_steps (int) – The number of forecasting steps that are investigated by the analysis.

  • past_horizon (int) – The past horizon gives the number of time steps into the past that are used for forecasting.

  • n_features (int) – The size of the data/number of features in each time step.

  • features (list[str], optional) – The names of the features in the data.

  • xlabel (str, optional) – Set the label for the x-axis.

  • ylabel (str, optional) – Set the label for the y-axis.

  • title (str, optional) – Set a title for the plot.

  • top_k (int) – The number of features with the largest absolute sensitivity or strongest monotonicity that are highlighted.

  • save_at (str, optional) – Where to save the figure.

Return type:

None
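
A minimal usage sketch for analyse_temporal_sensitivity. Here `ensemble` and `Y` are placeholders for an already trained ensemble model (e.g. an HCNN ensemble with the output shape described above) and its input tensor with batchsize 1; the dimensions, feature names, and choice of task node are illustrative assumptions, not part of the library.

from prosper_nn.utils.sensitivity_analysis import analyse_temporal_sensitivity

past_horizon, n_future_steps, n_features = 10, 5, 4

# `ensemble` and `Y` are placeholders for a trained ensemble model and its input (batchsize 1)
analyse_temporal_sensitivity(
    model=ensemble,
    data=Y,
    task_nodes=[0],                 # analyze the first target variable
    n_future_steps=n_future_steps,
    past_horizon=past_horizon,
    n_features=n_features,
    features=["feature_%d" % i for i in range(n_features)],
)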

prosper_nn.utils.sensitivity_analysis.calculate_sensitivity_analysis(model: Module, *data: Tuple[Tensor, ...], output_neuron: tuple = (0,), batchsize: int = 1) → Tensor

Calculates the sensitivity matrix. The function differentiates the target node with respect to the input for all observations. A usage sketch follows the parameter list below.

Parameters:
  • model (torch.nn.Module) – The PyTorch model for whose output the sensitivity analysis is done.

  • data (tuple of PyTorch tensors) – The data input for the model for which the sensitivity analysis is done. Should be a tuple, even if it only has one element.

  • output_neuron (tuple) – Choose the output node for which the sensitivity analysis should be performed. The tuple is used to navigate the model output to the desired output node. For example, the tuple (0, 1, 3) is applied to the model output as wished_output_neuron = model_output[0][1][3]. If there is a batch dimension in the data, insert slice(0, batchsize) at the corresponding position of the tuple. Otherwise, the values should be non-negative integers.

  • batchsize (int) – The batchsize of the model and the data.

Returns:

A torch tensor with the value of the model output differentiated with respect to the model input, evaluated for all observations in data. The shape of the returned tensor therefore depends on the shape of data (if the model only takes one input, the two shapes are equal).

Return type:

torch.Tensor
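
A minimal sketch of a call to calculate_sensitivity_analysis, assuming a single-input model and data in the three-dimensional layout (n_observations, batchsize, n_features) used elsewhere in this module; the torch.nn.Linear stand-in and the dimensions are illustrative assumptions.

import torch
from prosper_nn.utils.sensitivity_analysis import calculate_sensitivity_analysis

batchsize, n_features = 1, 5
model = torch.nn.Linear(n_features, 3)        # stand-in for any trained single-input model
data = torch.rand(10, batchsize, n_features)  # shape=(n_observations, batchsize, n_features)

# differentiate output node 0, sliced over the batch dimension, with respect to the input
sensitivity = calculate_sensitivity_analysis(
    model, data, output_neuron=(slice(0, batchsize), 0), batchsize=batchsize
)
# for a single-input model, `sensitivity` has the same shape as `data`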

prosper_nn.utils.sensitivity_analysis.plot_analyse_temporal_sensitivity(sensis: Tensor, target_var: List[str], features: List[str], n_future_steps: int, path: str | None = None, title: dict | str | None = None, xticks: dict | str | None = None, yticks: dict | str | None = None, xlabel: dict | str | None = None, ylabel: dict | str | None = None, figsize: List[float] = [12.4, 5.8]) → None

Plots a sensitivity analysis and creates a table with the monotonicity and total heat for each task variable on the right side of the plot.
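
Since no parameter descriptions are given for this function, the following sketch only illustrates the call pattern. The layout assumed for `sensis` (one features × future-steps matrix per task variable) and all names are assumptions and should be checked against the output of analyse_temporal_sensitivity.

import torch
from prosper_nn.utils.sensitivity_analysis import plot_analyse_temporal_sensitivity

target_var = ["target_0"]
features = ["feature_0", "feature_1", "feature_2"]
n_future_steps = 5

# assumed layout: one (n_features, n_future_steps) sensitivity matrix per task variable
sensis = torch.rand(len(target_var), len(features), n_future_steps)

plot_analyse_temporal_sensitivity(sensis, target_var, features, n_future_steps)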

prosper_nn.utils.sensitivity_analysis.plot_sensitivity_curve(sensitivity: Tensor, output_neuron: int = 1, xlabel: str = 'Observations', ylabel: str = '$\\frac{\\partial output}{\\partial input}', title: str = 'Sensitivity analysis of one output node') → None

A plotting function for a two-dimensional matrix. It can be used to plot the sensitivity for one individual output node as a graph. A usage sketch follows the parameter list below.

Parameters:
  • sensitivity (torch.Tensor) – In general, a two-dimensional torch tensor with float values; here in particular, the sensitivity matrix for a neural network and the corresponding observations.

  • output_neuron (int) – The index in the output layer for which the sensitivity should be plotted.

  • xlabel (str) – Set the label for the x-axis.

  • ylabel (str) – Set the label for the y-axis.

  • title (str) – Set a title for the axes.

Return type:

None
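
A minimal sketch of a call to plot_sensitivity_curve; the random matrix is a stand-in for a real sensitivity matrix such as the one returned by calculate_sensitivity_analysis, and its dimensions are illustrative assumptions.

import torch
from prosper_nn.utils.sensitivity_analysis import plot_sensitivity_curve

# stand-in for a real sensitivity matrix, shape=(n_observations, n_input_nodes)
sensitivity = torch.rand(100, 10)

# plot the sensitivity curve of one output node over all observations
plot_sensitivity_curve(sensitivity, output_neuron=1)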

prosper_nn.utils.sensitivity_analysis.sensitivity_analysis(model: Module, data: Tensor | Tuple[Tensor, ...], output_neuron: Tuple[int, ...], batchsize: int, xlabel: str = 'Observations', ylabel: str = 'Input Node', title: str = 'Sensitivity-analysis heatmap for one output neuron', cbar_kws: dict = {'label': 'd output / d input'}, save_at: str | None = None) → Tensor

Sensitivity analysis for feed-forward and other non-recurrent models. The function differentiates the target node with respect to the input for all observations. In this way, the influence of each input neuron on the selected output neuron can be investigated. The function combines the calculation of the sensitivity matrix and its visualization as a heatmap.

Parameters:
  • model (torch.nn.Module) – The PyTorch model for whose output the sensitivity analysis is done.

  • data (torch.Tensor or tuple of torch.Tensors) – The data input for the model for which the sensitivity analysis is performed. Depending on how many inputs the model takes for the forward pass, data is either a tensor or a tuple of tensors. It’s assumed that every input tensor for the model has the shape=(n_batches, batchsize, n_features). For MLPs you can also use data in the shape of (n_observations, batchsize, n_features). If your model needs input of a different shape, you might have to adapt the code.

  • output_neuron (tuple) – Choose the output node for which the sensitivity analysis should be performed. The tuple is used to navigate the model output to the desired output node. For example, the tuple (0, 1, 3) is applied to the model output as wished_output_neuron = model_output[0][1][3]. If there is a batch dimension in the data, insert slice(0, batchsize) at the corresponding position of the tuple. All other values should be non-negative integers.

  • batchsize (int) – The batchsize of the model and the data.

  • xlabel (str) – Set the label for the x-axis.

  • ylabel (str) – Set the label for the y-axis.

  • title (str) – Set a title for the axes.

  • cbar_kws (dict) – Keyword arguments for matplotlib.figure.Figure.colorbar().

  • save_at (str) – Where to save the figure.

Returns:

A torch tensor with the value of the model output differentiated with respect to the model input, evaluated for all observations in data. The shape of the returned tensor therefore depends on the shape of data (if the model only takes one input, the two shapes are equal).

Return type:

torch.Tensor

Example

import torch
import prosper_nn.models.feedforward
from prosper_nn.utils.sensitivity_analysis import sensitivity_analysis

X = torch.rand([10, 1, 100])  # shape=(n_observations, batchsize=1, n_features), as documented above
net = prosper_nn.models.feedforward.FFNN(100, 5, 1)

# with a trained model: select output neuron 0 over the batch dimension (see parameter description above)
sensitivity = sensitivity_analysis(net, X, output_neuron=(slice(0, 1), 0), batchsize=1)