Fuzzy Neural Network
Fuzzification
Modules
Different membership functions are defined, which can be used in a MembershipBlock.
Prosper_nn provides implementations for specialized time series forecasting neural networks and related utility functions.
- Copyright (C) 2022 Nico Beck, Julia Schemm, Henning Frechen, Jacob Fidorra,
Denni Schmidt, Sai Kiran Srivatsav Gollapalli
This file is part of Prosper_nn.
- Prosper_nn is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
- class prosper_nn.models.fuzzy.membership_functions.GaussianMembership(sigma: float = 1.0)[source]
Bases:
Module
Gaussian membership function:
fixed mean at 0
deviation of the curve as learnable parameter sigma
- Parameters:
sigma (float) – initial value of the learnable deviation parameter. Default: 1.0
- class prosper_nn.models.fuzzy.membership_functions.GaussianMembershipFct(*args, **kwargs)[source]
Bases:
Function
- Gaussian autograd function:
Fixed mean at 0
Variable sigma parameter
The class inherits from torch.autograd.Function, so it has to reimplement the static methods forward and backward. See https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html for more information and examples.
- static backward(ctx: Any, grad_outputs: Tensor) Tuple[Tensor, Tensor] [source]
Backward pass of the Gaussian function.
- Parameters:
ctx (Any) – context object with the variables saved in the forward pass
grad_outputs (torch.Tensor) – gradient of the loss passed backward from the subsequent layer
- Returns:
(grad_input, grad_sigma) – gradients w.r.t. inputs and w.r.t. sigma
- Return type:
Tuple[torch.Tensor, torch.Tensor]
- static forward(ctx: Any, inputs: Tensor, sigma: Tensor) Tensor [source]
Forward activation of the Gaussian function.
- Parameters:
ctx (Any) – context object to save variables for the backward pass
inputs (torch.Tensor) – input vector of shape (batchsize, 1)
sigma (torch.Tensor (torch.nn.Parameter)) – learnable deviation parameter sigma
- Returns:
output – output of the Gaussian activation function
- Return type:
torch.Tensor
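The forward/backward pattern described above looks roughly as follows. This is a minimal sketch, assuming the standard zero-mean bell curve y = exp(-x² / (2σ²)); prosper_nn's actual implementation may differ in details such as gradient reduction:

import torch

class GaussianSketchFct(torch.autograd.Function):
    # Sketch of a Gaussian membership autograd function with fixed mean 0
    # and learnable sigma. Assumes y = exp(-x**2 / (2 * sigma**2)).
    @staticmethod
    def forward(ctx, inputs, sigma):
        output = torch.exp(-inputs ** 2 / (2 * sigma ** 2))
        ctx.save_for_backward(inputs, sigma, output)
        return output

    @staticmethod
    def backward(ctx, grad_outputs):
        inputs, sigma, output = ctx.saved_tensors
        # dy/dx = -x / sigma**2 * y  and  dy/dsigma = x**2 / sigma**3 * y
        grad_input = grad_outputs * (-inputs / sigma ** 2) * output
        # sigma is shared across the batch, so its gradient is summed
        grad_sigma = (grad_outputs * (inputs ** 2 / sigma ** 3) * output).sum().reshape(sigma.shape)
        return grad_input, grad_sigma

sigma = torch.nn.Parameter(torch.ones(1))
y = GaussianSketchFct.apply(torch.randn(20, 1), sigma)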
- class prosper_nn.models.fuzzy.membership_functions.NormlogMembership(negative: bool = False, slope_initializer: Callable | None = None)[source]
Bases:
Module
Norm logistic function:
fixed around zero
slope of the curve as trainable parameter slope
- Parameters:
negative (bool) – negates the slope of the function so the layer can be reused for falling values
slope_initializer (Callable, optional) – initializer for the learnable slope parameter
- class prosper_nn.models.fuzzy.membership_functions.NormlogMembershipFct(*args, **kwargs)[source]
Bases:
Function
- Normlog autograd function:
Fixed middle point at zero
Variable slope parameter
The class inherits from torch.autograd.Function, so it has to reimplement the static methods forward and backward. See https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html for more information and examples.
- static backward(ctx: Any, grad_outputs: Tensor) Tuple[Tensor, Tensor] [source]
Backward pass of the normlog function.
- Parameters:
ctx (Any) – context object with the variables saved in the forward pass
grad_outputs (torch.Tensor) – gradient of the loss passed backward from the subsequent layer
- Returns:
(grad_input, grad_slope, None) – gradients w.r.t. inputs and w.r.t. slope; None for the non-tensor negative argument
- Return type:
Tuple[torch.Tensor, torch.Tensor, None]
- static forward(ctx: Any, inputs: Tensor, slope: Tensor, negative: bool) Tensor [source]
Forward activation of the normlog function.
- Parameters:
ctx (Any) – context object to save variables for the backward pass
inputs (torch.Tensor) – input vector of shape (batchsize, 1)
slope (torch.Tensor (torch.nn.Parameter)) – learnable slope parameter
negative (bool) – determines if the slope is negative or positive
- Returns:
output (torch.Tensor) – output of the normlog activation function
The slope parameter is restricted to the interval [0, inf] for negative=False and to [-inf, 0] for negative=True.
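The exact curve is not spelled out here; as a rough sketch, assuming a plain logistic function centred at zero (the library's normalization may differ):

import torch

def normlog_sketch(inputs, slope, negative=False):
    # Assumption: y = sigmoid(slope * x), with the sign flipped for
    # negative=True so the curve falls instead of rises. The slope is
    # kept nonnegative, matching the restriction stated above.
    slope = torch.clamp(slope, min=0.0)
    sign = -1.0 if negative else 1.0
    return torch.sigmoid(sign * slope * inputs)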
The MembershipBlock class uses the functions defined in membership_functions and applies them to a single input.
- class prosper_nn.models.fuzzy.membership_block.MembershipBlock(membership_fcts: dict, block_name: str | None = None)[source]
Bases:
Module
Block that applies a given set of membership functions to a single input. One input feature is mapped to one output per membership function.
- Parameters:
membership_fcts (Dict[str, torch.nn.Module]) – set of membership functions
block_name (str) – name of the block
- forward(inputs: Tensor) Tensor [source]
Forward pass of the block. The same input is given to all membership functions.
- Parameters:
inputs (torch.Tensor) – input vector of size (batchsize, 1)
- Returns:
output – stacked outputs of all membership functions. Shape (batchsize, len(membership_fcts))
- Return type:
torch.Tensor
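A short usage sketch, following the signatures above (the dictionary keys are arbitrary labels):

import torch
from prosper_nn.models.fuzzy.membership_block import MembershipBlock
from prosper_nn.models.fuzzy.membership_functions import (
    GaussianMembership,
    NormlogMembership,
)

block = MembershipBlock(
    membership_fcts={
        "decrease": NormlogMembership(negative=True),
        "constant": GaussianMembership(),
        "increase": NormlogMembership(),
    },
    block_name="feature_0",
)
inputs = torch.randn(20, 1)  # (batchsize, 1)
output = block(inputs)       # (batchsize, 3), one membership degree per function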
The final Fuzzification layer is built by creating a MembershipBlock for each input of the layer.
- class prosper_nn.models.fuzzy.fuzzification.Fuzzification(n_features_input: int, membership_fcts: dict, input_names: dict | None = None)[source]
Bases:
Module
Dynamically creates a layer of parallel MembershipBlocks. The number of blocks is determined by n_features_input, because it creates a MembershipBlock for each input.
- Parameters:
n_features_input (int) – Number of inputs.
membership_fcts (Dict[str, torch.nn.Module]) – Set of membership functions. Each MembershipBlock will contain these functions.
input_names (List[str]) – The blocks are named after this list for easier debugging.
- forward(inputs: Tensor) Tensor [source]
Forward pass through all MembershipBlocks.
- Parameters:
inputs (torch.Tensor) – input vector of shape (batchsize, number of inputs)
- Returns:
output – Tensor of stacked block outputs. Shape (batchsize, n_features_input, n_membership_fcts)
- Return type:
torch.Tensor
Example
import torch
from prosper_nn.models.fuzzy.fuzzification import Fuzzification
from prosper_nn.models.fuzzy.membership_functions import (
    GaussianMembership,
    NormlogMembership,
)

batchsize = 20
n_features_input = 11
inputs = torch.randn(batchsize, n_features_input)
membership_fcts = {
    "decrease": NormlogMembership(negative=True),
    "constant": GaussianMembership(),
    "increase": NormlogMembership(),
}
fuzzification = Fuzzification(
    n_features_input=n_features_input, membership_fcts=membership_fcts
)
output = fuzzification(inputs)
Fuzzy Inference
Module
The FuzzyInference receives the output of the Fuzzification. It is a dense layer with a predefined weight matrix rule_matrix. Use the RuleManager to create the rule matrix and the classification matrix; a hand-built illustration follows below.
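As an illustration of what such mask matrices can look like (the toy rules and index layout are made up for this example; the shapes follow the Parameters below):

import numpy as np

# Toy setup: 2 inputs, 3 membership functions per input
# (index 0: decrease, 1: constant, 2: increase), 2 rules, 2 classes.
n_rules, n_features_input, n_membership_fcts = 2, 2, 3
n_output_classes = 2

rule_matrix = np.zeros((n_rules, n_features_input, n_membership_fcts))
rule_matrix[0, 0, 2] = 1  # rule 0: input 0 increases ...
rule_matrix[0, 1, 1] = 1  # ... and input 1 stays constant
rule_matrix[1, 0, 0] = 1  # rule 1: input 0 decreases

classification_matrix = np.zeros((n_rules, n_output_classes))
classification_matrix[0, 1] = 1  # rule 0 supports class 1
classification_matrix[1, 0] = 1  # rule 1 supports class 0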
- class prosper_nn.models.fuzzy.fuzzy_inference.FuzzyInference(n_features_input: int, n_membership_fcts: int, n_rules: int, n_output_classes: int, rule_matrix: Any | None = None, learn_conditions: bool = False, prune_weights: bool = False, softmax: Module = LogSoftmax(dim=1), classification_matrix: Any | None = None, learn_consequences: bool = True)[source]
Bases:
Module
The module performs the fuzzy inference step of a Fuzzy Neural Network. First, the conditions are modeled in a dense layer with rule constraints; the weights defined in the rule_matrix are initialized with magnitude 1. Second, a dense layer with the rule consequences is defined. Only the weights defined in the classification_matrix are allowed to change, starting with magnitude 1. These weights are constrained to always be positive and to sum up to 1 in the second dimension, i.e. the weights connected to the same output node sum up to 1. Hence, each weight can be interpreted as the belief that its rule leads to a certain output.
A 1D convolutional layer is used as a 2D dense layer to handle the multi-dimensionality of the previous layers. This allows an easier assignment of rules to the edge weights.
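The consequence-weight constraint can be pictured as follows (a sketch of the constraint only, not prosper_nn's internal code):

import torch

# Consequence weights of the dense layer, shape (n_output_classes, n_rules).
w = torch.rand(3, 4)
w = w.abs()                         # keep every weight positive
w = w / w.sum(dim=1, keepdim=True)  # weights per output class sum to 1
assert torch.allclose(w.sum(dim=1), torch.ones(3))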
- Parameters:
n_features_input (int) – Number of inputs before the Fuzzification layer.
n_membership_fcts (int) – Number of membership functions for each input.
n_rules (int) – Number of rules.
n_output_classes (int) – Number of output classes.
rule_matrix (np.Array) – Mask matrix determining how the weights are initialized. rule_matrix has to be of shape (n_rules, n_features_input, n_membership_fcts). If learn_conditions=True, this matrix determines which weights are trained.
learn_conditions (bool) – Determines if the weights are allowed to be trained or should stay as initialized
prune_weights (bool) – Determines if the weights are pruned so that only the weights where rule_matrix == 1 can be trained.
softmax (torch.nn.Module) – softmax function that is used. Default: LogSoftmax (better performance with NLLLoss and Cross-Entropy Loss than Softmax)
classification_matrix (np.Array) – Mask matrix determining how the weights are initialized. This models the consequences. classification_matrix has to be of shape (n_rules, n_output_classes).
learn_consequences (bool) – Determines if the weights are allowed to be trained or should stay as initialized
- forward(inputs: Tensor) Tensor [source]
Forward pass of the FuzzyInference. First, a log transformation is applied to the input, followed by a Conv1d layer. The result is flattened and an exp function is applied. Afterwards, the input is passed through a dense layer that models the consequences, and finally a softmax function is applied.
- Parameters:
inputs (torch.Tensor) – input vector. Shape = (batchsize, n_features_input, n_membership_fcts)
- Returns:
output – Interpretable prediction in classes. Shape = (batchsize, n_output_classes)
- Return type:
torch.Tensor
Example
import numpy as np
import torch
from prosper_nn.models.fuzzy.fuzzy_inference import FuzzyInference

batchsize = 20
n_features_input = 11
n_output_classes = 3
n_rules = 4
n_membership_fcts = 3
inputs = torch.randn(batchsize, n_features_input, n_membership_fcts)
dummy_rule_matrix = np.ones((n_rules, n_features_input, n_membership_fcts))
dummy_classification_matrix = np.ones((n_rules, n_output_classes))
fuzzy_inference = FuzzyInference(
    n_features_input=n_features_input,
    n_rules=n_rules,
    n_output_classes=n_output_classes,
    n_membership_fcts=n_membership_fcts,
    rule_matrix=dummy_rule_matrix,
    classification_matrix=dummy_classification_matrix,
)
output = fuzzy_inference(inputs)
Defuzzification
Module
The Defuzzification turns the interpretable class prediction into a numerical prediction.
- class prosper_nn.models.fuzzy.defuzzification.Defuzzification(n_output_classes: int, n_features_output: int = 1)[source]
Bases:
Module
Defuzzification step in a Fuzzy Neural Network. It translates the interpretable prediction back into a numerical prediction. This is done by an affine transformation (a torch.nn.Linear layer).
- Parameters:
n_output_classes (int) – The number of output classes in the previous FuzzyInference step.
n_features_output (int) – Number of target features. Default: 1
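Conceptually, the defuzzification described above boils down to a single affine map from class beliefs to a numeric forecast, as in this sketch:

import torch

to_numeric = torch.nn.Linear(in_features=3, out_features=1)  # n_output_classes -> n_features_output
class_scores = torch.randn(20, 3)    # (batchsize, n_output_classes)
forecast = to_numeric(class_scores)  # (batchsize, 1)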
Example
import torch
from prosper_nn.models.fuzzy.defuzzification import Defuzzification

n_output_classes = 3
batchsize = 20
inputs = torch.randn(batchsize, n_output_classes)
defuzzification = Defuzzification(n_output_classes)
output = defuzzification(inputs)
Fuzzy Recurrent Neural Networks
Module
The Fuzzy Recurrent Neural Network (FRNN) uses a special RNN in front of a Fuzzy Neural Network. The RNN is used to calculate the inputs' change over time and is pruned so that different inputs are not mixed; the FNN then analyses these changes. The fuzzification, fuzzy inference, and defuzzification layers are used to build the FRNN; a sketch of the pruning idea follows below.
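The exact pruning is internal to FRNN; the following sketch only illustrates the idea of masking a recurrent layer so the inputs do not mix (the diagonal mask layout is an assumption for illustration):

import torch
from torch.nn.utils import prune

n = 10  # one hidden unit per input feature (illustrative choice)
rnn = torch.nn.RNN(input_size=n, hidden_size=n, batch_first=True)

# Diagonal masks: hidden unit i only sees input i and its own past state,
# so the change of each input is tracked without mixing features.
eye = torch.eye(n)
prune.custom_from_mask(rnn, "weight_ih_l0", mask=eye)
prune.custom_from_mask(rnn, "weight_hh_l0", mask=eye)

x = torch.randn(30, 20, n)  # (batchsize, sequence_length, n_features_input)
out, h = rnn(x)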
- class prosper_nn.models.fuzzy.frnn.FRNN(n_features_input: int, n_output_classes: int, n_rules: int, n_membership_fcts: int, membership_fcts: dict, rule_matrix: array, classification_matrix: Any | None = None, n_layers: int = 1, batch_first: bool = True, learn_conditions: bool = False, pruning: bool = True)[source]
Bases:
Module
Fuzzy Recurrent Neural Network for classification. Combines an RNN with the fuzzy layers.
- Parameters:
n_features_input (int) – Number of network inputs
n_output_classes (int) – Number of network outputs/classes
n_rules (int) – Number of rules
n_membership_fcts (int) – Number of membership functions
membership_fcts (dict) – Dictionary containing the membership functions
rule_matrix (np.array) – Array containing the rule weight matrix
classification_matrix (np.array = None) – Array containing the matrix stating which rule leads to which class.
n_layers (int = 1) – Number of recurrent layers in the RNN part of the network
batch_first (bool = True) – Changes the RNN to accommodate a sequence-first or batch-first data structure.
learn_conditions (bool = False) – Set fuzzy inference learning mode
pruning (bool = True) – Set fuzzy inference pruning mode
- forward(inputs: Tensor) Tensor [source]
Network forward pass.
- Parameters:
inputs (torch.Tensor) – Input tensor
- Returns:
output – Output tensor
- Return type:
torch.Tensor
This method generates the initial hidden state of zeros, which is used in the forward pass.
- Parameters:
batchsize (int) – number of samples in one batch
- Returns:
hidden_state – newly initialized hidden state
- Return type:
torch.Tensor
Example
import numpy as np
import torch
from prosper_nn.models.fuzzy.frnn import FRNN
from prosper_nn.models.fuzzy.membership_functions import (
    GaussianMembership,
    NormlogMembership,
)

batchsize = 30
sequence_length = 20
n_features_input = 10
n_output_classes = 3
n_rules = 5
n_membership_fcts = 3
inputs = torch.randn(batchsize, sequence_length, n_features_input)
dummy_rule_matrix = np.ones((n_rules, n_features_input, n_membership_fcts))
dummy_classification_matrix = np.ones((n_rules, n_output_classes))
membership_fcts = {
"negative": NormlogMembership(negative=True),
"constant": GaussianMembership(),
"positive": NormlogMembership(),
}
frnn = FRNN(
n_features_input=n_features_input,
n_output_classes=n_output_classes,
n_rules=n_rules,
n_membership_fcts=n_membership_fcts,
membership_fcts=membership_fcts,
rule_matrix=dummy_rule_matrix,
classification_matrix=dummy_classification_matrix,
)
output = frnn(inputs)