Feed Forward Neural Network

Module

class prosper_nn.models.feedforward.feedforward.FFNN(input_dim: int, hidden_dim: int, output_dim: int, activation: Type[torch.autograd.function.Function] = torch.tanh)[source]

Bases: Module

The Feed Forward Neural Network is a three-layer network with a non-linearity applied in the hidden layer.

Parameters:
  • input_dim (int) – The dimension of the input layer. It must be a positive integer.

  • hidden_dim (int) – The dimension of the hidden layer. It must be a positive integer.

  • output_dim (int) – The dimension of the output layer. It must be a positive integer.

  • activation (Type[torch.autograd.function.Function]) – The activation function applied to the output of the hidden layer. Defaults to torch.tanh.

Return type:

None
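
A minimal sketch of overriding the default activation at construction (assuming any element-wise torch function, such as torch.relu, can be passed in place of the default torch.tanh):

import torch
from prosper_nn.models.feedforward.feedforward import FFNN

# Illustrative only: use torch.relu instead of the default torch.tanh
ffnn_relu = FFNN(input_dim=10, hidden_dim=15, output_dim=1, activation=torch.relu)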

forward(x: Tensor) → Tensor[source]
Parameters:

x (torch.Tensor) – The input of the model.

Returns:

The output of the FFNN.

Return type:

torch.Tensor
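
A minimal sketch of a single forward pass (the batch dimension is an assumption, consistent with the example below):

import torch
from prosper_nn.models.feedforward.feedforward import FFNN

ffnn = FFNN(input_dim=10, hidden_dim=15, output_dim=1)
x = torch.randn(5, 10)   # batch of 5 samples, each with input_dim features
y = ffnn(x)              # calls forward; expected output shape (5, 1)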

Example

import torch
from prosper_nn.models.feedforward.feedforward import FFNN

# Set model and data parameters
input_dim = 10
hidden_dim = 15
output_dim = 1
n_batches = 100
batchsize = 5

# Initialise the Feed Forward Neural Network
feedforward = FFNN(input_dim=input_dim,
                   hidden_dim=hidden_dim,
                   output_dim=output_dim)

X = torch.randn([n_batches, batchsize, input_dim])
Y = torch.randn([n_batches, batchsize, output_dim])

# Train Model
optimizer = torch.optim.Adam(feedforward.parameters())
loss_function = torch.nn.MSELoss()

for epoch in range(10):
    for x, y in zip(X, Y):
        output = feedforward(x)

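        # Reset gradients, then backpropagate the loss and update the weights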
        feedforward.zero_grad()
        loss = loss_function(output, y)
        loss.backward()
        optimizer.step()