FNN 1.0.0
Toolbox to use neural networks in Fortran.
fnn_layer_dense Module Reference

Module dedicated to the class denselayer. More...

Data Types

type  denselayer
 Implements a dense (fully-connected) layer. More...
 

Functions/Subroutines

type(denselayer) function, public construct_dense_layer (input_size, output_size, batch_size, activation_name, initialisation_name)
 Manual constructor for class denselayer. Only for testing purposes. More...
 
type(denselayer) function, public dense_layer_fromfile (batch_size, unit_num)
 Constructor for class denselayer from a file. More...
 
subroutine dense_tofile (self, unit_num)
 Implements denselayer::tofile. More...
 
subroutine dense_apply_forward (self, train, member, x, y)
 Implements denselayer::apply_forward. More...
 
subroutine dense_apply_tangent_linear (self, member, dp, dx, dy)
 Implements denselayer::apply_tangent_linear. More...
 
subroutine dense_apply_adjoint (self, member, dy, dp, dx)
 Implements denselayer::apply_adjoint. More...
 

Detailed Description

Module dedicated to the class denselayer.

Function/Subroutine Documentation

◆ construct_dense_layer()

type(denselayer) function, public fnn_layer_dense::construct_dense_layer ( integer(ik), intent(in)  input_size,
integer(ik), intent(in)  output_size,
integer(ik), intent(in)  batch_size,
character(len=*), intent(in)  activation_name,
character(len=*), intent(in)  initialisation_name 
)

Manual constructor for class denselayer. Only for testing purposes.

Parameters
[in] input_size  The value for layer::input_size.
[in] output_size  The value for layer::output_size.
[in] batch_size  The value for layer::batch_size.
[in] activation_name  The name of the activation function.
[in] initialisation_name  The initialisation method for the model parameters.
Returns
The constructed layer.

◆ dense_apply_adjoint()

subroutine fnn_layer_dense::dense_apply_adjoint ( class(denselayer), intent(in)  self,
integer(ik), intent(in)  member,
real(rk), dimension(:), intent(inout)  dy,
real(rk), dimension(:), intent(out)  dp,
real(rk), dimension(:), intent(out)  dx 
)
private

Implements denselayer::apply_adjoint.

Applies the adjoint of the layer.

The adjoint operator is implemented by

\[d\mathbf{y} = \mathbf{A}(\mathbf{Wx+b})^{\top}d\mathbf{y},\]

\[d\mathbf{b} = d\mathbf{y},\]

\[d\mathbf{W} = d\mathbf{y}\,\mathbf{x}^{\top},\]

\[d\mathbf{x} = \mathbf{W}^{\top}d\mathbf{y}.\]

Note

This method should only be called after denselayer::apply_forward.

The value of $d\mathbf{y}$ is overwritten in this method (an unwanted side effect). This could easily be avoided by merging the first two algorithmic lines into

\[d\mathbf{b} = \mathbf{A}(\mathbf{Wx+b})^{\top}d\mathbf{y}.\]

For this reason, the intent of dy is declared inout.

Parameters
[in] self  The layer.
[in] member  The index inside the batch.
[in,out] dy  The output perturbation (overwritten).
[out] dp  The parameter perturbation.
[out] dx  The state perturbation.
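The algorithmic lines above, together with the adjoint test $\langle L(du), d\mathbf{y}\rangle = \langle du, L^{\top}(d\mathbf{y})\rangle$, can be sketched in NumPy. This is an illustrative translation, not the library's Fortran API; the tanh activation and all array names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 3, 2
W = rng.standard_normal((n_out, n_in))          # kernel
b = rng.standard_normal(n_out)                  # bias
x = rng.standard_normal(n_in)                   # stored forward input
A = np.diag(1.0 - np.tanh(W @ x + b) ** 2)      # activation Jacobian at Wx+b

dy0 = rng.standard_normal(n_out)                # incoming output perturbation
dy = dy0.copy()

# the algorithmic lines of the adjoint, mirroring the TL in reverse
dy = A.T @ dy               # dy = A(Wx+b)^T dy (overwrites dy, hence intent inout)
db_adj = dy.copy()          # db = dy
dW_adj = np.outer(dy, x)    # dW = dy x^T
dx_adj = W.T @ dy           # dx = W^T dy

# adjoint consistency: <TL(dx,dW,db), dy0> must equal <(dx,dW,db), ADJ(dy0)>
dx_t = rng.standard_normal(n_in)
dW_t = rng.standard_normal((n_out, n_in))
db_t = rng.standard_normal(n_out)
tl = A @ (W @ dx_t + dW_t @ x + db_t)           # tangent linear operator
lhs = tl @ dy0
rhs = dx_adj @ dx_t + np.sum(dW_adj * dW_t) + db_adj @ db_t
```

The dot-product identity `lhs == rhs` (up to rounding) is the standard way to verify that an adjoint matches its tangent linear.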

◆ dense_apply_forward()

subroutine fnn_layer_dense::dense_apply_forward ( class(denselayer), intent(inout)  self,
logical, intent(in)  train,
integer(ik), intent(in)  member,
real(rk), dimension(:), intent(in)  x,
real(rk), dimension(:), intent(out)  y 
)
private

Implements denselayer::apply_forward.

Applies and linearises the layer.

The forward function reads

\[ \mathbf{y} = \mathcal{F}(\mathbf{p}, \mathbf{x})
 = \mathcal{A}(\mathbf{Wx+b}),\]

where $\mathbf{W}$ is the kernel and $\mathbf{b}$ the bias of the layer, and where $\mathcal{A}$ is the activation function.

Note

Input parameter member should be less than layer::batch_size.

The linearisation of the regression $\mathbf{Wx+b}$ is stored in layer::forward_input, and the linearisation of the activation function is stored in layer::activation.

Because the linearisation of the layer is stored inside the layer, the intent of self is declared inout.

Todo:
Find a way to store (internally) a view to the kernel and the bias.
Parameters
[in,out] self  The layer.
[in] train  Whether the model is used in training mode.
[in] member  The index inside the batch.
[in] x  The input of the layer.
[out] y  The output of the layer.
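The forward pass and the linearisation it stores can be sketched in NumPy. This is an illustrative translation, not the library's Fortran API; the tanh activation and all array names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 3, 2
W = rng.standard_normal((n_out, n_in))   # kernel
b = rng.standard_normal(n_out)           # bias
x = rng.standard_normal(n_in)            # layer input

z = W @ x + b        # regression Wx+b, the linearisation point
y = np.tanh(z)       # activation: y = A(Wx+b)

# What the layer stores for later TL/adjoint calls:
# x itself (the linearisation of the regression with respect to W),
# and the activation Jacobian A(Wx+b), here diag(1 - tanh(z)^2).
A = np.diag(1.0 - np.tanh(z) ** 2)
```

Storing `x` and `A` inside the layer is why `self` must be `intent(inout)` even though the forward map itself does not modify the parameters.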

◆ dense_apply_tangent_linear()

subroutine fnn_layer_dense::dense_apply_tangent_linear ( class(denselayer), intent(in)  self,
integer(ik), intent(in)  member,
real(rk), dimension(:), intent(in)  dp,
real(rk), dimension(:), intent(in)  dx,
real(rk), dimension(:), intent(out)  dy 
)
private

Implements denselayer::apply_tangent_linear.

Applies the TL of the layer.

The TL operator reads

\[d\mathbf{y} = \mathbf{A}(\mathbf{Wx+b})[\mathbf{W}d\mathbf{x}+d\mathbf{W}\mathbf{x}+d\mathbf{b}],\]

which is implemented by

\[d\mathbf{y} = \mathbf{W}d\mathbf{x},\]

\[d\mathbf{y} = d\mathbf{y} + d\mathbf{W}\mathbf{x},\]

\[d\mathbf{y} = d\mathbf{y} + d\mathbf{b},\]

\[d\mathbf{y} = \mathbf{A}(\mathbf{Wx+b})d\mathbf{y}.\]

Note

This method should only be called after denselayer::apply_forward.

Parameters
[in] self  The layer.
[in] member  The index inside the batch.
[in] dp  The parameter perturbation.
[in] dx  The state perturbation.
[out] dy  The output perturbation.
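The four algorithmic lines of the TL operator can be sketched in NumPy. This is an illustrative translation, not the library's Fortran API; the tanh activation and all array names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 3, 2
W = rng.standard_normal((n_out, n_in))   # kernel
b = rng.standard_normal(n_out)           # bias
x = rng.standard_normal(n_in)            # input stored by the forward pass
A = np.diag(1.0 - np.tanh(W @ x + b) ** 2)   # activation Jacobian at Wx+b

# perturbations: dW, db (parameters) and dx (state)
dW = rng.standard_normal((n_out, n_in))
db = rng.standard_normal(n_out)
dx = rng.standard_normal(n_in)

# the four algorithmic lines of the TL operator
dy = W @ dx          # dy = W dx
dy = dy + dW @ x     # dy = dy + dW x
dy = dy + db         # dy = dy + db
dy = A @ dy          # dy = A(Wx+b) dy
```

Accumulating into a single `dy` buffer mirrors the in-place style of the Fortran implementation and avoids allocating intermediates.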

◆ dense_layer_fromfile()

type(denselayer) function, public fnn_layer_dense::dense_layer_fromfile ( integer(ik), intent(in)  batch_size,
integer(ik), intent(in)  unit_num 
)

Constructor for class denselayer from a file.

Parameters
[in] batch_size  The value for layer::batch_size.
[in] unit_num  The unit number for the read statements.
Returns
The constructed layer.

◆ dense_tofile()

subroutine fnn_layer_dense::dense_tofile ( class(denselayer), intent(in)  self,
integer(ik), intent(in)  unit_num 
)
private

Implements denselayer::tofile.

Saves the layer.

Parameters
[in] self  The layer.
[in] unit_num  The unit number for the write statements.