FNN 1.0.0
Toolbox for using neural networks (NNs) in Fortran.
fnn_layer_normalisation Module Reference

Module dedicated to the class normalisationlayer.

Data Types

type  normalisationlayer
 Implements a normalisation layer.
 

Functions/Subroutines

type(normalisationlayer) function, public norm_layer_fromfile (batch_size, unit_num)
 Constructor for class normalisationlayer from a file.

subroutine norm_tofile (self, unit_num)
 Implements normalisationlayer::tofile.

subroutine norm_apply_forward (self, train, member, x, y)
 Implements normalisationlayer::apply_forward.

subroutine norm_apply_tangent_linear (self, member, dp, dx, dy)
 Implements normalisationlayer::apply_tangent_linear.

subroutine norm_apply_adjoint (self, member, dy, dp, dx)
 Implements normalisationlayer::apply_adjoint.
 

Detailed Description

Module dedicated to the class normalisationlayer.

Function/Subroutine Documentation

◆ norm_apply_adjoint()

subroutine fnn_layer_normalisation::norm_apply_adjoint (
        class(normalisationlayer), intent(in)   self,
        integer(ik), intent(in)                  member,
        real(rk), dimension(:), intent(inout)    dy,
        real(rk), dimension(:), intent(out)      dp,
        real(rk), dimension(:), intent(out)      dx )
private

Implements normalisationlayer::apply_adjoint.

Applies the adjoint of the layer.

The adjoint operator reads

\[d\mathbf{x} = \alpha d\mathbf{y}.\]

Note

In principle, this method should only be called after normalisationlayer::apply_forward.

Since there are no (trainable) parameters, the parameter perturbation dp should be an empty array.

The intent of dy is declared inout instead of in for compatibility with other subclasses of fnn_layer::layer.

Parameters
    [in]      self    The layer.
    [in]      member  The index inside the batch.
    [in,out]  dy      The output perturbation.
    [out]     dp      The parameter perturbation.
    [out]     dx      The state perturbation.
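
Given the operator above, the body of the adjoint reduces to a single scaling. A minimal sketch, assuming the coefficient is stored in the component normalisationlayer::alpha (the component named in the forward documentation) and is conformable with dy:

    subroutine norm_apply_adjoint(self, member, dy, dp, dx)
        class(normalisationlayer), intent(in) :: self
        integer(ik), intent(in) :: member
        real(rk), dimension(:), intent(inout) :: dy
        real(rk), dimension(:), intent(out) :: dp
        real(rk), dimension(:), intent(out) :: dx
        ! adjoint of y = alpha*x + beta with respect to x: dx = alpha*dy
        dx = self%alpha * dy
        ! no trainable parameters: dp is a zero-size array, nothing to assign
        ! member is unused here; it is kept for interface compatibility
    end subroutine norm_apply_adjoint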

◆ norm_apply_forward()

subroutine fnn_layer_normalisation::norm_apply_forward (
        class(normalisationlayer), intent(inout)  self,
        logical, intent(in)                        train,
        integer(ik), intent(in)                    member,
        real(rk), dimension(:), intent(in)         x,
        real(rk), dimension(:), intent(out)        y )
private

Implements normalisationlayer::apply_forward.

Applies and linearises the layer.

The forward function reads

\[\mathbf{y} = \alpha \mathbf{x} + \beta,\]

where $\alpha$ is normalisationlayer::alpha and $\beta$ is normalisationlayer::beta.

Note

Input parameter member should be less than layer::batch_size.

The linearisation is trivial and does not require any additional operation.

The intent of self is declared inout instead of in for compatibility with other subclasses of fnn_layer::layer.

Parameters
    [in,out]  self    The layer.
    [in]      train   Whether the model is used in training mode.
    [in]      member  The index inside the batch.
    [in]      x       The input of the layer.
    [out]     y       The output of the layer.
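
Under the same assumption as for the adjoint (alpha and beta stored as components conformable with x), a minimal sketch of the forward body:

    subroutine norm_apply_forward(self, train, member, x, y)
        class(normalisationlayer), intent(inout) :: self
        logical, intent(in) :: train
        integer(ik), intent(in) :: member
        real(rk), dimension(:), intent(in) :: x
        real(rk), dimension(:), intent(out) :: y
        ! affine normalisation: y = alpha*x + beta
        y = self%alpha * x + self%beta
        ! the linearisation is trivial, so nothing needs to be stored here
    end subroutine norm_apply_forward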

◆ norm_apply_tangent_linear()

subroutine fnn_layer_normalisation::norm_apply_tangent_linear (
        class(normalisationlayer), intent(in)  self,
        integer(ik), intent(in)                 member,
        real(rk), dimension(:), intent(in)      dp,
        real(rk), dimension(:), intent(in)      dx,
        real(rk), dimension(:), intent(out)     dy )
private

Implements normalisationlayer::apply_tangent_linear.

Applies the tangent linear (TL) of the layer.

The TL operator reads

\[d\mathbf{y} = \alpha d\mathbf{x}.\]

Note

In principle, this method should only be called after normalisationlayer::apply_forward.

Since there are no (trainable) parameters, the parameter perturbation dp should be an empty array.

Parameters
    [in]   self    The layer.
    [in]   member  The index inside the batch.
    [in]   dp      The parameter perturbation.
    [in]   dx      The state perturbation.
    [out]  dy      The output perturbation.
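
Because the TL and the adjoint must be consistent with each other, a standard sanity check is the dot-product test: $\langle L d\mathbf{x}, d\mathbf{y}\rangle = \langle d\mathbf{x}, L^* d\mathbf{y}\rangle$ for arbitrary perturbations. A hedged driver fragment, assuming a constructed layer object (e.g. from norm_layer_fromfile), the type-bound names documented above, and a placeholder state size n:

    integer(ik), parameter :: n = 10            ! placeholder state size
    real(rk), dimension(n) :: x, y, dx, dx_adj, dy, dy_tl
    real(rk), dimension(0) :: dp                ! no trainable parameters
    call random_number(x)
    call random_number(dx)
    call random_number(dy)
    call layer%apply_forward(.false., 1_ik, x, y)          ! linearise first
    call layer%apply_tangent_linear(1_ik, dp, dx, dy_tl)   ! dy_tl = alpha*dx
    call layer%apply_adjoint(1_ik, dy, dp, dx_adj)         ! dx_adj = alpha*dy
    ! both inner products should agree to machine precision
    print *, dot_product(dy_tl, dy), dot_product(dx, dx_adj)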

◆ norm_layer_fromfile()

type(normalisationlayer) function, public fnn_layer_normalisation::norm_layer_fromfile (
        integer(ik), intent(in)  batch_size,
        integer(ik), intent(in)  unit_num )

Constructor for class normalisationlayer from a file.

Parameters
    [in]  batch_size  The value for layer::batch_size.
    [in]  unit_num    The unit number for the read statements.

Returns
    The constructed layer.
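
A hedged usage sketch, assuming a layer previously written by norm_tofile to a file named layer.dat (a hypothetical file name) and a batch size of 32:

    use fnn_layer_normalisation, only: normalisationlayer, norm_layer_fromfile
    type(normalisationlayer) :: layer
    integer :: unit_num
    ! open the file and read the layer back; ik is the library integer kind
    open(newunit=unit_num, file='layer.dat', action='read', status='old')
    layer = norm_layer_fromfile(32_ik, unit_num)
    close(unit_num)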

◆ norm_tofile()

subroutine fnn_layer_normalisation::norm_tofile (
        class(normalisationlayer), intent(in)  self,
        integer(ik), intent(in)                 unit_num )
private

Implements normalisationlayer::tofile.

Saves the layer.

Parameters
    [in]  self      The layer.
    [in]  unit_num  The unit number for the write statement.
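
Since norm_tofile implements the type-bound normalisationlayer::tofile, saving goes through the object. A minimal sketch, reusing the hypothetical layer.dat file name from the constructor example:

    open(newunit=unit_num, file='layer.dat', action='write', status='replace')
    call layer%tofile(unit_num)
    close(unit_num)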