FNN 1.0.0: a toolbox to use neural networks in Fortran.
Module dedicated to the class normalisationlayer.

Data Types
| type | normalisationlayer | Implements a normalisation layer. |

Functions/Subroutines
| type(normalisationlayer) function, public | norm_layer_fromfile (batch_size, unit_num) | Constructor for class normalisationlayer from a file. |
| subroutine | norm_tofile (self, unit_num) | Implements normalisationlayer::tofile. |
| subroutine | norm_apply_forward (self, train, member, x, y) | Implements normalisationlayer::apply_forward. |
| subroutine | norm_apply_tangent_linear (self, member, dp, dx, dy) | Implements normalisationlayer::apply_tangent_linear. |
| subroutine | norm_apply_adjoint (self, member, dy, dp, dx) | Implements normalisationlayer::apply_adjoint. |
Module dedicated to the class normalisationlayer.
subroutine norm_apply_adjoint (self, member, dy, dp, dx) [private]
Implements normalisationlayer::apply_adjoint.
Applies the adjoint of the layer.
The adjoint operator reads
\[d\mathbf{x} = \alpha d\mathbf{y}.\]
Note
In principle, this method should only be called after normalisationlayer::apply_forward.
Since there are no trainable parameters, the parameter perturbation dp should be an empty array.
The intent of dy is declared inout instead of in to remain consistent with other subclasses of fnn_layer::layer.
Parameters
| [in,out] | self | The layer. |
| [in] | member | The index inside the batch. |
| [in,out] | dy | The output perturbation. |
| [out] | dp | The parameter perturbation. |
| [out] | dx | The state perturbation. |
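The correctness of an adjoint against its tangent linear can be checked numerically with the standard dot-product test, ⟨L dx, dy⟩ = ⟨dx, Lᵀ dy⟩. A minimal Python sketch of this check (the module itself is Fortran; the coefficient value and vector contents are illustrative assumptions, and α is assumed to act elementwise):

```python
# Dot-product test for the normalisation layer's adjoint.
# TL operator: dy = alpha * dx; adjoint operator: dx = alpha * dy.
alpha = 2.5                       # illustrative (non-trainable) coefficient

def tl(dx):
    """Tangent linear operator: dy = alpha * dx."""
    return [alpha * v for v in dx]

def adjoint(dy):
    """Adjoint operator: dx = alpha * dy."""
    return [alpha * v for v in dy]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

dx = [1.0, -2.0, 0.5]
dy = [0.3, 0.7, -1.1]

# <L dx, dy> must equal <dx, L^T dy> for the adjoint to be correct.
lhs = dot(tl(dx), dy)
rhs = dot(dx, adjoint(dy))
assert abs(lhs - rhs) < 1e-12
```

Because the operator is a scalar multiple of the identity, it is self-adjoint, which is why both operators reduce to the same elementwise multiplication by α.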
subroutine norm_apply_forward (self, train, member, x, y) [private]
Implements normalisationlayer::apply_forward.
Applies and linearises the layer.
The forward function reads
\[\mathbf{y} = \alpha \mathbf{x} + \beta,\]
where α and β are the (non-trainable) normalisation coefficients of the layer.
Note
Input parameter member should be less than layer::batch_size.
The linearisation is trivial and does not require any operation. The intent of self is declared inout instead of in to remain consistent with other subclasses of fnn_layer::layer.
Parameters
| [in,out] | self | The layer. |
| [in] | train | Whether the model is used in training mode. |
| [in] | member | The index inside the batch. |
| [in] | x | The input of the layer. |
| [out] | y | The output of the layer. |
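Concretely, the forward pass applies the same affine map to the selected batch member. A hedged Python sketch (the real subroutine is Fortran; the coefficient values, array shapes, and the 0-based member index used here are illustrative assumptions):

```python
# Forward pass of a normalisation layer: y = alpha * x + beta,
# applied to one member of the batch.
alpha, beta = 0.5, 1.0            # illustrative (non-trainable) coefficients

def norm_apply_forward(batch, member):
    """Return alpha * x + beta for batch member `member` (0-based here;
    the Fortran convention would be 1-based)."""
    x = batch[member]
    return [alpha * v + beta for v in x]

batch = [[2.0, 4.0], [6.0, 8.0]]  # batch_size = 2
y = norm_apply_forward(batch, 1)
assert y == [4.0, 5.0]            # 0.5*6 + 1 and 0.5*8 + 1
```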
subroutine norm_apply_tangent_linear (self, member, dp, dx, dy) [private]
Implements normalisationlayer::apply_tangent_linear.
Applies the tangent linear (TL) operator of the layer.
The TL operator reads
\[d\mathbf{y} = \alpha d\mathbf{x}.\]
Note
In principle, this method should only be called after normalisationlayer::apply_forward.
Since there are no trainable parameters, the parameter perturbation dp should be an empty array.
Parameters
| [in] | self | The layer. |
| [in] | member | The index inside the batch. |
| [in] | dp | The parameter perturbation. |
| [in] | dx | The state perturbation. |
| [out] | dy | The output perturbation. |
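Because the forward map is affine, its tangent linear is exact: f(x + dx) − f(x) = α dx, with no truncation error from the linearisation. A small Python check of this property (illustrative coefficient values; the module itself is Fortran):

```python
# Verify that the TL of an affine map is exact: f(x + dx) - f(x) = alpha * dx.
alpha, beta = 1.5, 0.25           # illustrative (non-trainable) coefficients

def forward(x):
    """Forward map: y = alpha * x + beta, elementwise."""
    return [alpha * v + beta for v in x]

def tangent_linear(dx):
    """TL operator: dy = alpha * dx (dp is empty: no trainable parameters)."""
    return [alpha * v for v in dx]

x  = [1.0, 2.0, 3.0]
dx = [0.1, -0.2, 0.3]

perturbed = forward([v + d for v, d in zip(x, dx)])
diff = [a - b for a, b in zip(perturbed, forward(x))]
tl   = tangent_linear(dx)

# Exact up to floating-point round-off, since the map is affine.
assert all(abs(a - b) < 1e-12 for a, b in zip(diff, tl))
```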
type(normalisationlayer) function, public fnn_layer_normalisation::norm_layer_fromfile (integer(ik), intent(in) batch_size, integer(ik), intent(in) unit_num)
Constructor for class normalisationlayer from a file.
Parameters
| [in] | batch_size | The value for layer::batch_size. |
| [in] | unit_num | The unit number for the read statements. |
subroutine norm_tofile (self, unit_num) [private]
Implements normalisationlayer::tofile.
Saves the layer.
Parameters
| [in] | self | The layer. |
| [in] | unit_num | The unit number for the write statement. |