FNN 1.0.0
Toolbox to use NNs in Fortran.

Module dedicated to the class layer.
Data Types

| type | layer | Base class for all layers. Do not instantiate. |

Functions/Subroutines

| integer(ik) function | layer_get_num_parameters (self) | Implements layer::get_num_parameters. |
| subroutine | layer_set_parameters (self, new_parameters) | Implements layer::set_parameters. |
| subroutine | layer_get_parameters (self, parameters) | Implements layer::get_parameters. |
| subroutine | layer_tofile (self, unit_num) | Implements layer::tofile. |
| subroutine | layer_apply_forward (self, train, member, x, y) | Implements layer::apply_forward. |
| subroutine | layer_apply_tangent_linear (self, member, dp, dx, dy) | Implements layer::apply_tangent_linear. |
| subroutine | layer_apply_adjoint (self, member, dy, dp, dx) | Implements layer::apply_adjoint. |
Module dedicated to the class layer.
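The layer type is meant to be extended rather than used directly. As an illustration only, here is a minimal sketch (not part of the library) of a trivial subclass that overrides apply_forward; the module name fnn_layer, the argument kinds, and the dummy-array shapes are assumptions and must be adapted to the actual interfaces.

```fortran
module identity_layer_mod
    use fnn_layer, only: layer   ! assumed module/type names
    implicit none

    ! A trivial layer that copies its input; only apply_forward is overridden.
    type, extends(layer) :: identity_layer
    contains
        procedure :: apply_forward => identity_apply_forward
    end type identity_layer

contains

    subroutine identity_apply_forward(self, train, member, x, y)
        class(identity_layer), intent(inout) :: self
        logical, intent(in) :: train
        integer, intent(in) :: member
        real(8), intent(in)  :: x(:)
        real(8), intent(out) :: y(:)
        y = x   ! identity: the output equals the input
    end subroutine identity_apply_forward

end module identity_layer_mod
```

A real subclass would typically also override apply_tangent_linear, apply_adjoint, tofile, and the parameter accessors documented below.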
subroutine layer_apply_adjoint (self, member, dy, dp, dx)   [private]
Implements layer::apply_adjoint.
Applies the adjoint of the layer.
The adjoint operator reads
\[ d\mathbf{p} = [\mathbf{F}^\mathrm{p}]^\top d\mathbf{y}, \]
\[ d\mathbf{x} = [\mathbf{F}^\mathrm{x}]^\top d\mathbf{y}, \]

where \( \mathbf{F}^\mathrm{p} \) and \( \mathbf{F}^\mathrm{x} \) are the tangent linear operators of the layer with respect to the parameters \( \mathbf{p} \) and the input \( \mathbf{x} \), \( d\mathbf{y} \) is the output perturbation, \( d\mathbf{p} \) the resulting parameter perturbation, and \( d\mathbf{x} \) the resulting state perturbation.
Note
This should be overridden by each subclass.
The intent of dy is declared inout instead of in because, for certain subclasses (e.g. fnn_layer_dense::denselayer), the value of dy is overwritten during the computation (an undesirable side effect).
| [in] | self | The layer. |
| [in] | member | The index inside the batch. |
| [in,out] | dy | The output perturbation. |
| [out] | dp | The parameter perturbation. |
| [out] | dx | The state perturbation. |
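Since the adjoint is the transpose of the tangent linear, the two bindings can be cross-checked with the classical dot-product test. The sketch below is not part of the library; the module name fnn_layer, the real kind, and the assumption that apply_forward has already been called in training mode for this member are all hypothetical.

```fortran
subroutine check_adjoint(self, member, dp, dx, dy)
    ! Dot-product test: < F^p dp + F^x dx, dy > should equal
    ! < dp, (F^p)^T dy > + < dx, (F^x)^T dy > up to round-off.
    ! apply_forward is assumed to have been called beforehand in training
    ! mode, so that the linearisation for this member is available.
    use fnn_layer, only: layer   ! assumed module/type names
    implicit none
    class(layer), intent(inout) :: self
    integer, intent(in) :: member
    real(8), intent(in) :: dp(:), dx(:), dy(:)
    real(8), allocatable :: tl_dy(:), ad_dp(:), ad_dx(:), dy_copy(:)
    allocate(tl_dy(size(dy)), ad_dp(size(dp)), ad_dx(size(dx)))
    ! Tangent linear: tl_dy = F^p dp + F^x dx.
    call self%apply_tangent_linear(member, dp, dx, tl_dy)
    ! Adjoint: ad_dp = (F^p)^T dy and ad_dx = (F^x)^T dy.
    ! dy is intent(inout) and may be overwritten, so pass a copy.
    dy_copy = dy
    call self%apply_adjoint(member, dy_copy, ad_dp, ad_dx)
    ! Both sides should agree up to round-off error.
    print *, 'TL side:', dot_product(tl_dy, dy)
    print *, 'AD side:', dot_product(dp, ad_dp) + dot_product(dx, ad_dx)
end subroutine check_adjoint
```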
subroutine layer_apply_forward (self, train, member, x, y)   [private]
Implements layer::apply_forward.
Applies and linearises the layer.
The forward function reads
\[ \mathbf{y} = \mathcal{F}(\mathbf{p}, \mathbf{x}), \]

where \( \mathcal{F} \) is the function implemented by the layer, \( \mathbf{p} \) is the vector of parameters, \( \mathbf{x} \) the input of the layer, and \( \mathbf{y} \) its output.
Note
This should be overridden by each subclass.
The intent for self is declared inout instead of in because, for certain subclasses (e.g. fnn_layer_dense::denselayer), the linearisation is stored inside the layer.
| [in,out] | self | The layer. |
| [in] | train | Whether the model is used in training mode. |
| [in] | member | The index inside the batch. |
| [in] | x | The input of the layer. |
| [out] | y | The output of the layer. |
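As an illustration of the calling convention, here is a hedged sketch (not part of the library) of a forward pass over a whole batch; the module name fnn_layer, the real kind, and the array layout (one column per member) are assumptions.

```fortran
subroutine forward_batch(self, x, y)
    ! Applies the layer to every member of a batch stored column-wise.
    use fnn_layer, only: layer   ! assumed module/type names
    implicit none
    class(layer), intent(inout) :: self
    real(8), intent(in)  :: x(:, :)   ! (input size, batch size)
    real(8), intent(out) :: y(:, :)   ! (output size, batch size)
    integer :: member
    do member = 1, size(x, 2)
        ! train = .true. so that the linearisation is stored and can be
        ! reused by apply_tangent_linear / apply_adjoint afterwards.
        call self%apply_forward(.true., member, x(:, member), y(:, member))
    end do
end subroutine forward_batch
```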
subroutine layer_apply_tangent_linear (self, member, dp, dx, dy)   [private]
Implements layer::apply_tangent_linear.
Applies the tangent linear (TL) of the layer.
The TL operator reads
\[ d\mathbf{y} = \mathbf{F}^\mathrm{p} d\mathbf{p} + \mathbf{F}^\mathrm{x} d\mathbf{x}, \]

where \( \mathbf{F}^\mathrm{p} \) and \( \mathbf{F}^\mathrm{x} \) are the tangent linear operators of the layer with respect to the parameters \( \mathbf{p} \) and the input \( \mathbf{x} \), \( d\mathbf{p} \) is the parameter perturbation, \( d\mathbf{x} \) the state perturbation, and \( d\mathbf{y} \) the resulting output perturbation.
Note
This should be overridden by each subclass.
| [in] | self | The layer. |
| [in] | member | The index inside the batch. |
| [in] | dp | The parameter perturbation. |
| [in] | dx | The state perturbation. |
| [out] | dy | The output perturbation. |
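A common way to validate a tangent linear is a first-order Taylor test against the nonlinear forward pass. The sketch below is only illustrative and not part of the library; the module name fnn_layer, the real kind, the output-size argument ny, and the use of the train flag are assumptions.

```fortran
subroutine check_tangent_linear(self, member, x, dx, eps, ny)
    ! First-order Taylor test: (F(p, x + eps*dx) - F(p, x)) / eps
    ! should approach F^x dx as eps goes to zero (with dp = 0).
    use fnn_layer, only: layer   ! assumed module/type names
    implicit none
    class(layer), intent(inout) :: self
    integer, intent(in) :: member
    integer, intent(in) :: ny          ! size of the layer output (assumed known)
    real(8), intent(in) :: x(:), dx(:), eps
    real(8), allocatable :: y0(:), y1(:), dy(:), dp(:)
    allocate(y0(ny), y1(ny), dy(ny), dp(self%get_num_parameters()))
    dp = 0.0d0                         ! no parameter perturbation
    ! Reference forward pass; training mode stores the linearisation.
    call self%apply_forward(.true., member, x, y0)
    ! Tangent linear around (p, x).
    call self%apply_tangent_linear(member, dp, dx, dy)
    ! Perturbed forward pass (its linearisation is not needed).
    call self%apply_forward(.false., member, x + eps*dx, y1)
    ! The ratio should tend to 1 as eps tends to 0.
    print *, 'Taylor ratio:', norm2(y1 - y0) / (eps * norm2(dy))
end subroutine check_tangent_linear
```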
integer(ik) function layer_get_num_parameters (self)

Implements layer::get_num_parameters.
Returns the number of parameters.
| [in] | self | The layer. |
subroutine layer_get_parameters (self, parameters)   [private]
Implements layer::get_parameters.
Getter for layer::parameters.
| [in] | self | The layer. |
| [out] | parameters | The vector of parameters. |
subroutine layer_set_parameters (self, new_parameters)   [private]
Implements layer::set_parameters.
Setter for layer::parameters.
| [in,out] | self | The layer. |
| [in] | new_parameters | The new values for the parameters. |
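The three parameter accessors are typically used together: get_num_parameters to size a buffer, get_parameters to read the current values, and set_parameters to write new ones. The sketch below (not part of the library) illustrates this round trip; the module name fnn_layer and the real kind are assumptions.

```fortran
subroutine perturb_parameters(self, sigma)
    ! Reads the current parameters, adds uniform noise of amplitude sigma,
    ! and writes the result back into the layer.
    use fnn_layer, only: layer   ! assumed module/type names
    implicit none
    class(layer), intent(inout) :: self
    real(8), intent(in) :: sigma
    real(8), allocatable :: parameters(:), noise(:)
    integer :: n
    n = self%get_num_parameters()          ! how many values to allocate
    allocate(parameters(n), noise(n))
    call self%get_parameters(parameters)   ! current values
    call random_number(noise)              ! uniform noise in [0, 1)
    call self%set_parameters(parameters + sigma * (noise - 0.5d0))
end subroutine perturb_parameters
```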
subroutine layer_tofile (self, unit_num)   [private]
Implements layer::tofile.
Saves the layer.
Note
This should be overridden by each subclass.
| [in] | self | The layer. |
| [in] | unit_num | The unit number for the write statement. |
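For illustration, a hedged sketch (not part of the library) of how a layer might be written to a text file through tofile; the module name fnn_layer, the wrapper name, and the file handling are assumptions.

```fortran
subroutine save_layer(self, filename)
    ! Opens a text file and delegates the actual writing to tofile.
    use fnn_layer, only: layer   ! assumed module/type names
    implicit none
    class(layer), intent(in) :: self
    character(*), intent(in) :: filename
    integer :: unit_num
    open(newunit=unit_num, file=filename, action='write', status='replace')
    call self%tofile(unit_num)
    close(unit_num)
end subroutine save_layer
```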