FNN 1.0.0
Toolbox for using neural networks in Fortran.
Module dedicated to the class reluactivation.

Data Types
- type reluactivation: Implements a relu activation function.

Functions/Subroutines
- type(reluactivation) function, public construct_relu_activation(self_size, batch_size): Constructor for class reluactivation.
- subroutine relu_tofile(self, unit_num): Implements reluactivation::tofile.
- subroutine relu_apply_forward(self, member, z, y): Implements reluactivation::apply_forward.
type(reluactivation) function, public fnn_activation_relu::construct_relu_activation(self_size, batch_size)
  integer(ik), intent(in) :: self_size
  integer(ik), intent(in) :: batch_size
Constructor for class reluactivation.
- [in] self_size: The value for linearactivation::self_size.
- [in] batch_size: The value for linearactivation::batch_size.
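As a rough illustration of what the constructor sets up, here is a hypothetical Python mirror of the Fortran type (the names self_size and batch_size come from the documentation above; the presence and shape of a z_prime buffer, one linearisation row per batch member, are assumptions based on the apply_forward description below):

```python
import numpy as np

class ReluActivation:
    """Hypothetical Python mirror of reluactivation (illustration only)."""

    def __init__(self, self_size, batch_size):
        self.self_size = self_size    # number of neurons in the layer
        self.batch_size = batch_size  # number of members in the batch
        # Buffer for the stored linearisation: one diagonal per batch member.
        # The shape is an assumption, not taken from the Fortran source.
        self.z_prime = np.zeros((batch_size, self_size))

act = ReluActivation(self_size=4, batch_size=2)
print(act.z_prime.shape)  # (2, 4)
```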
private
Implements reluactivation::apply_forward.
Applies and linearises the activation function.
The activation function reads
\[ \mathbf{y} = \mathcal{A}(\mathbf{z}) = \mathrm{relu}(\mathbf{z}), \]
and the associated linearisation reads
\[ \mathbf{A}(\mathbf{z}) = \mathrm{diag}\bigl(\mathrm{relu}'(\mathbf{z})\bigr), \qquad \mathrm{relu}'(z_i) = \begin{cases} 1, & z_i > 0, \\ 0, & \text{otherwise}, \end{cases} \]
i.e. the Jacobian of the relu is a diagonal matrix of zeros and ones.
Note
Input parameter member should be less than linearactivation::batch_size.
The linearisation is stored in nonlinearactivation::z_prime, which is why the intent of self is declared inout.
- [in,out] self: The activation function.
- [in] member: The index inside the batch.
- [in] z: The input of the activation function.
- [out] y: The output of the activation function.
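A minimal numerical sketch of the forward pass and its stored linearisation, in Python rather than Fortran (the function name mirrors relu_apply_forward; representing the stored linearisation as a row of a z_prime array is an assumption):

```python
import numpy as np

def relu_apply_forward(z_prime, member, z):
    """Apply y = relu(z) and store the diagonal of the Jacobian.

    z_prime : array of shape (batch_size, self_size), linearisation storage
    member  : index of the batch member being processed
    z       : input vector of the activation function
    """
    z = np.asarray(z, dtype=float)
    y = np.maximum(z, 0.0)                          # y = relu(z)
    z_prime[member] = (z > 0.0).astype(float)       # relu'(z): 1 where z > 0
    return y

z_prime = np.zeros((2, 4))  # (batch_size, self_size)
y = relu_apply_forward(z_prime, 0, [-1.0, 0.0, 2.0, 3.5])
print(y)           # [0.  0.  2.  3.5]
print(z_prime[0])  # [0. 0. 1. 1.]
```

Storing the derivative alongside the forward pass is what lets a later backward pass reuse the diagonal Jacobian without recomputing it, which matches why self carries intent(inout) above.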
private
Implements reluactivation::tofile.
Saves the activation function.
- [in] self: The activation function to save.
- [in] unit_num: The unit number for the write statement.