Different types of ReLU functions

I’m just a little confused about the different ReLU functions: F.relu vs. nn.ReLU vs. nn.ReLU(inplace=True). I’ve seen all of them used in different PyTorch examples. I guess since there are no learnable parameters in ReLU, you can just use it as a function with F.relu in the forward method.

However, if I decided to use nn.ReLU, do I need to create a new variable for every ReLU step? Or just one for all of them? Because I’ve seen some examples with multiple nn.ReLU modules declared.

Another question I have is: if I use nn.ReLU(inplace=True), do I still need to do x = self.relu(x)? Or just self.relu(x)? I figured since the operation is done in place, you can just call self.relu(x), but all of the examples I’ve seen use x = self.relu(x). Any particular reason for this?
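To make the question concrete, here is a small sketch comparing the functional and module forms (the tensor values are just an illustration, not from the thread):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.0])

# Functional form: called directly inside forward(), no module object needed.
out_functional = F.relu(x)

# Module form: instantiate once, then call it like a function.
relu = nn.ReLU()
out_module = relu(x)

print(torch.equal(out_functional, out_module))  # True
```

Both produce the same result; the question is only about which style to prefer.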



Indeed, ReLU has no parameters, and the effect is the same whether you use the nn or the functional version. However, if you use the functional version it won’t appear in the model description, whereas the nn version will.
If you use the inplace version you don’t need to assign the result; doing so anyway is just a matter of readability.
As always, it is recommended to use the nn version.
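A minimal sketch of the inplace behavior described above (the tensor values are made up for illustration): with inplace=True, the input tensor itself is overwritten, so the return value doesn’t need to be captured.

```python
import torch
import torch.nn as nn

relu = nn.ReLU(inplace=True)

x = torch.tensor([-1.0, 2.0])
relu(x)  # no assignment needed: x itself is modified in place

print(x)  # tensor([0., 2.])
```

Writing x = self.relu(x) anyway keeps the code uniform with the non-inplace modules around it, which is the readability point made above.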


If I use nn, do I need a separate declaration for each ReLU?

Not really. If you check (for example) torchvision’s resnet, it declares relu once but uses it twice 🙂
If you declare it once, it will appear once in the model description, and so on.
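The pattern described above can be sketched like this (the module and layer sizes here are hypothetical, not taken from torchvision): one nn.ReLU is declared, reused for both layers, and shows up once in the printed model.

```python
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)
        self.relu = nn.ReLU()  # declared once...

    def forward(self, x):
        x = self.relu(self.fc1(x))  # ...used here
        x = self.relu(self.fc2(x))  # ...and reused here
        return x

model = TwoLayerNet()
print(model)  # the single ReLU appears once in the description
out = model(torch.randn(3, 4))
```

Since ReLU is stateless, reusing one instance is safe; declaring several is harmless but redundant.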

Ok, thanks! I got a little confused because I saw an example on GitHub where multiple ReLUs are declared. I guess that’s just poor coding practice, like assigning the output of an inplace ReLU operator.