I’m just a little confused about the different ReLU variants: F.relu vs. nn.ReLU vs. nn.ReLU(inplace=True). I’ve seen all of them used in different PyTorch examples. I guess since there are no learnable parameters in ReLU, you can just use it as a plain function with F.relu in the forward method.
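To make that concrete, here's a minimal sketch of what I mean (the layer sizes are just placeholders I made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FunctionalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)  # made-up sizes, just for illustration

    def forward(self, x):
        # F.relu is just a function call; nothing to declare in __init__
        return F.relu(self.fc1(x))

out = FunctionalNet()(torch.randn(4, 10))
```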
However, if I decide to use nn.ReLU, do I need to create a new attribute for every ReLU step, or just one shared by all of them? I ask because I’ve seen examples that declare multiple nn.ReLU modules.
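In other words, is a single reused instance like this fine, given that ReLU holds no state? (Again, sizes are just placeholders.)

```python
import torch
import torch.nn as nn

class SharedReLUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.fc2 = nn.Linear(10, 10)
        self.relu = nn.ReLU()  # one instance for the whole network?

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))  # same module reused here
        return x

out = SharedReLUNet()(torch.randn(4, 10))
```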
Another question I have is: if I use nn.ReLU(inplace=True), do I still need to do x = self.relu(x), or is self.relu(x) alone enough? I figured that since the operation is done in place, you could just call self.relu(x), but all of the examples I’ve seen use x = self.relu(x). Is there a particular reason for this?
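Here's a small snippet showing the two call styles I'm asking about (assuming I understand inplace correctly, the first line should already mutate x):

```python
import torch
import torch.nn as nn

relu = nn.ReLU(inplace=True)
x = torch.randn(4, 10)

relu(x)      # mutates x in place, so x is already rectified after this...
x = relu(x)  # ...yet every example I've seen still assigns the result
```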