Some questions about nn.ReLU()

Dear all! I am new to PyTorch 0.4. Recently, I saw a snippet of code:
import torch
import torch.nn as nn

at_features = torch.randn(128, 196, 512)
at_hidden = torch.randn(128, 1, 512)
at_bias = nn.Parameter(torch.zeros(196))

at_full = nn.ReLU()(at_features + at_hidden + at_bias.view(1, -1, 1))
I was confused by the last line of this code. What is the meaning of nn.ReLU()(at_features + at_hidden + at_bias.view(1, -1, 1))? I think nn.ReLU is a class, and the threshold should be a parameter of the class, so why is a second argument list following nn.ReLU()? Also, when I construct a neural network, how should I choose between nn.ReLU() and nn.functional.relu?

The last line of code creates an instance of nn.ReLU and applies it immediately to the sum of the tensors. The ReLU instance won't be saved, just the result of its computation.
You could have used F.relu() instead.
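
For illustration, here is a minimal sketch of both spellings side by side, reusing the tensor names from the snippet above:

import torch
import torch.nn as nn
import torch.nn.functional as F

at_features = torch.randn(128, 196, 512)
at_hidden = torch.randn(128, 1, 512)
at_bias = nn.Parameter(torch.zeros(196))

# module style: a throwaway nn.ReLU instance, called immediately on the broadcast sum
at_full = nn.ReLU()(at_features + at_hidden + at_bias.view(1, -1, 1))

# functional style: the same computation without creating a module instance
at_full_f = F.relu(at_features + at_hidden + at_bias.view(1, -1, 1))

# both produce identical results
print(torch.allclose(at_full, at_full_f))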

If you want to reuse the class instance, you can create it in the __init__ of your model and use it in the forward:

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.act = nn.ReLU()
        ...

    def forward(self, x):
        x = self.act(x)
        return x

Alternatively, you could skip creating the instance in __init__ and just use the functional API: x = F.relu(x).
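
As a quick sketch of that alternative (the class name here is just illustrative):

import torch.nn as nn
import torch.nn.functional as F

class MyFunctionalModel(nn.Module):
    def forward(self, x):
        # no activation module needed; call the functional API directly
        return F.relu(x)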

Since this class is stateless, neither approach has an advantage over the other, in my opinion.

Thank you for your prompt reply! Now I understand this problem. :grinning: