import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)
If I use nn.ReLU, I can find the ReLU layer through named_modules(). But if I use nn.functional.relu, how can I tell that it is used? The same question applies to other functional layers (dropout/sigmoid/…).
As you may know, PyTorch builds the graph dynamically as data flows through the network, so you can observe its behavior by applying an input and tracing the graph construction. Here is the code:
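As one concrete way to do this (an alternative sketch, not necessarily the exact approach the answer goes on to show), torch.fx.symbolic_trace records every operation in forward(), including functional calls such as F.relu that never appear in named_modules():

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.fx import symbolic_trace

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)

# symbolic_trace builds a graph of the forward pass; functional
# layers show up as "call_function" nodes, while nn.Module layers
# (fc1/fc2/fc3) show up as "call_module" nodes.
traced = symbolic_trace(Net())
for node in traced.graph.nodes:
    if node.op == "call_function":
        print(node.target.__name__)  # relu, relu, log_softmax
```

Note that symbolic tracing does not require a real input tensor, but it cannot follow data-dependent control flow; for such models, running an input and walking the output's grad_fn chain (as the answer describes) is the dynamic alternative.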