Hi, everyone. I am new to PyTorch, coming from Keras. I find PyTorch has a rich set of existing modules and functions that I would like to use in my work. It is great.
I have a problem understanding the code in examples/mnist/main.py, where the CNN for MNIST is defined, as shown below:
```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
```
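As a side note, I traced the intermediate shapes to see where the 320 in x.view(-1, 320) comes from (a quick check I ran myself, assuming the standard 1x28x28 MNIST input):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)            # MNIST input: 1 channel, 28x28
conv1 = nn.Conv2d(1, 10, kernel_size=5)
conv2 = nn.Conv2d(10, 20, kernel_size=5)

x = F.max_pool2d(conv1(x), 2)            # conv: 28 -> 24, pool: 24 -> 12
x = F.max_pool2d(conv2(x), 2)            # conv: 12 -> 8,  pool: 8 -> 4
print(x.shape)                           # torch.Size([1, 20, 4, 4]), and 20 * 4 * 4 = 320
```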
I see that dropout is used in both __init__ and forward; in __init__ it comes from the torch.nn module (nn.Dropout2d), while in forward it comes from torch.nn.functional (F.dropout).
Is there any reason for this configuration? Why not use the same dropout in both places?
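To make my question concrete, here is a small comparison I tried: the module form tracks train/eval mode by itself, while the functional form seems to need an explicit training flag from the caller (which is why the example passes training=self.training):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.ones(1, 4)

# Module form: registered on the model, so model.eval() switches it off for us.
drop = nn.Dropout(p=0.5)
drop.eval()                                  # in eval mode dropout is a no-op
print(drop(x))                               # identical to x

# Functional form: stateless, so the caller must pass the mode flag explicitly.
print(F.dropout(x, p=0.5, training=False))   # also identical to x
```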
Also, can we use nn.ReLU instead of F.relu in the forward method?
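From a quick test of my own, the module and functional forms of ReLU appear to give identical outputs (note the capitalization, nn.ReLU rather than nn.relu):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 3)
relu = nn.ReLU()                         # module wrapper around the same operation
assert torch.equal(relu(x), F.relu(x))   # both zero out the negative entries
```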
I hope someone can help me distinguish between the matching pairs, like convolution, dropout, and activation, in nn and nn.functional. When should I prefer the nn module, and when does nn.functional work better?
By the way, the parameters received by nn.Conv2d and F.conv2d are also different, and the class nn.Conv2d actually uses the function F.conv2d in its forward method.
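Here is a sketch of the signature difference I mean, assuming we reuse the module's own parameters when calling the functional form:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)

# nn.Conv2d takes layer hyperparameters and creates the weight/bias itself.
conv = nn.Conv2d(1, 10, kernel_size=5)

# F.conv2d takes the weight and bias tensors explicitly as arguments.
out_functional = F.conv2d(x, conv.weight, conv.bias)

# The module's forward pass matches, since nn.Conv2d calls F.conv2d
# internally with its stored parameters.
print(torch.allclose(conv(x), out_functional))
```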