Hi, I'm new to PyTorch and there is something that keeps bugging me.
I have seen this code for classifying MNIST:
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self, input_features, h1, h2, output_features):
        super().__init__()
        # three fully connected layers: input -> h1 -> h2 -> output
        self.linear = nn.Linear(input_features, h1)
        self.linear1 = nn.Linear(h1, h2)
        self.linear2 = nn.Linear(h2, output_features)

    def forward(self, x):
        x = F.relu(self.linear(x))
        x = F.relu(self.linear1(x))
        x = self.linear2(x)  # raw scores (logits), no activation here
        return x
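For context, this is roughly how I would train it. The loss, optimizer, and dummy batch below are my own guesses, not part of the code I was given:

import torch

model = Classifier(input_features=784, h1=128, h2=64, output_features=10)
criterion = nn.CrossEntropyLoss()   # applied directly to the raw outputs of forward()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# dummy batch standing in for a real MNIST DataLoader
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()
logits = model(images.view(32, -1))  # flatten 28x28 images into 784 features
loss = criterion(logits, labels)     # note: no softmax applied before the loss
loss.backward()
optimizer.step()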
Why don't we use a softmax activation function on the last layer?
Thank you!