Hello, I’m very new to machine learning and PyTorch. I’m looking at the Learning PyTorch with Examples page.
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules
I’m confused about the use of activation functions in the example code below. It doesn’t seem like any activation function is being used here, like a ReLU, as in the previous examples. Am I completely missing something? If I want to apply this code to my project, do I need to introduce a ReLU at some point? I see that an h_relu variable is created in the forward function, but this doesn’t seem like the same thing.
Any explanations would be greatly appreciated.
class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred
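To frame my question a bit more: my current guess is that `.clamp(min=0)` computes `max(x, 0)` elementwise, which would make it behave like a ReLU. Here’s a small sanity check I sketched to compare the two (this is my own experiment, not part of the tutorial):

```python
import torch
import torch.nn.functional as F

# Random input tensor with both positive and negative values
x = torch.randn(4, 3)

# What the tutorial's forward() does to the hidden layer output
clamped = x.clamp(min=0)

# An explicit ReLU, for comparison
relu_out = F.relu(x)

# If my guess is right, these should match exactly
print(torch.equal(clamped, relu_out))
```

If the two outputs match, then `h_relu = self.linear1(x).clamp(min=0)` would already be applying a ReLU nonlinearity between the two linear layers, just written with `clamp` instead of `nn.ReLU`. Is that the right way to read it?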