Activation Functions in PyTorch Examples

Hello, I’m very new to machine learning and PyTorch. I’m looking at the Learning PyTorch with Examples page.

https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules

I’m confused about the use of activation functions in the example code below. It doesn’t seem like any activation function, such as a ReLU, is being used here, unlike in the previous examples. Am I completely missing something? If I want to apply this code to my project, do I need to introduce a ReLU at some point? I see that an h_relu variable is created in the forward function, but that doesn’t seem to be the same thing.

Any explanations would be greatly appreciated.

import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred

Hi!
The call clamp(min=0) is functionally equivalent to ReLU. All ReLU does is set every negative value to zero and leave the positive values unchanged, which is exactly what clamp(min=0) does in that example.

Here’s the documentation for torch.clamp.
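
You can check the equivalence yourself with a quick sketch (nothing specific to the tutorial, just a random tensor):

import torch

x = torch.randn(5)                    # mix of positive and negative values
print(x.clamp(min=0))                 # negatives set to 0, positives unchanged
print(torch.relu(x))                  # same result
print(torch.equal(x.clamp(min=0), torch.relu(x)))  # True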

So in this case the model’s “activation function” is ReLU?

Thank you for the response!

Yup, in this case it’s exactly the ReLU activation.
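
If you want to make it explicit in your own code, you could swap clamp for torch.nn.functional.relu; here’s a sketch of the same model, and the behavior is identical:

import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        # relu(...) does the same thing as .clamp(min=0)
        h_relu = torch.nn.functional.relu(self.linear1(x))
        y_pred = self.linear2(h_relu)
        return y_pred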