How to do a ReLU1 operation?

I want to use the ReLU1 non-linear activation. ReLU1 is linear on [0, 1], clamps values less than 0 to 0, and clamps values greater than 1 to 1.

So which of the following is the better way to do this, so that it causes no problems in backpropagation?

x.clamp_(min=0.0, max=1.0) or torch.nn.functional.hardtanh_(x, min_val=0.0, max_val=1.0)
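For reference, here is a minimal sketch (the tensor shapes are made up) showing that the out-of-place forms of both options produce the same forward result; clamp_ and hardtanh_ are simply the in-place variants of the same operations:

import torch
import torch.nn.functional as F

x = torch.randn(4, requires_grad=True)
y = 2 * x  # stand-in for the output of the earlier layers

# Option 1: clamp to [0, 1] (out-of-place; clamp_ is the in-place variant)
out_clamp = y.clamp(min=0.0, max=1.0)

# Option 2: hardtanh with [0, 1] bounds (F.hardtanh_ is the in-place variant)
out_hardtanh = F.hardtanh(y, min_val=0.0, max_val=1.0)

# Both give the same forward values, with gradient 1 in the linear
# region and 0 in the clamped regions.
print(torch.allclose(out_clamp, out_hardtanh))  # True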

This will only be used for the final layer. For the other layers I will just use LeakyReLU.
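For context, a minimal sketch of that setup (the layer sizes are made up), with LeakyReLU in the hidden layers and nn.Hardtanh clamped to [0, 1] acting as ReLU1 on the output:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.LeakyReLU(0.01),
    nn.Linear(32, 1),
    nn.Hardtanh(min_val=0.0, max_val=1.0),  # ReLU1 on the final layer
)

x = torch.randn(8, 16)
loss = model(x).sum()
loss.backward()  # gradients flow through the clamped output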

Thank you

Hi,

Both should work fine.
If you see errors about in-place modification of values needed for gradient computation, you can swap them for the out-of-place versions.
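For example (a minimal sketch; the sigmoid is just a stand-in for an earlier operation whose backward needs its unmodified output):

import torch
import torch.nn.functional as F

x = torch.randn(4, requires_grad=True)
y = torch.sigmoid(x)  # sigmoid's backward needs its (unmodified) output

# In-place: hardtanh_ would overwrite y, so the later backward() raises
# "one of the variables needed for gradient computation has been modified
#  by an inplace operation".
# F.hardtanh_(y, min_val=0.0, max_val=1.0)

# Out-of-place: y is left intact, so backward works.
z = F.hardtanh(y, min_val=0.0, max_val=1.0)
z.sum().backward()
print(x.grad)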

Oh, so does this mean that in-place operations may cause trouble in back-propagation? I ask because almost all ResNet implementations, including the official PyTorch release, set the inplace flag to True, presumably to save memory.
Thank you

Hi,

You can try the in-place version; if you don’t get errors, then it’s fine.
If you do get an error, then you’ll have to remove the in-place operation.