How to compare all entries of a tensor with a scalar?

Hello, basically I’m trying to implement ReLU from scratch without using numpy. In numpy the simple implementation is np.maximum(input, x), and the torch equivalent of maximum is max. But calling torch.max(input, 0) computes the maximum value along the 0th dimension, which isn’t what I want.

One way to do this is to create an all-zeros tensor and take the elementwise torch.max against it, but that seems inefficient. Is there anything else I can use?
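For reference, here is a rough sketch of what I mean (input is just a random example tensor):

import torch

input = torch.randn(3, 4)               # example tensor with positive and negative entries

values, indices = torch.max(input, 0)   # reduces along dim 0 -- not what I want

relu_out = torch.max(input, torch.zeros_like(input))  # the all-zeros workaround I described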

Hi @numbpy,
Maybe something like this would work for you:

torch.where((input > x), input, torch.zeros_like(input))

You can find more about torch.where here.
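For ReLU specifically, x would just be 0, so something like this (a quick sketch):

import torch

input = torch.randn(3, 4)   # example input
x = 0                       # the scalar threshold; 0 for ReLU

output = torch.where(input > x, input, torch.zeros_like(input))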

That’s might be a better solution. I used input[input < 0] = 0 (don’t know how to use inline code) for ReLU

And input[input < 0] = 0 followed by input[input > 0] = 1 for ReLU prime. I’ll check which one’s faster.
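Roughly like this (just a sketch; I clone here so the two masks don’t interfere, but in my code it’s done in place):

import torch

input = torch.randn(3, 4)

relu = input.clone()
relu[relu < 0] = 0          # ReLU: zero out the negative entries

prime = input.clone()
prime[prime < 0] = 0        # ReLU prime: 0 for negative entries...
prime[prime > 0] = 1        # ...and 1 for positive entries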

Thanks

You can write inline code by wrapping it in single backquotes: `as this` => as this.

Actually, you’d be better off using clamp (better than zeros_like + where):

output = input.clamp(min=0)
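A quick sanity check that it matches the where version (a sketch):

import torch

input = torch.randn(3, 4)

out_clamp = input.clamp(min=0)
out_where = torch.where(input > 0, input, torch.zeros_like(input))
assert torch.equal(out_clamp, out_where)  # both give ReLU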

Thanks, clamp works for ReLU but not for ReLU prime:
output = input.clamp(max=1) doesn’t work, since most entries are between 0 and 1.
torch.where works fine, but all of these are still slow compared to output = numpy.maximum(input, 0) by a wide margin.
I tried searching the official documentation for ReLU, but the actual implementation seems to be in Lua, which I couldn’t understand. Also, does PyTorch have an implementation for ReLU prime, or is it computed internally by autograd?
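For context, what I mean by ReLU prime is roughly this gradient mask (a sketch of what I compute by hand, next to a check of what autograd gives for clamp):

import torch

input = torch.randn(3, 4)

# the mask I compute by hand: 1 where input > 0, else 0
relu_prime = (input > 0).to(input.dtype)

# the same mask comes out of autograd's backward pass
x = input.clone().requires_grad_(True)
y = x.clamp(min=0)
y.backward(torch.ones_like(y))
# x.grad should match relu_prime (up to the convention at exactly 0)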