Apply indicator function to pytorch tensor

Hello.
I would like to implement the indicator function of a set with PyTorch (PyTorch in particular, because I need to use it as an activation function in one of my models).

Take the case of the derivative of Parameterised ReLU (parameterised by a real a), which is 1 for positive inputs and a elsewhere. I would like to implement this derivative so that it supports batch sizes greater than 1.
Here is my example for ReLU:

import torch
activation_function = torch.relu
deriv_activation_function = lambda x : (x > 0).float()
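The same comparison trick extends to the parameterised case. A minimal sketch, assuming a slope parameter a = 0.1 (the name deriv_prelu and the value of a are illustrative, not from the original post): torch.where picks 1 where the input is positive and a elsewhere, and it broadcasts over any batch shape.

```python
import torch

a = 0.1  # hypothetical slope parameter for the negative part

# Derivative of PReLU: 1 where x > 0, a elsewhere.
# torch.where broadcasts, so this works for any batch shape.
deriv_prelu = lambda x: torch.where(x > 0, torch.ones_like(x), torch.full_like(x, a))

x = torch.tensor([[-2.0, 3.0], [0.5, -0.1]])  # batch of two samples
print(deriv_prelu(x))
```

An equivalent formulation without torch.where is (x > 0).float() + a * (x <= 0).float(), which stays closer to the indicator-function style of the ReLU example above.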

Why not just multiply by the boolean mask directly? For example, when you want to apply a penalty if the sum of the output exceeds 1:

import torch

v = torch.rand(3)
penalty = torch.sum(v) * (torch.sum(v) > 1)  # zero if the sum is <= 1, else the sum itself
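Since the question asks about batch sizes greater than 1, here is a sketch of the same mask-multiplication trick applied per sample (the batch shape 4x3 and the names s and penalty are illustrative assumptions):

```python
import torch

torch.manual_seed(0)
v = torch.rand(4, 3)           # batch of 4 vectors
s = v.sum(dim=1)               # one sum per sample, shape (4,)
penalty = s * (s > 1).float()  # zero where the sum is <= 1, else the sum
print(penalty)
```

The boolean comparison produces one indicator per sample, so the penalty is computed for the whole batch in a single vectorised expression.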