Conditional statement for a tensor in pytorch

Hi,
Is there any suggestion on how I can implement the conditions below for kc, which is a tensor, in PyTorch?
The following conditions give this error:
“Boolean value of Tensor with more than one value is ambiguous.”
net_in = torch.cat((x, y), 1)
kc = net2_kc(net_in)
kc = kc.view(len(kc), -1)

if kc > 0.5:
    f = (kc - 0.5)**2
elif 0 <= kc <= 0.5:
    f = 0
else:
    f = kc**2
and then I need to add f to my loss function.

The if condition expects a single Boolean value, so applying it to a tensor with more than one element won't work out of the box; you could index the tensor instead.
I.e., using the posted conditions, you could apply the changes to the corresponding parts of the tensor.
Something like this might work:

import torch

kc = torch.randn(10, 2)
f = torch.zeros_like(kc)

# first condition: kc > 0.5
idx1 = kc > 0.5
f[idx1] = (kc[idx1] - 0.5)**2

# second condition: 0 <= kc <= 0.5
idx2 = (0 <= kc) & (kc <= 0.5)
f[idx2] = 0  # wouldn't be necessary, as f is already initialized as zeros

# last condition: all entries not covered by the previous masks
used_idx = torch.zeros_like(idx1).bool()
used_idx |= idx1
used_idx |= idx2
f[~used_idx] = kc[~used_idx]**2
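
For reference (this is not part of the reply above, just a sketch of the same piecewise function): since f is ultimately added to the loss, an out-of-place formulation with torch.where can also be used; the function name piecewise_penalty is illustrative only.

import torch

def piecewise_penalty(kc):
    # piecewise definition from the question:
    #   (kc - 0.5)**2  for kc > 0.5
    #   0              for 0 <= kc <= 0.5
    #   kc**2          for kc < 0
    f = torch.where(kc > 0.5, (kc - 0.5)**2, torch.zeros_like(kc))
    f = torch.where(kc < 0, kc**2, f)
    return f

kc = torch.randn(10, 2, requires_grad=True)
loss = piecewise_penalty(kc).mean()
loss.backward()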

Thank you for the reply. I think it worked.

ptrblck,
I have another question for you. The reason I defined the function f is to restrict the outputs of kc to be positive and in the range between 0 and 0.5, but it did not work for me. I tried different activation functions like sigmoid and relu to force the output to be positive, but these activation functions make the gradient very small, and the output of kc ends up being a very small value close to zero. The only activation function that gives a non-zero value is swish, but its output is negative. Do you have any suggestions for how I can force the output to be positive and in the range (0, 0.5)?
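
For context, scaling a sigmoid by 0.5 is one common way to map an unbounded output into (0, 0.5); whether it avoids the small-gradient issue depends on the network and training setup. A minimal sketch, where the Linear layer is only a hypothetical stand-in for net2_kc:

import torch
import torch.nn as nn

# hypothetical stand-in for net2_kc: any module with an unbounded output
net2_kc = nn.Linear(4, 1)

net_in = torch.randn(8, 4)

# scale the sigmoid so the output lies strictly inside (0, 0.5)
kc = 0.5 * torch.sigmoid(net2_kc(net_in))
print(kc.min().item(), kc.max().item())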