I used out = (out > 0).float(), but the gradient cannot be computed. Why?
Reproduce code:
import torch
w = torch.FloatTensor([1.0, 2.0])
w.requires_grad = True
out = w * 2
out = (out > 0).float() # this comparison detaches out from the graph; backward() below then raises
out.sum().backward()
print("print grad")
print("w has grad ", w.requires_grad)
print("w grad", w.grad)
The resulting error is: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
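For context, the comparison `out > 0` is not differentiable, so its result carries no `grad_fn` and the autograd graph is cut at that point; calling `backward()` on anything downstream then fails. A minimal sketch below demonstrates this and one common workaround, the straight-through estimator (my suggestion, not part of the original post), which uses the hard 0/1 values in the forward pass but the identity's gradient in the backward pass:

```python
import torch

w = torch.tensor([1.0, 2.0], requires_grad=True)
out = w * 2

# The hard threshold returns a new tensor with no grad_fn:
hard = (out > 0).float()
assert hard.grad_fn is None  # detached from the autograd graph

# Straight-through estimator: forward value equals `hard`,
# but gradients flow through `out` as if it were the identity.
ste = hard + out - out.detach()
ste.sum().backward()

print(w.grad)  # gradient of out = w * 2, i.e. tensor([2., 2.])
```

Calling `hard.sum().backward()` directly would reproduce the RuntimeError from the post, since `hard` is not connected to any tensor that requires grad.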