How can I define a new tensor data type, or set the grad of a bool tensor?

I want to use a bool tensor with a float grad. But by default, the dtype of a tensor's grad is the same as the tensor's data:

x = torch.rand([8]) > 0.5
x.requires_grad_(True)

RuntimeError: only Tensors of floating point dtype can require gradients

The gradient dtype is equal to the tensor's dtype, since the gradient is used to update the data.
Could you explain a bit what a floating point gradient on a bool value would mean, and how a parameter update would work?
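To make the point about updates concrete, here is a minimal sketch (not from the thread) of a plain SGD step, which subtracts `lr * grad` from the data in place; this subtraction is only well defined when data and grad share a floating point dtype:

```python
import torch

# A gradient step updates the data in place: p -= lr * p.grad.
# If p were bool, `p - lr * p.grad` would have no meaningful bool result,
# which is why PyTorch requires a floating point dtype to track gradients.
p = torch.tensor([0.2, 0.8], requires_grad=True)
loss = (p * 2).sum()
loss.backward()          # p.grad is float32, same dtype as p

with torch.no_grad():
    p -= 0.1 * p.grad    # both float32, so the update is well defined

print(p)                 # p is now approximately [0.0, 0.6]
```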

Thanks for your reply! In fact, I want to use a tensor with values in {0, 1}, which behaves as if quantized to binary. For example:

device = 'cuda:0'
x = torch.as_tensor([0., 0., 1., 1.], device=device)
x.requires_grad_(True)
y = torch.as_tensor([0., 1., 0., 1.], device=device)
y.requires_grad_(True)
z = x * y  # on {0, 1} values, elementwise product acts as z = x AND y
print(z)            # tensor([0., 0., 0., 1.], ...)
z.sum().backward()
print(x.grad)       # d(sum z)/dx = y, i.e. tensor([0., 1., 0., 1.], ...)
print(y.grad)       # d(sum z)/dy = x, i.e. tensor([0., 0., 1., 1.], ...)

For the moment, I use a float tensor to implement this kind of tensor. But when the values are binary, storing them in float is unnecessary; storing binary values as bool reduces memory cost. So I want to store the values as bool, and store the gradient as float.
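One common workaround (my suggestion, not something the thread confirms) is to keep a float "master" tensor as the autograd leaf and binarize it in the forward pass with a custom `torch.autograd.Function` that passes the float gradient straight through (a straight-through estimator). The values flowing through the network are then exactly 0./1., and you can cast the detached result to `torch.bool` whenever you need compact storage:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sketch: threshold a float tensor to {0., 1.} in forward,
    pass the float gradient straight through in backward."""

    @staticmethod
    def forward(ctx, x):
        # Values become exactly 0. or 1., but the dtype stays float
        # so autograd can attach a float gradient.
        return (x > 0.5).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: float gradient flows unchanged.
        return grad_output

x = torch.tensor([0.2, 0.7, 0.9, 0.1], requires_grad=True)  # float master copy
z = BinarizeSTE.apply(x)
z.sum().backward()

print(z)        # tensor([0., 1., 1., 0.], ...) -- binary values, float dtype
print(x.grad)   # tensor([1., 1., 1., 1.]) -- float gradient on the master copy

# For compact storage (e.g. checkpoints or inference buffers),
# cast the detached binary values to bool:
z_bool = z.detach().to(torch.bool)
```

The float master copy is only needed during training; after training you can keep just the bool tensor, which is where the memory saving applies.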