A problem occurs when I try to call loss.backward() with my DiceLoss function

I copied a DiceLoss implementation from the internet. The function requires the input and its corresponding label to be one-hot encoded, so I first transform the inputs to one-hot format and then feed them into the loss function.
I set up a much simpler experiment to isolate the problem, and it fails in exactly the same way.
My code is below:
import torch

x = torch.tensor([[1, 2], [3, 4]]).float()
w = torch.tensor([1]).float()
w.requires_grad = True
x_ = x * w
x_o_h = get_one_hot(x_, 5)  # convert to one-hot encoding, size = (5, 2, 2)
res = sum(x_o_h)
res.backward()
print(w.grad)

The function that converts a label map to one-hot format:
def get_one_hot(label, N):  # the input is a 2D label tensor of long dtype, e.g. shape (3, 3)
    size = list(label.size())
    label = label.view(-1)  # flatten to a vector
    ones = torch.eye(N)
    ones = ones.index_select(0, label.long())  # select rows of the identity matrix to build the one-hot encoding
    size.append(N)  # append the number of classes, ready to reshape back to the original shape, e.g. (3, 3, 9)
    res = ones.view(size).permute((2, 0, 1))  # size = (9, 3, 3)
    return res

The error is below:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
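
I also checked the intermediate tensors, and the output of get_one_hot already has requires_grad=False, so I guess the integer cast inside it detaches the result from the graph:

x_ = x * w
print(x_.requires_grad)     # True
x_o_h = get_one_hot(x_, 5)
print(x_o_h.requires_grad)  # False: label.long() and index_select are not
                            # differentiable with respect to the label values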

If I’m not mistaken, the target tensor has to be one-hot encoded, not the prediction tensor; one-hot encoding the prediction is what breaks the computation graph, since the cast to an integer dtype is not differentiable.
To do so, you could use F.one_hot. Also, this post provides an implementation in case you get stuck.
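
As a minimal sketch (the dice_loss helper and the shapes here are assumptions for illustration, not the implementation you copied): one-hot encode only the integer target with F.one_hot and keep the prediction as soft probabilities, so the graph from the logits to the loss stays intact.

import torch
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    # probs: [N, C, H, W] softmax probabilities (stays in the autograd graph)
    # target: [N, H, W] integer class labels (no gradient needed here)
    num_classes = probs.size(1)
    # one-hot encode the target only: [N, H, W] -> [N, H, W, C] -> [N, C, H, W]
    target_one_hot = F.one_hot(target.long(), num_classes=num_classes)
    target_one_hot = target_one_hot.permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)  # reduce over batch and spatial dims, keep per-class scores
    intersection = (probs * target_one_hot).sum(dims)
    cardinality = probs.sum(dims) + target_one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()

logits = torch.randn(2, 5, 4, 4, requires_grad=True)
target = torch.randint(0, 5, (2, 4, 4))
loss = dice_loss(F.softmax(logits, dim=1), target)
loss.backward()  # gradients flow back to logits
print(logits.grad.shape)  # torch.Size([2, 5, 4, 4])

Since F.one_hot is applied only to the target, which needs no gradient, nothing on the path from your weights to the loss is detached.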

PS: You can post code snippets by wrapping them in three backticks ```, which makes debugging easier. :wink: