I am trying to combine a C++ program with my neural network for training. To do so, I try to set a loss manually, but when I print the tensors out I get different results:
```python
diff = part1 + part2
loss = diff.mean()
# tensor(0.2783, device='cuda:0', grad_fn=<MeanBackward0>)

a = torch.Tensor().to("cuda") - torch.Tensor().to("cuda")
# tensor([122.], device='cuda:0')
```
I know that the program runs through without an error, but I don't know how important this `grad_fn` is.
Can somebody tell me more about this?
`grad_fn=<MeanBackward0>`
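For reference, here is a minimal sketch of the difference I mean, on CPU for simplicity (not my actual code; I assume the behavior is the same on CUDA):

```python
import torch

# A tensor produced by an operation on a tensor with requires_grad=True
# carries a grad_fn, so autograd can backpropagate through it.
x = torch.ones(3, requires_grad=True)
loss = (x * 2).mean()
print(loss.grad_fn)   # <MeanBackward0 ...>
loss.backward()       # works: gradients flow back to x
print(x.grad)         # tensor([0.6667, 0.6667, 0.6667])

# A tensor built directly from values is a leaf without requires_grad:
# it has no grad_fn and is not connected to any computation graph.
a = torch.tensor([122.])
print(a.grad_fn)      # None
```

So my worry is that the second tensor, which has no `grad_fn`, cannot be used as a loss for training.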