Since different operations are used in the forward calculations, Autograd will create the corresponding grad_fns. How would you like to test them?
If you would like to see the gradients, you can access them via the .grad attribute after calling backward().
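A minimal sketch of both points (the tensor names here are just illustrative): each operation in the forward pass records its grad_fn on the output tensor, and after backward() the gradients are accumulated in the .grad attribute of the leaf tensors.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2          # autograd records grad_fn=<MulBackward0>
z = y.sum()        # autograd records grad_fn=<SumBackward0>
print(y.grad_fn)
print(z.grad_fn)

# After backward(), the accumulated gradient lives in .grad of the leaf tensor.
z.backward()
print(x.grad)      # tensor([2., 2., 2.])
```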
Whatever I choose, will the output be the same?
Some people use torch.ones_like(x) and some use 1.0. I want to know whether the choice affects the output, and which one is preferable in practice.
Sometimes you need to explicitly create a tensor with a certain shape (e.g. when the output you call backward() on is not a scalar), so there is nothing wrong with using torch.ones_like.
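Here is a small sketch (variable names are illustrative, not from your code) showing that for a scalar output both choices produce identical gradients, since backward() on a scalar implicitly uses a gradient of 1.0:

```python
import torch

x = torch.randn(3, requires_grad=True)

# Option 1: rely on the implicit gradient of a scalar output
# (equivalent to passing torch.tensor(1.0)).
out = (x * 2).sum()
out.backward()
grad_implicit = x.grad.clone()

# Option 2: pass an explicit ones tensor matching the output's shape.
x.grad = None                       # reset the accumulated gradient
out = (x * 2).sum()
out.backward(torch.ones_like(out))  # ones_like of a scalar is a 0-dim 1.0
grad_explicit = x.grad.clone()

# Both approaches yield the same gradient.
print(torch.equal(grad_implicit, grad_explicit))  # True
```

For a non-scalar output, backward() raises an error without an explicit gradient argument, which is where torch.ones_like(out) becomes necessary rather than just a matter of style.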
Are you seeing any issues with this method?