loss.backward() does not go through during training and throws an error when running on multiple GPUs using torch.nn.DataParallel:
grad can be implicitly created only for scalar outputs
But the same thing trains fine when I give only device_ids=[0] to torch.nn.DataParallel.
Is there something I am missing here?
Addendum:
While running on two GPUs, the loss function returns a vector of 2 loss values. If I run backward only on the first element of the vector, it goes through fine.
How can I make the backward function work with a vector containing two or more loss values?
When you do loss.backward(), it is a shortcut for loss.backward(torch.Tensor([1])). This is only valid if loss is a tensor containing a single element. DataParallel returns to you the partial loss that was computed on each GPU, so you usually want to do loss.backward(torch.Tensor([1, 1])) or loss.sum().backward(). Both will have exactly the same behaviour.
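Something like this minimal sketch shows the pattern; the model, the loss computed inside forward(), and the two-GPU setup are assumptions purely for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical module that computes its loss inside forward(),
# so DataParallel returns one partial loss per GPU.
class ModelWithLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)

    def forward(self, x, target):
        return nn.functional.mse_loss(self.net(x), target)

model = nn.DataParallel(ModelWithLoss(), device_ids=[0, 1]).cuda()
x = torch.randn(32, 10).cuda()
target = torch.randn(32, 1).cuda()

loss = model(x, target)   # a vector with one partial loss per GPU
loss.sum().backward()     # same as loss.backward(torch.ones_like(loss))
```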
When I try loss.mean().backward() or loss.sum().backward() I am getting this warning: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all …
The gradient is computed as a vector-Jacobian product, so the size of the vector you pass to backward has to match the corresponding dimension of the Jacobian (which is the size of the output).
Sure, the number of grads needs to equal the number of variables.
What I meant was that it seems weird that the backward function is defined as acting on a single element unless otherwise stated, even though it is implemented on a vector with > 1 elements.
e.g. if I have vec, a 2-element tensor of 2 variables, and call vec.backward(), it won't work, but if vec is a 1-element tensor it will. I can't see an obvious reason why backward should default to 1 variable (unless explicitly told otherwise), especially seeing as it is a method of the variable.
For example, if you want to compute the gradient of the sum of the elements in x, you can do either x.sum().backward() or x.backward(torch.ones_like(x)).
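A quick self-contained check of that equivalence (the tensor here is just made up for illustration):

```python
import torch

x = torch.randn(3, requires_grad=True)

# Option 1: reduce to a scalar, then call backward
x.sum().backward()
grad_via_sum = x.grad.clone()

# Option 2: call backward on the vector, supplying the vector
# for the vector-Jacobian product explicitly
x.grad = None
x.backward(torch.ones_like(x))

print(torch.allclose(grad_via_sum, x.grad))  # True: both give all ones
```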
I know that this is an old thread, but it's what showed up when I googled my error. The way I fixed it was different, and might help somebody else. The issue is that I admittedly copied code from somewhere online and it had:
nn.BCELoss(reduction='none') # No reduction to allow masking
this line in it. My issue came from the reduction='none' part. nn.BCELoss usually takes all the per-element losses from a batch, averages them, and returns that average as a scalar. reduction='none' disables that, so instead of a scalar I was getting a tensor of per-element losses. Hope this helps other people like me!
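For illustration, something along these lines shows the difference; the mask and tensors are hypothetical, just to show why you might keep per-element losses and reduce them yourself before calling backward:

```python
import torch
import torch.nn as nn

pred = torch.rand(4, requires_grad=True)     # probabilities in (0, 1)
target = torch.tensor([1., 0., 1., 0.])
mask = torch.tensor([1., 1., 0., 1.])        # hypothetical mask: ignore the 3rd element

# Default reduction='mean' returns a scalar, so .backward() works directly
nn.BCELoss()(pred, target).backward()

# reduction='none' returns one loss per element; reduce it yourself first
pred.grad = None
per_element = nn.BCELoss(reduction='none')(pred, target)   # shape [4]
masked_mean = (per_element * mask).sum() / mask.sum()      # back to a scalar
masked_mean.backward()
```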