I have two loss variables, A and B.
When I do A.backward() everything works fine, but when I do B.backward() I get the following error: TypeError: backward() takes 2 positional arguments but 3 were given
I suspect the difference lies in the history of the variables, but I cannot find it.
Any ideas or directions would be appreciated.
I have an architecture that consists largely of regular Modules and Functions, but also uses two custom Functions, each one associated with a different loss (I add their code below).
The loss functions themselves are also non-trivial, but they are just chains of standard torch operations and are probably fine, so for brevity I will only include the code for the custom Functions.
The custom Function for loss A is simply adding some noise to the input:
@staticmethod
def forward(ctx, input, stdev):
    # sample zero-mean noise with the given stdev
    normal_sample = torch.normal(torch.zeros(input.size()), torch.zeros(input.size()) + stdev).cuda()
    # re-center the means to the input
    output = input + normal_sample
    ctx.stdev = stdev
    ctx.mark_dirty(input)
    ctx.save_for_backward(input, output)
    return output

@staticmethod
def backward(ctx, grad_output):
    input, output = ctx.saved_variables
    stdev = ctx.stdev
    tensor_output = output.data
    tensor_input = input.data
    tensor_output.normal_(0, stdev[0][0])
    # re-center the means to the input (in-place, so the result is kept)
    tensor_output.add_(tensor_input)
    del ctx.stdev
    return Variable(tensor_output), None
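For context, this is roughly how a Function written in this style gets called; the class name, tensor sizes, and the simplified pass-through backward below are my assumptions for illustration, not the actual training code:

import torch
from torch.autograd import Variable

class GaussianNoise(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, stdev):
        # add zero-mean noise with the given stdev (simplified stand-in for the code above)
        noise = torch.normal(torch.zeros(input.size()), torch.zeros(input.size()) + stdev)
        return input + noise

    @staticmethod
    def backward(ctx, grad_output):
        # simplified: pass the gradient straight through; no gradient w.r.t. stdev
        return grad_output, None

x = Variable(torch.rand(4, 8), requires_grad=True)
out = GaussianNoise.apply(x, 0.1)   # custom Functions are invoked through .apply
loss_A = out.sum()
loss_A.backward()                   # calls GaussianNoise.backward with a single grad_output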
The custom Function for loss B uses its input to weight multinomial sampling and outputs the resulting indices:
The error in the second Function is that, since its forward returns two outputs, its backward will receive two gradient arguments, one per forward output. Your backward only expects one, hence the error.
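To make that concrete, here is a minimal sketch of a two-output Function in the spirit of loss B. The class and variable names are invented and the real code surely differs, and it is written against the current custom-Function API (ctx.saved_tensors rather than the older ctx.saved_variables used above). The point is only the backward signature: one incoming gradient per forward output, with the gradient for the non-differentiable indices simply ignored.

import torch

class WeightedSample(torch.autograd.Function):
    @staticmethod
    def forward(ctx, weights):
        idx = torch.multinomial(weights, 1)   # sampled index per row
        vals = weights.gather(1, idx)         # the weight of each sampled index
        ctx.save_for_backward(weights, idx)
        return idx, vals                      # two outputs

    @staticmethod
    def backward(ctx, grad_idx, grad_vals):
        # two forward outputs, therefore two incoming gradients; the indices
        # are discrete, so grad_idx (None) is ignored and only grad_vals is used
        weights, idx = ctx.saved_tensors
        grad_weights = torch.zeros_like(weights)
        grad_weights.scatter_(1, idx, grad_vals)   # route gradient to the sampled entries
        return grad_weights                        # one gradient per forward input

weights = torch.rand(4, 8, requires_grad=True)
idx, vals = WeightedSample.apply(weights)
vals.sum().backward()   # works, because backward accepts both incoming gradients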
This might be a dumb question, but I have a similar scenario: I want to return multiple outputs from my custom forward function, like mordith was doing. One of them needs to be differentiated but the others do not. For example,