Using a neural network in a loss function

Hey!

I am quite new to PyTorch and I cannot figure out how to solve my problem.

I am implementing a custom loss function for a neural network. The network receives an input and outputs a tensor. However, the loss I am trying to use to train this network incorporates the output of a second network.

def action_loss(actions):
    # s_0, n, t, gen and device are defined elsewhere
    l = torch.zeros(1, requires_grad=True)  # Variable is deprecated; tensors carry autograd directly
    loss = l.clone()  # clone so the accumulator can be modified in-place
    s = s_0
    for i in range(t):
        gen_inp = torch.cat((n, s), dim=0)
        action = actions[i].view(-1)
        gen_inp = torch.cat((gen_inp, action), dim=0)
        with torch.no_grad():
            g_out = gen(gen_inp)
        loss += g_out[2]
        s = torch.Tensor([g_out[0], g_out[1]]).to(device)
    loss = -loss
    return loss

So in the gen(gen_inp) line, a second neural network is called which returns the values I need for the loss. I do not want to change the weights of this gen network, only those of the network that produces the actions vector. Currently this code does not yield any gradients, and the network does not train.

How can I fix this problem?

Hi,

when you run code inside a torch.no_grad block, no gradients will flow back through that block of code.
In this case, since you just don't want to compute gradients for that net, you should remove the torch.no_grad and set its parameters not to require gradients with something like:

for p in gen.parameters():
  p.requires_grad_(False)
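
Putting it together, here is a minimal sketch of what the fixed loss function could look like, reusing the s_0, n, t, gen and device from your post. One extra detail to watch out for: rebuilding s with torch.Tensor([...]) also detaches the state from the graph, so this sketch uses torch.stack to keep earlier actions differentiable through the evolving state.

import torch

# freeze the generator once, outside the loss function
for p in gen.parameters():
    p.requires_grad_(False)

def action_loss(actions):
    loss = torch.zeros(1, device=device)  # accumulator; gradients enter via g_out
    s = s_0
    for i in range(t):
        gen_inp = torch.cat((n, s), dim=0)
        action = actions[i].view(-1)
        gen_inp = torch.cat((gen_inp, action), dim=0)
        # no torch.no_grad() here: gradients must flow through gen's
        # activations back to the actions, even though gen's parameters
        # are frozen and will never be updated
        g_out = gen(gen_inp)
        loss = loss + g_out[2]
        # torch.stack keeps s attached to the graph, unlike torch.Tensor([...]),
        # so earlier actions also receive gradient through the state
        s = torch.stack((g_out[0], g_out[1])).to(device)
    return -loss

Since gen's parameters no longer require gradients, the optimizer for the action network can be used as usual and gen stays fixed.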

Hi albanD,

Thanks a lot for your help; it now works!