I am quite new to PyTorch and cannot figure out how to solve my problem.
I am implementing a custom loss function for a neural network. The network receives an input and outputs a tensor. However, the loss I use to train this network incorporates the output of a second network.
    l = torch.zeros(1, requires_grad=True)  # Variable is deprecated; tensors track gradients directly
    loss = l.clone()
    s = s_0
    for i in range(t):
        gen_inp = torch.cat((n, s), dim=0)
        action = actions[i].view(-1)
        gen_inp = torch.cat((gen_inp, action), dim=0)
        g_out = gen(gen_inp)
        loss += g_out
        s = torch.Tensor([g_out, g_out]).to(device)
    loss = -loss
So in the gen(gen_inp) line, a second neural network is called, which returns the values I need for the loss. I do not want to change the weights of this gen network, only those of the network that produces the actions vector. Currently this code does not yield any gradients and the network does not train.
How can I fix this problem?
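For what it's worth, here is a minimal runnable sketch of what I am trying to achieve: gradients should flow through the frozen gen network into the trainable network that produces the actions. All shapes, names (gen, policy), and the dummy inputs here are made up for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
device = torch.device("cpu")

# Frozen "gen" network: its weights must not be updated,
# but gradients should still flow *through* it.
gen = nn.Linear(4, 1).to(device)
for p in gen.parameters():
    p.requires_grad_(False)

# Trainable network that produces the actions.
policy = nn.Linear(2, 1).to(device)
opt = torch.optim.SGD(policy.parameters(), lr=0.1)

n = torch.randn(2, device=device)   # fixed part of gen's input
s = torch.zeros(1, device=device)   # state, rebuilt from gen's output each step
loss = torch.zeros(1, device=device)

for i in range(3):
    action = policy(torch.randn(2, device=device)).view(-1)
    gen_inp = torch.cat((n, s, action), dim=0)
    g_out = gen(gen_inp)
    loss = loss + g_out
    # derive the next state from g_out with tensor ops so it stays
    # inside the autograd graph, rather than building a fresh tensor
    s = g_out.view(-1)

loss = -loss
opt.zero_grad()
loss.backward()
```

After backward(), policy.weight.grad is populated while gen.weight.grad stays None, which is the behavior I am after.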