Hi everyone!
I’m new to PyTorch and I have a problem with nested PyTorch nn.Modules. The problem is as follows.
I have a main nn model, let’s say ModelA, which in turn calls two objects of another nn.Module class (each containing a single LSTM cell) inside a for loop.
First, autograd complains that the graph needs to be retained because backward() is going through it a second time. After setting `retain_graph=True`, it gives the following error:

**RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2048]] is at version 2; expected version 1 instead.**

By the way, the first forward pass completes successfully. It would be really helpful if someone could show me a consistent way to implement a set of neural-agent sub-modules inside a for loop of the main nn.Module class.
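For context, my training loop looks roughly like this (simplified; `loader`, `criterion`, and the optimiser settings are placeholders for the real ones):

```python
model = ModelA(hidden_size=512).cuda()
optimizer = torch.optim.Adam(model.parameters())

for batch in loader:
    optimizer.zero_grad()
    x1, y1 = model(batch)
    loss = criterion(x1, y1)  # placeholder for my actual loss
    loss.backward()           # works for the first batch, fails on the second
    optimizer.step()
```

And here is the skeleton of the model, with placeholders where I stripped out details: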
```python
import torch
import torch.nn as nn


class ModelA(nn.Module):
    def __init__(self, hidden_size):
        super(ModelA, self).__init__()
        # initialising parameters (placeholders for the real ones)
        self.hidden_size = hidden_size
        self.agent1 = Agent(hidden_size)
        self.agent2 = Agent(hidden_size)

    def forward(self, inputs):
        # initial messages the agents exchange (required dimensions go here)
        out1 = torch.zeros(inputs.size(0), self.hidden_size, device=inputs.device)
        out2 = torch.zeros(inputs.size(0), self.hidden_size, device=inputs.device)
        for i in range(25):
            out1 = self.agent1(out2, for_loop_ends=False)
            out2 = self.agent2(out1, for_loop_ends=False)
        y1 = self.agent1(out2, for_loop_ends=True)
        x1 = self.agent2(out1, for_loop_ends=True)
        return x1, y1


class Agent(nn.Module):
    def __init__(self, hidden_size):
        super(Agent, self).__init__()
        # initialise all the layers and recurrent cells here, e.g.
        self.cell = nn.LSTMCell(hidden_size, hidden_size)

    def forward(self, inputs, for_loop_ends):
        if for_loop_ends:
            return ...  # result1: the agent's final output
        else:
            return ...  # result2: the message passed back into the loop
```
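From what I’ve read in other threads, `retain_graph=True` usually just masks the real problem: if some tensor from the previous batch (for example, an LSTM hidden state stored on the module) is still attached to the old graph, the second `backward()` reaches back into it, and by then `optimizer.step()` has already updated the weights in-place, which seems to be where the "at version 2; expected version 1" message comes from. I’m not certain this applies to my case, but if an agent carried its state like in the sketch below (my guess, not code from my model), would detaching the state between batches be the right fix?

```python
class StatefulAgent(nn.Module):
    def __init__(self, hidden_size):
        super(StatefulAgent, self).__init__()
        self.cell = nn.LSTMCell(hidden_size, hidden_size)
        self.state = None  # (h, c) carried across forward calls

    def forward(self, inputs):
        # hx=None on the first call makes LSTMCell start from zeros
        self.state = self.cell(inputs, self.state)
        return self.state[0]

    def detach_state(self):
        # cut the link to the previous batch's graph before the next backward()
        if self.state is not None:
            self.state = tuple(s.detach() for s in self.state)
```

Calling `detach_state()` on both agents after `optimizer.step()` would then keep each batch’s graph self-contained, so `retain_graph=True` shouldn’t be needed at all. Is that the recommended pattern?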