Backpropagation with iterative variable update

Hi everyone, I have a simple training loop

for epoch in range(num_epochs):
    for i, data in enumerate(training_set):
        output_layer = PIAE_model(data)
        loss = criterion(output_layer, data)
        optimizer.zero_grad()   # reset grads, backprop, update
        loss.backward()
        optimizer.step()

The model methods look like this:

    def inverter(self, x):
        x = self.up(self.act(self.bn1(self.conv1(x))))   # convolution, batch normalization, activation, upsampling
        x = self.up(self.act(self.bn2(self.conv2(x))))
        x = self.up(self.bn3(torch.abs(self.conv3(x))))
        return x

    def findif(self, vel_model):
        for n in range(self.nt):
            self.p[1] = self.p[1] + self.q[n]   # <- the problematic in-place update
            self.p = vel_model * self.p.detach()
            self.traces[n] = self.p[1]
        return self.traces

    def forward(self, x):
        vel_model = self.inverter(x)
        seis_model = self.findif(vel_model)
        return seis_model

where q and p are torch tensors created with requires_grad=True.
Training like this gives NaN values in the loss, and setting detect anomaly to True raises:

Function 'CudnnBatchNormBackward0' returned nan values in its 0th output
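For anyone who wants to reproduce the same trace: the anomaly report above comes from PyTorch's anomaly-detection mode, which can be switched on like this (a minimal sketch):

```python
import torch

# With anomaly detection enabled, backward() reports which forward op
# produced the NaN (at the cost of slower execution), e.g. the
# CudnnBatchNormBackward0 message above.
torch.autograd.set_detect_anomaly(True)
```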

Updating the variable self.p in place interferes with backpropagation. I am sure the error comes from the line self.p[1] = self.p[1] + self.q[n]; without it, training works just fine.
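The failure mode can be reproduced in a few lines, independent of the model above: autograd rejects an in-place assignment into a leaf tensor that requires gradients, because it would invalidate the recorded graph.

```python
import torch

p = torch.zeros(3, requires_grad=True)

# In-place assignment into a leaf tensor that requires grad is rejected
# by autograd; on non-leaf tensors it may instead corrupt the graph and
# surface later as errors or NaNs in backward().
try:
    p[1] = p[1] + 1.0
except RuntimeError as e:
    print("autograd rejects the in-place update:", e)
```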

This is a simplified but relevant version of my code. I can of course provide the full code, but it is rather convoluted, with mathematical operations in loops in the findif method.

Please help me rewrite this line so the gradients are calculated properly.

Solved it

test = [self.p[i] for i in range(self.nx)]   # split p into a list of rows
test[1] = test[1] + self.q[n]                # out-of-place update of row 1
self.p = torch.stack(test)                   # rebuild the tensor; autograd tracks stack

Would love to see a prettier solution tho.
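One prettier alternative (a sketch, not tested against the full model): rebuild the tensor with torch.cat instead of assigning into it, so every step is out-of-place and autograd can track gradients through both p and q.

```python
import torch

q = torch.tensor([0.5, 1.5], requires_grad=True)
p = torch.zeros(3, requires_grad=True)

out = p
for n in range(q.shape[0]):
    # out-of-place: build a new tensor with row 1 replaced,
    # instead of writing into the existing one
    row1 = out[1] + q[n]
    out = torch.cat([out[:1], row1.unsqueeze(0), out[2:]])

out.sum().backward()
print(q.grad)   # each q[n] contributes once to row 1 -> tensor([1., 1.])
```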