Can backward work in this way?

   Hey, I wrote a model to generate sequential images. My model looks like this:

  netV's input is noise and a hidden state; its output is an image, the next hidden state, and the next noise.

I use a loop to call netV to generate many images, which looks like this:

 images = []
 for i in range(8):
     image, noise_next, hidden_next = netV(noise, hidden)
     noise = noise_next
     hidden = hidden_next
     images.append(image)
  
  The forward pass is ok. However, I'm not sure the backward pass works. Can the grad flow backward normally? I don't know how the grad flows during the backward pass.

I don’t know how you do the backward. Could you give an example?
AFAIK, calling hidden.backward() would be ok, just like an RNN.
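To illustrate why the loop itself is not a problem: autograd records every call in the unrolled loop, so a loss on the collected outputs backpropagates through all iterations, just like an unrolled RNN. Here is a minimal runnable sketch with a toy stand-in for netV (the ToyNetV class and its sizes are hypothetical, not the poster's actual model):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for netV: takes noise and hidden,
# returns an image, the next noise, and the next hidden state.
class ToyNetV(nn.Module):
    def __init__(self, dim=4):
        super().__init__()
        self.fc = nn.Linear(2 * dim, 3 * dim)

    def forward(self, noise, hidden):
        out = self.fc(torch.cat([noise, hidden], dim=1))
        image, noise_next, hidden_next = out.chunk(3, dim=1)
        return image, noise_next, hidden_next

netV = ToyNetV()
noise = torch.randn(1, 4)
hidden = torch.zeros(1, 4)

images = []
for _ in range(8):
    image, noise, hidden = netV(noise, hidden)
    images.append(image)

# A scalar loss over all time steps backpropagates through every iteration.
loss = torch.stack(images).sum()
loss.backward()
print(netV.fc.weight.grad is not None)  # the weights received a gradient
```

The key point is that reassigning `noise` and `hidden` inside the loop does not break the graph; each new tensor still remembers the operation that produced it.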

The backward pass is:

   netD's input is many images, and its output is 0 or 1.

  images = []
  for i in range(8):
      image, noise_next, hidden_next = netV(noise, hidden)
      noise = noise_next
      hidden = hidden_next
      images.append(image)

 real_label is a batch * 1 tensor filled with 1. The backward pass looks like this:

 output = netD(images)
 criterion = nn.BCELoss()
 error = criterion(output, real_label)
 error.backward()

In my test, when the number of images is less than 5, the grad can flow backward. However, when the number is more than 5, the grad disappears. BTW, I’m not sure this is right.
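One way to tell a truly vanishing gradient (tiny but nonzero norms) from a detached graph (grad stays None) is to inspect the parameter gradients after `error.backward()`. A small sketch, using a hypothetical one-layer stand-in for netV just to make it runnable:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for netV, just to make the check runnable.
netV = nn.Linear(4, 4)
out = netV(torch.randn(2, 4)).sum()
out.backward()

# grad is None -> the parameter never entered the graph (broken backward);
# a tiny norm  -> the graph is fine but the gradient is vanishing.
for name, p in netV.named_parameters():
    if p.grad is None:
        print(name, "has no grad (detached from the graph)")
    else:
        print(name, "grad norm:", p.grad.norm().item())
```

Running the same check on the real netV after `error.backward()` would show whether the gradient actually "disappears" past 5 steps or merely shrinks.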

If your code goes like this:

images = torch.cat(images)
output = netD(images)

It will work.
You can also have a look at the DCGAN example.

That’s different. I’m trying to generate sequential images; netD’s input is more like a video.
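For a video-like input, one option (an assumption, not the poster's actual netD) is to stack the generated frames along a new time dimension with torch.stack instead of torch.cat, giving a 5D tensor that a 3D-convolutional discriminator can consume. Gradients still flow back to every frame:

```python
import torch
import torch.nn as nn

# Sketch, assuming each generated image is (batch, C, H, W)
# and 8 frames are produced by the loop. All sizes are hypothetical.
batch, C, H, W, T = 2, 3, 16, 16, 8
images = [torch.randn(batch, C, H, W, requires_grad=True) for _ in range(T)]

# Stack frames along a new time axis: (batch, C, T, H, W).
video = torch.stack(images, dim=2)

# A 3D convolution is one common choice for a video discriminator (assumption).
netD = nn.Conv3d(C, 1, kernel_size=3, padding=1)
out = netD(video)
print(video.shape, out.shape)
```

torch.cat would merge the frames into the batch or channel dimension and lose the temporal structure, whereas torch.stack keeps time as its own axis.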