Unpack Variable during forward()

The input to the network module is of type torch.autograd.variable.Variable:

>>> print(type(input))
<torch.autograd.variable.Variable>

During forward() I unpacked the input with .data so that I could use torch.index_select on it:

pix = torch.index_select(input.data, 0, idx)

The output of forward() is then a torch.cuda.FloatTensor rather than a Variable:

>>> output = model.forward(input)
>>> print(type(output))
<torch.cuda.FloatTensor>

Will unpacking the input affect the flow of gradients during backward()?

Yes, it will. You should never unpack Variables by hand when computing things that need gradients.
Also, you should not call the forward method directly on an nn.Module; call the module itself, i.e. output = model(input).
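For reference, here is a minimal sketch of both points. It assumes pre-0.4 PyTorch (inputs wrapped in Variables) and a hypothetical index idx on the same GPU as input:

import torch
from torch.autograd import Variable

idx = Variable(torch.cuda.LongTensor([0, 2]))  # hypothetical index Variable

# index_select applied to the Variable itself is recorded by autograd,
# so gradients can flow back through the selection:
pix = torch.index_select(input, 0, idx)

# call the module, not model.forward(input); __call__ also runs any
# registered forward/backward hooks around forward():
output = model(input)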


Can I unpack the Variable after backward()?

for iter in range(100):
    output = model(input)
    loss = loss_cal(input, output)
    loss.backward()
    save_image(output.data.cpu().numpy())

If you don’t need gradients to be computed for that operation, you can.
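As a sketch, the unpacking is fine as long as it happens after backward() and is only used for side work such as saving the image. loss_cal and save_image are the placeholders from the snippet above, and the optimizer is an assumed addition for a complete iteration:

optimizer.zero_grad()              # assuming an optimizer was set up for model
output = model(input)
loss = loss_cal(input, output)
loss.backward()
optimizer.step()
# .data is only touched after backward(), so it cannot break the graph;
# the saved copy is a plain tensor with no gradient history.
save_image(output.data.cpu().numpy())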

I recently came across the NVIDIA pix2pixHD PyTorch code, where they call forward directly:

fake_image = self.netG.forward(input_concat) link here

pred_fake = self.netD.forward(torch.cat((input_label, fake_image), dim=1)) link here

Why so?