The input to my network module is of type torch.autograd.variable.Variable:
>>> print(type(input))
<torch.autograd.variable.Variable>
During forward() I unpacked input with .data in order to use torch.index_select on it:

pix = torch.index_select(input.data, 0, idx)
The output of forward() is a torch.cuda.FloatTensor:
>>> output = model.forward(input)
>>> print(type(output))
<torch.cuda.FloatTensor>
Will unpacking input affect the flow of gradients during backward()?
albanD (Alban D), March 27, 2018, 1:38pm
Yes, you should never unpack Variables by hand when computing stuff. Also, you should not call the forward method directly on an nn.Module; you should do output = model(input).
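For reference, a minimal sketch of the gradient-preserving pattern (assuming PyTorch >= 0.4, where Variable and Tensor are merged; PixSelect is a hypothetical module and idx a hypothetical LongTensor of row indices):

import torch
import torch.nn as nn

class PixSelect(nn.Module):
    def forward(self, x, idx):
        # Index directly on the tensor (no .data): autograd records the
        # selection, so gradients flow back through it during backward().
        return torch.index_select(x, 0, idx)

model = PixSelect()
x = torch.randn(4, 3, requires_grad=True)
idx = torch.tensor([0, 2])

out = model(x, idx)   # call the module itself, not model.forward(...)
out.sum().backward()
print(x.grad)         # rows 0 and 2 carry gradient, the rest are zero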
Can I unpack the variable after backward()?
for iter in range(100):
    output = model(input)
    loss = loss_cal(input, output)
    loss.backward()
    save_image(output.data.cpu().numpy())
albanD (Alban D), April 27, 2018, 4:07pm
If you don’t need gradients to be computed for that, you can.
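For instance, a minimal sketch of that pattern (detach() being the modern spelling of the .data unpack; loss_cal and save_image are the hypothetical helpers from the snippet above):

output = model(input)
loss = loss_cal(input, output)
loss.backward()                      # gradients are computed here
img = output.detach().cpu().numpy()  # read-only copy; leaves the graph alone
save_image(img)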
I recently came across this NVIDIA pix2pixHD PyTorch code where they call forward directly:
fake_image = self.netG.forward(input_concat)
link here
pred_fake = self.netD.forward(torch.cat((input_label, fake_image), dim=1))
link here
Why so?
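As far as I can tell, the only observable difference is that nn.Module.__call__ also runs any registered hooks around forward, so model(x) and model.forward(x) diverge only when hooks are in play. A small sketch (with a hypothetical print hook):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
model.register_forward_hook(lambda m, inp, out: print("hook fired"))

x = torch.randn(1, 4)
y1 = model(x)          # goes through __call__: prints "hook fired"
y2 = model.forward(x)  # bypasses __call__, so the hook never runs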