First, I checked that every parameter of the network requires gradients:

for param in direct_intrinsic_net.parameters():
    print("param.requires_grad", param.requires_grad)  # True
I'm not sure whether I'm performing backpropagation correctly. This is what I do:
from torch.autograd import Variable

# Zero all gradients before passing the image data through the network.
# (Calling zero_grad() after the forward pass instead made no difference.)
optimizer.zero_grad()
# After the forward pass I have summed_loss, which I re-wrap like this:
summed_loss = Variable(summed_loss, requires_grad=True)
# The loss value stays constant at every iteration:
print(summed_loss)  # tensor([4.6291], device='cuda:0')
summed_loss.backward()
optimizer.step()
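
For comparison, here is my understanding of the plain training step, without re-wrapping the loss in Variable. This is only a minimal self-contained sketch: the linear model, SGD optimizer, MSE loss, and random data below are stand-ins I made up so it runs on its own, not my actual network or data.

import torch
import torch.nn as nn

# Stand-in model, optimizer, loss, and data (not my real setup)
net = nn.Linear(10, 1)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
inputs = torch.randn(8, 10)
targets = torch.randn(8, 1)

for _ in range(3):
    optimizer.zero_grad()                          # clear gradients from the previous step
    summed_loss = criterion(net(inputs), targets)  # forward pass straight into the loss
    summed_loss.backward()                         # backprop through the intact graph
    optimizer.step()                               # update the parameters
    print(summed_loss.item())                      # here the printed loss changes each iteration

In this version the loss tensor keeps its connection back to the network's parameters, which is the part I'm unsure my own code still preserves after the Variable(...) line above.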