Hi all. I want to use the current version of PyTorch (0.4.1) to compute the gradient of the loss with respect to an input image, using a trained model. However, the answers I have found are for older versions. I paste the code below:
The core of the function preprocess_image(img), where img_trans is a composition of transforms:
def preprocess_image(img):
    img_data = torch.zeros([1, 3, 256, 256], dtype=torch.float, requires_grad=True)
    with torch.no_grad():
        # NOTE: I need no_grad here, because the assignment is an in-place operation
        img_data[0, ...] = img_trans(img)
    return img_data
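For reference, an equivalent formulation that avoids the in-place write (and therefore the no_grad block) might look like this; it is only a sketch, assuming img_trans returns a 3x256x256 float tensor:

def preprocess_image(img):
    # Build the batch first, then mark it as a leaf that requires grad;
    # nothing is tracked yet, so no torch.no_grad() is needed.
    img_data = img_trans(img).unsqueeze(0).to(torch.float)
    img_data.requires_grad_()
    return img_data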
The core of the main function:
im_label_as_var = torch.from_numpy(np.asarray([target_class])).cuda()

# Define the loss function
ce_loss = nn.CrossEntropyLoss().cuda()

# Process the image; original_image is an RGB image
processed_image = preprocess_image(original_image).cuda()
print(processed_image.requires_grad)  # OUTPUT: True

self.model.zero_grad()

# Forward pass
out = self.model(processed_image)

# Calculate the CE loss
pred_loss = ce_loss(out, im_label_as_var)

# Backward pass
pred_loss.backward()

print(processed_image.grad)  # OUTPUT: None
The result is that processed_image.grad is None, i.e. the input image ends up with no gradient. Why does this happen?
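To make the problem easier to reproduce, here is a minimal self-contained sketch that shows the same symptom; it uses a dummy linear model in place of self.model, so the model and input sizes are stand-ins:

import torch
import torch.nn as nn

model = nn.Linear(48, 10).cuda()
target = torch.tensor([3]).cuda()

x = torch.zeros(1, 48, requires_grad=True)  # created on the CPU, like in preprocess_image
x_gpu = x.cuda()                            # copy the input to the GPU, as above

loss = nn.CrossEntropyLoss()(model(x_gpu), target)
loss.backward()
print(x_gpu.grad)  # None -- the same symptom as processed_image.grad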
Thanks in advance.