Hi, I wrote a script based on PyTorch that transforms one image into another. It works well on my PC, but since my GPU is too limited, I decided to run it on Google Colab. However, the same code does not run on Colab.
Here is my code:
import os

import torch
from PIL import Image
from torchvision import transforms

# Run on the GPU
device = torch.device('cuda')

# Build the generator (UNet is defined elsewhere in the script),
# restore the pretrained weights, and move the model to the GPU
G = UNet()
G.load_state_dict(premodel['HT'])
G.to(device)
G.eval()
print("loading success")
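# For context, `premodel` is a checkpoint loaded earlier in the script,
# roughly like this (a sketch; 'checkpoint.pth' is a placeholder path,
# not my real one):
#   premodel = torch.load('checkpoint.pth', map_location=device)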
# Collect the images to transform
names = os.listdir(dataset_dir)
for name in names:
    im_path = os.path.join(dataset_dir, name)
    # Open with PIL
    img = Image.open(im_path).convert('RGB')
    # Convert the image to a tensor in [-1, 1] and move it to the GPU
    img_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5],
                             std=[0.5, 0.5, 0.5])
    ])(img).to(device)
    # Add a batch dimension: (C, H, W) -> (1, C, H, W)
    img_tensor = img_tensor.unsqueeze(0)
    print(img_tensor.shape)
    # Run the generator
    img_out = G(img_tensor)
    # Drop the batch dimension again: (1, C, H, W) -> (C, H, W)
    img_out = img_out.squeeze(0)
    # Undo the input normalization: (x - (-1)) / 2 maps [-1, 1] back to [0, 1]
    img_out = transforms.Normalize(mean=[-1, -1, -1], std=[2, 2, 2])(img_out)
    # Convert the result to a PIL image and save it in the result directory
    img_png = transforms.ToPILImage()(img_out.detach().cpu()).convert('RGB')
    print("saving ", name)
    save_path = os.path.join(save_dir, name)
    img_png.save(save_path)
    img.close()
    img_png.close()
    del img_out
    del img_tensor
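To rule out a device mismatch on my side, this is the kind of check that can be dropped in right before the Normalize call (it only uses standard torch attributes, nothing specific to my setup):

# Sanity check: where do the weights and the activation actually live?
print(next(G.parameters()).device)  # device of the model weights
print(img_out.device)               # device of the generator output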
I think I have moved the model and all tensors to the GPU, and the script runs fine on my own machine with a GPU. However, on Colab it always shows this error:
Traceback (most recent call last):
  File "demo_test.py", line 85, in <module>
    img_out = transforms.Normalize(mean=[-1, -1, -1], std=[2, 2, 2])(img_out)
  File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 163, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 208, in normalize
    tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: expected backend CUDA and dtype Float but got backend CPU and dtype Float
So the RuntimeError comes from this line:
img_out = transforms.Normalize(mean=[-1, -1, -1], std=[2, 2, 2])(img_out)
I read the error as saying that img_out is not on CUDA, but as far as I can tell I did send it there. Can anyone tell me where the problem comes from?
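In case it helps narrow things down, I believe the failing call boils down to this pattern, which can be tried in isolation on Colab (a sketch on my part, assuming the torchvision preinstalled there):

import torch
from torchvision import transforms

t = torch.rand(3, 4, 4).to('cuda')  # a CUDA float tensor, like img_out
# the same call as the failing line; per the traceback it goes through F.normalize
out = transforms.Normalize(mean=[-1, -1, -1], std=[2, 2, 2])(t)

Thanks!!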