Converting tensors to images

Hi, I want to convert the tensor outputs I'm getting from my UNet into images. Is there any way to do this? Below is the code chunk where I want to do it.

    def test_step(self, batch, batch_nb):
        x, y = batch
        y_hat = self.forward(x)
        loss = torch.nn.MSELoss()
        op_loss = loss(y_hat, y)
        #saving tensors to images code goes here
        print(op_loss)
        return {'test_loss': op_loss}

I want to save the tensors as images to some local file path after calculating op_loss.

You can convert the tensors to NumPy arrays and save them with OpenCV:

    import cv2
    tensor = tensor.detach().cpu().numpy()  # detach and move the tensor to the CPU before converting
    cv2.imwrite("image.png", tensor)  # cv2.imwrite takes the filename first, then the array
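
One caveat (just a sketch, not from the answer above): cv2.imwrite expects an H x W or H x W x C array, ideally uint8, while a network output is usually a float tensor in CHW layout, so a conversion along these lines may be needed:

    import cv2

    # hypothetical 'output' tensor of shape (C, H, W) with float values
    img = output.detach().cpu()
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # min-max scale to [0, 1]
    img = (img * 255).byte().permute(1, 2, 0).numpy()         # uint8, HWC layout as OpenCV expects
    cv2.imwrite("image.png", img)  # note: OpenCV treats 3-channel images as BGR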

Thanks for the response! Can I still do this if my tensor is on the GPU, since I'm training my model on the GPU? If not, is there any other way to do it?

No, you have to make sure your data is on the CPU.
Even if you train your model on the GPU, all you have to do is move the output tensor to the CPU and then store the images. Here is a minimal example:

    model.cuda()
    output = model(input)
    output = output.detach().cpu().numpy()  # detach from the graph and move to the CPU
    cv2.imwrite('pic.png', output)          # filename first, then the array
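
Applied to the test_step from the first post, it could look roughly like this (a sketch only; the preds/ directory and filename pattern are made up, and torchvision's save_image writes a 4D batch tensor as a single image grid):

    import torch
    from torchvision.utils import save_image

    def test_step(self, batch, batch_nb):
        x, y = batch
        y_hat = self.forward(x)
        op_loss = torch.nn.MSELoss()(y_hat, y)
        # save the whole predicted batch as one grid image; normalize=True
        # rescales the values to [0, 1] before writing the file.
        # 'preds/' is a placeholder output directory.
        save_image(y_hat.detach().cpu(), f"preds/batch_{batch_nb}.png", normalize=True)
        return {'test_loss': op_loss}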

Hi,

While doing the conversion I'm getting a length exception. I posted the issue there:

Please give any suggestions about this.

Once you have your tensor on the CPU, another possibility is to apply a sigmoid to your output and estimate a threshold (the midpoint, for example) in order to save it as a binary image.

    import torch
    from torchvision.utils import save_image

    img1 = torch.sigmoid(output)  # output is the output tensor of your UNet; sigmoid maps the values into (0, 1)
    # Binarize the image around the midpoint of its value range
    threshold = (img1.min() + img1.max()) * 0.5
    ima = torch.where(img1 > threshold, 0.9, 0.1)  # 0.9/0.1 instead of 1/0 keeps the result light/dark grey
    save_image(ima, 'BIN_ima.png')
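
Since the sigmoid output already lives in (0, 1), a fixed threshold of 0.5 is another common choice (just a sketch, not from the post above):

    import torch
    from torchvision.utils import save_image

    probs = torch.sigmoid(output)   # assumed UNet output, as above
    mask = (probs > 0.5).float()    # hard 0/1 mask
    save_image(mask, 'BIN_mask.png')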

Or you could try to “greyscale” the image…

    import torch
    from torchvision.utils import save_image

    img1 = torch.sigmoid(output)
    min_val = img1.min()  # avoid shadowing the built-in min/max
    max_val = img1.max()
    img2 = (img1 - min_val) / (max_val - min_val)  # min-max scale to [0, 1]
    save_image(img2, 'GREY_img.png')
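
For what it's worth, save_image can do this min-max rescaling itself: its extra keyword arguments are forwarded to make_grid, which has a normalize option, so the manual scaling above can be skipped:

    from torchvision.utils import save_image

    save_image(img1, 'GREY_img.png', normalize=True)  # min-max scales the tensor to [0, 1] before saving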