Hi community,
I am training a custom network (with 3D conv layers) that takes stacks of 3 images as input. Loaded from .npy files, each stack has shape (depth=3, height=1280, width=1500, channels=3). My dataset class converts each stack to a float32 tensor and permutes the dimensions to (3, 3, 1280, 1500), i.e. (depth, channels, height, width). I wanted to display some example images from the validation set, but once I revert the tensor conversion the images look nothing like the originals. Here is my code (I've also included a simplified sketch of the dataset conversion after the display code below):
import numpy as np
import matplotlib.pyplot as plt

def normalize8(I):
    # Rescale the array to the full 0-255 range and cast to uint8 for display
    mn = I.min()
    mx = I.max()
    mx -= mn
    I = ((I - mn) / mx) * 255
    return I.astype(np.uint8)

samples = next(iter(val_dataset))
images, labels = samples

# (depth, channels, H, W) -> (depth, H, W, channels) for matplotlib
image_permuted = images.permute(0, 2, 3, 1)
image_detached = image_permuted.detach().cpu().numpy()
image_uint8 = normalize8(image_detached)

plt.imshow(image_uint8[0])  # first image in the stack
plt.show()
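In case it helps, here is a simplified sketch of the conversion step in my dataset class. The class and attribute names are placeholders, but the to-tensor and permute steps match what I described above:

import numpy as np
import torch
from torch.utils.data import Dataset

class StackDataset(Dataset):  # placeholder name for my actual dataset class
    def __init__(self, file_paths, labels):
        self.file_paths = file_paths
        self.labels = labels

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        # Loaded array has shape (depth=3, height=1280, width=1500, channels=3)
        stack = np.load(self.file_paths[idx])
        tensor = torch.from_numpy(stack).float()
        # (depth, H, W, channels) -> (depth, channels, H, W) = (3, 3, 1280, 1500)
        tensor = tensor.permute(0, 3, 1, 2)
        return tensor, self.labels[idx]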
I am really not sure what I am doing wrong. Any help would be much appreciated! Thanks