import torch
import numpy as np
import matplotlib.pyplot as plt
from torchvision.utils import make_grid

def save_attn_map(maps, imgs, path):
    # stack the images and the attention maps along the batch dimension
    img = torch.cat((imgs, maps), 0)
    # grid with batch-size images per row: originals in the first row, attention maps in the second
    grid = make_grid(img, nrow=maps.size(0), padding=10)
    npimg = grid.detach().cpu().numpy()  # to numpy array
    npimg = (npimg * 255).astype(np.uint8)
    fig, ax = plt.subplots(figsize=(8, 2))
    ax.axis("off")
    # transpose from C x H x W to the H x W x C layout that matplotlib expects
    ax.imshow(np.transpose(npimg, (1, 2, 0)))
    fig.savefig("{}.pdf".format(path), bbox_inches="tight")
    plt.close(fig)
I have the function above, where the tensors maps and imgs have the same size (Batch x Channel x H x W). But when I plot the array npimg, the image I get is totally messed up. What could it be? I tried making a grid with imgs alone just in case, but I get the same weird result:
Originally imgs is in [0 … 255] and maps is in [0 … 1]. When I load imgs into a tensor, they become [0 … 1]. I am loading CIFAR-10 images, so, for example, I have a batch of shape [24, 3, 32, 32] for imgs and another of the same shape for maps. I also tried i = img.permute(0, 2, 3, 1) and plotting, say, i[0], but it doesn't seem right either.
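For reference, here is a minimal sketch of that permute-and-plot attempt. The random tensor is just a stand-in for my CIFAR-10 batch (same shape and [0 … 1] range), since the actual data loader isn't shown:

```python
import torch
import numpy as np
import matplotlib
matplotlib.use("Agg")  # save to file, no display needed
import matplotlib.pyplot as plt

# stand-in for a CIFAR-10 batch in [0, 1]; shape B x C x H x W
imgs = torch.rand(24, 3, 32, 32)

# move channels last: B x C x H x W -> B x H x W x C
i = imgs.permute(0, 2, 3, 1)
print(i.shape)  # torch.Size([24, 32, 32, 3])

# matplotlib's imshow expects H x W x C floats in [0, 1] (or uint8 in [0, 255])
fig, ax = plt.subplots()
ax.imshow(i[0].numpy())
fig.savefig("single_img.png")
plt.close(fig)
```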