Well, yes, I just cut out the move from NumPy to a tensor. It's as though whatever format ends up stored in that array is no longer a valid image. In the other example you gave, you ran stack() after the data was loaded into an array, but I'm not sure that's necessary with this new format.
Here is what it looks like if I comment out moving it to the shared_array:
tensor([[[ 0.5843,  0.5608,  0.5373,  ...,  0.5137,  0.5137,  0.5216],
         [ 0.5451,  0.5137,  0.5137,  ...,  0.5294,  0.5137,  0.5294],
         [ 0.5608,  0.5373,  0.5294,  ...,  0.5137,  0.5137,  0.5137],
         ...,
         [ 0.5294,  0.5451,  0.5529,  ...,  0.5686,  0.5765,  0.5922],
         [ 0.5216,  0.5529,  0.5608,  ...,  0.5765,  0.5843,  0.5922],
         [ 0.5451,  0.5843,  0.5922,  ...,  0.5765,  0.5922,  0.5922]],

        [[ 0.1451,  0.1216,  0.1059,  ...,  0.1294,  0.1294,  0.1373],
         [ 0.1059,  0.0824,  0.0745,  ...,  0.1451,  0.1451,  0.1451],
         [ 0.1216,  0.0980,  0.0824,  ...,  0.1373,  0.1373,  0.1373],
         ...,
         [-0.0039,  0.0118,  0.0196,  ...,  0.0824,  0.0902,  0.0902],
         [ 0.0118,  0.0431,  0.0510,  ...,  0.0902,  0.0980,  0.1059],
         [ 0.0353,  0.0745,  0.0745,  ...,  0.0902,  0.1059,  0.1059]],

        [[-0.0980, -0.1137, -0.1216,  ..., -0.1216, -0.1216, -0.1137],
         [-0.1294, -0.1608, -0.1529,  ..., -0.1059, -0.1059, -0.1059],
         [-0.1137, -0.1373, -0.1373,  ..., -0.1216, -0.1216, -0.1137],
         ...,
         [-0.2000, -0.1922, -0.1765,  ..., -0.1608, -0.1529, -0.1451],
         [-0.2000, -0.1686, -0.1373,  ..., -0.1529, -0.1451, -0.1373],
         [-0.1765, -0.1373, -0.1216,  ..., -0.1529, -0.1373, -0.1373]]])
torch.min(images[0]), torch.mean(images[0]), torch.max(images[0])
(tensor(-1.), tensor(0.2850), tensor(0.9294))
But then, if I uncomment x = self.shared_array[index], I get this instead:
tensor([[[416.1446, 419.1934, 423.8561,  ..., 390.7272, 393.7298, 395.5690],
         [426.6692, 426.7228, 426.8047,  ..., 393.6453, 395.1738, 395.2061],
         [424.8294, 421.7952, 419.5968,  ..., 393.0822, 395.8900, 393.1715],
         ...,
         [392.0519, 397.7280, 398.4988,  ..., 398.1694, 396.6755, 398.2361],
         [400.2377, 396.8661, 396.8359,  ..., 399.0093, 398.7518, 400.2377],
         [404.3428, 404.0421, 399.0929,  ..., 398.3518, 402.8570, 404.3428]],

        [[327.8235, 329.5239, 333.7325,  ..., 285.4269, 288.2922, 292.8281],
         [340.1188, 340.1487, 340.2227,  ..., 285.2634, 288.8439, 288.9236],
         [338.3108, 335.2766, 333.0782,  ..., 282.1608, 286.9265, 286.8257],
         ...,
         [309.1382, 312.1176, 311.9801,  ..., 313.9214, 315.5509, 317.0721],
         [317.3240, 311.2557, 310.3172,  ..., 314.7612, 317.1865, 317.3240],
         [321.4291, 318.4316, 312.5742,  ..., 314.1038, 321.2916, 321.4291]],

        [[255.7246, 261.4700, 267.0410,  ..., 201.9358, 208.5293, 211.7169],
         [271.5615, 271.6625, 271.7604,  ..., 202.7995, 209.5271, 209.5832],
         [269.8168, 266.7826, 264.5842,  ..., 198.9806, 206.3089, 207.5169],
         ...,
         [244.2492, 247.2286, 247.0911,  ..., 242.5794, 245.2944, 246.9341],
         [252.4350, 246.3667, 245.4282,  ..., 243.4193, 248.2524, 252.4350],
         [256.5401, 253.5426, 247.6852,  ..., 242.7618, 252.3576, 256.5401]]])
torch.min(images[0]), torch.mean(images[0]), torch.max(images[0])
(tensor(56.4313), tensor(381.2817), tensor(484.6729))
So it looks like the image I grabbed from disk comes out of the Dataset normalized to [-1, 1], yet when I pull it back out of the cache it looks raw (though the values run past 255, so it isn't even the original uint8 data). Both come out as tensors, which means they are going through torchvision.transforms, but the cached read apparently isn't normalized. I have just basic transforms in this test:
from torchvision import transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
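For what it's worth, with mean=0.5 and std=0.5 that Normalize step is just (x - 0.5) / 0.5, so anything that actually passes through this pipeline has to land in [-1, 1]. A quick sanity check with a synthetic image (random uint8 data, nothing from my dataset) bears that out:

import numpy as np
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([
    transforms.ToTensor(),                      # uint8 HWC in [0, 255] -> float32 CHW in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],  # (x - 0.5) / 0.5 -> [-1, 1]
                         std=[0.5, 0.5, 0.5]),
])

# random stand-in image, just to check the output range of the transform
img = Image.fromarray(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
out = transform(img)
print(out.min(), out.max())  # both always within [-1.0, 1.0]

So values in the 300-400 range can't have come out of this transform; they have to be appearing on the cache side.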
Trying to figure out what's going on, since all I add is that one cache read, and it shouldn't be changing the image to that degree.
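For context, here is a minimal sketch of the caching pattern I'm describing, not my exact code: it assumes a multiprocessing shared array viewed through NumPy, a fixed 3x224x224 image size, and simplified single-process cache bookkeeping. SharedCacheDataset and the other names in it are placeholders.

import ctypes
import multiprocessing as mp
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class SharedCacheDataset(Dataset):
    def __init__(self, paths, transform):
        self.paths = paths
        self.transform = transform
        c, h, w = 3, 224, 224  # placeholder fixed image size
        # The backing store must match the dtype of what gets written into it
        # (float32 here); a ctype/dtype mismatch scrambles every cached read.
        base = mp.Array(ctypes.c_float, len(paths) * c * h * w, lock=False)
        self.shared_array = np.frombuffer(base, dtype=np.float32).reshape(len(paths), c, h, w)
        self.cached = np.zeros(len(paths), dtype=bool)

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        if not self.cached[index]:
            img = Image.open(self.paths[index]).convert("RGB")
            # the transform (ToTensor + Normalize) runs before the write,
            # so the cache should hold the already-normalized [-1, 1] values
            self.shared_array[index] = self.transform(img).numpy()
            self.cached[index] = True
        x = self.shared_array[index]  # the cache read I'm uncommenting
        return torch.from_numpy(x.copy())

In this form the values read back on the x = self.shared_array[index] line should be identical to what the transform produced, so whatever is scaling mine back up must be happening between that write and that read.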