The torchvision transformations usually work with PIL.Images or Tensors.
You could convert your numpy arrays to tensors inside a Dataset and then apply the transformations.
If you don’t want to use a Dataset, you could normalize the images using pure numpy.
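A minimal sketch of the pure-numpy approach (the array shape and per-channel statistics here are made-up assumptions for illustration):

```python
import numpy as np

# hypothetical example data: 10 RGB images of shape 3x32x32
images = np.random.rand(10, 3, 32, 32).astype(np.float32)

# compute per-channel mean and std over the batch and spatial dims
mean = images.mean(axis=(0, 2, 3), keepdims=True)
std = images.std(axis=(0, 2, 3), keepdims=True)

# broadcasting applies the per-channel statistics to every image
normalized = (images - mean) / std
```

After this, each channel of `normalized` has approximately zero mean and unit variance.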
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, numpy_arr, mean, std):
        self.data = torch.from_numpy(numpy_arr)
        self.mean = mean
        self.std = std
        # normalize here or in __getitem__

    def __getitem__(self, index):
        data = self.data[index]
        # if not already normalized
        data = data - self.mean
        data = data / self.std
        return data

    def __len__(self):
        return len(self.data)
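A usage sketch of this dataset with a DataLoader (the array shape, batch size, and mean/std values are arbitrary assumptions for the example):

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, numpy_arr, mean, std):
        self.data = torch.from_numpy(numpy_arr)
        self.mean = mean
        self.std = std

    def __getitem__(self, index):
        data = self.data[index]
        # normalize on the fly
        data = (data - self.mean) / self.std
        return data

    def __len__(self):
        return len(self.data)

# made-up example: 8 grayscale images of shape 1x28x28
arr = np.random.rand(8, 1, 28, 28).astype(np.float32)
dataset = MyDataset(arr, mean=0.5, std=0.25)
loader = DataLoader(dataset, batch_size=4)

for batch in loader:
    print(batch.shape)  # torch.Size([4, 1, 28, 28])
```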
With this approach you would need to load all the data into memory beforehand. I think HDF5 (e.g. via h5py) could also be used to lazily load the data.
Let me know if this works for you!