I’ve never used the built-in transforms system before, but it seemed really useful, so I wanted to give it a try.
My dataloader grabs a 4D batch (batch, channels, x, y) np.ndarray from an HDF5 file; I believe this is a very common setup for many 2D networks.
However, the transforms system apparently only processes PIL Images, i.e. 3D (x, y, channel)-shaped data?
So to use this system I'd have to write a for loop that converts each np.ndarray to a PIL Image, transposes, transforms, and converts back to NumPy?
This sounds so ridiculous I can hardly believe it. I would have expected that this torch module would at least accept torch tensors.
Some transformations work on tensors directly (e.g. Normalization), while most work on PIL.Images, so you are correct in your assumption.
Alternatively, you could use e.g. opencv to directly transform your numpy arrays.
Thanks for your response, there really is a plethora of alternative preprocessing modules (opencv, skimage.transform, scipy.ndimage, imutils, even numpy), so I'll go for one of those.
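For instance, a per-sample random crop can be done in plain NumPy on the 4D batch directly. A sketch; `random_crop_batch` is a hypothetical helper, not a library function:

```python
import numpy as np

def random_crop_batch(batch, size, rng=None):
    """Randomly crop each sample in a (N, C, H, W) batch to (size, size).

    Pure-NumPy alternative to the PIL-based RandomCrop; draws an
    independent crop position per sample.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, c, h, w = batch.shape
    out = np.empty((n, c, size, size), dtype=batch.dtype)
    for i in range(n):
        top = rng.integers(0, h - size + 1)
        left = rng.integers(0, w - size + 1)
        out[i] = batch[i, :, top:top + size, left:left + size]
    return out

batch = np.random.rand(8, 1, 64, 64).astype(np.float32)
cropped = random_crop_batch(batch, 32)  # (8, 1, 32, 32)
```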
Interestingly, the RandomCrop transform is supposed to work on both PIL Images and tensors according to this.
But it only works on PIL Images, at least in my torch==1.6.0 and torchvision==0.7.0 environment.
Apparently, the feature was introduced just a couple of weeks ago (https://github.com/pytorch/vision/pull/2342)
and is not yet available on Anaconda.
This functionality is available in the nightly binaries, but it seems it wasn't picked for the latest stable release.
I've created a similar post, where I've tagged Francisco to take a look in case this was missed. Otherwise the docs should be updated.