I used to use Keras, and the image format I followed was [Height x Width x Channels x Samples]. I decided to switch to PyTorch, but I didn't switch out my data-loading scheme. So now I have NumPy arrays of shape HxWxCxS instead of SxCxHxW, which PyTorch requires. Does anyone have an idea how to convert this?
What you want to achieve here sounds more like a permutation of dimensions rather than a reshape.
If that is the case, this should do:
t = torch.from_numpy(np_arr)  # or torch.Tensor(np_arr), which copies
t = t.permute(3, 2, 0, 1)     # HxWxCxS -> SxCxHxW
But if you know that the underlying memory already stores the data in SxCxHxW order and only the array's shape metadata says HxWxCxS, you can instead use
t = t.reshape(S, C, H, W)
since reshape only reinterprets the same flat memory without moving any elements. If the data really is laid out HxWxCxS, reshape will scramble your images, and permute is what you want.
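To see the difference concretely, here is a small sketch (the sizes H=4, W=5, C=3, S=2 are just illustrative) showing that permute and reshape produce tensors of the same shape but with elements in different positions:

```python
import numpy as np
import torch

# Illustrative sizes: H=4, W=5, C=3, S=2
H, W, C, S = 4, 5, 3, 2
np_arr = np.arange(H * W * C * S, dtype=np.float32).reshape(H, W, C, S)

t = torch.from_numpy(np_arr)

# permute reorders the axes, so each image keeps its pixel values
permuted = t.permute(3, 2, 0, 1).contiguous()  # SxCxHxW
assert permuted.shape == (S, C, H, W)

# reshape only relabels the same flat memory: the shape matches,
# but the pixel values land in different positions
reshaped = t.reshape(S, C, H, W)
assert reshaped.shape == (S, C, H, W)
assert not torch.equal(permuted, reshaped)

# Sanity check: sample 0, channel 0, pixel (1, 2) round-trips under permute
assert permuted[0, 0, 1, 2] == t[1, 2, 0, 0]
```

So unless you are certain the bytes are already in SxCxHxW order, permute is the safe choice.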