Ideally, I would like to execute something like

```python
t = torch.zeros([4, 3, 64, 64])
t[:, :, ::8, ::8].view(4, -1)
```

but that produces the error

```
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Unfortunately, I can't use `.contiguous()` because of memory consumption: this code is called too often to make a copy of the tensor each time. Instead, I would like to create one big tensor and slice it each time.
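For completeness, here is a small repro I put together showing that `.reshape()` does succeed on the slice, but allocates new memory when the input is non-contiguous, which is exactly the copy I want to avoid:

```python
import torch

t = torch.zeros([4, 3, 64, 64])
s = t[:, :, ::8, ::8]       # strided slice, shape (4, 3, 8, 8)

# reshape works where view fails, but falls back to a copy here
r = s.reshape(4, -1)        # shape (4, 192)

# different storage -> a new allocation happened
print(r.data_ptr() == t.data_ptr())  # False
```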
Is there some way to use `.transpose()` or something similar in combination with the above `.view()` to achieve my goal? And is there a way to get a more detailed error message that shows which dimension exactly is the problem?
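For reference, this is the diagnostic I have been using so far (my own workaround, not a more detailed error message): printing the sizes and strides of the slice, since those are what `.view()` checks for compatibility:

```python
import torch

t = torch.zeros([4, 3, 64, 64])
s = t[:, :, ::8, ::8]

print(s.is_contiguous())      # False -> .view() will refuse
print(s.size())               # torch.Size([4, 3, 8, 8])
print(s.stride())             # the ::8 steps multiply the last two strides by 8
```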
Thanks a lot in advance!