As_strided on a subview has surprising offset

I was surprised that tensor.as_strided() doesn’t correct for the offset when the tensor is not at the base of the underlying storage:

import torch

matrix = torch.arange(20).view(4, 5)
print(matrix)
row = matrix[2]
print(row)
patchOnRow = row.as_strided((2,), (1,), 0)
print(patchOnRow)
patchOnMatrix = matrix.as_strided((2,), (1,), 0)
print(patchOnMatrix)

This prints:

tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19]])
tensor([10, 11, 12, 13, 14])
tensor([0, 1])
tensor([0, 1])

The documentation says “the offset in the underlying storage”, which is indeed what it does, but I can’t think of a case where you would want the offset measured from the base of the storage rather than from the tensor you called it on.

Perhaps the docs could include a note saying that you should add tensor.storage_offset() to your offset if there is any chance you are operating on a tensor that isn’t at the start of its storage.
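For the example above, that workaround looks like this (row sits at offset 10 in the shared storage):

patchOnRow = row.as_strided((2,), (1,), 0 + row.storage_offset())
print(patchOnRow)
# tensor([10, 11])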

Yes, I agree that it’s confusing, since you are now working on a view of the original tensor.
It’s even more confusing because the docs in the current 1.12 release are wrong and show that a default offset of 0 is used.
This PR fixed it, and the docs from master should show that the offset of the input will be used.

If you remove the explicit storage_offset it will do the right thing:

row.as_strided((2,), (1,), row.storage_offset())
# tensor([10, 11])

row.as_strided((2,), (1,)) # default offset is None
# tensor([10, 11])

It is still confusing when you specify an offset: two tensors that look identical by elements, shape, and stride would give different results for as_strided if one is an offset view (see the example below). I would have expected it to automatically add the storage_offset(), but that is probably not OK to change for compatibility reasons, so spelling out the hazard in the docs would be nice.
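A minimal demonstration of that hazard, reusing row from above (clone() gives a tensor with the same elements, shape, and stride, but a fresh storage):

row2 = row.clone()  # same elements, shape, and stride, but storage_offset() == 0
print(row.as_strided((2,), (1,), 0))   # tensor([0, 1])  -- reads from the base of the original storage
print(row2.as_strided((2,), (1,), 0))  # tensor([10, 11])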

I think this is exactly what happens now if you don’t explicitly specify the storage_offset, or am I missing the point?

row.as_strided((2,), (1,))
# tensor([10, 11])

Without specifying storage_offset, row.storage_offset() will be used.
Let me know if I misunderstand the issue.

Yeah, I think maybe an example with a view (maybe just stealing your example) might be helpful.

The case where I ran into this was when I was explicitly specifying an offset – I’m pulling patches out of random places inside images.

import torch

dy, dx = 5, 9  # example patch position within the image
batchOfImages = torch.randn(128, 3, 64, 64)  # note: torch.tensor(128, 3, 64, 64) would raise an error
image2 = batchOfImages[2]
stride = image2.stride()
patch = image2.as_strided((3, 16, 16), stride, dy * stride[1] + dx * stride[2])

Because image2 is a view with a non-zero storage offset, this creates a patch from data that isn’t even in image2 but from batchOfImages[0] instead, which is unintuitive. It can be fixed with:

patch = image2.as_strided((3, 16, 16), stride, dy * stride[1] + dx * stride[2] + image2.storage_offset())
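A small wrapper makes this less error-prone by interpreting the offset relative to the input tensor itself; as_strided_relative below is a hypothetical helper name, not an existing PyTorch API:

def as_strided_relative(t, size, stride, offset=0):
    # interpret `offset` relative to `t` itself, not to the base of its storage
    return t.as_strided(size, stride, offset + t.storage_offset())

patch = as_strided_relative(image2, (3, 16, 16), stride, dy * stride[1] + dx * stride[2])

For this particular case, plain indexing with image2[:, dy:dy+16, dx:dx+16] produces the same view and accounts for the storage offset automatically.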

Ah OK, that’s a good use case and I agree that it’s not intuitive at all.
We should add a warning to the docs then.