I didn’t see any formula explaining how reshape or view works; I’m mostly interested in linearly indexing an existing tensor. For example:
Suppose you have a k-dimensional tensor A with shape (n1, n2, n3, …, nk) and you reshape it into a 1-dimensional tensor B. What is the formula relating the linear index of B to the k-dimensional index of A?
I don’t really know the formula off the top of my head, but it basically works the following way.
Assume that the number of elements of A and B is the same.
What reshape does is iterate through the dimensions of A starting from the outermost one, filling B with those elements, then moving to the next inner dimension, and so on. For a zero-indexed element (i1, i2, …, ik) of A, this amounts to

linear index in B = i1*(n2*n3*…*nk) + i2*(n3*…*nk) + … + i(k-1)*nk + ik

Note that the indices are zero-based, so if an array has N elements the last index is N-1.
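To make this concrete, here is a small NumPy sketch (the shape is a made-up example, not from your question) that checks the row-major formula against NumPy’s own index-conversion helpers:

```python
import numpy as np

# Hypothetical shape for illustration: (n1, n2, n3) = (3, 4, 5).
shape = (3, 4, 5)
A = np.arange(np.prod(shape)).reshape(shape)  # filled in C (row-major) order
B = A.reshape(-1)                             # flatten to 1-D

# Row-major formula: linear = i1*(n2*n3) + i2*n3 + i3
i = (1, 2, 3)
linear = i[0] * (shape[1] * shape[2]) + i[1] * shape[2] + i[2]

assert A[i] == B[linear]
# NumPy provides the same mapping directly:
assert linear == np.ravel_multi_index(i, shape)
assert i == np.unravel_index(linear, shape)
```

`np.ravel_multi_index` / `np.unravel_index` are the built-in versions of exactly this formula, so you rarely need to write it by hand.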
You can check the NumPy docs, since the logic is the same. There are two types of ordering, Fortran and C; in PyTorch there is only C.
The difference is whether you iterate from the outermost dimension to the innermost (C) or from the innermost to the outermost (Fortran).
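NumPy lets you see both orderings side by side via the `order` argument of `reshape` (PyTorch has no such argument, it always behaves like `order="C"`):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# C order: the outermost dimension varies slowest (rows are read one by one).
c_flat = A.reshape(-1, order="C")   # [1, 2, 3, 4, 5, 6]

# Fortran order: the innermost (first) dimension varies fastest (columns first).
f_flat = A.reshape(-1, order="F")   # [1, 4, 2, 5, 3, 6]
```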
Thanks, the numpy documentation is much better and I understand it now.
Do you know if there is a one-to-one correspondence between reshape and linear indexing? By this I mean: suppose we reshape a tensor A to a 1-D array B, and then reshape B to another tensor C. Is A reshape -> B reshape -> C the same as A reshape -> C directly, assuming the shape of C is the same in both cases?
Yes, it’s deterministic and ordered. You may want to have a look at this.
It’s really determined by how many elements each dimension has.
Think of it as working in a “FIFO” way: the 1st element of B will also be the 1st element taken for C.
So in the end A –> C is the same as A –> B –> C, as long as you don’t modify the ordering of the elements in B.
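You can verify the two routes give the same result with a quick NumPy check (shapes here are arbitrary examples):

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)

# Route 1: flatten to a 1-D B, then reshape B to C.
B = A.reshape(-1)
C_via_B = B.reshape(4, 6)

# Route 2: reshape A to C directly.
C_direct = A.reshape(4, 6)

# Both routes read the elements in the same C order, so the results match.
assert np.array_equal(C_via_B, C_direct)
```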
Lastly, since it’s ordered, it respects spatial structure. What does this mean?
Imagine you have a tensor
A (5, 2, 5, 5)
You reshape it as
(10, 5, 5)
Then you apply a shape-preserving convolution and get
B (10, 5, 5)
If you reshape B back to
(5, 2, 5, 5)
The spatial structure of H and W remains, since the ordering of the elements is equal to the ordering of A.
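Here is a sketch of that round trip in NumPy. The convolution is stubbed out as an elementwise op, since only the element ordering matters for this point:

```python
import numpy as np

A = np.arange(5 * 2 * 5 * 5).reshape(5, 2, 5, 5)

merged = A.reshape(10, 5, 5)    # collapse the two leading dims: (n, c) -> n*2 + c
conv_out = merged * 1.0         # stand-in for a shape-preserving convolution
restored = conv_out.reshape(5, 2, 5, 5)

# Each (h, w) location maps back to the same (h, w): spatial structure is kept.
assert merged[1 * 2 + 0, 3, 4] == A[1, 0, 3, 4]
assert np.array_equal(restored, A)
```

Only the leading dimensions get merged and split; the trailing H and W axes are never reordered, which is why the spatial layout survives the round trip.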