Explain the concept behind this line. I guess it's adding a 3D tensor!

x = x.float()
X = torch.sum(((x[:, 0, :, :-1] - x[:, 0, :, 1:])**2 + (x[:, 1, :, :-1] - x[:, 1, :, 1:])**2)**0.5)

We'd need more code and context for a better explanation, but here is what is happening in this line:

x is a tensor with 4 dimensions (if it's an image, these are usually batch size, channels, height, width).

(x[:, 0, :, :-1] - x[:, 0, :, 1:])**2

This subtracts x[:, 0, :, 1:] (all, channel 0, all, from index 1 to the last) from x[:, 0, :, :-1] (all, channel 0, all, from index 0 to the second-last) and then squares the result. In other words, it computes the squared differences between neighboring elements along the last dimension.
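A tiny runnable sketch (the shapes and values are just made up for illustration) shows what this slicing-and-subtracting does for channel 0:

```python
import torch

# Toy 4-D tensor: batch=1, channels=2, height=2, width=3
x = torch.tensor([[[[0., 1., 3.],
                    [2., 2., 2.]],
                   [[1., 4., 4.],
                    [0., 0., 5.]]]])

# Squared differences between horizontally neighboring elements in channel 0
d0 = (x[:, 0, :, :-1] - x[:, 0, :, 1:]) ** 2
print(d0)  # tensor([[[1., 4.], [0., 0.]]])
```

Note that the last dimension shrinks by one (3 columns give 2 neighboring pairs).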

The same is happening here:

(x[:, 1, :, :-1] - x[:, 1, :, 1:])**2

but this time using index 1 in the second (channel) dimension.

The sum of these two squared differences is then raised to the power of 0.5 (i.e. the square root is taken), and finally all resulting values are summed up.

In the end you have a single scalar value.
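Putting it all together, here is a self-contained sketch of the original line (the input shape 4x3x8x8 is just an assumption for the demo):

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 3, 8, 8)  # hypothetical batch of 4 three-channel 8x8 images

x = x.float()
# Euclidean length of the (channel 0, channel 1) neighbor differences, summed
X = torch.sum(((x[:, 0, :, :-1] - x[:, 0, :, 1:]) ** 2
               + (x[:, 1, :, :-1] - x[:, 1, :, 1:]) ** 2) ** 0.5)

print(X.shape)  # torch.Size([]) -- a 0-dimensional tensor, i.e. one scalar
```

Calling `X.item()` would give you that scalar as a plain Python float.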

Also, can you please explain:

What's the meaning of x[:,:,0,:,:].shape[-2]?

Sure.

x is a tensor with 5 dimensions.
We take index 0 in the third dimension, so the result x[:,:,0,:,:] is a tensor with 4 dimensions.

tensor.shape returns the sizes of all 4 dimensions --> torch.Size([dim1 size, dim2 size, dim3 size, dim4 size]). A negative index counts from the end, so shape[-2] does not slice the list; it returns a single number: the size of the second-to-last dimension (dim3 size).
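A quick sketch with an assumed 5-D shape makes the difference concrete:

```python
import torch

x = torch.zeros(2, 3, 4, 5, 6)  # hypothetical 5-D tensor

y = x[:, :, 0, :, :]  # index 0 in the third dimension -> 4-D tensor
print(y.shape)        # torch.Size([2, 3, 5, 6])

# shape[-2] is a single int: the size of the second-to-last dimension
print(y.shape[-2])    # 5
```

If you actually wanted the first two sizes as a list, that would be `list(y.shape[:2])` instead.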

Hi,

You can look this up for a better explanation of slicing and indexing. Although that material covers NumPy, it's pretty much the same for PyTorch.