# Interpolate align_corners=False

I am having a somewhat hard time understanding how linear interpolation is computed for a 1D tensor. For simplicity, let's take the small tensor [1, 2, 3]. If we do the following:

``````import torch
from torch.nn import functional as F

a = torch.tensor([1, 2, 3], dtype=torch.float32)
a = a.unsqueeze(0).unsqueeze(0)

F.interpolate(a, scale_factor=3, mode="linear", align_corners=False)
``````

then we get:
`tensor([[[1.0000, 1.0000, 1.3333, 1.6667, 2.0000, 2.3333, 2.6667, 3.0000, 3.0000]]])`

I dug through the C++ code and found that this is what calculates the source indices:

template <typename T>
static inline T linear_upsampling_compute_source_index(
    T scale, int dst_index, bool align_corners) {
  if (align_corners) {
    return scale * dst_index;
  } else {
    T src_idx = scale * (dst_index + 0.5) - 0.5;
    return src_idx < 0 ? T(0) : src_idx;
  }
}
``````
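To experiment with this, here is a direct Python transcription of the C++ helper above. Note that `scale` is left as a parameter, since its convention (whether it is input/output or output/input) is exactly the part I am unsure about:

```python
def linear_upsampling_compute_source_index(scale, dst_index, align_corners):
    # Direct Python transcription of the C++ helper above.
    if align_corners:
        return scale * dst_index
    # Shift by half a pixel, then clamp negative results to 0.
    src_idx = scale * (dst_index + 0.5) - 0.5
    return max(src_idx, 0.0)

# With scale = 3 (my assumption, the scale_factor I passed in),
# output index 0 maps to source coordinate 1.0:
print(linear_upsampling_compute_source_index(3.0, 0, False))  # 1.0
# With scale = 1/3 the raw result is negative and gets clamped to 0:
print(linear_upsampling_compute_source_index(1 / 3, 0, False))  # 0.0
```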

So whenever `align_corners=False`, this is the formula that gives the x's (source coordinates) used for interpolation.

Let's take the above example [1, 2, 3] again, so the input indices are [0, 1, 2]. Assuming `scale = 3` (the scale factor I passed in), the formula gives `src_idx = 3 * (0 + 0.5) - 0.5 = 1` for output index 0 and `src_idx = 3 * (1 + 0.5) - 0.5 = 4` for output index 1. Plugging these values into the interpolation formula to find the value at output index 0:

``````y = y0 + (x - x0) * ((y1-y0)/(x1-x0))
y = 1 + (0 - 1) * ((2 - 1)/(4 - 1))
y = 2/3
``````
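The same arithmetic in code, just the standard two-point linear interpolation formula, using my assumed source coordinates x0=1 and x1=4:

```python
def lerp(x, x0, y0, x1, y1):
    # Two-point linear interpolation: y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

# Interpolating at x = 0 between (x0=1, y0=1) and (x1=4, y1=2):
print(lerp(0, 1, 1.0, 4, 2.0))  # 0.6666..., i.e. 2/3, not the 1.0 torch prints
```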

So my question is: how does torch get 1.0 for output index 0 in the output I pasted above? The torch version I am using is 2.1.1.