Adaptive Average Pooling - Implementation

I was a bit confused about how adaptive average pooling works. Based on the explanations provided here, I tried to implement my own version:

import numpy as np
import torch
import torch.nn as nn

def torch_pool(inputs, target_size):
    # Fixed kernel size, rounded up: ceil(L / target_size)
    kernel_size = (inputs.shape[-1] + target_size - 1) // target_size
    # Evenly spaced window start points, rounded to integers,
    # with the input length appended as the final boundary
    points_float = torch.linspace(0, inputs.shape[-1] - kernel_size, target_size)
    points = torch.cat([torch.round(points_float).int(),
                        torch.tensor([inputs.shape[-1]], dtype=torch.int32)], 0)
    
    # For the example call below, points comes out to [0, 2, 3, 5, 7]

    pooled = []
    # Average each segment [points[idx], points[idx + 1])
    for idx in range(points.shape[0] - 1):
        pooled.append(torch.mean(inputs[:, :, points[idx]:points[idx + 1]],
                                 dim=-1, keepdim=False))
    pooled = torch.cat(pooled, -1)
    return pooled

# Input of shape (1, 7, 1), transposed to channels-first (1, 1, 7) for PyTorch
inps = np.array([0, 1, 2, 3, 4, 5, 6], dtype=np.float32)[None, :, None]
inps_torch = np.transpose(inps, (0, 2, 1))

x = torch_pool(torch.tensor(inps_torch), 4)
print(x)

x = nn.AdaptiveAvgPool1d(4)(torch.tensor(inps_torch))
print(x)

The first print (my code) gives the output:

tensor([[0.5000, 2.0000, 3.5000, 5.5000]])
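
For reference, these values are exactly the means over the segments my points imply: [0:2] -> mean(0, 1) = 0.5, [2:3] -> mean(2) = 2.0, [3:5] -> mean(3, 4) = 3.5, [5:7] -> mean(5, 6) = 5.5. (The shape is also (1, 4) rather than (1, 1, 4), since keepdim=False collapses the pooled dimension before the cat.)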

But the built-in function (second print) gives:

tensor([[[0.5000, 2.0000, 4.0000, 5.5000]]])
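
The outputs only disagree in the third element (3.5 vs. 4.0). From what I can tell from the docs, the built-in layer does not use a fixed kernel size at all; it seems to compute each window as start = floor(i * L / out), end = ceil((i + 1) * L / out), so the windows can overlap and vary in length. Here is a quick sketch of that rule as I understand it (adaptive_windows is just a helper name I made up):

import math

def adaptive_windows(length, target_size):
    # Window i covers [floor(i * L / out), ceil((i + 1) * L / out))
    return [(math.floor(i * length / target_size),
             math.ceil((i + 1) * length / target_size))
            for i in range(target_size)]

print(adaptive_windows(7, 4))
# [(0, 2), (1, 4), (3, 6), (5, 7)] -> means 0.5, 2.0, 4.0, 5.5

With these windows the third mean is (3 + 4 + 5) / 3 = 4.0, which matches the built-in output, so I suspect my rounded linspace start points are where things diverge.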

Can someone help me out? Where did I go wrong?