Is it possible to create a tensor like this in a more efficient manner?
import torch

height, width = 128, 128
w = torch.arange(0, width)
h = torch.arange(0, height)
tensor = torch.zeros((width, height, 2))
for vi in w:
    for vj in h:
        tensor[vi][vj] = torch.tensor([float(vi) / height - 0.5, float(vj) / width - 0.5])
tensor = tensor.reshape(width * height, 2)
albanD (Alban D)
Hi,
The following will generate the same Tensor without the loops:
full_size = (width, height)
w_exp = w.unsqueeze(1).expand(full_size).true_divide(height) - 0.5
h_exp = h.unsqueeze(0).expand(full_size).true_divide(width) - 0.5
tensor = torch.stack((w_exp, h_exp), -1)
tensor = tensor.reshape(width * height, 2)
Hope this helps
It works! Thanks for the help!
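For reference, the same grid can also be built with torch.meshgrid — a sketch, assuming a PyTorch version recent enough to accept the indexing="ij" argument (added in 1.10; older versions use "ij" behavior by default):

```python
import torch

height, width = 128, 128

# Build index grids: ii varies along dim 0 (width), jj along dim 1 (height).
ii, jj = torch.meshgrid(torch.arange(width), torch.arange(height), indexing="ij")

# Same normalization as in the loop version: [vi/height - 0.5, vj/width - 0.5].
grid = torch.stack((ii / height - 0.5, jj / width - 0.5), dim=-1)
grid = grid.reshape(width * height, 2)
```

Like the expand/stack answer, this avoids the Python loops entirely, so the whole grid is built in a handful of vectorized ops.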