Is there a way to vectorize the “for” loops in the following scenario:
The variable “A” is a 2-dim tensor of shape (N, M).
The variable “B” is a 2-dim tensor of shape (M, M).
for k in range(N):
    for i in range(M):
        for j in range(M):
            B[i, j] = A[k, i] - A[k, j]
Based on the code it seems you are overwriting the intermediate results in B with the last value of k, so you could also remove the for k in range(N) loop.
To get rid of the remaining loops you could unsqueeze the A tensor and let broadcasting compute the pairwise differences.
Here is a code snippet showing that the loop-based and broadcasted approaches give the same result:
import torch

N, M = 2, 3
A = torch.randn(N, M)
B = torch.randn(M, M)

# original triple loop: each iteration of k overwrites all of B,
# so only the last k survives
for k in range(N):
    for i in range(M):
        for j in range(M):
            B[i, j] = A[k, i] - A[k, j]

# same result using only the last row of A
C = torch.zeros(M, M)
for i in range(M):
    for j in range(M):
        C[i, j] = A[-1, i] - A[-1, j]

print(B - C)
# tensor([[0., 0., 0.],
#         [0., 0., 0.],
#         [0., 0., 0.]])

# vectorized: unsqueeze gives shapes (M, 1) and (1, M),
# which broadcast to (M, M)
D = A[-1].unsqueeze(1) - A[-1].unsqueeze(0)
print(B - D)
# tensor([[0., 0., 0.],
#         [0., 0., 0.],
#         [0., 0., 0.]])
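
If you actually want to keep the result for every k instead of overwriting it, the same unsqueeze/broadcasting idea extends to the batch dimension. Here is a minimal sketch, continuing from the snippet above (the name E is just for illustration), which produces a tensor of shape (N, M, M) with E[k, i, j] == A[k, i] - A[k, j]:

# broadcast over the batch dimension as well:
# (N, M, 1) - (N, 1, M) -> (N, M, M)
E = A.unsqueeze(2) - A.unsqueeze(1)
print(E.shape)                    # torch.Size([2, 3, 3])
print(torch.allclose(E[-1], D))   # True: the last slice matches D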