EDIT:
See my other reply below for a possible solution.
Interesting question!
Still thinking about whether it's possible to get rid of the for loop; I'll update if an idea comes up (currently testing with the einsum function, but no solution so far).
If the for loop has to stay, I believe using sum + diagonal is more readable:
a = torch.arange(1, 10).view(3, 3)
b = torch.zeros(5)  # 5 diagonals for 3x3 due to (2*3-1); an nxn matrix has (2*n-1) diagonals
for i in range(5):
    b[i] = torch.sum(torch.diagonal(a, offset=2 - i))
It would be nice if torch.diagonal could accept a list of offsets.
Output from the code above: tensor([ 3.,  8., 15., 12.,  7.])
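One way to drop the loop entirely is a scatter-add: element (r, c) of an nxn matrix lies on diagonal index r - c + n - 1 (matching the ordering above, top-right diagonal first), so all sums can be accumulated in one `index_add_` call. A sketch (the rows/cols index construction is my own, not something torch.diagonal provides):

```python
import torch

n = 3
a = torch.arange(1, 10, dtype=torch.float).view(n, n)

# diagonal index for each element: (r, c) -> r - c + n - 1
rows = torch.arange(n).view(-1, 1).expand(n, n)
cols = torch.arange(n).view(1, -1).expand(n, n)
idx = (rows - cols + n - 1).reshape(-1)

# accumulate every element into its diagonal's bucket, no Python loop
sums = torch.zeros(2 * n - 1).index_add_(0, idx, a.reshape(-1))
print(sums)  # tensor([ 3.,  8., 15., 12.,  7.])
```

This matches the loop version's output and stays on-device for any matrix size.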
import torch
import torch.nn.functional as F

dim = 3
num_diagonals = 2 * dim - 1

# need to unsqueeze twice for use in conv2d
x = torch.rand(dim, dim).unsqueeze(0).unsqueeze(0)
print('x:')
print(x)

expected = torch.zeros(num_diagonals)
for i in range(num_diagonals):
    expected[i] = torch.sum(torch.diagonal(x[0][0], offset=dim - 1 - i))
print('expected diagonal sums:')
print(expected)

# need to unsqueeze twice for use in conv2d
w = torch.eye(dim).unsqueeze(0).unsqueeze(0)
# from the conv2d result, extract the inner dims, then take the middle column
result = F.conv2d(x, w, padding=num_diagonals // 2)[0][0][:, num_diagonals // 2]
print('result diagonal sums:')
print(result)
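A quick way to sanity-check that the conv2d trick matches the loop-based reference for an arbitrary size (a sketch; `dim = 4` is just an example value):

```python
import torch
import torch.nn.functional as F

dim = 4  # any n works
num_diagonals = 2 * dim - 1
x = torch.rand(dim, dim)

# loop-based reference: sum each diagonal explicitly
expected = torch.stack([torch.diagonal(x, offset=dim - 1 - i).sum()
                        for i in range(num_diagonals)])

# conv2d version: identity kernel slides over the padded input,
# the middle output column holds the diagonal sums
w = torch.eye(dim).view(1, 1, dim, dim)
result = F.conv2d(x.view(1, 1, dim, dim), w,
                  padding=num_diagonals // 2)[0, 0, :, num_diagonals // 2]

print(torch.allclose(expected, result))  # expect True up to float tolerance
```

Since F.conv2d is cross-correlation (no kernel flip), the identity kernel picks out exactly one diagonal per output row.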