Elegant way to get a symmetric Torch Tensor over diagonal

Is there an elegant way to build a torch.Tensor like the one below from a given set of values?

Here is a 3x3 example, but in my application I would have a matrix of any odd size.

A function call gen_matrix([a, b, c, d, e, f]) should generate

[image: a 3x3 symmetric matrix whose upper triangle, including the diagonal, holds the six values a, b, c, d, e, f, mirrored across the main diagonal]


EDIT: For now, I have implemented the solution below. A more elegant way without a for loop, using plain torch operations, would be preferable.

import torch

def weights_to_symmetric(weights, N):
    """Map weights of shape (B, C, N*(N+1)/2) to symmetric (B, C, N, N) matrices, one diagonal at a time."""
    assert weights.ndim == 3
    tensor = torch.zeros((*weights.shape[:2], N, N), dtype=weights.dtype, device=weights.device)

    idx = 0
    for diag in range(N):
        # The next (N - diag) values fill the diagonal at this offset.
        size = N - diag
        w = weights[:, :, idx:idx + size]

        tensor += torch.diag_embed(w, offset=diag)
        if diag > 0:
            # Mirror every off-diagonal onto the lower triangle.
            tensor += torch.diag_embed(w, offset=-diag)

        idx += size

    return tensor
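
For reference, a small usage sketch (my own example values, not from the original post): with N = 3 and six values per (batch, channel) entry, the function fills the matrix diagonal by diagonal.

weights = torch.arange(1., 7.).reshape(1, 1, 6)  # shape (B=1, C=1, 6), holding [1, 2, 3, 4, 5, 6]
sym = weights_to_symmetric(weights, N=3)
print(sym[0, 0])
# tensor([[1., 4., 6.],
#         [4., 2., 5.],
#         [6., 5., 3.]])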

I’m not sure about plain PyTorch, but GPyTorch has something called toeplitz. Maybe that’s what you are looking for? (Look into sym_toeplitz.)

Thank you for the suggestion. sym_toeplitz resembles what I want, but it is not quite the same thing: a symmetric Toeplitz matrix is constant along each diagonal, so it only has N free values, whereas I need N*(N+1)/2 independent values.

import torch
from gpytorch import utils

c = torch.tensor([1, 6, 4, 5], dtype=torch.float)
res = utils.toeplitz.sym_toeplitz(c)
res
# tensor([[1., 6., 4., 5.],
#         [6., 1., 6., 4.],
#         [4., 6., 1., 6.],
#         [5., 4., 6., 1.]])

Here is a solution by swag2198 from Stack Overflow: fill the upper triangle using torch.triu_indices, then mirror it into the lower triangle through A.T.

>>> import torch
>>> N = 5
>>> vals = torch.arange(N*(N+1)/2) + 1

>>> A = torch.zeros(N, N)
>>> i, j = torch.triu_indices(N, N)
>>> A[i, j] = vals
>>> A.T[i, j] = vals

>>> vals
tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10., 11., 12., 13., 14.,
        15.])
>>> A
tensor([[ 1.,  2.,  3.,  4.,  5.],
        [ 2.,  6.,  7.,  8.,  9.],
        [ 3.,  7., 10., 11., 12.],
        [ 4.,  8., 11., 13., 14.],
        [ 5.,  9., 12., 14., 15.]])
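
The same indexing trick extends to the batched setting of weights_to_symmetric without a Python loop. A minimal sketch, assuming weights has shape (B, C, N*(N+1)/2) as in the question; the helper name is mine, and note that the values are laid out row by row along the upper triangle here rather than diagonal by diagonal:

import torch

def weights_to_symmetric_triu(weights, N):
    # weights: (B, C, N*(N+1)/2) -> symmetric (B, C, N, N)
    out = torch.zeros(*weights.shape[:2], N, N, dtype=weights.dtype, device=weights.device)
    i, j = torch.triu_indices(N, N)
    out[..., i, j] = weights  # fill the upper triangle (including the diagonal)
    out[..., j, i] = weights  # mirror into the lower triangle
    return out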

It is also possible to use torch.nn.utils.parametrize.register_parametrization to symmetrize the matrix on the fly. This comes at the expense of more parameters (a full N x N matrix instead of N*(N+1)/2 values), but if that does not cause memory problems, it is more elegant and faster. Example:

import torch
import torch.nn.utils.parametrize


class SymmetricTensor(torch.nn.Module):
    def forward(self, x):
        # Standard symmetrization of a square matrix.
        return 0.5 * (x + x.transpose(1, 0))


class ToyModel(torch.nn.Module):
    def __init__(self, N):
        super().__init__()
        self.symmetric_param = torch.nn.Parameter(torch.rand(N, N), requires_grad=True)
        # This makes the parameter symmetric_param symmetric.
        torch.nn.utils.parametrize.register_parametrization(self, "symmetric_param", SymmetricTensor())

    def forward(self, input):
        # Assuming input.shape is B x C x H x W, contract over the channel index.
        return torch.einsum('bchw,cj->bjhw', input, self.symmetric_param)


net = ToyModel(4)
xx = torch.rand(2, 4, 24, 24)

out = net(xx)
out.shape
# torch.Size([2, 4, 24, 24])

net.symmetric_param
# tensor([[0.0248, 0.2363, 0.6852, 0.5177],
#         [0.2363, 0.3860, 0.4304, 0.7496],
#         [0.6852, 0.4304, 0.6364, 0.3511],
#         [0.5177, 0.7496, 0.3511, 0.6647]], grad_fn=<MulBackward0>)
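
As a quick sanity check (my own snippet, continuing the example above), the parameter stays symmetric even after an optimizer step, because the symmetrization is re-applied every time symmetric_param is accessed:

opt = torch.optim.SGD(net.parameters(), lr=0.1)
net(xx).sum().backward()
opt.step()
print(torch.allclose(net.symmetric_param, net.symmetric_param.T))
# True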